SeaweedFS: how to set max size for each volume?

When I start the master service I can specify the maximum volume size with the parameter -volumeSizeLimitMB.
./weed master -mdir="." -volumeSizeLimitMB=1000
Then, I start two volume services.
./weed volume -mserver="localhost:9333" -dir="." -max=10
./weed volume -mserver="localhost:9333" -dir="." -max=10 -port=8081
I have some questions:
How can I specify the maximum volume size when I start the volume service? Does it know it straight away because the master service communicates this to the volume service when it connects? I understand that -max sets the maximum number of volumes, but what about the size of each single volume?
The master creates volumes 8 and 11 on one volume service and 9 and 10 on the other, then it states "No space left". I don't understand why. Here is the whole log: https://pastebin.com/cLRd8gvt

The master controls which volumes are writable. When a volume reaches volumeSizeLimitMB, the master stops sending writes to it. So the volume server does not need to know volumeSizeLimitMB.
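To make this concrete, here is a minimal sketch of the write path (in Python, assuming a master on localhost:9333 as in the commands above): every upload starts by asking the master to assign a file id, and the master only hands out locations on volumes that are still below the size limit.

import json
import urllib.request

# Ask the master to assign a file id; it picks a writable volume,
# i.e. one still below volumeSizeLimitMB.
with urllib.request.urlopen("http://localhost:9333/dir/assign") as resp:
    assign = json.load(resp)

print(assign["fid"], assign["url"])
# The file is then uploaded with a POST to http://{url}/{fid}.
# The volume server just stores what it is given; the size limit
# is enforced entirely on the master's side.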

Related

Calculations of NTFS Partition Table Starting Points

I have a disk image. I am able to see the partition start and end values with gparted or other tools. However, I want to calculate them manually. I inserted an image showing my disk image's partition start and end values, and I linked the $MFT file. As you can see in the picture, the starting point for partition 2 is 7968240. How can I determine this number with a real calculation? I tried dividing this value by the sector size, which is 512, but the results do not fit. I would appreciate a formula for it. Start and End Points of Partitions.
$MFT File : https://file.io/r7sy2A7itdur
How can I determine this number with a real calculation?
The information about how a hard disk has been partitioned is stored in its first sector (that is, the first sector of the first track on the first disk surface). This first sector is the master boot record (MBR) of the disk; it is the sector that the BIOS reads in and executes when the machine is first booted.
For the current partitioning scheme, GPT, there is more detailed documentation available. The MFT is only one part of the NTFS file system in question, whose location is derived from the partition entries in the GPT or MBR.
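As an illustration, here is a minimal sketch (in Python, assuming an MBR-partitioned image and a placeholder path "disk.img") that reads the four primary partition entries straight from the image. Each 16-byte entry starts at byte offset 446 of the first sector, and bytes 8-11 of an entry hold the partition's starting LBA, which is the kind of start value gparted reports.

import struct

SECTOR = 512

with open("disk.img", "rb") as img:   # placeholder image path
    mbr = img.read(SECTOR)            # sector 0 holds the MBR

for i in range(4):                    # four primary partition entries
    entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
    ptype = entry[4]                  # partition type byte
    start_lba, num_sectors = struct.unpack_from("<II", entry, 8)
    if ptype:                         # skip empty entries
        print(f"partition {i + 1}: start sector {start_lba}, "
              f"byte offset {start_lba * SECTOR}, {num_sectors} sectors")

Note that if the 7968240 in the screenshot is a sector number (gparted usually reports sectors), the byte offset of the partition is 7968240 * 512; dividing by 512 would only make sense if the value were already a byte offset.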

Cloudwatch dashboard insight graphs - can I set binsize dynamically?

I'm using dashboards to monitor various output stats on AWS.
Let's say it looks something like this:
stats avg(myfield1), min(myfield2), max(myfield3) by bin(1m)
This works fine; however, I am by default using a bin size of 1 minute, so the data retention period is only 3 days. If I want to look at a week or a month, I have to use a separate widget with a larger bin size. I still want the 1-minute resolution for the shorter time periods, and I'd rather not double up the graphs, as the dashboard is already very busy.
Obviously, all the built-in metrics graphs dynamically adjust the bin size they query as the time range being viewed changes.
Is it possible to do this within a CloudWatch Insights query, and if so, what is the syntax?

Event Hub input data is three times larger in output size when read using Stream Analytics

When I ingest a 100 KB data file into Event Hub and then read the data from Event Hub using Stream Analytics, the output file size is three times bigger than the input file.
Please confirm.
I had the same issue, and as I was investigating it, I found that the input count (by bytes and size) was multiplied by the partition factor.
The partitions were created and mapped as inputs (we have only 2 inputs); you can see this in your job diagram by clicking "..." and selecting "Expand partitions".
In the attached picture, our IoT Hub input is expanded to 20 partitions.

How to calculate throughput for individual links (and thereby the fairness index) using the old wireless trace format file in ns2?

I am stuck on how to identify the different connections (flows) in the trace file.
The following is the format in which the trace file is created:
1. event
2. time
3. source node
4. destination node
5. packet type
6. packet size
7. flags
8. fid
9. source address
10. destination address
11. sequence number
12. packet id
If you take a look at the trace format above, the 8th column is the flow id (fid), which you can extract using an awk file.
Read more on awk files and how you can isolate or count sent and received packets along with the flow id. Once you have those counts, just divide the two.
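As a starting point, here is a minimal sketch of that extraction in Python rather than awk, assuming the whitespace-separated column layout listed above; it tallies received bytes per flow id and prints a rough per-flow throughput.

import sys
from collections import defaultdict

recv_bytes = defaultdict(int)   # fid -> bytes received
first_t = {}                    # fid -> time of first received packet
last_t = {}                     # fid -> time of last received packet

with open(sys.argv[1]) as trace:
    for line in trace:
        f = line.split()
        if len(f) < 8 or f[0] != "r":   # keep receive events only
            continue
        t, size, fid = float(f[1]), int(f[5]), f[7]   # 8th column = fid
        recv_bytes[fid] += size
        first_t.setdefault(fid, t)
        last_t[fid] = t

for fid in sorted(recv_bytes):
    duration = last_t[fid] - first_t[fid]
    if duration > 0:
        print(f"flow {fid}: {recv_bytes[fid] * 8 / duration:.2f} bit/s")

With the per-flow throughputs x_1 ... x_n in hand, Jain's fairness index is (x_1 + ... + x_n)^2 / (n * (x_1^2 + ... + x_n^2)).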

SADD only if SCARD below a value

Node.js & Redis:
I have a LIST (users:waiting) storing a queue of users waiting to join games.
I have a SORTED SET (games:waiting) of games waiting for users. This is updated by the servers every 30s with a new timestamp. This way I can ensure that if a server crashes, the game is no longer used. If the server is running and the game fills up, it'll remove itself from the sorted set.
Each game has a SET (game:id:users) containing the users that are in it. Each game can accept no more than 6 players.
Multiple servers are using BRPOP to pick up users from the LIST (users:waiting).
Once a server has a user id, it gets the waiting game ids, then proceeds to run SCARD on each game:id:users SET. If the result is less than 6, it adds the user to the set.
The problem:
If multiple servers are doing this at once, we could end up with more than 6 users being added to a set. For example, if one server runs SCARD and immediately afterwards another runs SADD, the count in the set will have increased but the first server won't know.
Is there any way of preventing this?
You need transactions, which Redis supports: http://redis.io/topics/transactions
In your case in particular, you want to pay attention to the WATCH command: http://redis.io/topics/transactions#cas
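For illustration, here is a minimal check-and-add sketch of that pattern using the redis-py client (the question uses Node.js, but the same WATCH/MULTI/EXEC commands apply with any client); the key name follows the question's game:id:users convention, and try_join is a hypothetical helper.

import redis

r = redis.Redis()

def try_join(game_id, user_id, max_players=6):
    key = f"game:{game_id}:users"
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)              # EXEC fails if key changes
                if pipe.scard(key) >= max_players:
                    pipe.unwatch()
                    return False             # game already full
                pipe.multi()                 # start the transaction
                pipe.sadd(key, user_id)
                pipe.execute()               # raises WatchError on conflict
                return True
            except redis.WatchError:
                continue                     # another server raced us; retry

If a concurrent SADD lands between the SCARD and the EXEC, the EXEC fails and the loop retries with the fresh cardinality, so the set can never grow past 6 players.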