I have been trying to get the hard disk size on Linux systems using fseek().
I have a function which should return the correct size:
unsigned long long getsize(FILE *fp){
    unsigned long long prev = ftell(fp);
    fseek(fp, 0, SEEK_END);
    unsigned long long size = ftell(fp);
    fseek(fp, prev, SEEK_SET);
    return size;
}
But when I use it on a hard disk it returns 18446744073709551615, i.e. 2^64-1...
It doesn't always return that, though: it works fine on regular files, and it has worked on hard disks before as well.
You're getting this sort of result because that's not the right way to get the disk size.
If you want the size of a disk, you should issue an ioctl() on the disk's file descriptor with a request of BLKGETSIZE64 (and a pointer to a 64-bit unsigned integer as the argument).
#include <sys/ioctl.h>
#include <linux/fs.h>   /* for BLKGETSIZE64 */
unsigned long long disk_size;
ioctl(<disk_fd>, BLKGETSIZE64, &disk_size);   /* size in bytes */
If you're interested in the filesystem size instead (which may differ), or in the free space on a filesystem, then use the statvfs() call.
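For example, a minimal sketch of the statvfs() route (the mount point path is just a placeholder):

#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    struct statvfs vfs;
    if (statvfs("/mnt/data", &vfs) != 0) {   /* placeholder mount point */
        perror("statvfs");
        return 1;
    }
    unsigned long long total = (unsigned long long)vfs.f_frsize * vfs.f_blocks;
    unsigned long long avail = (unsigned long long)vfs.f_frsize * vfs.f_bavail;
    printf("filesystem size: %llu bytes, available: %llu bytes\n", total, avail);
    return 0;
}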
I am new to VHDL and have perhaps a basic question, but here goes:
When declaring a variable, say an integer, what is the benefit of
variable count_baud : integer range 0 to clk_freq/baud_rate - 1 := 0;
vs.
variable count_baud : integer := 0;
Is the point of using range (only) to limit the size of the synthesized real estate in the CPLD/FPGA?
There are two very good reasons:
Debugging. If you know that your integer shall stay in the [min..max] range, tell the simulator so with a proper range declaration. If there is a bug in your code and you try to assign an out-of-range value, the simulator will let you know with a very useful message. If you had just declared an integer, the error could surface long after the bogus assignment.
Synthesis quality. A logic synthesizer will, by default, allocate 32 bits for an integer. Depending on the surrounding logic it may discover that fewer bits are sufficient... or not. So telling the synthesizer what the real range is frequently saves hardware and power and increases the final performance (speed), especially if the real range can be represented in far fewer than 32 bits.
I have been using an earlier version of the webrtc code. Today I fetched the latest code and it broke my build :-(. It appears that WebRtcNs_Process now takes a "float" type buffer instead of an "int16" type buffer. There may be a good reason to do so. However, this also seems to have broken the chain of operations.
Typically, you first call WebRtcNs_Process and feed the output of this method to WebRtcAecm_Process. In the latest version, the output of WebRtcNs_Process is a float type buffer but the input to WebRtcAecm_Process is an int16 buffer. It seems like I now have to write extra code to convert the float buffer to an int16 buffer.
Also, on most platforms, the output from the mic is an int16 type buffer. There is additional code I have to write to convert these int16 values to float so that I can pass them to WebRtcNs_Process.
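The glue I am looking at is something like this; a minimal sketch, assuming the float buffers keep samples in the plain int16 range rather than normalized to [-1, 1]:

#include <stdint.h>
#include <stddef.h>

/* int16 capture buffer -> float buffer for WebRtcNs_Process */
static void int16_to_float(const int16_t *in, float *out, size_t n) {
    for (size_t i = 0; i < n; ++i)
        out[i] = (float)in[i];
}

/* float NS output -> int16 buffer for WebRtcAecm_Process */
static void float_to_int16(const float *in, int16_t *out, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        float s = in[i];
        if (s > 32767.0f)  s = 32767.0f;    /* clamp to the int16 range */
        if (s < -32768.0f) s = -32768.0f;
        out[i] = (int16_t)s;
    }
}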
I am wondering if I missed something. Regards.
I am trying to store a file size in bytes as an NSNumber. I am reading the file download size from the NSURLResponse and that gives me a long long value and I then create an NSNumber object from that value and store it. When I go to retrieve that value later, it comes back with all the higher bytes set as FFFFFFFF.
For example, I read in the size as 2196772870 bytes (0x82f01806) and then store it into the NSNumber. When I get it back, I get -2098194426 bytes (0xffffffff82f01806). I tried doing a bitwise AND with 0x00000000FFFFFFFF before storing the value in the NSNumber, but it still comes back negative. Code below:
long long bytesTotal = response.expectedContentLength;
NSLog(@"bytesTotal = %llx", bytesTotal);
[downloadInfo setFileTotalSize:[NSNumber numberWithInt:bytesTotal]];
//[downloadInfo setFileTotalSize:[NSNumber numberWithLongLong:bytesTotal]];
long long fileTotalSize = [[downloadInfo fileTotalSize] longLongValue];
NSLog(@"fileTotalSize = %llx", fileTotalSize);
Output:
bytesTotal = 82f01806
fileTotalSize = ffffffff82f01806
Any suggestions?
Edit: Completely forgot the setter for the downloadInfo object.
The problem is this line:
[downloadInfo setFileTotalSize:[NSNumber numberWithInt:bytesTotal]];
bytesTotal is not an int, it's a long long, so you should be using numberWithLongLong:, not numberWithInt:. Change it to:
[downloadInfo setFileTotalSize:[NSNumber numberWithLongLong:bytesTotal]];
numberWithInt: truncates the value to a 32-bit int. Because the number starts with 8, the sign bit of that int is set, so the truncated value is negative; when longLongValue later widens it back to 64 bits, it is sign-extended, which fills the upper 32 bits with ffffffff.
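You can see the same round trip in plain C (a minimal sketch; the narrowing behaviour shown is what you get on typical two's-complement platforms):

#include <stdio.h>

int main(void) {
    long long bytesTotal = 0x82f01806LL;           /* 2196772870 */
    int truncated = (int)bytesTotal;               /* what numberWithInt: effectively stores */
    long long widened = truncated;                 /* what longLongValue hands back */
    printf("%llx\n", (unsigned long long)widened); /* prints ffffffff82f01806 */
    return 0;
}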
I have a program on which I was trying to perform some loop optimization. It's written in C++ and compiled using gcc.
Eventually, using a profiler, I tracked more than half the execution time of the loop down to the line
double x_component = in.input_vector[in.dimension_to_process] - \
(center_of_bin_0 + (double) nn * grid_distance);
Everything on this line is of type double, with the exception of the loop index nn, which is of type long unsigned int.
The cast from long unsigned int to double generates the assembly instruction fxtod, which the profiler flagged.
As a test I removed the reference to nn from the line, thus removing the cast from unsigned int to double, and the execution time of the loop dropped by almost a half. This is in a loop that performs about a dozen floating point operations on an UltraSPARC IV processor; I confirmed that this is also the case on an UltraSPARC II.
Is it normal for a cast from int to double to be much more expensive than a cache miss, let alone a floating point multiply? And if so, what does everyone else normally do about it?
A lookup table for all possible values of nn (which in this case have a known, limited range) would be faster than this.
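Roughly, the idea is this (a sketch only; NN_MAX and precompute_offsets are placeholder names, not from your program). Pay the fxtod conversion once at setup, then index into a table of doubles inside the hot loop:

#include <stdlib.h>

#define NN_MAX 1024   /* placeholder for the known upper bound on nn */

/* Build a table of center_of_bin_0 + nn * grid_distance for every nn. */
double *precompute_offsets(double center_of_bin_0, double grid_distance) {
    double *offset = malloc(NN_MAX * sizeof *offset);
    if (offset == NULL)
        return NULL;
    for (unsigned long nn = 0; nn < NN_MAX; ++nn)
        offset[nn] = center_of_bin_0 + (double)nn * grid_distance;  /* cast paid once here */
    return offset;
}

/* In the hot loop the line then becomes:
   double x_component = in.input_vector[in.dimension_to_process] - offset[nn];
*/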
I have a system which deals with keys that have been turned into unsigned long integers (by packing short sequences into byte strings). I want to try storing these in Redis, and I want to do it in the best way possible. My concern is mainly memory efficiency.
From playing with the online REPL I notice that the following two commands are identical:
zadd myset 1.0 "123"
zadd myset 1.0 123
This means that even if I know I want to store an integer, it has to be set as a string. I notice from the documentation that keys are just stored as char*s and that commands like SETBIT indicate that Redis is not averse to treating strings as bytestrings in the client. This hints at a slightly more efficient way of storing unsigned longs than as their string representation.
What is the best way to store unsigned longs in sorted sets?
Thanks to Andre for his answer. Here are my findings.
Storing ints directly
Redis keys must be strings. If you want to pass an integer, it has to be some kind of string. For small, well-defined sets of values, Redis will parse the string into an integer, if it is one. My guess is that it uses this int to tailor its hash function (or even statically dimension a hash table based on the value). This works for small values (the documented defaults being 64 entries with values of up to 512 bytes). I will test larger values during my investigation.
http://redis.io/topics/memory-optimization
Storing as strings
The alternative is squashing the integer so it looks like a string.
It looks like it is possible to use any byte string as a key.
For my application's case it actually didn't make much difference whether I stored the strings or the integers. I imagine that the structures in Redis undergo some kind of alignment, so there may be some pre-wasted bytes anyway. The value is hashed in any case.
I used Python for my testing, so I was able to create the values using struct.pack. Packed long longs weigh in at 8 bytes, which is quite large. Given the distribution of integer values, I discovered that it could actually be advantageous to store the strings, especially when coded in hex.
As redis strings are "Pascal-style":
struct sdshdr {
    long len;
    long free;
    char buf[];
};
and given that we can store anything in there, I did a bit of extra Python to pack the value into the shortest possible type:
from struct import pack

def do_pack(prefix, number):
    """
    Pack the number into the best possible string. With a prefix char.
    """
    # unsigned char
    if number < (1 << 8*1):
        return pack("!cB", prefix, number)
    # unsigned short
    elif number < (1 << 8*2):
        return pack("!cH", prefix, number)
    # unsigned int
    elif number < (1 << 8*4):
        return pack("!cI", prefix, number)
    # unsigned long long
    elif number < (1 << 8*8):
        return pack("!cQ", prefix, number)
This appears to make an insignificant saving (or none at all), probably due to struct padding in Redis. It also drives Python CPU usage through the roof, making it somewhat unattractive.
The data I was working with was 200,000 zsets of consecutive integer => (weight, random integer) × 100, plus an inverted index (based on random data). dbsize yields 1,200,001 keys.
Final memory use of the server: 1.28 GB RAM, 1.32 GB virtual. Various tweaks made a difference of no more than 10 megabytes either way.
So my conclusion:
Don't bother encoding into fixed-size data types. Just store the integer as a string, in hex if you want. It won't make all that much difference.
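For what it's worth, here is a quick C sketch of the worst-case member sizes (the value is just the largest possible unsigned long long):

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned long long key = 18446744073709551615ULL;  /* worst case: 2^64 - 1 */
    char dec[32], hex[32];
    snprintf(dec, sizeof dec, "%llu", key);   /* decimal member string */
    snprintf(hex, sizeof hex, "%llx", key);   /* hex member string */
    printf("decimal: %zu bytes, hex: %zu bytes\n", strlen(dec), strlen(hex));
    /* prints: decimal: 20 bytes, hex: 16 bytes */
    return 0;
}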
References:
http://docs.python.org/library/struct.html
http://redis.io/topics/internals-sds
I'm not sure of this answer, it's more of a suggestion than anything else. I'd have to give it a try and see if it works.
As far as I can tell, Redis only supports UTF-8 strings.
I would suggest grabbing a bit representation of your long integer and padding it accordingly to fill up the nearest byte. Encode each set of 8 bytes to a UTF-8 string (ending up with an 8 x utf8_char string) and store that in Redis. The fact that they're unsigned means that you don't care about that first bit, but if you did, you could add a flag to the string.
Upon retrieving the data, you have to remember to pad each character to 8 bytes again, as UTF-8 will use fewer bytes for the representation if the character can be stored with fewer bytes.
The end result is that you store a maximum of 8 x 8-byte characters instead of (possibly) a maximum of 64 x 8-byte characters.