Using Doubles in Unix? - objective-c

I am calling sysctl() to retrieve memory stats, and for the void* oldVal argument I am passing in a pointer to a double. However, instead of setting the double to the correct value, it just sets it to 0.00000.
When I try doing the exact same thing with a long, it gets set to the correct stat. Why is the double set to 0.00000 while the long receives the correct value?
int systemInfoNeeded[2] = {CTL_HW, HW_PHYSMEM};
size_t sizeOfBuffer = sizeof(totalAmount);
if (sysctl(systemInfoNeeded, 2, &totalAmount, &sizeOfBuffer, NULL, 0))
{
    NSLog(@"Total memory stat retrieval failed.\n");
    exit(EXIT_FAILURE);
}
totalAmount is a double. The second I change the type of totalAmount to long, it works perfectly. Is there any way I can get the double to work? I want to pass in totalAmount directly rather than passing a long and then assigning its value to totalAmount.
I am using Objective-C/C, on Mac OS X Snow Leopard with Xcode 3.2.6.

You can't just choose your favorite data type and pass a pointer to it; the sysctl call expects a pointer to an integer, so that's what you have to provide. If you pass a pointer to a double, you get back a double whose bit pattern actually encodes an integer, and the result is gibberish.

sysctl() expects a pointer to the type specified in the manpage for the property you are querying. The parameter is declared as a void* so that the same generic interface can work with the different types expected by the various properties; that does not mean you can use any type you want. In the case of HW_PHYSMEM, the type is an integer, i.e. an int, not a long or anything else.
The only reason it works when you pass a long is that Intel Macs are little-endian, so the first four bytes of the value stored as a long are the same as the value stored as an int. You should, of course, not depend on this.
If you want the value as a double, read the int and convert.
You should take a good look at sysctl(3). Look in particular at the example with KERN_MAXPROC.
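For concreteness, here is a minimal sketch of that approach, reading HW_PHYSMEM into an int and converting afterwards (error handling borrowed from the question):
#include <sys/types.h>
#include <sys/sysctl.h>

int systemInfoNeeded[2] = {CTL_HW, HW_PHYSMEM};
int physmem = 0;                          // HW_PHYSMEM is documented as an int
size_t sizeOfBuffer = sizeof(physmem);
if (sysctl(systemInfoNeeded, 2, &physmem, &sizeOfBuffer, NULL, 0))
{
    NSLog(@"Total memory stat retrieval failed.\n");
    exit(EXIT_FAILURE);
}
double totalAmount = (double)physmem;     // now it's safe to work with a double
If the machine has more RAM than an int can represent, the manpage also documents HW_MEMSIZE, which returns a 64-bit integer.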

Related

Delphi double to Objective-C double

I have been looking for a few hours for a solution to this problem, but I don't get how it works. I have the hex string of a Delphi double value: 0x3FF0000000000000. That value should be 1.0. It is 8 bytes long; the first bit is the sign, the next 11 are the exponent, and the rest is the mantissa. So to me this hex value equals 0 x 10^1023. Maybe I am wrong somewhere, but it doesn't matter. The point is, I need to convert this hex value into an Objective-C double. If I do (double)strtoll(hexString.UTF8String, NULL, 16); I get 4.607... x 10^18. What am I doing wrong?
It seems that casting in this way ends up performing an implicit type conversion (it calls _ultod3 or _ltod3) that alters the underlying data. In fact, even this seems to do the same thing:
UINT64 temp1 = strtoull(hexString, NULL, 16);
double val = *&temp1;
But if you cast the uint64 pointer to a double*, it seems to suppress the compiler's desire to perform a conversion. Something like this should work:
UINT64 temp1 = strtoull(hexString, NULL, 16);
double val = *(double*)&temp1;
At least this works with the MS C++ compiler... I imagine the Objective-C compiler would cooperate as well.
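For what it's worth, a memcpy-based sketch performs the same bit reinterpretation without the pointer cast (the *(double*)&temp1 form technically violates C's strict-aliasing rule, even though it usually works in practice):
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

const char *hexString = "3FF0000000000000"; // the value from the question
uint64_t bits = strtoull(hexString, NULL, 16);
double val;
memcpy(&val, &bits, sizeof val);            // copy the raw bits; no numeric conversion
// val is now 1.0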

Objective-C / C constant scalar value declarations

I do not see how adding something like L to a number makes a difference. For example, if I took the number 23.54 and made it 23.54L, what difference does that actually make, and when should I use the L (or other suffixes like it) and when not? Doesn't Objective-C already know 23.54 is a long, so why would I write 23.54L in any case?
If someone could explain, that'd be great. Thanks!
Actually, when a number has a decimal point, like 23.54, the default interpretation is that it's a double, encoded as a 64-bit floating-point number. If you put an f at the end, 23.54f, it's encoded as a 32-bit floating-point number. Putting an L at the end declares the number to be a long double, which is stored in 16 bytes (on Intel Macs it's actually the 80-bit extended-precision format padded out to 128 bits).
In most cases, you don't need to add a suffix to a number because the compiler will determine the correct size based on context. For example, in the line
float x = 23.54;
the compiler will interpret 23.54 as a 64-bit double, but in the process of assigning that number to x, the compiler will automatically demote the number to a 32-bit float.
Here's some code to play around with:
NSLog( @"%lu %lu %lu", sizeof(typeof(25.43f)), sizeof(typeof(25.43)), sizeof(typeof(25.43L)) );
int x = 100;
float y = x / 200;
NSLog( @"%f", y );
y = x / 200.0;
NSLog( @"%f", y );
The first NSLog displays the number of bytes occupied by each kind of numeric constant. The second NSLog should print 0.000000, since the number 200 is interpreted as an integer, and integer division truncates to an integer. The last NSLog should print 0.500000, since 200.0 is interpreted as a double.
It's a way to force the compiler to treat a constant with a specific type.
23.54 is a double, 23.54L is a long double, and 23.54f is a float.
Use a suffix when you need to specify the type of a constant. Or, create a variable of a specific type: float foo = 23.54;. Most of the time you don't need a suffix.
This is all plain C.
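One place the suffix genuinely matters is when comparing a float against a literal; a small sketch:
float f = 0.1f;              // 0.1 rounded to float precision
NSLog( @"%d", f == 0.1 );    // 0: the literal is a double, and (double)f != 0.1
NSLog( @"%d", f == 0.1f );   // 1: both sides are the same float value
Without the f suffix, f is promoted to double for the comparison, and the two roundings of 0.1 don't match.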

Objective C - NSNumber storing and retrieving longLongValue always prefixes with 0xFFFFFFFF

I am trying to store a file size in bytes as an NSNumber. I am reading the file download size from the NSURLResponse, which gives me a long long value, and I then create an NSNumber object from that value and store it. When I go to retrieve that value later, it comes back with all the upper bytes set to FF.
For example, I read in the size as 2196772870 bytes (0x82f01806) and then store it into the NSNumber. When I get it back, I get -2098194426 bytes (0xffffffff82f01806). I tried doing a binary AND with 0x00000000FFFFFFFF before storing the value in NSNumber but it still comes back as negative. Code below:
long long bytesTotal = response.expectedContentLength;
NSLog(@"bytesTotal = %llx", bytesTotal);
[downloadInfo setFileTotalSize:[NSNumber numberWithInt:bytesTotal]];
//[downloadInfo setFileTotalSize:[NSNumber numberWithLongLong:bytesTotal]];
long long fileTotalSize = [[downloadInfo fileTotalSize] longLongValue];
NSLog(@"fileTotalSize = %llx", fileTotalSize);
Output:
bytesTotal = 82f01806
fileTotalSize = ffffffff82f01806
Any suggestions?
Edit: Completely forgot the setter for the downloadInfo object.
The problem is this line:
[downloadInfo setFileTotalSize:[NSNumber numberWithInt:bytesTotal]];
bytesTotal is not an int, it's a long long, so you should be using numberWithLongLong:, not numberWithInt:. Change it to:
[downloadInfo setFileTotalSize:[NSNumber numberWithLongLong:bytesTotal]];
The conversion causes the value to be sign-extended to 64 bits. The low 32 bits start with 8, so the sign bit is set and numberWithInt: stores a negative number; when that int is widened back to 64 bits, the sign bit is copied all the way through the upper 32 bits, producing the ffffffff prefix.
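You can watch the sign extension happen in isolation with the value from the question; a small sketch:
long long bytesTotal = 0x82f01806LL;   // low 32 bits have their top bit set
int truncated = (int)bytesTotal;       // what numberWithInt: effectively stores: -2098194426
long long widened = truncated;         // widening a signed int sign-extends
NSLog(@"%llx", widened);               // prints ffffffff82f01806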

Difference between Objective-C primitive numbers

What is the difference between the C primitive number types used in Objective-C? I know what they are and how to use them (somewhat), but I'm not sure what the capabilities and uses of each one are. Could anyone clear up which ones are best for some scenarios and not others?
int
float
double
long
short
What can I store with each one? I know that some can store more precise numbers and some can only store whole numbers. Say, for example, I wanted to store a latitude (possibly retrieved from a CLLocation object); which one should I use to avoid losing any data?
I also noticed that there are unsigned variants of each one. What does that mean and how is it different from a primitive number that is not unsigned?
Apple has some interesting documentation on this, however it doesn't fully satisfy my question.
Well, first off, types like int, float, double, long, and short are C primitives, not Objective-C ones. As you may be aware, Objective-C is sort of a superset of C. The Objective-C NSNumber is a wrapper class for all of these types.
So I'll answer your question with respect to these C primitives, and how Objective-C interprets them. Basically, each numeric type can be placed in one of two categories: Integer Types and Floating-Point Types.
Integer Types
short
int
long
long long
These can only store, well, integers (whole numbers), and are characterized by two traits: size and signedness.
Size means how much physical memory in the computer a type requires for storage, that is, how many bytes. Technically, the exact memory allocated for each type is implementation-dependent, but there are a few guarantees: (1) char will always be 1 byte, and (2) sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
Signedness simply means whether or not the type can represent negative values. A signed integer, or int, can represent a range of negative and positive numbers (traditionally -2,147,483,648 to 2,147,483,647), while an unsigned integer, or unsigned int, can represent the same count of numbers, but all non-negative (0 to 4,294,967,295).
Floating-Point Types
float
double
long double
These are used to store fractional values and are also categorized by size. Again, the only real guarantee you have is that sizeof(float) <= sizeof(double) <= sizeof(long double). Floating-point types are stored using a rather peculiar memory model that can be difficult to understand, and that I won't go into here, but there is an excellent guide here.
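If you're curious about the sizes on your own machine, a quick check (on 64-bit Mac OS X this typically prints 2 4 8 8 and then 4 8 16):
NSLog(@"%zu %zu %zu %zu", sizeof(short), sizeof(int), sizeof(long), sizeof(long long));
NSLog(@"%zu %zu %zu", sizeof(float), sizeof(double), sizeof(long double));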
There's a fantastic blog post about C primitives in an Objective-C context over at RyPress. Lots of intro CS textbooks also have good resources.
First, I would like to explain the difference between an unsigned int and an int. Say you have a very high number, and you write a loop iterating with an unsigned int:
for (unsigned int i = 0; i < N; i++)
{ ... }
If N is a number defined with #define, it may be higher than the maximum value an unsigned int can store. In that case i wraps around to zero instead of ever reaching N, and you go into an infinite loop; that's why I prefer to use an int for loops.
The same happens if by mistake you iterate with an int while comparing it to a long. If N is a long, you should iterate with a long; but if N is an int, you can still safely iterate with a long.
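To see the wraparound behavior from the first example directly, a two-line sketch (UINT_MAX comes from <limits.h>):
unsigned int u = UINT_MAX;
u++;                  // wraps around to 0; well-defined for unsigned types
NSLog(@"%u", u);      // prints 0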
Another pitfall occurs when using the shift operator with an integer constant and then assigning the result to an int or long. Maybe you also log sizeof(long), notice that it returns 8, decide you don't care about portability, and think you couldn't possibly lose precision here:
long i = 1 << 34;
But since 1 is an int, not a long, the shift overflows, and by the time the result is converted to a long the precision is already lost. Instead you should write:
long i = 1l << 34;
Newer compilers will warn you about this.
Taken from this question: Converting Long 64-bit Decimal to Binary.
About float and double, there is one thing to consider: they use a mantissa and an exponent to represent the number. It's something like:
value = 2^exponent * mantissa
So the higher the exponent, the less exact the number's representation becomes. A number may even be so large that its representation is inaccurate enough that, surprisingly, printing it gives back a different number:
float f = 9876543219124567;
NSLog(@"%.0f", f); // On my machine it prints 9876543585124352
If I use a double it prints 9876543219124568, and if I use a long double with the %.0Lf format it prints the correct value. Always be careful when using floating-point numbers; unexpected things may happen.
For example, two floating-point numbers may be almost equal, close enough that you expect them to compare equal, yet differ subtly enough that the equality comparison fails. This has been treated hundreds of times on Stack Overflow, so I will just post this link: What is the most effective way for float and double comparison?
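The usual workaround is a tolerance-based comparison instead of ==; a minimal sketch (fabs is from <math.h>, and picking a good epsilon depends on your data, see the linked question):
double a = 0.1 + 0.2;                // actually 0.30000000000000004...
double b = 0.3;
NSLog(@"%d", a == b);                // 0: exact equality fails
NSLog(@"%d", fabs(a - b) < 1e-9);    // 1: equal within a tolerance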

Dividing variables of type long long to get a percentage?

So I am trying to divide two variables of type long long, totalBytesWritten and totalBytesExpected.
Basically I am trying to figure out what percentage of my file upload is complete, and update the progress bar accordingly.
For example, I am sending 262144 of 1839948 bytes.
But when I do double progress = totalBytesWritten/totalBytesExpected it gives me some unexpected numbers: when I NSLog progress, I get only 0s and then finally 1.
Thanks!
You're performing an integer division; the result is cast to a double afterwards, but by then it's too late: you have already lost the fractional part. If you cast just one of the two operands to a double, the other will be promoted to a double as well, and you'll get a floating-point result:
NSLog(@"%f", (double)totalBytesWritten / totalBytesExpected);
long long is an integer type, so dividing two of them uses integer division. Since totalBytesWritten is less than totalBytesExpected until the upload finishes, the quotient truncates to 0 (and finally to 1). Convert the operands to double first, then divide.
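Putting it together with the numbers from the question, a sketch (the progress-bar update is hypothetical):
long long totalBytesWritten = 262144;
long long totalBytesExpected = 1839948;
double progress = (double)totalBytesWritten / (double)totalBytesExpected;
NSLog(@"%f", progress);                     // prints 0.142474
// progressBar.progress = (float)progress;  // e.g. updating a UIProgressView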