Delphi Double to Objective C double - objective-c

I have been looking for a few hours for a solution to this problem, but I don't get how it works. I have a hex string from a Delphi double value: 0x3FF0000000000000. That value should be 1.0. It is 8 bytes long: the first bit is the sign, the next 11 are the exponent, and the rest is the mantissa. So to me this hex value equals 0 x 10^(1023). Maybe I am wrong somewhere, but it doesn't matter. The point is, I need to convert this hex value into an Objective-C double. If I do: (double)strtoll(hexString.UTF8String, NULL, 16); I get 4.607... x 10^18. What am I doing wrong?

It seems that casting this way triggers an implicit type conversion (a call to _ultod3 or _ltod3) that alters the underlying data. In fact, even this does the same thing:
UINT64 temp1 = strtoull(hexString, NULL, 16);
double val = *&temp1;
But if you cast the uint64 pointer to a double* it seems to suppress the compiler's desire to perform a conversion. Something like this should work:
UINT64 temp1 = strtoull(hexString, NULL, 16);
double val = *(double*)&temp1;
At least this works with the MS C++ compiler... I imagine the Objective-C compiler would cooperate as well.

Related

Objective-C / C constant scalar value declarations

I do not see how adding something like L to a number makes a difference. For example, if I took the number 23.54 and made it 23.54L, what difference does that actually make, and when should I use the L (or other suffixes like that) and when not? Doesn't Objective-C already know 23.54 is a long, so why would I make it 23.54L in any case?
If someone could explain that'd be great thanks!
Actually, when a number has a decimal point like 23.54, the default interpretation is that it's a double, and it's encoded as a 64-bit floating-point number. If you put an f at the end, 23.54f, then it's encoded as a 32-bit floating-point number. Putting an L at the end declares that the number is a long double; on Intel Macs that's the 80-bit extended format stored in 16 bytes, though the exact encoding and size are platform-dependent.
In most cases, you don't need to add a suffix to a number because the compiler will determine the correct size based on context. For example, in the line
float x = 23.54;
the compiler will interpret 23.54 as a 64-bit double, but in the process of assigning that number to x, the compiler will automatically demote the number to a 32-bit float.
Here's some code to play around with
NSLog( @"%lu %lu %lu", sizeof(typeof(25.43f)), sizeof(typeof(25.43)), sizeof(typeof(25.43L)) );
int x = 100;
float y = x / 200;
NSLog( @"%f", y );
y = x / 200.0;
NSLog( @"%f", y );
The first NSLog displays the number of bytes for the various types of numeric constants. The second NSLog should print 0.000000 since the number 200 is interpreted as an integer, and integer division truncates toward zero. The last NSLog should print 0.500000 since 200.0 is interpreted as a double.
It's a way to force the compiler to treat a constant with a specific type.
23.45 is double, 23.54L is long double, and 23.54f is float.
Use a suffix when you need to specify the type of a constant. Or, create a variable of a specific type: float foo = 23.54;. Most of the time you don't need a suffix.
This is all plain C.

Difference between Objective-C primitive numbers

What is the difference between the C primitive number types in Objective-C? I know what they are and how to use them (somewhat), but I'm not sure what the capabilities and uses of each one are. Could anyone clear up which ones are best for some scenarios and not others?
int
float
double
long
short
What can I store with each one? I know that some can store more precise numbers and some can only store whole numbers. Say for example I wanted to store a latitude (possibly retrieved from a CLLocation object), which one should I use to avoid losing any data?
I also noticed that there are unsigned variants of each one. What does that mean and how is it different from a primitive number that is not unsigned?
Apple has some interesting documentation on this, however it doesn't fully satisfy my question.
Well, first off types like int, float, double, long, and short are C primitives, not Objective-C. As you may be aware, Objective-C is sort of a superset of C. The Objective-C NSNumber is a wrapper class for all of these types.
So I'll answer your question with respect to these C primitives, and how Objective-C interprets them. Basically, each numeric type can be placed in one of two categories: Integer Types and Floating-Point Types.
Integer Types
short
int
long
long long
These can only store, well, integers (whole numbers), and are characterized by two traits: size and signedness.
Size means how much physical memory in the computer a type requires for storage, that is, how many bytes. Technically, the exact memory allocated for each type is implementation-dependent, but there are a few guarantees: (1) char will always be 1 byte, and (2) sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
Signedness simply means whether or not the type can represent negative values. So a signed integer, or int, can represent a range of negative and positive numbers (traditionally −2,147,483,648 to 2,147,483,647), while an unsigned integer, or unsigned int, can represent the same number of values, but all non-negative (0 to 4,294,967,295).
Floating-Point Types
float
double
long double
These are used to store decimal values (i.e. fractions) and are also categorized by size. Again, the only real guarantee you have is that sizeof(float) <= sizeof(double) <= sizeof(long double). Floating-point types are stored using a rather peculiar memory model that can be difficult to understand, and that I won't go into, but there is an excellent guide here.
There's a fantastic blog post about C primitives in an Objective-C context over at RyPress. Lots of intro CS textbooks also have good resources.
First I would like to explain the difference between an unsigned int and an int. Say that you have a very high number, and that you write a loop iterating with an unsigned int:
for(unsigned int i=0; i< N; i++)
{ ... }
If N is a number defined with #define, it may be higher than the maximum value storable in an int, even though it fits in an unsigned int. If i overflows, it wraps around to zero and you end up in an infinite loop; that's why I prefer to use an int for loops.
The same happens if by mistake you iterate with an int, comparing it to a long. If N is a long you should iterate with a long, but if N is an int you can still safely iterate with a long.
Another pitfall may occur when using the shift operator with an integer constant and then assigning the result to an int or long. Maybe you also log sizeof(long), notice that it returns 8, and decide you don't care about portability, so you think you wouldn't lose precision here:
long i= 1 << 34;
But 1 is an int, not a long, so the shift overflows, and by the time the result is converted to a long the precision is already lost. Instead you should write:
long i = 1L << 34;
Newer compilers will warn you about this.
Taken from this question: Converting Long 64-bit Decimal to Binary.
About float and double, there is one thing to consider: they use a mantissa and an exponent to represent the number. It's something like:
value= 2^exponent * mantissa
So the higher the exponent, the wider the gaps between adjacent representable numbers. It may also happen that a number is so large that its representation is inaccurate enough that, surprisingly, printing it gives back a different number:
float f= 9876543219124567;
NSLog(@"%.0f", f); // On my machine it prints 9876543585124352
If I use a double it prints 9876543219124568, and if I use a long double with the %.0Lf format it prints the correct value. Always be careful when using floating-point numbers; unexpected things may happen.
For example, two floating-point numbers may have almost the same value: you expect them to compare equal, but there is a subtle difference, so the equality comparison fails. This has been treated hundreds of times on Stack Overflow, so I will just post this link: What is the most effective way for float and double comparison?.

Using Doubles in Unix?

I am calling sysctl() to retrieve mem stats and for the void* oldVal argument, I am passing in a pointer to a double. However instead of setting the double to the correct value, it just sets it to 0.00000
However, when I try doing the exact same thing with a long, it sets it to the correct stat. Why is the double being set to 0.00000 while long is being set to the correct stat?
int systemInfoNeeded[2] = {CTL_HW, HW_PHYSMEM};
size_t sizeOfBuffer = sizeof(totalAmount);
if (sysctl(systemInfoNeeded, 2, &totalAmount, &sizeOfBuffer, NULL, 0))
{
NSLog(@"Total memory stat retrieval failed.\n");
exit (EXIT_FAILURE);
}
totalAmount is a double. The second I change the type of totalAmount to long, it works perfectly. Is there any way I can get the double to work? I want to pass totalAmount in directly rather than passing a long and then assigning its value to totalAmount.
I am using Objective-C/C on Mac OS X Snow Leopard with Xcode 3.2.6.
You can't just choose your favorite data type and pass a pointer to it; this sysctl call expects a pointer to an integer, and that's what you have to provide. If you pass a pointer to a double, you get back a double whose bits encode the value as an integer -- the result is gibberish.
sysctl() accepts a pointer to the type specified, in the manpage, for the property you are querying. The parameter is declared as a void* so that the same generic interface can work with the different types expected by the various properties. That does not mean that you can use any type you want. In the case of HW_PHYSMEM, it is an integer, i.e. an int, not a long or anything else.
The only reason it works with a long is that Macs are little-endian, so the first four bytes of the long hold the same value as an int, but you should of course not depend on this.
If you want to read a double, convert the integer.
You should take a good look at sysctl(3). Look in particular at the example with KERN_MAXPROC.

Capacity of a uint64_t?

I have a little problem. Essentially, the code:
uint64_t myInteger = 98930 * 98930;
NSLog(@"%qu", myInteger);
...just gets it wrong. I get '1197210308' as the output, which is evidently incorrect. Why is this happening? It can't be that a uint64_t is too small, as they apparently go up to 18 and a half quintillion. Anyone have any idea?
Try casting the first number so the operation is made using that type:
uint64_t myInteger = (uint64_t)98930 * 98930;
98930 is an int, so you're multiplying two ints, which gives an int. You're then assigning to a uint64_t, but it's too late, you've already lost the precision. Make sure one of the operands is of type uint64_t, so the other will be coerced to that type, and the multiplication will be done as uint64_t multiplication.
I don't know much about Objective-C, but the same thing happens in C: the usual arithmetic conversions promote the operands no further than int, so you get an integer overflow. Try:
uint64_t myInteger = 98930LLU * 98930;

Why system can not find the method BigInteger.ToDouble?

I am using F# Interactive, and I have added the reference of FSharp.PowerPack.dll.
When I try to convert BigNum to double as the following code,
let n = 2N
let d = double n
error comes out that
"System.MissingMethodException: Method not found: 'Double System.Numerics.BigInteger.ToDouble(System.Numerics.BigInteger)'. at Microsoft.FSharp.Math.BigNum.ToDouble(BigNum n)"
What can I do if I want to make such conversions like "BigNum to int" and "BigNum to double"? Thanks very much.
What you've written will work in the F# standalone CTP.
This error occurs in VS2010, because the BigInteger type has been moved from the F# library to the .Net4.0 core library. I'm not sure whether this issue has something to do with having both the F# CTP and the VS2010 beta installed.
Until a better solution comes along, you could roll your own conversion like this:
let numToDouble (n:bignum) = double n.Numerator / double n.Denominator
To convert a bignum to an integer you could then think of something like this:
let numToInt (n:bignum) = int n.Numerator / int n.Denominator
But this is rather dangerous: it'll overflow quite easily. A better version of numToInt would be to convert to a double first and then convert to int:
let numToInt = int << numToDouble
Still, neither conversion is ideal for numerators or denominators of more than 308 digits, which overflow a double even when the fraction itself is small:
ex: 11^300 / 13^280 ~= 3.26337, but
> numToDouble (pown 11N 300 / pown 13N 280);;
val it : float = nan