Integer division acting strangely - Objective-C

I'm using the following code in Xcode, to work out how much to initially zoom on a UIImage in a scroll view, but it keeps returning zero.
int heightRatio = self.scrollView.bounds.size.height / self.imageView.image.size.height;
The strange thing is if I do a subtraction it works, just not with division.
Xcode describes both height variables as a "GLuint", some graphics version of int I think.
I've NSLogged both numbers on the right, and neither are zero.
I've tried defining heightRatio as a GLuint as well, with no joy, and also converting to NSNumber but it's all just getting a bit messy.
Am I missing something?
Thanks for any feedback.

You should declare heightRatio as a double (and round later if you want to have an int).

Both self.scrollView.bounds.size.height and self.imageView.image.size.height are of type CGFloat. If you declare heightRatio as CGFloat as well, you would correctly capture division results between 0 and 1. If you need to make it an integer later, use ceil(), trunc(), or round() to round up, down, or to the nearest whole number.
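A minimal sketch of that fix, using the rounding calls mentioned above:
CGFloat heightRatio = self.scrollView.bounds.size.height / self.imageView.image.size.height;
// heightRatio now keeps fractional results such as 0.42 instead of truncating to 0
int roundedUp   = (int)ceil(heightRatio);   // round up
int roundedDown = (int)trunc(heightRatio);  // round down
int nearest     = (int)round(heightRatio);  // round to nearest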

Sorry, but double may not be enough; the only way to do the division without losing data is to use NSDecimalNumber. Here is some sample code that shows a progress percentage inside a loop:
NSDecimalNumber *Percentuale =
    [[[NSDecimalNumber decimalNumberWithString:[NSString stringWithFormat:@"%i", ClienteNr]]
        decimalNumberByDividingBy:[NSDecimalNumber decimalNumberWithString:
            [NSString stringWithFormat:@"%i", [[Globals ClientiDaMostrareMappa] count]]]]
        decimalNumberByMultiplyingBy:[NSDecimalNumber decimalNumberWithString:@"100"]];
[Globals.WaitAlert setTitle:[NSString stringWithFormat:@"Localizzazione Clienti!\nAttendere... %i%%",
    [Percentuale intValue]]];

Related

float imprecision in printf / appendFormat

I have a float value that I would like to show in a format string: if it corresponds to an integer, show the integer; if not, show it with one decimal place.
Like this:
3.1
3
2.9
2.8
For now I'm stuck; conceptually, I'd do something like this:
float myFloat = 3.1;
float mySecondFloat = 3;
[NSString stringWithFormat:@"%g %g", myFloat, mySecondFloat];
My question is:
1/ the "%g" format works in most cases, but sometimes the result is shown as "0.600001" when in reality it should be 0.6, because all I do is 0.7 - 0.1.
Is there a kind of cast of a float to 1 decimal, or maybe a bitwise operation, to get rid of the final imprecision, or some other way to make it work in 100% of cases?
Thanks for your answers.
If you need absolute precision when working with decimal numbers, you may consider using the NSDecimalNumber class.
Number and Value Programming Topics: Using Decimal Numbers
Otherwise, a format specifier such as %.1f will be OK.
You have to use this code:
float myFloat = 3.1;
float mySecondFloat = 3;
[NSString stringWithFormat:@"%.1f %.1f", myFloat, mySecondFloat];
EDIT:
If I really want to print the integer value of a float, I would do it this way (ugly code):
int intValue = myFloatNumber / 1;
NSString *string;
if (myFloatNumber == intValue)
{
    string = [NSString stringWithFormat:@"%.0f", myFloatNumber];
}
else
{
    string = [NSString stringWithFormat:@"%.1f", myFloatNumber];
}
Doing an integer division by 1, you automatically cast your float to an int.
Once you have the NSString *string you can concat it to your string.
Seeing the other answers, it seems there is no standard C format specifier to achieve this.
I went with Nicolas's answer (NSDecimalNumber), hence the accepted flag. It worked fine indeed, but in my (very simple) case it might be overkill. On second thought, if I had to do it again, I think I would use only an NSNumberFormatter on an NSNumber (created with the numberWithFloat: method).
Hope it helps someone.
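For reference, a minimal sketch of that NSNumberFormatter approach (the exact settings are just an illustration):
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterDecimalStyle;
formatter.minimumFractionDigits = 0;  // whole numbers print without ".0"
formatter.maximumFractionDigits = 1;  // everything else gets one decimal
NSString *s1 = [formatter stringFromNumber:[NSNumber numberWithFloat:3.1f]]; // "3.1"
NSString *s2 = [formatter stringFromNumber:[NSNumber numberWithFloat:3.0f]]; // "3"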

Loss of precision converting 'float' to NSNumber, back to 'float'

I seem to be encountering a strange issue in Objective-C converting a float to an NSNumber (wrapping it for convenience) and then converting it back to a float.
In a nutshell, a class of mine has a property red, which is a float from 0.0 to 1.0:
@property (nonatomic, assign) float red;
This object is comparing itself to a value that is loaded from disk, for synchronization purposes. (The file can change outside the application, so it checks periodically for file changes, loads the alternate version into memory, and does a comparison, merging differences.)
Here's an interesting snippet where the two values are compared:
if (localObject.red != remoteObject.red) {
    NSLog(@"Local red: %f Remote red: %f", localObject.red, remoteObject.red);
}
Here's what I see in the logs:
2011-10-28 21:07:02.356 MyApp[12826:aa63] Local red: 0.205837 Remote red: 0.205837
Weird. Right? How is this piece of code being executed?
The actual value as stored in the file:
...red="0.205837"...
Is converted to a float using:
currentObject.red = [[attributeDict valueForKey:@"red"] floatValue];
At another point in the code I was able to snag a screenshot from GDB. The value was printed by NSLog as follows (this is also the precision with which it appears in the file on disk):
2011-10-28 21:21:19.894 MyApp[13214:1c03] Local red: 0.707199 Remote red: 0.707199
But it appears in the debugger with more digits of precision (the screenshot is not reproduced here).
How is this level of precision being obtained at the property level, but not stored in the file, or printed properly in NSLog? And why does it seem to be varying?
If you are converting it to/from a string at any point try using %0.16f instead of %f (or whatever precision you want instead of .16).
For more info, see IEEE Std formatting.
Also, use objectForKey instead of valueForKey (valueForKey is not intended to be used on dictionaries):
currentObject.red = [[attributeDict objectForKey:@"red"] floatValue];
See this SO answer for a better explanation of objectForKey vs valueForKey:
Difference between objectForKey and valueForKey?
The problem you are experiencing is a problem with floating point. A floating point number doesn't exactly represent the number stored (except for some specific cases which don't matter here). The example in the link Craig posted is an excellent example of this.
In your code, when you write the value out to your file you write an approximation of what is stored in the floating point number. When you load it back, another approximation of it is stored in the float. However, these two numbers are unlikely to be equal.
The best solution is to use a fuzzy comparison of the two floating point numbers. I'm not an Objective-C programmer, so I don't know whether the language includes built-in functions to perform this comparison. However, this link provides a good set of examples of various ways to perform it.
You can also try the other posted solution of using a bigger precision when writing out to your file, but you will probably end up wasting space on extra precision you don't need. I'd personally recommend the fuzzy comparison as it is more bulletproof.
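Since I can't promise a built-in helper exists, here is a minimal sketch of such a fuzzy comparison (the 1e-5f tolerance is an arbitrary choice for illustration; pick one that matches the precision you actually write to disk):
#import <Foundation/Foundation.h>
#include <math.h>

// Treat two floats as equal when they differ by less than a small tolerance.
static BOOL floatsAreClose(float a, float b) {
    return fabsf(a - b) < 1e-5f;
}

// Usage in the comparison from the question:
// if (!floatsAreClose(localObject.red, remoteObject.red)) { /* merge the difference */ }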
You say that the "remote" value is "loaded from disk". I'm guessing that the representation on disk is not an IEEE float bit pattern, but rather some sort of character representation. So there are inevitable conversion errors going to and from that representation, given the way IEEE float works. You will not get an exact result: there are only about 6 decimal digits of precision in a float value, and it rarely maps to exactly 6 decimal digits; it's more like representing 1/3 in decimal, where there is no exact mapping.
Read this: http://floating-point-gui.de/

Objective-C: Parsing string to float, avoid extra decimals

I'm converting some values I receive as strings into floats, in order to get the sum of all of them.
The problem is when I convert to float. For example:
NSString *myString = @"13502.63";
float f = [myString floatValue];
NSLog(@"Converted %f", f);
The result of "f" is 13502.629883
This is OK for some values, but when I have to add up a large number of these values, the extra decimals make the result incorrect.
Could anybody help me, please?
Thanks
If you want accuracy you should not use float. Use NSDecimalNumber.
NSString *myString = @"13502.63";
NSDecimalNumber *number = [NSDecimalNumber decimalNumberWithString:myString];
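For the summing part of the question, a minimal sketch of accumulating those strings with NSDecimalNumber (the sample array is made up for illustration):
NSArray *amounts = [NSArray arrayWithObjects:@"13502.63", @"220.10", @"0.27", nil]; // hypothetical input
NSDecimalNumber *total = [NSDecimalNumber zero];
for (NSString *s in amounts) {
    total = [total decimalNumberByAdding:[NSDecimalNumber decimalNumberWithString:s]];
}
NSLog(@"Total: %@", total); // exact decimal sum (13723), no binary rounding error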
Unfortunately all binary floating point types, in any language, have this problem: they store the value in an underlying binary format that cannot represent every decimal fraction exactly.
Have you considered using NSDecimalNumber?
These will be much slower than a float, but if that is not a problem, then they are much more accurate for such calculations.
If you need speed for some reason, would a double or long-double be accurate enough?
Most decimal numbers have no exact representation as a float; that is the reason why "13502.63" is converted to 13502.629883, the closest float to the original number.
So, I don't think there is an easy solution with float. You should try NSDecimalNumber. I don't know about the performance, but it should give you an exact representation.

Why is float getting rounded off in this Objective C / Cocoa code

I am doing a simple:
float x = 151.185436;
printf("x=%f",x);
and the result is
x=151.185440
What's wrong here? I want to retain and print my original value of 151.185436.
Thanks
Amarsh
floats just aren't very accurate. Try double. And read this: http://docs.sun.com/source/806-3568/ncg_goldberg.html
A float can only hold 32 bits (4 bytes) of information about your number - it can't just store as many decimal places as you need it to. 151.18544 is as close to your value as the float could represent without running out of bits.
For the precision you want, you need to use a double instead of a float.
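To illustrate, a quick sketch of the same print with a double, which has enough significant digits to hold this value:
double x = 151.185436;
printf("x=%f\n", x);   // prints x=151.185436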
Floats are inaccurate; use doubles. Also, as you're using Objective-C and not straight C, it might be better to use Objective-C facilities for this:
NSNumber *myNumber = [NSNumber numberWithDouble:151.185436];
NSLog(@"myNumber = %@", myNumber);

"f" after number

What does the f after the numbers indicate? Is this from C or Objective-C? Is there any difference in not adding this to a constant number?
CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
Can you explain why I wouldn't just write:
CGRect frame = CGRectMake(0, 0, 320, 50);
CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
uses float constants. (The constant 0.0 usually declares a double in Objective-C; putting an f on the end - 0.0f - declares the constant as a (32-bit) float.)
CGRect frame = CGRectMake(0, 0, 320, 50);
uses ints which will be automatically converted to floats.
In this case, there's no (practical) difference between the two.
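A quick way to see this for yourself; a small sketch using the stock CoreGraphics comparison function:
// The int arguments are implicitly converted to CGFloat by CGRectMake,
// so both rects end up identical.
CGRect a = CGRectMake(0, 0, 320, 50);
CGRect b = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
NSLog(@"equal: %d", (int)CGRectEqualToRect(a, b)); // logs equal: 1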
When in doubt, check the assembler output. For instance, write a small, minimal snippet like this:
#import <Cocoa/Cocoa.h>

void test() {
    CGRect r = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
    NSLog(@"%f", r.size.width);
}
Then compile it to assembler with the -S option.
gcc -S test.m
Save the assembler output in the test.s file, remove the .0f from the constants, and repeat the compile command. Then diff the new test.s against the previous one. That should show whether there are any real differences. Too many people have a vision of what they think the compiler does, but at the end of the day one should know how to verify any theories.
Sometimes there is a difference.
float f = 0.3; /* OK, throw away bits to convert 0.3 from double to float */
assert ( f == 0.3 ); /* not OK, f is converted from float to double
and the value of 0.3 depends on how many bits you use to represent it. */
assert ( f == 0.3f ); /* OK, comparing two floats, although == is finicky. */
It tells the computer that this is a floating point number (I assume you are talking about C/C++ here). If there is no f after the number, it is considered a double or an integer (depending on whether there is a decimal point or not).
3.0f -> float
3.0 -> double
3 -> integer
The f that you are talking about is probably meant to tell the compiler that it's working with a float. When you omit the f, it is usually translated to a double.
Both are floating point numbers, but a float uses fewer bits (and is thus smaller and less precise) than a double.
A floating point literal in your source code is parsed as a double. Assigning it to a variable of type float will lose precision. A lot of precision: you're throwing away 7 significant digits. The "f" postfix lets you tell the compiler: "I know what I'm doing, this is intentional. Don't bug me about it."
The odds of producing a bug aren't that small, by the way. Many a program has keeled over on an ill-conceived floating point comparison or on assuming that 0.1 is exactly representable.
It's a C thing - floating point literals are double precision (double) by default. Adding an f suffix makes them single precision (float).
You can use ints to specify the values here and in this case it will make no difference, but using the correct type is a good habit to get into - consistency is a good thing in general, and if you need to change these values later you'll know at first glance what type they are.
From C. It means float literal constant. You can omit both "f" and ".0" and use ints in your example because of implicit conversion of ints to floats.
It is almost certainly from C and reflects the desire to use a 'float' rather than a 'double' type. It is similar to suffixes such as L on numbers to indicate they are long integers. You can just use integers and the compiler will auto convert as appropriate (for this specific scenario).
It usually tells the compiler that the value is a float, i.e. a floating-point number. This means it can store values written as integers, decimals, or exponentials, e.g. 1, 0.4, or 1.2e+22.