Comparing floats gives weird behaviour (> operator) - Objective-C

I'm doing a simple comparison between two floating-point numbers. When logging, however, I came across some unexpected behaviour with this rather basic code:
float balance = self.balance.floatValue;
float amount = self.amountTextField.text.floatValue;
if (amount > balance && self.amountTextField.text != nil) {
    allowTransfer = NO;
    NSLog(@"allowTransfer: %u", allowTransfer);
}
In my test case, the balance is 47.95. All goes well with the comparison until I try 47.96 as the amount: allowTransfer still isn't set, all the way up to 48.00.
Why is the compiler somehow not considering the decimals?

Your problem is that you are converting both numbers to int when comparing them, which truncates them and makes an exact comparison impossible; only the integer parts get compared. To solve it, just use float:
float balance = self.balance.floatValue;
float amount = self.amountTextField.text.floatValue;
Although, when dealing with money, you should not use double or float. The reason is that they do not support arbitrary precision and cannot represent exact decimal values (for instance, 0.1 + 0.2 as doubles is actually 0.30000000000000004).
Have a look at NSDecimalNumber for arbitrary precision numbers.
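A minimal sketch of that approach, reusing the property names from the question (the literal balance string and the allowTransfer flag are illustrative):
NSDecimalNumber *balance = [NSDecimalNumber decimalNumberWithString:@"47.95"];
NSDecimalNumber *amount = [NSDecimalNumber decimalNumberWithString:self.amountTextField.text];

// decimalNumberWithString: yields [NSDecimalNumber notANumber] for unparseable text,
// so guard against that before comparing.
if (![amount isEqualToNumber:[NSDecimalNumber notANumber]] &&
    [amount compare:balance] == NSOrderedDescending) {
    allowTransfer = NO; // amount > balance, compared exactly in decimal arithmetic
}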

Related

Properly subtracting float values

I am trying to create an array of values. These values should be "2.4,1.6,.8,0". I am subtracting .8 at every step.
This is how I am doing it (code snippet):
float mean = [[_scalesDictionary objectForKey:@"M1"] floatValue]; //3.2f
float sD = [[_scalesDictionary objectForKey:@"SD1"] floatValue]; //0.8f
nextRegion = mean;
hitWall = NO;
NSMutableArray *minusRegion = [NSMutableArray array];
while (!hitWall) {
    nextRegion -= sD;
    if (nextRegion < 0.0f) {
        nextRegion = 0.0f;
        hitWall = YES;
    }
    [minusRegion addObject:[NSNumber numberWithFloat:nextRegion]];
}
I am getting this output:
minusRegion = (
"2.4",
"1.6",
"0.8000001",
"1.192093e-07",
0
)
I do not want the incredibly small number between .8 and 0. Is there a standard way to truncate these values?
Neither 3.2 nor .8 is exactly representable as a 32-bit float. The representable number closest to 3.2 is 3.2000000476837158203125 (in hexadecimal floating-point, 0x1.99999ap+1). The representable number closest to .8 is 0.800000011920928955078125 (0x1.99999ap-1).
When 0.800000011920928955078125 is subtracted from 3.2000000476837158203125, the exact mathematical result is 2.400000035762786865234375 (0x1.3333338p+1). This result is also not exactly representable as a 32-bit float. (You can see this easily in the hexadecimal floating-point. A 32-bit float has a 24-bit significand. “1.3333338” has one bit in the “1”, 24 bits in the middle six digits, and another bit in the ”8”.) So the result is rounded to the nearest 32-bit float, which is 2.400000095367431640625 (0x1.333334p+1).
Subtracting 0.800000011920928955078125 from that yields 1.6000001430511474609375 (0x1.99999cp+0), which is exactly representable. (The “1” is one bit, the five nines are 20 bits, and the “c” has two significant bits. The low two bits in the “c” are trailing zeroes and may be neglected. So there are 23 significant bits.)
Subtracting 0.800000011920928955078125 from that yields 0.800000131130218505859375 (0x1.99999ep-1), which is also exactly representable.
Finally, subtracting 0.800000011920928955078125 from that yields 1.1920928955078125e-07 (0x1p-23).
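If you want to see these hexadecimal values yourself, C's %a format prints floating-point values in hex (a small sketch; trailing-zero handling in the output can vary slightly between C libraries):
#include <stdio.h>

int main(void) {
    float mean = 3.2f;
    float sD = 0.8f;
    printf("%a\n", mean);       // 0x1.99999ap+1 = 3.2000000476837158203125
    printf("%a\n", sD);         // 0x1.99999ap-1 = 0.800000011920928955078125
    printf("%a\n", mean - sD);  // 0x1.333334p+1 = 2.400000095367431640625
    return 0;
}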
The lesson to be learned here is that floating-point does not represent all numbers, and it rounds results to give you the closest numbers it can represent. When writing software that uses floating-point arithmetic, you must understand and allow for these rounding operations. One way to allow for this is to use numbers that you know can be represented. Others have suggested using integer arithmetic. Another option is to use mostly values that you know can be represented exactly in floating-point, which includes integers up to 2^24. So you could start with 32 and subtract 8, yielding 24, then 16, then 8, then 0. Those would be the intermediate values you use for loop control and continuing calculations, with no error. When you are ready to deliver results, you could divide by 10, producing numbers near 3.2, 2.4, 1.6, .8, and 0 (exactly). This way, your arithmetic introduces only one rounding error into each result, instead of accumulating rounding errors from iteration to iteration.
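A sketch of that idea applied to the question's 3.2/0.8 loop (the scale factor of 10 is an assumption that one decimal place is all you need):
int scaled = 32;      // 3.2 * 10, exactly representable as an integer
int step = 8;         // 0.8 * 10, exactly representable as an integer
NSMutableArray *minusRegion = [NSMutableArray array];
while (scaled > 0) {
    scaled -= step;   // exact integer arithmetic: 24, 16, 8, 0
    // Divide by 10 only when delivering the value, so each result
    // carries at most one rounding error.
    [minusRegion addObject:@(scaled / 10.0f)];
}
// minusRegion: (2.4, 1.6, 0.8, 0) -- no stray 1.19e-07 entry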
You're looking at good old floating-point rounding error. Fortunately, in your case it should be simple to deal with. Just clamp:
if (val < increment) {
    val = 0.0;
}
Although, as Eric Postpischil explained below:
Clamping in this way is a bad idea, because sometimes rounding will cause the iteration variable to be slightly less than the increment instead of slightly more, and this clamping will effectively skip an iteration. For example, if the initial value were 3.6f (instead of 3.2f), and the step were .9f (instead of .8f), then the values in each iteration would be slightly below 3.6, 2.7, 1.8, and .9. At that point, clamping converts the value slightly below .9 to zero, and an iteration is skipped.
Therefore it might be necessary to subtract a small amount when doing the comparison.
A better option which you should consider is doing your calculations with integers rather than floats, then converting later:
int increment = 8;
int val = 32;
while (val > 0) {
    val -= increment;                  // exact: 24, 16, 8, 0
    float new_float_val = val / 10.0;  // convert only when delivering a value
}
Another way to do this is to multiply the numbers you get by subtraction by 10, then convert to an integer, then divide that integer by 10.0.
You can do this easily with the floor function (floorf) like this:
float newValue = floorf(oldValue * 10) / 10;

Objective-C - How to increase the precision of a float number

Can someone please show me the way to set the precision of a float number to a desired length? Say I have a number 2504.6. As you see, the precision here is only one digit. I want to set it to six. I need this because I compare this value with the value obtained from
[txtInput.text floatValue]. Even if I enter 2504.6 into the text box, it picks up five more digits and becomes 2504.600098, and when I compare these two values they appear to be not equal.
Floats are approximations. The way floats are stored does not allow for arbitrary precision. Floats (and doubles) are designed to store very large (or small) values, but not precise decimal values.
If you need a very precise non-integer number, use an int (or long) and scale it. You could even write your own class to handle that.
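For example, a minimal sketch of that scaling idea, keeping one decimal digit as the question does (the factor of 10 and the lround call are illustrative assumptions):
#include <math.h>

// Store tenths as integers: 2504.6 becomes 25046.
long storedTenths = 25046;

// Scale the text-field value before truncating to an integer.
long inputTenths = lround([txtInput.text doubleValue] * 10.0);

if (storedTenths == inputTenths) {
    // exact integer comparison, no floating-point surprises
}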
They won't appear to be equal.
By the way, this question has been asked before:
Comparing float and double data types in objective C
Objective-C Float / Double precision
Make a float only show two decimal places
You can compare the numbers using NSDecimalNumber:
NSDecimalNumber *number = [NSDecimalNumber numberWithFloat:2504.6f];
NSDecimalNumber *input = [NSDecimalNumber decimalNumberWithString:txtInput.text];
NSComparisonResult result = [number compare:input];
if (result == NSOrderedAscending) {
    // number < input
} else if (result == NSOrderedDescending) {
    // number > input
} else {
    // number == input
}
Comparing two float variables A and B with the equality operator is not a very good idea, because float numbers have limited precision. The best way to compare floats is
fabs(A - B) < eps
where eps is some very small value, say 0.0001.
If you're operating with strings that represent the float values you can just compare strings and not the numbers.
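A small sketch of the epsilon comparison applied to the question's values (the tolerance is arbitrary, as noted above, and should be tuned to the magnitude of your data):
#include <math.h>

float stored = 2504.6f;
float typed = [txtInput.text floatValue]; // may come back as 2504.600098
float eps = 0.0001f;                      // tolerance; tune to your use case

if (fabsf(stored - typed) < eps) {
    // treat the two values as equal
}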

Objective-C division of two ints

I'm trying to produce a float by dividing two ints in my program. Here is what I'd expect:
1 / 120 = 0.00833
Here is the code I'm using:
float a = 1 / 120;
However it doesn't give me the result I'd expect. When I print it out I get the following:
inf
Do the following:
float a = 1./120.;
You need to specify that you want to use floating point math.
There's a few ways to do this:
If you really are interested in dividing two constants, you can specify that you want floating point math by making the first constant a float (or double). All it takes is a decimal point.
float a = 1./120;
You don't need to make the second constant a float, though it doesn't hurt anything.
Frankly, this is pretty easy to miss so I'd suggest adding a trailing zero and some spacing.
float a = 1.0 / 120;
If you really want to do the math with an integer variable, you can type cast it:
float a = (float)i/120;
float a = 1/120;
float b = 1.0/120;
float c = 1.0/120.0;
float d = 1.0f/120.0f;
NSLog(@"Value of A:%f B:%f C:%f D:%f", a, b, c, d);
Output: Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333
For float variable a: int / int yields an int, which you assign to a float and print, so you get 0.000000.
For float variable b: float / int yields a float, so 0.008333.
For float variable c: float / float yields a float, so 0.008333.
The last one uses genuine float constants. In the previous ones the constants are of type double: floating-point literals are doubles unless followed by an 'f' to specifically make them floats.
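A quick way to confirm the literal types (a sketch; the sizes are the usual 8 and 4 bytes, though strictly implementation-defined):
NSLog(@"%lu %lu", (unsigned long)sizeof(1.0/120.0), (unsigned long)sizeof(1.0f/120.0f));
// typically logs "8 4": plain literals are doubles, f-suffixed ones are floats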
In C (and therefore also in Objective-C), expressions are almost always evaluated without regard to the context in which they appear.
The expression 1 / 120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1 / 120 yields 0. The fact that the result is used to initialize a float object doesn't change the way 1 / 120 is evaluated.
This can be counterintuitive at times, especially if you're accustomed to the way calculators generally work (they usually store all results in floating-point).
As the other answers have said, to get a result close to 0.00833 (which can't be represented exactly, BTW), you need to do a floating-point division rather than an integer division, by making one or both of the operands floating-point. If one operand is floating-point and the other is an integer, the integer operand is converted to floating-point first; there is no direct floating-point by integer division operation.
Note that, as @0x8badf00d's comment says, the result should be 0. Something else must be going wrong for the printed result to be inf. If you can show us more code, preferably a small complete program, we can help figure that out.
(There are languages in which integer division yields a floating-point result. Even in those languages, the evaluation isn't necessarily affected by its context. Python version 3 is one such language; C, Objective-C, and Python version 2 are not.)

Objective-C: Strange calculation result

I am learning Objective-C and have completed a simple program, and got an unexpected result. The program is just a multiplication-table test: the user inputs the number of iterations (test questions), then inputs answers. After that, the program displays the number of right and wrong answers, the percentage, and an accepted/failed result.
#import <Foundation/Foundation.h>

int main (int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"Welcome to multiplication table test");

    int rightAnswers = 0; //the sum of the right answers
    int wrongAnswers = 0; //the sum of wrong answers
    int combinations;     //the number of combinations

    NSLog(@"Please, input the number of test combinations");
    scanf("%d", &combinations);

    for (int i = 0; i < combinations; ++i)
    {
        int firstInt = rand() % 8 + 1;
        int secondInt = rand() % 8 + 1;
        int result = firstInt * secondInt;
        int answer;

        NSLog(@"%d*%d=", firstInt, secondInt);
        scanf("%d", &answer);
        if (answer == result)
        {
            NSLog(@"Ok");
            rightAnswers++;
        }
        else
        {
            NSLog(@"Error");
            wrongAnswers++;
        }
    }

    int percent = (100 / combinations) * rightAnswers;

    NSLog(@"Combinations passed: %d", combinations);
    NSLog(@"Answered right: %d times", rightAnswers);
    NSLog(@"Answered wrong: %d times", wrongAnswers);
    NSLog(@"Completed %d percent", percent);
    if (percent >= 70)
        NSLog(@"accepted");
    else
        NSLog(@"failed");

    [pool drain];
    return 0;
}
Problem (strange result)
When I input 3 iterations and answer them all right, I am not getting 100%, only 99%. I tried the same calculation on my iPhone calculator:
100 / 3 = 33.3333333... percent for one right answer (the program displays 33%; the digits after the decimal point get cut off)
33.3333333... * 3 = 100%
Can someone explain to me where I went wrong? Thanks.
This is a result of integer division. When you perform division between two integer types, the result is automatically rounded towards 0 to form an integer. So integer division of 100 / 3 gives a result of 33, not 33.33.... When you multiply that by 3, you get 99.
To fix this, you can force floating-point division by changing 100 to 100.0. The .0 tells the compiler that it should use a floating-point type instead of an integer, forcing floating-point division, so rounding will not occur after the division. However, 33.33... cannot be represented exactly in binary, so you may still see incorrect results at times. Since you store the result as an integer, rounding down will still occur after the multiplication, which will make it more obvious. If you want to use an integer type, you should use the round function on the result:
int percent = round((100.0 / combinations) * rightAnswers);
This will cause the number to be rounded to the closest integer before converting it to an integer type. Alternately, you could use a floating point storage type and specify a certain number of decimal places to display:
float percent = (100.0 / combinations) * rightAnswers;
NSLog(#"Completed %.1f percent",percent); // Display result with 1 decimal place
Finally, since floating point math will still cause rounding for numbers that can't be represented in binary, I would suggest multiplying by rightAnswers before dividing by combinations. This will increase the chances that the result is representable. For example, 100/3=33.33... is not representable and will be rounded. If you multiply by 3 first, you get 300/3=100, which is representable and will not be rounded.
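Applied to the program above, that reordering is a one-line change (a sketch using the question's variable names):
// 100.0 * 3 = 300.0 exactly; 300.0 / 3 = 100.0 exactly.
float percent = (100.0 * rightAnswers) / combinations;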
Ints are integers. They can't represent an arbitrary real number like 1/3. Even floating-point numbers, which can represent reals, won't have enough precision to represent an infinitely repeating decimal like 100/3. You'll either need to use an arbitrary-precision library, use a library that includes rationals as a data type, or just store as much precision as you need and round from there (e.g. make your integer unit hundredths-of-a-percent instead of a single percentage point).
You might want to implement some sort of rounding, because 33.333... * 3 = 99.999...%. 1/3 is an infinite decimal, so some rounding has to occur (maybe at the 3rd decimal place) for the answer to come out correct. I would say something along the lines of if ((int)(num * 1000) % 10 >= 5) num += .01 — multiplying by 1000 moves the decimal three places, and the mod then returns the 3rd digit (which could be zero). You also might want to round only once at the end, after you sum everything up, to avoid accumulating errors.
EDIT: I didn't realize you were using integers; the numbers at the end threw me off. You might want to use double or float (floats are slightly inaccurate past 2 or 3 digits, which is OK for what you want).
100/3 is 33. Integer mathematics here.

NSTimeInterval to readable NSNumber

NSTimeInterval is a double (e.g. 169.12345666663).
How can I round up this double so that only 2 digits are left after the dot?
It would be very good if the result is a NSNumber.
If this is for display purposes, take a look at NSNumberFormatter.
If you really want to round the double in your calculations for some reason, you can use the standard C round() function.
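For the display case, a minimal NSNumberFormatter sketch (the interval value is just the example from the question):
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterDecimalStyle;
formatter.maximumFractionDigits = 2;

NSTimeInterval interval = 169.12345666663;
NSString *display = [formatter stringFromNumber:@(interval)];
// display is "169.12" (separators depend on the current locale)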
A NSDecimal can be rounded to a specified number of digits with NSDecimalRound().
double d = [[NSDate date] timeIntervalSince1970];
NSDecimal in = [[NSNumber numberWithDouble:d] decimalValue];
NSDecimal out;
NSDecimalRound( &out, &in, 2, NSRoundUp );
NSDecimalNumber *result = [NSDecimalNumber decimalNumberWithDecimal:out];
If you really want two digits left after the dot, multiply by 100, round using the round() function, and divide by 100 again. However, this will not guarantee that the result really has only two digits after the dot: by dividing again you may get a number that cannot be expressed exactly in floating point, so where you expect 0.1 you may in fact get 0.09999..., because 0.1 cannot really be expressed in binary floating-point notation.
If you just want to round it to two digits after the dot for display purposes, you can use NSNumberFormatter as has been suggested or just use:
printf("%.2f\n", yourTimeInterval);
NSLog(#"%.2f\n", yourTimeInterval);
or to get an NSString, you can also use the following, which is probably even faster than using a NumberFormatter (however, it won't be localized according to the user prefs):
NSString * intervalStr = nil;
char * intervalStrTmp = NULL;
asprintf(&intervalStrTmp, "%.2f", yourTimeInterval);
if (intervalStrTmp) {
intervalStr = [[NSString alloc] initWithUTF8String:intervalStrTmp];
free(intervalStrTmp);
}
In the vast majority of cases, rounding a number is something you should only do at display time. The properties of floating-point numbers (double or not) make it impossible to store certain numbers at fixed precision.
For information about formatting a number so it displays to two decimal places, see this other post.
Does this HumanReadableTimeInterval help? It returns an NSString, though.
Alternatively, you can round yourself by multiplying by 100, converting to an integer, and dividing by 100 again.
I would just use the ANSI C round() function.
You can always round the number using:
double round2dec(double a) { return round(a * 100) / 100; }
But chances are that the representation of the result as a double will not have only 2 decimals.
Now, if by the == sign you meant that the comparison of your two double numbers should be made only to the second decimal, here is what you can do:
fabs(round2dec(NSTimeInterval) - round2dec(double)) < std::numeric_limits<double>::epsilon()