Objective-C: Strange calculation result

I am learning Objective-C and have completed a simple program with an unexpected result. The program is just a multiplication table test: the user inputs the number of iterations (test questions), then inputs answers. After that, the program displays the number of right and wrong answers, the percentage, and an accepted/failed result.
#import <Foundation/Foundation.h>

int main (int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"Welcome to multiplication table test");
    int rightAnswers; //the sum of the right answers
    int wrongAnswers; //the sum of wrong answers
    int combinations; //the number of combinations
    NSLog(@"Please, input the number of test combinations");
    scanf("%d",&combinations);
    for(int i=0; i<combinations; ++i)
    {
        int firstInt=rand()%8+1;
        int secondInt=rand()%8+1;
        int result=firstInt*secondInt;
        int answer;
        NSLog(@"%d*%d=",firstInt,secondInt);
        scanf("%d",&answer);
        if(answer==result)
        {
            NSLog(@"Ok");
            rightAnswers++;
        }
        else
        {
            NSLog(@"Error");
            wrongAnswers++;
        }
    }
    int percent=(100/combinations)*rightAnswers;
    NSLog(@"Combinations passed: %d",combinations);
    NSLog(@"Answered right: %d times",rightAnswers);
    NSLog(@"Answered wrong: %d times",wrongAnswers);
    NSLog(@"Completed %d percent",percent);
    if(percent>=70)NSLog(@"accepted");
    else
        NSLog(@"failed");
    [pool drain];
    return 0;
}
Problem (strange result)
When I input 3 iterations and answer them all right, I am not getting 100%; I get only 99%. I tried the same calculation on my iPhone calculator:
100 / 3 = 33.3333333... percent for one right answer (the program displays 33%; the digits after the decimal point get cut off)
33.3333333... * 3 = 100%
Can someone explain where I went wrong? Thanks.

This is a result of integer division. When you perform division between two integer types, the result is automatically rounded towards 0 to form an integer. So, integer division of (100 / 3) gives a result of 33, not 33.33.... When you multiply that by 3, you get 99.

To fix this, you can force floating-point division by changing 100 to 100.0. The .0 tells the compiler that it should use a floating-point type instead of an integer, forcing floating-point division, so no truncation occurs after the division. However, 33.33... cannot be represented exactly in binary, so you may still see incorrect results at times; and since you store the result as an integer, truncation after the multiplication will make it more obvious. If you want to use an integer type, you should apply the round function (from math.h) to the result:
int percent = round((100.0 / combinations) * rightAnswers);
This rounds the number to the closest integer before converting it to an integer type. Alternatively, you could use a floating-point storage type and display a fixed number of decimal places:
float percent = (100.0 / combinations) * rightAnswers;
NSLog(@"Completed %.1f percent",percent); // Display result with 1 decimal place
Finally, since floating point math will still cause rounding for numbers that can't be represented in binary, I would suggest multiplying by rightAnswers before dividing by combinations. This will increase the chances that the result is representable. For example, 100/3=33.33... is not representable and will be rounded. If you multiply by 3 first, you get 300/3=100, which is representable and will not be rounded.

Ints are integers. They can't represent an arbitrary real number like 1/3. Even floating-point numbers, which can represent reals, won't have enough precision to represent an infinitely repeating decimal like 100/3. You'll either need to use an arbitrary-precision library, use a library that includes rationals as a data type, or just store as much precision as you need and round from there (e.g. make your integer unit hundredths-of-a-percent instead of a single percentage point).

You might want to implement some sort of rounding, because 33.333... * 3 = 99.999...%. 1/3 is an infinite decimal, so some rounding has to occur (maybe at the 3rd decimal place) for the answer to come out correct. I would say something along the lines of if (num*1000 % 10 >= 5) num += .01: multiplying by 1000 moves the decimal three places, and the mod then returns the 3rd digit (which could be zero). You also might want to round only once at the end, after you sum everything up, to avoid accumulating errors.
EDIT: I didn't realize you were using integers; the numbers at the end threw me off. You might want to use double or float (floats get inaccurate past 6 or 7 significant digits, which is OK for what you want).

100/3 is 33. Integer mathematics here.

Related

comparing floats gives weird behaviour ( > operator)

I'm doing a simple comparison between two floating points. When logging however, I came across some unexpected behaviour of this rather basic code:
float balance = self.balance.floatValue;
float amount = self.amountTextField.text.floatValue;
if(amount > balance && self.amountTextField.text != nil){
    allowTransfer = NO;
    NSLog(@"allowtransfer: %u", allowTransfer);
}
In my test case, I used a balance of 47.95.
All goes well with the comparison until I try 47.96 as a balance: allowTransfer still isn't set, all the way up to 48.00.
Why is the compiler somehow not considering the decimals?
Your problem is that you are casting both numbers to an int when comparing them, which truncates both numbers and makes an exact comparison impossible: only the integer parts get compared.
To solve it just use float:
float balance = self.balance.floatValue;
float amount = self.amountTextField.text.floatValue;
Although when dealing with money, you should not use double or float. The reason is that they do not support arbitrary precision and you cannot represent exact values (for instance, 0.1 + 0.2 as double is actually 0.30000000000000004).
Have a look at NSDecimalNumber for arbitrary precision numbers.

Understanding the output number of digits when dividing two floats [duplicate]

I am puzzled. I have no explanation to why this test passes when using the double data type but fails when using the float data type. Consider the following snippet of code.
float total = 0.00;
for ( int i = 0; i < 100; i++ ) total += 0.01;
One would anticipate total to be 1.00; however, it is equal to 0.99. Why is this the case? I compiled with both GCC and Clang; both compilers give the same result.
Try this:
#include <stdio.h>

int main(){
    float total = 0.00;
    int i;

    for (i = 0; i < 100; i++)
        total += 0.01;
    printf("%f\n", total);
    if (total == 1.0)
        puts("Precise");
    else
        puts("Rounded");
}
At least on most machines, you'll get an output of "Rounded". In other words, the result simply happens to be close enough that when it's printed out, it's rounded so it looks like exactly 1.00, but it really isn't. Change total to a double, and you'll still get the same.
The value for 0.01 in decimal is expressed as the series: a1*(1/2) + a2*(1/2)^2 + a3*(1/2)^3 + etc., where aN is a zero or one.
I leave it to you to figure out the specific values of a1, a2 and how many fractional bits (aN) are required. In some cases a decimal fraction cannot be represented by a finite series of (1/2)^n values.
For this series to sum to 0.01 in decimal requires that aN go beyond the number of bits stored in a float (a full word of bits minus the bits for the sign and exponent). Since double has more bits than float, 0.01 can be approximated much more closely, though still not exactly (you do the calculation).

'while' Loop in Objective-C

The following program calculates and removes the remainder of a number, adds the total of the remainders calculated and displays them.
#import <Foundation/Foundation.h>

int main (int argc, char * argv[]) {
    @autoreleasepool {
        int number, remainder, total;

        NSLog(@"Enter your number");
        scanf("%i", &number);

        while (number != 0)
        {
            remainder = number % 10;
            total += remainder;
            number /= 10;
        }

        NSLog(@"%i", total);
    }
    return 0;
}
My questions are:
Why is the program set to continue as long as the number is not equal to 0? Shouldn't it continue as the long as the remainder is not equal to 0?
At what point is the remainder discarded from the value of number? Why is there no number -= remainder statement before number /= 10?
[Bonus question: Does Objective-C get any easier to understand?]
The reason we continue until number != 0 instead of checking remainder is that if any digit of the input is 0 (for example, an input exactly divisible by 10), the loop would stop early and we wouldn't get the proper output (the sum of the base-10 digits).
The remainder is dropped off because of integer division. Remember, an integer cannot hold a decimal place, so when we divide 16 by 10, we don't get 1.6, we just get 1.
And yes, Objective-C does get easier over time (but, as a side-note, this uses absolutely 0 features of Objective-C, so it's basically C with a NSLog call).
Note that the output isn't always what you would expect, however: in C / Objective-C (unlike languages like D or JS), a variable is not automatically initialized to a set value (in this case, you assume total starts at 0). This is undefined behavior and could bite you down the road.
It checks to see if number is not equal to zero because remainder very well may never become zero. If we were to input 5, the first time through the loop remainder would be set to 5 (because 5 % 10 = 5), and number would go to zero because 5 / 10 = 0.5, and ints do not store floating-point values, so the .5 gets truncated and number ends up equal to zero.
The remainder does not get removed from the value of number in this code. I think that you may be confused about what the modulo operator does (see this explanation).
Bonus answer: learning a programming language is difficult at first, but very rewarding in the long run (if you stick with it). Each new language that you learn after your first will most likely be easier to learn too, because you will understand general programming constructs and practices. The best of luck on your endeavor!

Inprecision on floating point decimals?

If the size of a float is 4 bytes, shouldn't it be able to hold values from 8,388,607 down to -8,388,608, or somewhere around there? (I probably calculated that wrong.)
Why does f display the extra 15 at the end, when the value of f (0.1) is still between -8,388,608 and 8,388,607?
int main(int argc, const char * argv[])
{
    @autoreleasepool {
        float f = .1;
        printf("%lu", sizeof(float));
        printf("%.10f", f);
    }
    return 0;
}
2012-08-28 20:53:38.537 prog[841:403] 4
2012-08-28 20:53:38.539 prog[841:403] 0.1000000015
The values -8,388,608 ... 8,388,607 lead me to believe that you think floats use two's complement, which they don't. In any case, the range you have indicates 24 bits, not the 32 that you'd get from four bytes.
Floats in C use IEEE754 representation, which basically has three parts:
the sign.
the exponent (sort of a scale).
the fraction (actual digits of the number).
You basically get a certain amount of precision (such as 7 decimal digits) and the exponent dictates whether you use those for a number like 0.000000001234567 or 123456700000.
The reason you get those extra digits at the end of your 0.1 is because that number cannot be represented exactly in IEEE754. See this answer for a treatise explaining why that is the case.
Numbers are only representable exactly if they can be built by adding inverse powers of two (like 1/2, 1/16, 1/65536 and so on) within the number of bits of precision (ie, number of bits in the fraction), subject to scaling.
So, for example, a number like 0.5 is okay since it's 1/2. Similarly 0.8125 is okay since that can be built from 1/2, 1/4 and 1/16.
There is no way (at least within 23 bits of precision) that you can build 0.1 from inverse powers of two, so it gives you the nearest match.

Properly subtracting float values

I am trying to create an array of values. These values should be "2.4,1.6,.8,0". I am subtracting .8 at every step.
This is how I am doing it (code snippet):
float mean = [[_scalesDictionary objectForKey:@"M1"] floatValue]; //3.2f
float sD = [[_scalesDictionary objectForKey:@"SD1"] floatValue]; //0.8f

nextRegion = mean;
hitWall = NO;
NSMutableArray *minusRegion = [NSMutableArray array];
while (!hitWall) {
    nextRegion -= sD;
    if(nextRegion<0.0f){
        nextRegion = 0.0f;
        hitWall = YES;
    }
    [minusRegion addObject:[NSNumber numberWithFloat:nextRegion]];
}
I am getting this output:
minusRegion = (
"2.4",
"1.6",
"0.8000001",
"1.192093e-07",
0
)
I do not want the incredibly small number between .8 and 0. Is there a standard way to truncate these values?
Neither 3.2 nor .8 is exactly representable as a 32-bit float. The representable number closest to 3.2 is 3.2000000476837158203125 (in hexadecimal floating-point, 0x1.99999ap+1). The representable number closest to .8 is 0.800000011920928955078125 (0x1.99999ap-1).
When 0.800000011920928955078125 is subtracted from 3.2000000476837158203125, the exact mathematical result is 2.400000035762786865234375 (0x1.3333338p+1). This result is also not exactly representable as a 32-bit float. (You can see this easily in the hexadecimal floating-point. A 32-bit float has a 24-bit significand. “1.3333338” has one bit in the “1”, 24 bits in the middle six digits, and another bit in the ”8”.) So the result is rounded to the nearest 32-bit float, which is 2.400000095367431640625 (0x1.333334p+1).
Subtracting 0.800000011920928955078125 from that yields 1.6000001430511474609375 (0x1.99999cp+0), which is exactly representable. (The “1” is one bit, the five nines are 20 bits, and the “c” has two significant bits. The low two bits in the “c” are trailing zeroes and may be neglected. So there are 23 significant bits.)
Subtracting 0.800000011920928955078125 from that yields 0.800000131130218505859375 (0x1.99999ep-1), which is also exactly representable.
Finally, subtracting 0.800000011920928955078125 from that yields 1.1920928955078125e-07 (0x1p-23).
The lesson to be learned here is that floating-point does not represent all numbers, and it rounds results to give you the closest numbers it can represent. When writing software to use floating-point arithmetic, you must understand and allow for these rounding operations. One way to allow for this is to use numbers that you know can be represented. Others have suggested using integer arithmetic. Another option is to use mostly values that you know can be represented exactly in floating-point, which includes integers up to 2^24. So you could start with 32 and subtract 8, yielding 24, then 16, then 8, then 0. Those would be the intermediate values you use for loop control and continuing calculations with no error. When you are ready to deliver results, then you could divide by 10, producing numbers near 3.2, 2.4, 1.6, .8, and 0 (exactly). This way, your arithmetic would introduce only one rounding error into each result, instead of accumulating rounding errors from iteration to iteration.
You're looking at good old floating-point rounding error. Fortunately, in your case it should be simple to deal with. Just clamp:
if( val < increment ){
val = 0.0;
}
Although, as Eric Postpischil explained below:
Clamping in this way is a bad idea, because sometimes rounding will cause the iteration variable to be slightly less than the increment instead of slightly more, and this clamping will effectively skip an iteration. For example, if the initial value were 3.6f (instead of 3.2f), and the step were .9f (instead of .8f), then the values in each iteration would be slightly below 3.6, 2.7, 1.8, and .9. At that point, clamping converts the value slightly below .9 to zero, and an iteration is skipped.
Therefore it might be necessary to subtract a small amount when doing the comparison.
A better option which you should consider is doing your calculations with integers rather than floats, then converting later.
int increment = 8;
int val = 32;
while( val > 0 ){
    val -= increment;
    float new_float_val = val / 10.0;
}
Another way to do this is to multiply the numbers you get by subtraction by 10, convert to an integer, then divide that integer by 10.0.
You can do this easily with the floor function (floorf) like this:
float newValue = floorf(oldValue*10)/10;