Objective-C: check equality of float and int -- does 2.0000 == 2?

Simple question, I know there must be a correct way to do this. I have a CGFloat that increases in increments of 1/16. I want to determine when this value becomes a whole number.
For lack of knowing the right way I am coming up with ideas like having another variable to keep track of the number of iterations and mod 16 it.

While you generally can't count on fractional floating point numbers to sum up to whole numbers, your case is an exception to the rule, since 1/16 is 2^(-4) and this number can be represented exactly as a float:
- (void)testFloat
{
    float a = 0.0f;
    while (a != 2.0f) {
        a += 0.0625f;
    }
    NSLog(@"OK!");
}

It's better to do it the other way around, i.e. use an integer loop counter and convert it to a float:
for (int i = 0; i < 100; ++i)
{
    float x = (float)i / 16.0f;
    if (i % 16 == 0)
    {
        // x is a whole number here...
    }
}

Floating point arithmetic is inexact so you can't count on the value of your variable ever being exactly 2.0000.
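The usual workaround is to compare against a small tolerance instead of testing for exact equality. A minimal sketch (the epsilon value is illustrative and should be tuned to your value range):

#include <math.h>
#include <stdbool.h>

// Returns true when x is within a small tolerance of a whole number.
static bool isNearlyWhole(double x) {
    const double epsilon = 1e-9;
    return fabs(x - round(x)) < epsilon;
}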
"For lack of knowing the right way I am coming up with ideas like having another variable to keep track of the number of iterations andmod 16 it."
This is a wonderful idea.
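For completeness, a minimal sketch of that counter idea (the loop bound and names are illustrative):

int sixteenths = 0;   // count of 1/16 steps taken so far
for (int step = 0; step < 64; step++) {
    sixteenths++;
    CGFloat value = (CGFloat)sixteenths / 16.0;   // derive the float from the exact integer counter
    if (sixteenths % 16 == 0) {
        NSLog(@"whole number reached: %f", value);   // value is exactly 1.0, 2.0, 3.0, ...
    }
}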

Related

Get the most occurring number amongst several integers without using arrays

DISCLAIMER: A rather theoretical question here, not looking for a correct answer, just asking for some inspiration!
Consider this:
A function is called repetitively and returns integers based on seeds (the same seed returns the same integer). Your task is to find out which integer is returned most often. Easy enough, right?
But: You are not allowed to use arrays or fields to store return values of said function!
Example:
int mostFrequentNumber = 0;
int occurencesOfMostFrequentNumber = 0;
int iterations = 10000000;
for (int i = 0; i < iterations; i++)
{
    int result = getNumberFromSeed(i);
    int occurencesOfResult = magic();
    if (occurencesOfResult > occurencesOfMostFrequentNumber)
    {
        mostFrequentNumber = result;
        occurencesOfMostFrequentNumber = occurencesOfResult;
    }
}
If getNumberFromSeed() returns 2, 1, 5, 18, 5, 6 and 5, then mostFrequentNumber should be 5 and occurencesOfMostFrequentNumber should be 3, because 5 is returned 3 times.
I know this could easily be solved using a two-dimensional list to store results and occurrences. But imagine for a minute that you cannot use any kind of arrays, lists, dictionaries, etc. (maybe because the system running the code has so little memory that you cannot store enough integers at once, or because your prehistoric programming language has no concept of collections).
How would you find mostFrequentNumber and occurencesOfMostFrequentNumber? What does magic() do? (Of course you do not have to stick to the example code. Any ideas are welcome!)
EDIT: I should add that the integers returned by getNumberFromSeed() are calculated from a seed, so the same seed always returns the same integer (i.e. int result = getNumberFromSeed(5); would always assign the same value to result).
Make a hypothesis: assume that the distribution of the integers is, e.g., Normal.
Start simple. Have two variables:
- N, the number of elements read so far;
- M1, the average of said elements.
Initialize both variables to 0.
Every time you read a new value x, update N to be N + 1 and M1 to be M1 + (x - M1)/N.
At the end M1 will equal the average of all values. If the distribution was Normal this value will have a high frequency.
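As a rough illustration of that incremental update (a sketch; the demo array only feeds sample values and is not part of the no-arrays puzzle itself):

#include <stdio.h>

int main(void) {
    int N = 0;         // elements read so far
    double M1 = 0.0;   // running mean
    double xs[] = {2, 1, 5, 18, 5, 6, 5};   // stand-in for the values of getNumberFromSeed()
    for (int i = 0; i < 7; i++) {
        N = N + 1;
        M1 = M1 + (xs[i] - M1) / N;   // incremental mean update
    }
    printf("mean = %f\n", M1);   // 6.000000 for this input
    return 0;
}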
Now improve the above. Add a third variable:
- M2, the accumulated sum of (x - M1)^2 over all values of x read so far.
Initialize M2 to 0. Now also keep a small memory of, say, 10 elements or so. For every new value x that you read, update N and M1 as above and M2 as:
M2 := M2 + (x - M1)^2 * (N - 1) / N
(using the value of M1 from before its own update). At every step M2/N is the variance of the values read so far and sqrt(M2/N) their standard deviation.
As you proceed, remember the frequencies of only those values read so far whose distances to M1 are less than the standard deviation sqrt(M2/N). This requires a small additional array; however, it will be very short compared to the high number of iterations you will run. This modification lets you make a better guess at the most frequent value instead of simply answering with the mean (or average) as above.
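Here is one way to code the mean and variance updates together (a sketch; note that delta is computed from M1 before M1 is updated, which is what makes the M2 recurrence work):

#include <stdio.h>
#include <math.h>

int main(void) {
    int N = 0;
    double M1 = 0.0;   // running mean
    double M2 = 0.0;   // running sum of squared deviations
    double xs[] = {2, 1, 5, 18, 5, 6, 5};   // demo input only
    for (int i = 0; i < 7; i++) {
        double delta = xs[i] - M1;   // deviation from the mean BEFORE the update
        N = N + 1;
        M1 = M1 + delta / N;
        M2 = M2 + delta * delta * (N - 1) / N;
    }
    printf("variance = %f, stddev = %f\n", M2 / N, sqrt(M2 / N));
    return 0;
}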
UPDATE
Given that this is about insights for inspiration, there is plenty of room for considering and adapting the approach I've proposed to any particular situation. Here are some thoughts:
When I say assume that the distribution is Normal, you should think of it as: given that the problem has no exact solution, let's see if there is some qualitative information I can use to decide what kind of distribution the data would have. Given that the algorithm is intended to find the most frequent number, it should be fine to assume that the distribution is not uniform. Let's try Normal, LogNormal, etc. to see what can be found out (more on this below).
If the game completely disallows the use of any array, then fine, keep track of only, say, 10 numbers. This would allow you to count the occurrences of the 10 best candidates, which will give more confidence to your answer. In doing this, choose your candidates around the theoretically most likely value according to the distribution of your hypothesis.
You cannot use arrays, but perhaps you can read the sequence of numbers two or three times, not just once. In that case you can read it once to check whether your hypothesis about its distribution is good or bad. If you compute not just the variance but also the skewness and the kurtosis, you will have more elements to check your hypothesis. For instance, if the first reading indicates some bias, you could use a LogNormal distribution instead, etc.
Finally, in addition to providing the approximate answer, you would be able to use the information collected during the reading to estimate a confidence interval around your answer.
Alright, I found a decent solution myself:
int mostFrequentNumber = 0;
int occurencesOfMostFrequentNumber = 0;
int iterations = 10000000;
int maxNumber = -2147483647;
int minNumber = 2147483647;

// Step 1: find the largest and smallest number that _can_ occur
for (int i = 0; i < iterations; i++)
{
    int result = getNumberFromSeed(i);
    if (result > maxNumber)
    {
        maxNumber = result;
    }
    if (result < minNumber)
    {
        minNumber = result;
    }
}

// Step 2: for each possible number between minNumber and maxNumber, count occurences
for (int thisNumber = minNumber; thisNumber <= maxNumber; thisNumber++)
{
    int occurenceOfThisNumber = 0;
    for (int i = 0; i < iterations; i++)
    {
        int result = getNumberFromSeed(i);
        if (result == thisNumber)
        {
            occurenceOfThisNumber++;
        }
    }
    if (occurenceOfThisNumber > occurencesOfMostFrequentNumber)
    {
        occurencesOfMostFrequentNumber = occurenceOfThisNumber;
        mostFrequentNumber = thisNumber;
    }
}
I must admit, this may take a long time, depending on the smallest and largest values that can occur. But it will work without using arrays.
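One possible refinement (my suggestion, not part of the solution above): abandon a candidate as soon as it can no longer beat the best count found so far. A sketch of the inner loop with that early exit:

for (int i = 0; i < iterations; i++)
{
    if (getNumberFromSeed(i) == thisNumber)
    {
        occurenceOfThisNumber++;
    }
    // Even if every remaining seed matched, this candidate could no longer win: stop early.
    if (occurenceOfThisNumber + (iterations - i - 1) <= occurencesOfMostFrequentNumber)
    {
        break;
    }
}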

Understanding the output number of digits when dividing two floats [duplicate]

I am puzzled. I have no explanation to why this test passes when using the double data type but fails when using the float data type. Consider the following snippet of code.
float total = 0.00;
for (int i = 0; i < 100; i++)
    total += 0.01;
One would anticipate total to be 1.00; however, it is equal to 0.99. Why is this the case? I compiled with both GCC and Clang, and both compilers give the same result.
Try this:
#include <stdio.h>

int main() {
    float total = 0.00;
    int i;
    for (i = 0; i < 100; i++)
        total += 0.01;
    printf("%f\n", total);
    if (total == 1.0)
        puts("Precise");
    else
        puts("Rounded");
}
At least on most machines, you'll get an output of "Rounded". In other words, the result simply happens to be close enough that when it's printed out, it's rounded so it looks like exactly 1.00, but it really isn't. Change total to a double, and you'll still get the same.
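You can make the hidden error visible by printing more digits than %f shows by default. A minimal variation:

#include <stdio.h>

int main(void) {
    float total = 0.00;
    int i;
    for (i = 0; i < 100; i++)
        total += 0.01;
    printf("%.10f\n", total);   // prints something close to, but not exactly, 1.0000000000
    return 0;
}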
The value 0.01 in decimal is expressed in binary as the series: a1*(1/2) + a2*(1/2)^2 + a3*(1/2)^3 + ..., where each aN is a zero or a one.
I leave it to you to figure out the specific values of a1, a2, ... and how many fractional bits (aN) would be required. In some cases a decimal fraction cannot be represented by a finite series of (1/2)^n values.
For this series to sum to 0.01 in decimal requires more aN terms than the number of bits stored in a float (a full word of bits minus the bits used for the sign and exponent). In fact, no finite binary series sums to exactly 0.01, because 1/100 has factors of 5 in its denominator; a double simply has more bits, so its approximation of 0.01 is much closer.
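A quick way to see how close each type actually gets is the C99 %a conversion, which prints the exact stored binary value as a hexadecimal float:

#include <stdio.h>

int main(void) {
    float  f = 0.01f;
    double d = 0.01;
    printf("float : %.20f  (%a)\n", f, f);   // f is promoted to double when passed to printf
    printf("double: %.20f  (%a)\n", d, d);   // more bits, so a much closer approximation
    return 0;
}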

'while' Loop in Objective-C

The following program calculates and removes the remainder of a number, adds the total of the remainders calculated and displays them.
#import <Foundation/Foundation.h>

int main (int argc, char * argv[]) {
    @autoreleasepool {
        int number, remainder, total;
        NSLog(@"Enter your number");
        scanf("%i", &number);
        while (number != 0)
        {
            remainder = number % 10;
            total += remainder;
            number /= 10;
        }
        NSLog(@"%i", total);
    }
    return 0;
}
My questions are:
Why is the program set to continue as long as the number is not equal to 0? Shouldn't it continue as long as the remainder is not equal to 0?
At what point is the remainder discarded from the value of number? Why is there no number -= remainder statement before number /= 10?
[Bonus question: Does Objective-C get any easier to understand?]
The reason we continue while number != 0 instead of testing remainder is that remainder is 0 whenever the current digit is 0; if our input were exactly divisible by 10, the loop would stop too early and we wouldn't get the proper output (the sum of the base-10 digits).
The remainder is dropped off because of integer division. Remember, an integer cannot hold a fractional part, so when we divide 16 by 10, we don't get 1.6, we just get 1.
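To make the truncation concrete, here is a worked trace of the question's loop for an input of 16 (with total initialized, unlike the question's code):

int number = 16, remainder, total = 0;
while (number != 0)
{
    remainder = number % 10;   // pass 1: 6      pass 2: 1
    total += remainder;        // pass 1: 6      pass 2: 7
    number /= 10;              // pass 1: 1      pass 2: 0  (integer division discards the .6)
}
// The loop ends because number == 0; total == 7, the digit sum of 16.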
And yes, Objective-C does get easier over time (but, as a side note, this program uses absolutely zero features of Objective-C, so it's basically C with an NSLog call).
Note that the output isn't always what you would expect, however: in C / Objective-C (unlike languages such as D or JS), a variable is not automatically initialized to a set value (here, you assume total starts at 0). That is undefined behavior and could bite you down the road.
It checks to see if number is not equal to zero because remainder may very well never become zero. If we input 5, the first time through the loop remainder is set to 5 (because 5 % 10 = 5), and number goes to zero, because 5 / 10 = 0.5 and ints do not store floating point values, so the .5 gets truncated and number ends up equal to zero.
The remainder does not get removed from the value of number in this code. I think you may be confused about what the modulo operator does (see this explanation).
Bonus answer: learning a programming language is difficult at first, but very rewarding in the long run (if you stick with it). Each new language that you learn after your first will most likely be easier to learn too, because you will understand general programming constructs and practices. The best of luck on your endeavor!

CGFloat addition bug?

I was trying to add some CGFloat values recursively in my program. And I just realized in one particular scenario the total generated was incorrect. To ensure I had nothing wrong in my program logic, I created a simple example of that scenario (see below) and this printed the same wrong value.
CGFloat arr[3] = {34484000, 512085280, 143011440};
CGFloat sum = 0.0;
sum = arr[0] + arr[1] + arr[2];
NSLog(@"%f", sum);

int arr1[3] = {34484000, 512085280, 143011440};
int sum1 = 0;
sum1 = arr1[0] + arr1[1] + arr1[2];
NSLog(@"%d", sum1);
The first NSLog prints 689580736.000000, while the correct result is 689580720. However, the second NSLog prints the correct result. I am not sure if this is a bug or if I am doing something wrong.
Thanks,
Murali
CGFloat is a single-precision float on 32-bit targets such as iOS; it has only a 23-bit mantissa, i.e. around 6 to 7 significant decimal digits. Use a double-precision type if you need greater accuracy.
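The difference is easy to reproduce by forcing the two widths explicitly (a minimal sketch; on 64-bit targets CGFloat is already a double, which is why you only see this on 32-bit builds):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        float  f = 34484000.0f + 512085280.0f + 143011440.0f;
        double d = 34484000.0  + 512085280.0  + 143011440.0;
        NSLog(@"float : %f", f);   // 689580736.000000, the nearest value a float can hold
        NSLog(@"double: %f", d);   // 689580720.000000, exact
    }
    return 0;
}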
You should probably read David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic before proceeding much further with learning to program.

Objective-C: Strange calculation result

I am learning Objective-C and have completed a simple program and got an unexpected result. This program is just a multiplication table test... User inputs the number of iterations(test questions), then inputs answers. That after program displays the number of right and wrong answers, percentage and accepted/failed result.
#import <Foundation/Foundation.h>

int main (int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"Welcome to multiplication table test");
    int rightAnswers;   // the sum of the right answers
    int wrongAnswers;   // the sum of wrong answers
    int combinations;   // the number of combinations
    NSLog(@"Please, input the number of test combinations");
    scanf("%d", &combinations);
    for (int i = 0; i < combinations; ++i)
    {
        int firstInt = rand() % 8 + 1;
        int secondInt = rand() % 8 + 1;
        int result = firstInt * secondInt;
        int answer;
        NSLog(@"%d*%d=", firstInt, secondInt);
        scanf("%d", &answer);
        if (answer == result)
        {
            NSLog(@"Ok");
            rightAnswers++;
        }
        else
        {
            NSLog(@"Error");
            wrongAnswers++;
        }
    }
    int percent = (100 / combinations) * rightAnswers;
    NSLog(@"Combinations passed: %d", combinations);
    NSLog(@"Answered right: %d times", rightAnswers);
    NSLog(@"Answered wrong: %d times", wrongAnswers);
    NSLog(@"Completed %d percent", percent);
    if (percent >= 70)
        NSLog(@"accepted");
    else
        NSLog(@"failed");
    [pool drain];
    return 0;
}
Problem (strange result)
When I input 3 iterations and answer them all correctly, I do not get 100%; I get only 99%. I tried the same calculation on my iPhone calculator:
100 / 3 = 33.3333333... percent for one right answer (the program displays 33%; the digits after the decimal point get cut off)
33.3333333... * 3 = 100%
Can someone explain where I went wrong? Thanks.
This is a result of integer division. When you perform division between two integer types, the result is truncated towards 0 to form an integer. So integer division of 100 / 3 gives a result of 33, not 33.33...; when you multiply that by 3, you get 99.

To fix this, you can force floating-point division by changing 100 to 100.0. The .0 tells the compiler that it should use a floating-point type instead of an integer, forcing floating-point division, so no truncation occurs after the division. However, 33.33... cannot be represented exactly in binary, so you may still see slightly off results at times. Since you store the result as an integer, rounding down will still occur after the multiplication, which will make it more obvious. If you want to use an integer type, you should use the round function on the result:
int percent = round((100.0 / combinations) * rightAnswers);
This will cause the number to be rounded to the closest integer before it is converted to an integer type. Alternatively, you could use a floating-point storage type and display a certain number of decimal places:
float percent = (100.0 / combinations) * rightAnswers;
NSLog(@"Completed %.1f percent", percent); // display the result with 1 decimal place
Finally, since floating-point math will still round numbers that can't be represented in binary, I would suggest multiplying by rightAnswers before dividing by combinations. This increases the chance that the result is representable. For example, 100/3 = 33.33... is not representable and will be rounded, but if you multiply by 3 first you get 300/3 = 100, which is representable and will not be rounded.
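The same reordering also rescues the all-integer version (a sketch; safe as long as 100 * rightAnswers stays within int range):

// Multiply first, then divide: the truncation happens only once, at the end.
int percent = (100 * rightAnswers) / combinations;   // 3 right out of 3: 300 / 3 == 100 exactly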
Ints are integers. They can't represent an arbitrary real number like 1/3. Even floating-point numbers, which can represent reals, don't have enough precision to represent an infinitely repeating decimal like 100/3. You'll either need to use an arbitrary-precision library, use a library that includes rationals as a data type, or just store as much precision as you need and round from there (e.g. make your integer unit hundredths-of-a-percent instead of a single percentage point).
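For instance, the hundredths-of-a-percent idea could look like this (illustrative only):

// Track hundredths of a percent in an integer, then format with two decimal places.
int hundredths = (10000 * rightAnswers) / combinations;
NSLog(@"Completed %d.%02d percent", hundredths / 100, hundredths % 100);   // 1 right of 3 -> "Completed 33.33 percent"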
You might want to implement some sort of rounding, because 33.333... * 3 = 99.99999%. 1/3 is an infinite decimal, so some rounding has to occur somewhere (maybe at the 3rd decimal place) for the answer to come out correct. You could do something along the lines of if (num * 1000 % 10 >= 5) num += .01: multiplying by 1000 moves the decimal three places, and mod 10 then returns the 3rd digit (which could be zero). You also might want to round only once, at the end, after you sum everything up, to avoid accumulating errors.
EDIT: I didn't realize you were using integers; the numbers at the end threw me off. You might want to use double or float (keeping in mind that floats only carry about 6 to 7 significant digits, which is fine for what you want).
100/3 is 33. Integer mathematics here.