Division and NSUInteger - objective-c

I have some calculation that involves negative values:
row = (stagePosition - col) / PHNumRow;
Say stagePosition is -7 and col is 1. stagePosition, col, and row are all NSInteger.
PHNumRow is 8.
If PHNumRow is NSInteger, I get the result I expect: -1.
But if PHNumRow is NSUInteger, the result is garbage.
Why should it matter if the divisor is unsigned or signed?
I'm not putting the result in an unsigned int.

Because of integer promotion (more precisely, the usual arithmetic conversions). When the right-hand side is evaluated, the operands are converted to the highest-ranking type in the expression, which, if PHNumRow is unsigned, is an unsigned integer type. The compiler does something similar to the following:
((NSUInteger)stagePosition - (NSUInteger)col) / PHNumRow;
Since stagePosition is negative, it wraps around to a huge unsigned value, and your computation goes boom!
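A minimal sketch of the effect, assuming PHNumRow is simply an 8-valued constant (the two local divisors below stand in for its signed and unsigned variants):

#import <Foundation/Foundation.h>

int main(void) {
    NSInteger stagePosition = -7;
    NSInteger col = 1;
    NSUInteger numRowUnsigned = 8;  // as when PHNumRow is NSUInteger
    NSInteger  numRowSigned   = 8;  // as when PHNumRow is NSInteger

    // (-7 - 1) is converted to a huge unsigned value before the division happens
    NSInteger rowBad  = (stagePosition - col) / numRowUnsigned;
    // Everything stays signed here: -8 / 8 == -1
    NSInteger rowGood = (stagePosition - col) / numRowSigned;

    NSLog(@"unsigned divisor: %ld", (long)rowBad);   // garbage, e.g. 2305843009213693951 on 64-bit
    NSLog(@"signed divisor:   %ld", (long)rowGood);  // -1
    return 0;
}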

Related

Min and Max values of Float and Double types in Kotlin

It's simple to find out what the exact min and max values for Int and Long integers are in Kotlin:
Signed 32 bit Integer:
Int.MIN_VALUE // -2147483648
Int.MAX_VALUE // 2147483647
Signed 64 bit Integer:
Long.MIN_VALUE // -9223372036854775808
Long.MAX_VALUE // 9223372036854775807
However, if I try to print the min and max values of the Float or Double types, I get "unbalanced" numbers (both values are expressed in scientific notation).
Signed 32 bit Floating Point Number:
Float.MIN_VALUE // 1.4e-45
Float.MAX_VALUE // 3.4028235e38
Signed 64 bit Floating Point Number:
Double.MIN_VALUE // 4.9e-324
Double.MAX_VALUE // 1.7976931348623157e308
Why are the min and max values of the Float and Double types so "unbalanced"?
The conceptual definition of MIN_VALUE is different for integers vs floating-point numbers.
Int.MIN_VALUE is the largest negative value.
Float.MIN_VALUE is the smallest positive value.
In other words, 1.4E-45 is 0.00[40 zeroes]0014, and not a very large negative number. The largest possible negative value is represented by -1 * Float.MAX_VALUE.
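(As an aside for the Objective-C readers of this thread: C's <float.h> makes the same choice, so the snippet below is only an analogy, not part of the Kotlin question. FLT_MIN is the smallest positive normalized float, not the most negative one.)

#import <Foundation/Foundation.h>
#include <float.h>

int main(void) {
    // FLT_MIN is the smallest positive normalized float, not the most negative float.
    // (Kotlin/Java's Float.MIN_VALUE is the even smaller subnormal 1.4e-45.)
    NSLog(@"FLT_MIN  = %g", FLT_MIN);    // 1.17549e-38
    NSLog(@"FLT_MAX  = %g", FLT_MAX);    // 3.40282e+38
    NSLog(@"-FLT_MAX = %g", -FLT_MAX);   // the most negative finite float
    return 0;
}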
Just to add to this discussion: I made the mistake of expecting Float.MIN_VALUE and Double.MIN_VALUE to correspond to Int.MIN_VALUE, i.e. to represent the most negative value for the data type.
For Float and Double you also have, besides MIN_VALUE and MAX_VALUE, the properties NEGATIVE_INFINITY and POSITIVE_INFINITY, which technically are the largest negative and positive values. I was trying to find a sentinel value for a var that had not been set yet; MIN_VALUE didn't work for me because I was dealing with both positive and negative numbers.

Will dividing an NSUInteger by 2 result in a whole number?

I am trying to do this in Objective-C:
self.nsarray.count/2
If the count is equal to 5, will the result be 5/2 = 2.5 or 5/2 = 2?
I am NSLogging the answer and it only shows me 2. I'm not sure whether that's the actual result, or whether it shows 2 only because I am forced to use the %u format to log the answer. Please also explain the 'why' of this result.
Division of two whole numbers in Objective-C (as in C) always produces a whole number; in your case the result is an NSUInteger, and 2 is the correct result here. To get a floating-point result, at least one of the operands must have a floating-point type, or be cast to one. Here are some options:
// Second operand is a floating-point literal, so the division is done in floating point
float result = self.array.count / 2.0;
// First operand is cast to float, so the division is done in floating point
float result2 = (float)self.array.count / 2; // or ((float)self.array.count) / 2 for more clarity
Note that casting the result doesn't help in your case: in (float)(5/2) the integer division happens first, so the result is a whole number of type float (2.0); you have only cast an integer result to float after the truncation.
Floats are usually formatted in NSLog with %f or %g.
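A quick check you could run (the array here is made up; any five-element NSArray behaves the same, and %lu / %f are the matching format specifiers):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSArray *array = @[@1, @2, @3, @4, @5];                    // count == 5
        NSUInteger half      = array.count / 2;                    // integer division: 2
        float      halfFloat = (float)array.count / 2;             // float division: 2.5
        NSLog(@"integer division: %lu", (unsigned long)half);      // 2
        NSLog(@"float division:   %f", halfFloat);                 // 2.500000
        NSLog(@"cast afterwards:  %f", (float)(array.count / 2));  // 2.000000, the cast happens too late
    }
    return 0;
}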

Objective-C division of two ints

I'm trying to produce a float by dividing two ints in my program. Here is what I'd expect:
1 / 120 = 0.00833
Here is the code I'm using:
float a = 1 / 120;
However it doesn't give me the result I'd expect. When I print it out I get the following:
inf
Do the following
float a = 1. / 120.;
You need to specify that you want to use floating point math.
There are a few ways to do this:
If you really are interested in dividing two constants, you can specify that you want floating point math by making the first constant a float (or double). All it takes is a decimal point.
float a = 1./120;
You don't need to make the second constant a float, though it doesn't hurt anything.
Frankly, this is pretty easy to miss so I'd suggest adding a trailing zero and some spacing.
float a = 1.0 / 120;
If you really want to do the math with an integer variable, you can type cast it:
float a = (float)i/120;
float a = 1/120;
float b = 1.0/120;
float c = 1.0/120.0;
float d = 1.0f/120.0f;
NSLog(#"Value of A:%f B:%f C:%f D:%f",a,b,c,d);
Output: Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333
For float variable a: int / int yields an int (0), which is then assigned to the float and printed, hence 0.000000.
For float variable b: floating-point / int yields a floating-point result, which is assigned to the float and printed: 0.008333.
For float variable c: floating-point / floating-point likewise yields 0.008333.
The last one does the arithmetic in float; in the previous ones the constants are doubles: floating-point constants are of type double unless the value is followed by an 'f' to specify a float rather than a double.
In C (and therefore also in Objective-C), expressions are almost always evaluated without regard to the context in which they appear.
The expression 1 / 120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1 / 120 yields 0. The fact that the result is used to initialize a float object doesn't change the way 1 / 120 is evaluated.
This can be counterintuitive at times, especially if you're accustomed to the way calculators generally work (they usually store all results in floating-point).
As the other answers have said, to get a result close to 0.00833 (which can't be represented exactly, BTW), you need to do a floating-point division rather than an integer division, by making one or both of the operands floating-point. If one operand is floating-point and the other is an integer, the integer operand is converted to floating-point first; there is no direct floating-point by integer division operation.
Note that, as @0x8badf00d's comment says, the result should be 0. Something else must be going wrong for the printed result to be inf. If you can show us more code, preferably a small complete program, we can help figure that out.
(There are languages in which integer division yields a floating-point result. Even in those languages, the evaluation isn't necessarily affected by its context. Python version 3 is one such language; C, Objective-C, and Python version 2 are not.)

Objective-C: Strange calculation result

I am learning Objective-C; I have completed a simple program and got an unexpected result. The program is just a multiplication table test: the user inputs the number of iterations (test questions), then inputs the answers. After that, the program displays the number of right and wrong answers, the percentage, and an accepted/failed result.
#import <Foundation/Foundation.h>
int main (int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"Welcome to multiplication table test");
    int rightAnswers = 0;  // the sum of the right answers
    int wrongAnswers = 0;  // the sum of wrong answers
    int combinations;      // the number of combinations
    NSLog(@"Please, input the number of test combinations");
    scanf("%d", &combinations);
    for (int i = 0; i < combinations; ++i)
    {
        int firstInt = rand() % 8 + 1;
        int secondInt = rand() % 8 + 1;
        int result = firstInt * secondInt;
        int answer;
        NSLog(@"%d*%d=", firstInt, secondInt);
        scanf("%d", &answer);
        if (answer == result)
        {
            NSLog(@"Ok");
            rightAnswers++;
        }
        else
        {
            NSLog(@"Error");
            wrongAnswers++;
        }
    }
    int percent = (100 / combinations) * rightAnswers;
    NSLog(@"Combinations passed: %d", combinations);
    NSLog(@"Answered right: %d times", rightAnswers);
    NSLog(@"Answered wrong: %d times", wrongAnswers);
    NSLog(@"Completed %d percent", percent);
    if (percent >= 70)
        NSLog(@"accepted");
    else
        NSLog(@"failed");
    [pool drain];
    return 0;
}
Problem (strange result)
When I input 3 iterations and answer them all correctly, I don't get 100%; I get only 99%. I tried the same calculation on my iPhone calculator:
100 / 3 = 33.3333333... percent for one right answer (the program displays 33%; the digits after the decimal point get cut off)
33.3333333... * 3 = 100%
Can someone explain where I went wrong? Thanks.
This is a result of integer division. When you perform division between two integer types, the result is automatically rounded towards 0 to form an integer. So integer division of (100 / 3) gives a result of 33, not 33.33...; when you multiply that by 3, you get 99.

To fix this, you can force floating-point division by changing 100 to 100.0. The .0 tells the compiler that it should use a floating-point type instead of an integer, forcing floating-point division, so rounding will not occur after the division. However, 33.33... cannot be represented exactly in binary, so you may still see incorrect results at times. Since you store the result as an integer, rounding down will still occur after the multiplication, which will make it more obvious.

If you want to use an integer type, you should use the round function on the result:
int percent = round((100.0 / combinations) * rightAnswers);
This will cause the number to be rounded to the closest integer before converting it to an integer type. Alternately, you could use a floating point storage type and specify a certain number of decimal places to display:
float percent = (100.0 / combinations) * rightAnswers;
NSLog(#"Completed %.1f percent",percent); // Display result with 1 decimal place
Finally, since floating point math will still cause rounding for numbers that can't be represented in binary, I would suggest multiplying by rightAnswers before dividing by combinations. This will increase the chances that the result is representable. For example, 100/3=33.33... is not representable and will be rounded. If you multiply by 3 first, you get 300/3=100, which is representable and will not be rounded.
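A hedged sketch of that reordering, reusing the question's variable names with the 3-right-out-of-3 case filled in:

int rightAnswers = 3, combinations = 3;                       // the case from the question
// Multiply first: 100.0 * 3 == 300.0, and 300.0 / 3 == 100.0 exactly, so nothing is lost
int percent = round((100.0 * rightAnswers) / combinations);   // round() comes from <math.h>
NSLog(@"Completed %d percent", percent);                      // prints 100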
Ints are integers. They can't represent an arbitrary real number like 1/3. Even floating-point numbers, which can represent reals, won't have enough precision to represent an infinitely repeating decimal like 100/3. You'll either need to use an arbitrary-precision library, use a library that includes rationals as a data type, or just store as much precision as you need and round from there (e.g. make your integer unit hundredths-of-a-percent instead of a single percentage point).
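For instance, a sketch of the hundredths-of-a-percent idea, staying entirely in integers (the names mirror the question's program):

int rightAnswers = 2, combinations = 3;
// Scale by 10000 so the integer unit is hundredths of a percent
int percentHundredths = (rightAnswers * 10000) / combinations;                           // 6666
NSLog(@"Completed %d.%02d percent", percentHundredths / 100, percentHundredths % 100);   // 66.66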
You might want to implement some sort of rounding, because 33.333... * 3 = 99.999...%. 100/3 is an infinitely repeating decimal, so some rounding needs to occur (maybe at the 3rd decimal place) for the answer to come out correct. I would do something like if ((int)(num * 1000) % 10 >= 5) num += .01, or something along those lines: multiplying by 1000 moves the decimal point 3 places, and the mod then returns the 3rd decimal digit (which could be zero). You also might want to round only at the end, once you sum everything up, to avoid accumulating errors.
EDIT: I didn't realize you were using integers; the numbers at the end threw me off. You might want to use double or float (floats lose accuracy after about 6 or 7 significant digits, which is fine for what you want).
100/3 is 33. Integer mathematics here.

Why does the CLR overflow an Int32.MaxValue -> Single -> Int32, where the JVM does not?

I ran into an unexpected result in round-tripping Int32.MaxValue into a System.Single:
Int32 i = Int32.MaxValue;
Single s = i;
Int32 c = (Int32)s;
Debug.WriteLine(i); // 2147483647
Debug.WriteLine(c); // -2147483648
I realized that it must be overflowing, since Single doesn't have enough bits in the significand to hold the Int32 value, and it rounds up. When I changed the conv.r4 to conv.r4.ovf in the IL, an OverflowException is thrown. Fair enough...
However, while I was investigating this issue, I compiled this code in Java and ran it, and got the following:
int i = Integer.MAX_VALUE;
float s = (float)i;
int c = (int)s;
System.out.println(i); // 2147483647
System.out.println(c); // 2147483647
I don't know much about the JVM, but I wonder how it does this. It seems much less surprising, but how does it retain the extra digit after rounding to 2.14748365E9? Does it keep some kind of internal representation around and then replace it when casting back to int? Or does it just round down to Integer.MAX_VALUE to avoid overflow?
This case is explicitly handled by §5.1.3 of the Java Language Specification:
A narrowing conversion of a floating-point number to an integral type T takes two steps:
In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:
If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.
Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases:
If T is long, and this integer value can be represented as a long, then the result of the first step is the long value V.
Otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.
Otherwise, one of the following two cases must be true:
The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.