Integer multiplication overflow - Objective-C

I have a problem multiplying two integers in Objective-C. When I multiply 500 by 20000000 and store the result in a long variable, printing it gives 1410065408.
int x = 500;
long myLongValue = x * 20000000;
NSLog(#" %lu",myLongValue);
I think the problem is integer overflow, but I couldn't find the exact reason. I want to obtain the true result, 10000000000, from multiplying these integers. Is that possible?

You have an overflow here: an unsigned int is stored in 32 bits, so its maximum value is 2^32 - 1 = 4294967295. A signed int also occupies 32 bits, but one of them holds the sign, leaving 31 bits for the magnitude, so its maximum is 2^31 - 1 = 2147483647.
As you can see, 10000000000 needs more than 32 bits, so it cannot be represented in an int. Guess which number you get if you keep only the low 32 bits? 10000000000 mod 2^32 = 1410065408, exactly the value you printed. ;)
SOLUTION (EDITED): Even if you assign the result to a long, the multiplication itself is performed in int arithmetic, so it overflows before the assignment ever happens; cast the operands to a wider type before multiplying. Also, as rmaddy noted in the comments, a plain long doesn't help on 32-bit architectures, where long is only 4 bytes. Use long long instead, or explicit fixed-width types such as int32_t and int64_t.
int x = 500;
long long myLongValue = (long long)x * 20000000;
NSLog(#" %llu",myLongValue); // logs correctly 10000000000
You can also declare your x directly as a long long variable.
FYI: Swift is not as tolerant as Objective-C; the equivalent code traps at runtime because the result is out of range:
let a:Int32 = 20000000
let b:Int32 = 500
let result = a*b // CRASH
let result2 = Int64(a)*Int64(b) // OK
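If you'd rather detect the overflow than just widen the types, recent compilers offer checked-arithmetic builtins; a minimal C sketch, assuming Clang or GCC 5+ where __builtin_mul_overflow is available:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 500;
    int64_t product;
    // The builtin returns true if the mathematically exact result does not
    // fit in the destination; here it fits, because product is 64-bit.
    if (__builtin_mul_overflow(x, 20000000, &product)) {
        printf("overflow!\n");
    } else {
        printf("%lld\n", (long long)product); // 10000000000
    }
    return 0;
}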

Related

How to split a Long into two Ints in Kotlin?

How can I split a Long (64-bit) into two Ints (32-bit) in Kotlin?
I've tried something like this, but it doesn't seem to work:
val id = Integer.MAX_VALUE.toLong() + 2000
val a = id.toInt()
val b = (id shr 32).toInt()
Everything is working fine. Note that Integer.MAX_VALUE is 0x7FFFFFFF; when you add 2000 it becomes 0x800007CF, which still fits in 32 bits but overflows into the negative range when interpreted as a 32-bit signed integer. Therefore a is a negative Int and b is 0.
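For comparison, here is the same split and re-assembly sketched in C (the dominant language on this page); the key detail when recombining is to zero-extend the low half so its sign bit does not smear across the high word:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int64_t id = (int64_t)INT32_MAX + 2000;  // 0x800007CF
    int32_t a = (int32_t)id;                 // low 32 bits: negative here
    int32_t b = (int32_t)(id >> 32);         // high 32 bits: 0 here
    // Recombine: cast a through uint32_t to zero-extend instead of sign-extend.
    int64_t back = ((int64_t)b << 32) | (uint32_t)a;
    printf("id=%lld a=%d b=%d back=%lld\n", (long long)id, a, b, (long long)back);
    return 0;
}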

Get the most occurring number amongst several integers without using arrays

DISCLAIMER: Rather a theoretical question here; not looking for the one correct answer, just asking for some inspiration!
Consider this:
A function is called repetitively and returns integers based on seeds (the same seed returns the same integer). Your task is to find out which integer is returned most often. Easy enough, right?
But: You are not allowed to use arrays or fields to store return values of said function!
Example:
int mostFrequentNumber = 0;
int occurencesOfMostFrequentNumber = 0;
int iterations = 10000000;
for (int i = 0; i < iterations; i++)
{
    int result = getNumberFromSeed(i);
    int occurencesOfResult = magic();
    if (occurencesOfResult > occurencesOfMostFrequentNumber)
    {
        mostFrequentNumber = result;
        occurencesOfMostFrequentNumber = occurencesOfResult;
    }
}
If getNumberFromSeed() returns 2, 1, 5, 18, 5, 6 and 5, then mostFrequentNumber should be 5 and occurencesOfMostFrequentNumber should be 3, because 5 is returned 3 times.
I know this could easily be solved using a two-dimensional list to store results and occurrences. But imagine for a minute that you cannot use any kind of arrays, lists, dictionaries etc. (maybe because the system running the code has so little memory that you cannot store enough integers at once, or because your prehistoric programming language has no concept of collections).
How would you find mostFrequentNumber and occurencesOfMostFrequentNumber? What does magic() do? (Of course, you do not have to stick to the example code. Any ideas are welcome!)
EDIT: I should add that the integers returned by getNumberFromSeed() are calculated from a seed, so the same seed always returns the same integer (i.e. int result = getNumberFromSeed(5); would always assign the same value to result).
Make a hypothesis: assume that the distribution of the integers is, e.g., Normal.
Start simple. Keep two variables:
- N, the number of elements read so far
- M1, the running average of those elements.
Initialize both variables to 0. Every time you read a new value x, update N to N + 1 and M1 to M1 + (x - M1)/N.
At the end M1 will equal the average of all values. If the distribution really is Normal, values near M1 will have a high frequency.
Now improve on the above. Add a third variable:
- M2, which accumulates the squared deviations (x - M1)^2 of the values of x read so far.
Initialize M2 to 0, and keep a small memory of, say, 10 elements or so. For every new value x that you read, update N as above, then update M2 while M1 still holds the old average, and only then update M1:
M2 := M2 + (x - M1)^2 * (N - 1) / N
At every step M2 / N is the variance of the values seen so far, and sqrt(M2 / N) their standard deviation.
As you proceed, remember the frequencies of only those values read so far whose distance to M1 is less than the standard deviation. This does require a small additional array; however, it will be very short compared to the huge number of iterations you will run. This modification lets you make a much better guess at the most frequent value, instead of simply answering the mean (or average) as above.
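A minimal, self-contained C sketch of the running-statistics part (this is Welford's online algorithm; the getNumberFromSeed stub is only a stand-in, since the question leaves the real function unspecified):

#include <math.h>
#include <stdio.h>

// Stand-in for the question's function: any deterministic mapping will do.
static int getNumberFromSeed(int seed) { return seed % 97; }

int main(void) {
    long n = 0;
    double m1 = 0.0; // running mean
    double m2 = 0.0; // running sum of squared deviations from the mean
    int iterations = 10000000;
    for (int i = 0; i < iterations; i++) {
        double x = getNumberFromSeed(i);
        n++;
        double delta = x - m1;              // deviation from the OLD mean
        m2 += delta * delta * (n - 1) / n;
        m1 += delta / n;                    // only now update the mean
    }
    printf("mean %.3f, stddev %.3f\n", m1, sqrt(m2 / n));
    return 0;
}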
UPDATE
Given that this is about insights for inspiration, there is plenty of room for adapting the approach I've proposed to any particular situation. Here are some thoughts.
When I say "assume that the distribution is Normal", you should read it as: given that the problem has no exact solution, let's see whether there is some qualitative information that can be used to decide what kind of distribution the data would have. Given that the algorithm is intended to find the most frequent number, it is fair to assume that the distribution is not uniform. Let's try Normal, LogNormal, etc. and see what can be found out (more on this below).
If the game completely disallows the use of any array, then fine: keep track of only, say, 10 numbers. This lets you count the occurrences of the 10 best candidates, which gives more confidence in your answer. In doing this, choose your candidates around the theoretically most likely value according to the distribution you hypothesized.
You cannot use arrays, but perhaps you can read the sequence of numbers two or three times, not just once. In that case you can use the first pass to check whether your hypothesis about the distribution is good or bad. For instance, if you compute not just the variance but also the skewness and the kurtosis, you have more evidence with which to check the hypothesis; if the first pass indicates some bias, you could switch to a LogNormal distribution, etc.
Finally, in addition to providing the approximate answer, you can use the information collected during reading to estimate a confidence interval around it.
Alright, I found a decent solution myself:
int mostFrequentNumber = 0;
int occurencesOfMostFrequentNumber = 0;
int iterations = 10000000;
int maxNumber = -2147483647;
int minNumber = 2147483647;

// Step 1: find the largest and smallest number that _can_ occur
for (int i = 0; i < iterations; i++)
{
    int result = getNumberFromSeed(i);
    if (result > maxNumber)
    {
        maxNumber = result;
    }
    if (result < minNumber)
    {
        minNumber = result;
    }
}

// Step 2: for each possible number between minNumber and maxNumber, count occurrences
for (int thisNumber = minNumber; thisNumber <= maxNumber; thisNumber++)
{
    int occurenceOfThisNumber = 0;
    for (int i = 0; i < iterations; i++)
    {
        int result = getNumberFromSeed(i);
        if (result == thisNumber)
        {
            occurenceOfThisNumber++;
        }
    }
    if (occurenceOfThisNumber > occurencesOfMostFrequentNumber)
    {
        occurencesOfMostFrequentNumber = occurenceOfThisNumber;
        mostFrequentNumber = thisNumber;
    }
}
I must admit this may take a very long time, since the counting pass makes (maxNumber - minNumber + 1) * iterations calls, so the runtime depends on the smallest and largest possible values. But it works without using arrays.

How to combine 2 different integers to create a single float (or a double)

I'm Cesare from Italy (please excuse my English); this is my first question posted on StackOverflow and I'm pretty new to Objective-C... I hope I won't make a mess of my first try.
I would like to "combine" two integers that I already have to create a new float (or a double).
By "combine", I mean that I'd like to have the first int before the point and the second int after the point, I'm not trying to convert from int to float. Maybe an example could explain better what I'm trying to do:
First int: 7
Second int: 92
The float I'm trying to get: 7.92
I looked for a previous question like mine but I haven't found anything, maybe because what I'm trying to do is pretty dumb (I have a UIPickerView with 2 components, each containing hundreds of integers, and I'm trying to create a float or double variable that has the selection of the first component before the point and the selection of the second component after the point).
Thanks in advance for your help,
Cesare
Just think about what the definition and/or purpose of the decimal point is: it separates the part of the number that is less than one from the part that is greater than or equal to one.
So, keep dividing the part after the decimal point until it's less than 1:
int firstPart = 7;
int secondPart = 92; // or whatever

float f = secondPart;
while (f >= 1) {
    f /= 10;
}
f += firstPart;
I know this is late, but I came across a similar situation; maybe this is more efficient.
Take the second number, 92, and divide it by 100. That gives you .92. Add that to the first number, and that gives you 7.92. However, since you're adding integers that you want converted to a float, you'll need to cast the numbers when adding them, like this:
int firstPart = 7;
int secondPart = 92;
float afterDecimalPlace = (float)secondPart/100.0;
float numberAsFloat = (float)firstPart + afterDecimalPlace;
essentially that is:
92/100 = .92
7 + .92 = 7.92
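Note that dividing by 100 assumes the second component always has exactly two digits, which a two-digit picker column would guarantee. Here is a hedged C sketch of a more general helper (combine is a hypothetical name, not an existing API) that derives the divisor from the digit count; keep in mind that values like 7.92 are not exactly representable in binary floating point anyway:

#include <math.h>
#include <stdio.h>

// Hypothetical helper: 7 and 92 -> 7.92, 7 and 5 -> 7.5 (assumes frac >= 0).
// It cannot produce 7.05 from (7, 5); if leading zeros matter,
// pass the intended digit count explicitly instead.
double combine(int whole, int frac) {
    int digits = (frac < 10) ? 1 : (int)floor(log10((double)frac)) + 1;
    return whole + frac / pow(10.0, digits);
}

int main(void) {
    printf("%.2f\n", combine(7, 92)); // prints 7.92
    return 0;
}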

How to round down a float to the nearest value that can be divided by two without a remainder?

For example, I have a float 55.2f and want to round it down so that the result can be divided by two without a remainder.
So 55.2 would become 54, as that is the nearest smaller "step" that is divisible by two. Is there a function for this, or must I write my own algorithm?
If your result must remain a float, you can do:
float f = 55.2f;
f = floorf(f / 2.f) * 2.f; // floorf comes from <math.h>
First convert to an integral type, such as int or long, and then clear the lowest bit.
float f = 55.2f;
int i = (int)f & ~1;
Explanation
~ means the bitwise inverse, i.e. all the 0 bits become 1, and vice versa.
So, if 1 has the bit pattern
0...0001
then ~1 is
1...1110
(Here I'm using ... to represent all the in-between bits depending on how big an integer is on your platform.)
When you & (bitwise AND) your integer with 1...1110, you are preserving the value of each bit apart from the lowest bit, which is forced to 0. See this description of the bitwise AND operator if you still don't get it.
By forcing the lowest bit to be 0, you are rounding the number down to the nearest even number.
You can write your own algorithm, for example with bitwise operators. The following code works by clearing the last bit of your number; an even number indeed has its last bit unset.
int f(float x)
{
    return (int)x & ~1;
}
How about long int f = lrintf(x / 2);, where x is your float? Multiply the result by 2 to get back to an even value, and note that lrintf rounds using the CPU's current rounding mode (round-to-nearest by default) rather than always rounding down.
You could also just say int f = x / 2;, but some people have argued that that's more expensive, because for the conversion the C standard mandates a specific rounding mode (truncation toward zero) which may or may not be native to the CPU. The lrintf function, on the other hand, uses the CPU's native rounding mode. You need to #include <math.h>.
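For reference, a small self-contained sketch comparing the two main approaches on the question's example; positive input is assumed, since for negative floats the cast truncates toward zero and the two methods can disagree:

#include <math.h>
#include <stdio.h>

int main(void) {
    float x = 55.2f;
    float viaFloor = floorf(x / 2.f) * 2.f; // stays a float: 54.0
    int viaBits = (int)x & ~1;              // truncate to 55, clear low bit: 54
    printf("floorf: %.1f, bitwise: %d\n", viaFloor, viaBits);
    return 0;
}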

Objective c division of two ints

I'm trying to produce a float by dividing two ints in my program. Here is what I'd expect:
1 / 120 = 0.00833
Here is the code I'm using:
float a = 1 / 120;
However it doesn't give me the result I'd expect. When I print it out I get the following:
inf
Do the following:
float a = 1. / 120.;
You need to specify that you want to use floating-point math.
There are a few ways to do this:
If you really are interested in dividing two constants, you can specify that you want floating-point math by making the first constant a float (or double). All it takes is a decimal point:
float a = 1./120;
You don't need to make the second constant a float, though it doesn't hurt anything.
Frankly, this is pretty easy to miss, so I'd suggest adding a trailing zero and some spacing:
float a = 1.0 / 120;
If you really want to do the math with an integer variable, you can type cast it:
float a = (float)i/120;
float a = 1/120;
float b = 1.0/120;
float c = 1.0/120.0;
float d = 1.0f/120.0f;
NSLog(#"Value of A:%f B:%f C:%f D:%f",a,b,c,d);
Output: Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333
For variable a: int / int yields an int (0), which is then assigned to the float and printed, hence 0.000000.
For variable b: double / int yields a double, assigned to the float and printed as 0.008333.
For variable c: double / double yields a double, so again 0.008333.
The last one, d, does the arithmetic entirely in float. In the previous lines the constants are of type double: a floating-point literal is a double unless it is suffixed with an 'f' to specifically make it a float.
In C (and therefore also in Objective-C), expressions are almost always evaluated without regard to the context in which they appear.
The expression 1 / 120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1 / 120 yields 0. The fact that the result is used to initialize a float object doesn't change the way 1 / 120 is evaluated.
This can be counterintuitive at times, especially if you're accustomed to the way calculators generally work (they usually store all results in floating-point).
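A two-line illustration of that rule (a sketch; the added 0.5 merely changes the surrounding context, not how 1 / 120 itself evaluates):
float f = 1 / 120;        // 0.0f: the int division already truncated to 0
double d = 1 / 120 + 0.5; // 0.5: same subexpression, still an int 0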
As the other answers have said, to get a result close to 0.00833 (which can't be represented exactly, by the way), you need to perform a floating-point division rather than an integer division, by making one or both of the operands floating-point. If one operand is floating-point and the other is an integer, the integer operand is converted to floating-point first; there is no direct floating-point-by-integer division operation.
Note that, as @0x8badf00d's comment says, the result should be 0. Something else must be going wrong for the printed result to be inf. If you can show us more code, preferably a small complete program, we can help figure that out.
(There are languages in which integer division yields a floating-point result. Even in those languages, the evaluation isn't necessarily affected by its context. Python version 3 is one such language; C, Objective-C, and Python version 2 are not.)