Objective-C vs Swift math operations

Performing the following operation in Objective-C and Swift returns different results:
With an input of 23492.4852,
Objective-C function:
+ (double)funTest:(double)a {
    return a - (int) a / 360 * 360;
}
returns 92.48521
Swift function:
class func funTest(a: Double) -> Double {
    return a - Double(Int(a)) / 360 * 360
}
returns 0.48521
Does anybody know why the difference?

The difference is integer vs. floating-point division. In integer division, the fractional part is discarded. Some quick examples: 1 / 2 = 0 and 2 / 3 = 0, but 1.0 / 2.0 = 0.5 and 2.0 / 3.0 ≈ 0.67.
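If it helps to see it run, here's a minimal Objective-C program (written for this answer, not taken from your code) showing those divisions:

#import <Foundation/Foundation.h>

int main(void) {
    NSLog(@"%d", 1 / 2);      // 0 -- both operands are ints, so integer division
    NSLog(@"%d", 2 / 3);      // 0
    NSLog(@"%f", 1.0 / 2.0);  // 0.500000
    NSLog(@"%f", 2.0 / 3.0);  // 0.666667
    NSLog(@"%f", 2.0 / 3);    // 0.666667 -- one double operand promotes the int to double
    return 0;
}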
Let's break down how your code works in both languages:
Objective-C
Assuming a = 23492.4852:
a - (int) a / 360 * 360 = a - ((int) a) / 360 * 360
= a - 23492 / 360 * 360 // integer division
= a - 65 * 360
= 23492.4852 - 23400
= 92.4852
Objective-C inherits type promotion rules from C, which can be a lot to remember.
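For reference, a small standalone program that replays those steps (the intermediate variable names are mine, just for illustration):

#import <Foundation/Foundation.h>

int main(void) {
    double a = 23492.4852;
    int truncated = (int) a;          // 23492 -- the cast binds only to a
    int whole360s = truncated / 360;  // 65    -- integer division drops the fraction
    int nearest   = whole360s * 360;  // 23400
    NSLog(@"%f", a - nearest);        // 92.485200
    return 0;
}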
Swift
Assuming a = 23492.4852:
a - Double(Int(a)) / 360 * 360 = a - Double(23492) / 360 * 360 // floating point division
= a - 65.2556 * 360
= a - 23492
= 23492.4852 - 23492
= 0.4852
In both cases, the compiler has some leeway in interpreting the literal constant 360: it can be treated as an int or as a double.
I don't know the exact internal workings of the Objective-C compiler; you just have to be careful when mixing numeric types in C.
Swift tries to prevent this confusion by forcing all operands to be of the same data type. Since a is Double, the only way to interpret 360 is that it must also be a Double.
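For comparison, you can reproduce the same all-floating-point behaviour in Objective-C by converting back to double before dividing; a minimal sketch (not from your code, just to illustrate):

#import <Foundation/Foundation.h>

int main(void) {
    double a = 23492.4852;
    // Converting (int)a back to double first means the division and
    // multiplication are floating-point, so nothing is truncated.
    NSLog(@"%f", a - (double)(int)a / 360 * 360);  // 0.485200 (up to floating-point rounding)
    return 0;
}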

Does anybody know why the difference?
You've just made a simple grouping error.
As you figured out, both Objective-C and Swift require a cast when converting from a floating-point value to an integer one, so you have written (int)a for the former and Int(a) for the latter.
You have also understood that converting from an integer to a floating-point value differs in the two languages: in Objective-C (and C and lots of other languages) the conversion is implicit, whereas in Swift it is explicit.
The only mistake you have made is in parsing the Objective-C and hence producing the wrong Swift, or you've simply mistyped the Swift.
In arithmetic expressions, operators are evaluated according to precedence. Relevant to your problem: casts bind tightly to the expression that follows them, multiplication and division are done next, then addition and subtraction. What this means is that your Objective-C:
a - (int) a / 360 * 360
is parsed as:
a - (double) ( (int) a / 360 * 360 )
note that the (double) cast applies to the result of the expression (int) a / 360 * 360. What you've written in Swift is:
a - Double(Int(a)) / 360 * 360
which isn't the same, here the cast only applies to Int(a). What you should have written is:
a - Double(Int(a) / 360 * 360)
which applies the cast to Int(a) / 360 * 360 just as the Objective-C does.
With that correction, in both languages the multiplication and division operate on integers, and integer division truncates (e.g. 9 / 4 is 2, not 2.25). With the misplaced parenthesis, the Swift multiplication and division operate on floating-point values.
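To make the grouping concrete, here is a small Objective-C sketch (written for this answer) with the parentheses spelled out; it computes the same thing as the corrected Swift a - Double(Int(a) / 360 * 360):

#import <Foundation/Foundation.h>

int main(void) {
    double a = 23492.4852;
    // Parenthesised the way the compiler actually groups the original expression:
    // the division and multiplication happen on ints, then the result is converted.
    NSLog(@"%f", a - (double)((int)a / 360 * 360));  // 92.485200
    return 0;
}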
TL;DR: You just misplaced a parenthesis.
HTH

It's due to how the compilers see the numbers. Notice that in Swift you had to explicitly cast a to a Double after casting it to an Int? The Swift compiler sees the entire expression as Doubles, so when you do Double(Int(a)) / 360 * 360 you're getting 23492 / 360 * 360 = 65.25555... * 360 = 23492, and a - 23492 = 0.4852. However, in C/C++/Objective-C etc. it sees 23492 / 360 as an int division, giving 23492 / 360 * 360 = 65 * 360 = 23400, and a - 23400 = 92.4852. That's where the 92 comes from (from the loss of precision when dividing two ints in C).

Related

Difference between these 2 functions?

I have 2 degree-to-radian functions pre-defined using #define:
Function 1:
#define degreesToRadians(degrees) (M_PI * degrees / 180.0)
Function 2:
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
Only the 2nd function returns the correct answer, while the first one gives a weird answer. What are the differences between them?
Neither of the two "functions" mentioned above is a function; they are macros, and the first macro is not safe. For example, expanding the macro degreesToRadians(10 + 10) gives (M_PI * 10 + 10 / 180.0), which is interpreted as ((M_PI * 10) + (10 / 180.0)), and this is clearly wrong. Expanding DEGREES_TO_RADIANS(10 + 10) gives ((10 + 10) / 180.0 * M_PI), which is correct.
The other difference is that M_PI * degrees might overflow the range of a double and thus give a wrong answer (but this requires a rather high value in degrees).
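A small test program (mine, just for illustration) makes the expansion problem visible; the macros are copied from the question:

#import <Foundation/Foundation.h>
#include <math.h>

#define degreesToRadians(degrees) (M_PI * degrees / 180.0)
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)

int main(void) {
    // Expands to (M_PI * 10 + 10 / 180.0): only the first 10 gets scaled by pi.
    NSLog(@"%f", degreesToRadians(10 + 10));    // 31.471482 -- wrong
    // Expands to ((10 + 10) / 180.0 * M_PI): the whole sum is converted.
    NSLog(@"%f", DEGREES_TO_RADIANS(10 + 10));  // 0.349066  -- 20 degrees in radians
    return 0;
}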
The calculations are pretty much identical, notwithstanding floating point limitations. However, you have angle surrounded with parentheses in the second macro, which is the right thing to do.
In the first macro, if you do:
x = degreesToRadians(a + 45);
then, remembering that macros are simple text substitutions, you'll end up with:
x = (M_PI * a + 45 / 180.0);
which will not end well, since it will be calculated as if you'd written:
x = (M_PI * a) + (45 / 180.0);
In other words, you simply multiply the angle by PI and add a constant 0.25.
If instead you change the first one to be:
#define degreesToRadians(degrees) (M_PI * (degrees) / 180.0)
then it should begin to behave a little better.
The other difference has to do with either large or small values of the angle. A divide-then-multiply on a small angle (and I mean really small, like 10^-308, approaching the IEEE 754 limits) may result in a zero result, while a multiply-then-divide on a large angle (like 10^308) may give you overflow.
My advice would be to ensure you use "normal" angles (or normalise them before conversion). Provided you do that, the different edge conditions of each method shouldn't matter.
And, in all honesty, you probably shouldn't even be using macros for this. With insanely optimising compilers and enumerations, macros should pretty much be relegated to conditional compilation nowadays. I'd simply rewrite it as a function along the lines of:
double degreesToRadians(double d) {
    return M_PI * d / 180.0;
}
Or, you could even adjust the code so as to not worry about small or large angles (if you're paranoid):
double degreesToRadians(double d) {
    if ((d > -1) && (d < 1))
        return (M_PI * d) / 180.0;
    return (d / 180.0) * M_PI;
}

Objective-c power operator not clear

I'm working on a small calculation app, and I'm using a formula I created in PHP that I'm now trying to translate to Objective-C; however, the power operator is not clear to me.
I'm looking at the following code:
float value = ((((x)*i)/12)/(1-(1+i/12)^-((x*12))))-i;
The power operator is non-existent in Objective-C.
How should I apply the power operator in Objective-C, and could someone assist me by telling me where it should be?
(Too many parentheses! You don't need parentheses around x or (x*12), for instance.)
There is no power operator. The standard function powf() will do the job, however (pow() if you wanted a double result):
float value = x * i / 12 / (1 - powf(1 + i / 12, -12 * x)) - i;
^ is the bitwise XOR operator both in C (and so in Objective-C as well) and in PHP.
To raise a value to a power, use the C pow function (which returns a double) or powf (which returns a float):
float result = powf(5, 2); // => 25
Your expression will then become (stripping away all the redundant parentheses and leaving some for readability):
float value = (x*i/12) / (1 - powf(1 + i/12, -x*12)) - i;
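If you want to convince yourself that ^ is XOR and not a power operator, here's a minimal standalone sketch (the literal values are only illustrative):

#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    NSLog(@"%d", 5 ^ 2);       // 7 -- bitwise XOR: 101 ^ 010 = 111, not 5 squared
    NSLog(@"%f", powf(5, 2));  // 25.000000 -- float power
    NSLog(@"%f", pow(5, 2));   // 25.000000 -- double power
    return 0;
}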

calculations in Objective-C

Could anyone explain to me why this keeps returning 0 when it should return a value of 42? It works on paper, so I know the math is right; I'm just wondering why it isn't translating across.
int a = 60;
int b = 120;
int c = 85;
int progress;
progress = ((c-a)/(b-a))*100;
NSLog(#"Progess = %d %%",progress);
It's because your math is all using integers.
In particular, your inner expression is calculating 25 / 60, which in integer math is zero.
In effect you have over-parenthesised your expression, and the resulting order of evaluation is causing integer rounding problems.
It would have worked fine if you had just written the formula so:
progress = 100 * (c - a) / (b - a);
because the 100 * (c - a) would first evaluate to 2500, and would then be divided by 60 to give 41.
Alternatively, if any one (or more) of your variables a, b, or c were a float (or cast thereto), the equation would also work.
That's because an expression in which either operand is a float will cause the other (integer) operand to be promoted to a float, too, at which point the result of the expression will also be a float.
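A quick standalone comparison of the two orderings, using the values from the question (a sketch written for this answer):

#import <Foundation/Foundation.h>

int main(void) {
    int a = 60, b = 120, c = 85;
    NSLog(@"%d", ((c - a) / (b - a)) * 100);  // 0  -- 25 / 60 truncates to 0 first
    NSLog(@"%d", 100 * (c - a) / (b - a));    // 41 -- 2500 / 60 truncates last
    return 0;
}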
c - a will give you 25
b - a will give you 60
a, b, and c are all integers, meaning they can't hold decimals. Therefore, by doing (c-a)/(b-a), you will get 0 instead of 0.41666666, because in integer division anything after the decimal point gets cut off, leaving only the part before the decimal point.
To make it work the way you wanted it to, you should try casting (c-a) and (b-a) to either double or float:
progress = ((float)(c-a) / (float)(b-a)) * 100;
or
progress = ((double)(c-a) / (double)(b-a)) * 100;
a, b and c are ints. When you calculate ((c-a)/(b-a)), the result is also an int; the real value is a decimal (0.42), but an int can't hold a decimal number, so it truncates to 0, which is multiplied by 100 to get 0.
Because (c - a) / (b - a) is computed using integer math.
To fix, cast to a float before dividing:
progress = (int)((((float)(c - a)) / ((float)(b - a))) * 100);

Objective C - Creating Angles From Current Time

I'm trying to write code to draw a clock on the screen of an iOS device. I need to get the angle of a line (the seconds, minutes, and hours hands of the clock) from the current time. My code accurately grabs the time, but for some reason all of the angles I receive end up being the same (no matter what time it is).
If it helps, the angle I am constantly receiving is:
-1.5707963267948966
Here is the code I use to get the angles:
secondsTheta = ((seconds/60) * (2 * M_PI)) - (M_PI / 2);
minutesTheta = ((minutes/60) + (seconds/3600)) * (2 * M_PI) - (M_PI / 2);
hoursTheta = ((hours/12) + (minutes/720) + (seconds/43200)) * (2 * M_PI) - (M_PI / 2);
My thought is that something is funky with M_PI, but I don't know what would be...but as I said, the seconds, minutes, and hours variables are correct. They are declared in my header file as ints, and I know that [NSDateComponents seconds](etc) returns an NSInteger, but I don't think that should matter for this basic math.
Since the seconds, minutes, and hours variables are declared as ints, the division will not give you the correct values. An int divided by another int results in an int; what is needed for the result is a float. In order to have the compiler use floating-point arithmetic, it is necessary that one of the operands be a floating-point number (float).
Example: 10 seconds divided by 60 (10/60) will use integer math and result in 0.
Example: 10.0 seconds divided by 60 (10.0/60) will use floating-point math and result in 0.166666667.
Example:
secondsTheta = ((seconds/60.0) * (2 * M_PI)) - (M_PI / 2);
or
secondsTheta = (((float)seconds/60) * (2 * M_PI)) - (M_PI / 2);
Your seconds, minutes and hours are ints. Dividing ints by ints does integer arithmetic and truncates the values, so
seconds/60
will always give you 0. Objective-C inherits this behavior from C, and this is fairly common behavior among programming languages.
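Putting the fix together, a minimal Objective-C sketch (the hard-coded time of 3:45:30 stands in for the values you'd get from NSDateComponents):

#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    int hours = 3, minutes = 45, seconds = 30;  // example time, assumed 0-11 hour range
    // Divide by floating-point literals so the fractions survive.
    double secondsTheta = (seconds / 60.0) * (2 * M_PI) - (M_PI / 2);
    double minutesTheta = (minutes / 60.0 + seconds / 3600.0) * (2 * M_PI) - (M_PI / 2);
    double hoursTheta   = (hours / 12.0 + minutes / 720.0 + seconds / 43200.0) * (2 * M_PI) - (M_PI / 2);
    // Prints three different angles now, instead of -1.570796 for all three.
    NSLog(@"%f %f %f", secondsTheta, minutesTheta, hoursTheta);
    return 0;
}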

Simple maths in Objective-C producing unexpected results

I'm doing the following in Objective-C and expecting 180 as the output but I'm getting 150. Can anyone explain what I'm doing wrong?
(360 / 100) * 50
You're (accidentally) using integer division. 360 / 100 is returning 3, then 3 * 50 is of course 150. To obtain a floating point result, try casting 360 or 100 to a float first, or just use a literal float - i.e., 360.0 / 100 or 360 / 100.0 or even 360.0 / 100.0.
Or, as @KennyTM pointed out in a comment, you can reorder the expression, such as 360 * 50 / 100; this is particularly useful if a floating-point number is unacceptable for any reason.
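A minimal program (written for this answer) comparing the three versions:

#import <Foundation/Foundation.h>

int main(void) {
    NSLog(@"%d", (360 / 100) * 50);    // 150 -- 360 / 100 truncates to 3 first
    NSLog(@"%f", (360.0 / 100) * 50);  // 180.000000 -- floating-point division
    NSLog(@"%d", 360 * 50 / 100);      // 180 -- reordered, stays in integers with no loss
    return 0;
}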