Objective-C - Creating Angles From Current Time

I'm trying to write code to draw a clock on the screen of an iOS device. I need to get the angle of a line (seconds, minutes, hours hands of clock) from the current time. My code accurately grabs the time, but for some reason, all of the angles I receive end up being the same (no matter what time it is).
If it helps, the angle I am constantly receiving is:
-1.5707963267948966
Here is the code I use to get the angles:
secondsTheta = ((seconds/60) * (2 * M_PI)) - (M_PI / 2);
minutesTheta = ((minutes/60) + (seconds/3600)) * (2 * M_PI) - (M_PI / 2);
hoursTheta = ((hours/12) + (minutes/720) + (seconds/43200)) * (2 * M_PI) - (M_PI / 2);
My thought is that something is funky with M_PI, but I don't know what it would be... but as I said, the seconds, minutes, and hours variables are correct. They are declared in my header file as ints, and I know that [NSDateComponents second] (etc.) returns an NSInteger, but I don't think that should matter for this basic math.

Since the seconds, minutes, and hours variables are declared as ints, the division will not give you the correct values. An int divided by another int results in an int; what is needed for the result is a float. To make the compiler use floating-point arithmetic, at least one of the operands must be a floating-point number (float or double).
Example: 10 seconds divided by 60 (10/60) will use integer math and result in 0.
Example: 10.0 seconds divided by 60 (10.0/60) will use floating-point math and result in 0.16666667.
Example:
secondsTheta = ((seconds/60.0) * (2 * M_PI)) - (M_PI / 2);
or
secondsTheta = (((float)seconds/60) * (2 * M_PI)) - (M_PI / 2);
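Putting it together, a minimal sketch of all three hand angles with floating-point divisors (variable names taken from the question; fetching the time via NSDateComponents is assumed to happen elsewhere):
// Assumes seconds, minutes, hours hold the current time as ints.
double secondsTheta = ((seconds / 60.0) * (2 * M_PI)) - (M_PI / 2);
double minutesTheta = ((minutes / 60.0) + (seconds / 3600.0)) * (2 * M_PI) - (M_PI / 2);
double hoursTheta = ((hours / 12.0) + (minutes / 720.0) + (seconds / 43200.0)) * (2 * M_PI) - (M_PI / 2);
// If hours comes from a 24-hour NSDateComponents value, use (hours % 12) in place of hours to keep it on a 12-hour dial.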

Your seconds, minutes and hours are ints. Dividing ints by ints does integer arithmetic and truncates the values, so
seconds/60
will always give you 0, since the seconds component is always less than 60. Objective-C inherits this behavior from C, and it is fairly common behavior among programming languages.
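A quick demonstration of the truncation (value chosen for illustration):
int seconds = 45;
NSLog(@"%d", seconds / 60);   // prints 0 -- integer division truncates
NSLog(@"%f", seconds / 60.0); // prints 0.750000 -- one floating-point operand promotes the division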

Related

Objective C vs Swift math operations

Performing the following operation in Objective-C and Swift returns different results:
With an input of 23492.4852,
Objective-C function:
+ (double)funTest:(double)a {
    return a - (int) a / 360 * 360;
}
returns 92.48521
Swift function:
class func funTest(a: Double) -> Double {
    return a - Double(Int(a)) / 360 * 360
}
returns 0.48521
Does anybody know why the difference?
The difference is integer vs. floating-point division. In integer division, the fractional part is discarded. Some quick examples are 1 / 2 = 0 or 2 / 3 = 0, but 1.0 / 2.0 = 0.5 and 2.0 / 3.0 ≈ 0.67.
Let's break down how your code works in both languages:
Objective-C
Assuming a = 23492.4852:
a - (int) a / 360 * 360 = a - ((int) a) / 360 * 360
= a - 23492 / 360 * 360 // integer division
= a - 65 * 360
= 23492.4852 - 23400
= 92.4852
Objective-C inherits type promotion rules from C, which can be a lot to remember.
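A minimal, self-contained snippet to verify that grouping (the value of a comes from the question):
#import <Foundation/Foundation.h>

int main(void) {
    double a = 23492.4852;
    int nearestMultiple = (int)a / 360 * 360; // integer division: 23492 / 360 * 360 = 65 * 360 = 23400
    NSLog(@"%f", a - nearestMultiple);        // prints 92.485200
    return 0;
}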
Swift
Assuming a = 23492.4852:
a - Double(Int(a)) / 360 * 360 = a - Double(23492) / 360 * 360 // floating point division
= a - 65.2556 * 360
= a - 23492
= 23492.4852 - 23492
= 0.4852
In both cases, the compiler has some leeway in interpreting the literal constant 360: it can be seen as an Int or a Double.
I don't know the exact internal workings of the ObjC compiler; you just have to be careful when mixing numeric types in C.
Swift tries to prevent this confusion by forcing all operands to be of the same data type. Since a is Double, the only way to interpret 360 is that it must also be a Double.
Does anybody know why the difference?
You've just made a simple grouping error.
As you figured out both Objective-C and Swift require a cast when converting from a floating-point value to an integer one, so you have written (int)a for the former and Int(a) for the latter.
You have also understood that converting from an integer to a floating-point value differs in the two languages, in Objective-C (and C and lots of other languages) the conversion is implicit whereas in Swift it is explicit.
The only mistake you have made is in parsing the Objective-C and hence producing the wrong Swift, or you've simply mis-typed the Swift.
In arithmetic expressions, operators are evaluated according to precedence. Relevant to your problem: casts bind tightly to the following expression, multiplication and division are done next, then addition and subtraction. What this means is that your Objective-C:
a - (int) a / 360 * 360
is parsed as:
a - (double) ( (int) a / 360 * 360 )
note that the (double) cast applies to the result of the expression (int) a / 360 * 360. What you've written in Swift is:
a - Double(Int(a)) / 360 * 360
which isn't the same, here the cast only applies to Int(a). What you should have written is:
a - Double(Int(a) / 360 * 360)
which applies the cast to Int(a) / 360 * 360 just as the Objective-C does.
With that correction in both languages the multiplication and division all operate on integers, and integer division is truncating (e.g. 9 / 4 is 2 not 2.25). With the misplaced parenthesis in Swift the multiplication and division all operate on floating-point values.
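To see the two groupings side by side in a single language, a small Objective-C sketch (the explicit casts mirror the two Swift versions):
double a = 23492.4852;
double castWholeExpr = a - (double)((int)a / 360 * 360); // cast applies to the integer result: 92.4852
double castOnlyA = a - (double)(int)a / 360 * 360;       // cast applies to a alone, so the division is floating point: 0.4852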
TL;DR: You just misplaced a parenthesis.
HTH
It's due to how the compilers see the numbers. Notice that in Swift you had to explicitly cast a into a Double after casting it to an Int? The Swift compiler sees the entire expression as Doubles, so when you do Double(Int(a)) / 360 * 360 you're getting 23492.0 / 360 * 360 = 65.25555... * 360 = 23492.0. However, in C/C++/Obj-C etc. it sees 23492 / 360 as an int division, giving 23492 / 360 * 360 = 65 * 360 = 23400. And that's where the 92.4852 comes from (from the loss of precision when dividing two ints in C).

Difference between these 2 functions?

I have 2 degree-to-radian functions pre-defined using #define:
Function 1:
#define degreesToRadians(degrees) (M_PI * degrees / 180.0)
Function 2:
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
Only the 2nd function returns the correct answer, while the first one gives a weird answer. What are the differences between them?
Neither of the two "functions" mentioned above is a function; they are macros, and the first macro is not safe. For example, expanding the macro degreesToRadians(10 + 10) gives (M_PI * 10 + 10 / 180.0), which is interpreted as ((M_PI * 10) + (10 / 180.0)), and this is clearly wrong. Expanding DEGREES_TO_RADIANS(10 + 10), on the other hand, gives ((10 + 10) / 180.0 * M_PI), which is correct.
The other difference is that M_PI * degrees might overflow the double boundaries and thus give a wrong answer (but this requires a rather high value in degrees).
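To watch the expansion go wrong at runtime, a short test (the 10 + 10 argument comes from the expansion above):
#include <math.h>

#define degreesToRadians(degrees) (M_PI * degrees / 180.0)
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)

double bad = degreesToRadians(10 + 10);    // (M_PI * 10 + 10 / 180.0), about 31.47
double good = DEGREES_TO_RADIANS(10 + 10); // ((10 + 10) / 180.0 * M_PI), about 0.349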
The calculations are pretty much identical, notwithstanding floating point limitations. However, you have angle surrounded with parentheses in the second macro, which is the right thing to do.
In the first macro, if you do:
x = degreesToRadians(a + 45);
then, remembering that macros are simple text substitutions, you'll end up with:
x = (M_PI * a + 45 / 180.0);
which will not end well, since it will be calculated as if you'd written:
x = (M_PI * a) + (45 / 180.0);
In other words, you simply multiply the angle by PI and add a constant 0.25.
If instead you change the first one to be:
#define degreesToRadians(degrees) (M_PI * (degrees) / 180.0)
then it should begin to behave a little better.
The other difference has to do with either large or small values of the angle. A divide-then-multiply on a small angle (and I mean really small, like 1e-308, approaching the IEEE 754 limits) may result in a zero result, while a multiply-then-divide on a large angle (like 1e308) may give you overflow.
My advice would be to ensure you use "normal" angles (or normalise them before conversion). Provided you do that, the different edge conditions of each method shouldn't matter.
And, in all honesty, you probably shouldn't even be using macros for this. With insanely optimising compilers and enumerations, macros should pretty much be relegated to conditional compilation nowadays. I'd simply rewrite it as a function along the lines of:
double degreesToRadians(double d) {
    return M_PI * d / 180.0;
}
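For example, degreesToRadians(90.0) then evaluates to roughly 1.5708, i.e. M_PI / 2.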
Or, you could even adjust the code so as to not worry about small or large angles (if you're paranoid):
double degreesToRadians(double d) {
    if ((d > -1) && (d < 1))
        return (M_PI * d) / 180.0;
    return (d / 180.0) * M_PI;
}

ios issue with log calculation

I am working on a calculation for free space loss and hitting a snag.
Doing this calculation:
fslLoss = 36.6 + (20 * log(fromAntenna/5280)) + (20 * log(serviceFreq))
Where fslLoss is a float and fromAntenna and serviceFreq are integers:
NSLog(@"the freespace Loss is %0.01f", fslLoss);
The result is "the freespace Loss is -inf"
The issue appears to be in the 20 * log(fromAntenna/5280) term, as I get normal results without it.
BTW ... tried log10 with the same results.
Thanks for the help,
padapa
You say fromAntenna is an integer, so fromAntenna/5280 will be calculated with integer arithmetic. That means it will be truncated (floored, for non-negative values), probably not what you intended.
Fix it with:
log( (double) fromAntenna / 5280.0 )
log(0) is -inf, and the integer division inside the logarithm is zero whenever fromAntenna is less than 5280. Use fromAntenna/5280.0 to get floating-point division.
The compiler is correctly using fromAntenna & serviceFreq as ints and that's not giving you good results when fslLoss is a float. Use some float casts and you'll have better luck:
fslLoss = 36.6 + (20 * log((float)fromAntenna/5280)) + (20 * log((float)serviceFreq));
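Putting the answers together, a hedged sketch of the corrected calculation (the sample values are made up for illustration; the standard free-space-loss formula behind the 36.6 constant takes miles and MHz and uses base-10 logarithms, so log10 is shown here, though the truncation bug is the same with either base):
#include <math.h>

int fromAntenna = 10560; // hypothetical sample: distance in feet
int serviceFreq = 2400;  // hypothetical sample: frequency in MHz
float fslLoss = 36.6 + (20 * log10(fromAntenna / 5280.0)) + (20 * log10((double)serviceFreq));
NSLog(@"the freespace Loss is %0.01f", fslLoss);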

calculations in Objective-C

Could anyone explain to me why this keeps returning 0 when it should return a value of 42? It works on paper, so I know the math is right; I'm just wondering why it isn't translating across.
int a = 60;
int b = 120;
int c = 85;
int progress;
progress = ((c-a)/(b-a))*100;
NSLog(@"Progress = %d %%", progress);
It's because your math is all using integers.
In particular, your inner expression is calculating 25 / 60, which in integer math is zero.
In effect you have over-parenthesised your expression, and the resulting order of evaluation is causing integer rounding problems.
It would have worked fine if you had just written the formula so:
progress = 100 * (c - a) / (b - a);
because the 100 * (c - a) would first evaluate to 2500, and would then be divided by 60 to give 41.
Alternatively, if any one (or more) of your variables a, b, or c were a float (or cast thereto), the equation would also work.
That's because an expression in which either operand is a float will cause the other (integer) operand to be promoted to a float, too, at which point the result of the expression will also be a float.
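For instance, a minimal check of the reordered expression with the question's values:
int a = 60, b = 120, c = 85;
int progress = 100 * (c - a) / (b - a); // 2500 / 60 = 41 in integer math
NSLog(@"Progress = %d %%", progress);   // prints "Progress = 41 %"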
c - a will give you 25, and b - a will give you 60.
Since a, b, and c are all integers, they can't hold decimals. Therefore, (c-a)/(b-a) gives you 0 instead of 0.41666666, because in integer division anything after the decimal point is cut off, leaving only the part before it.
To make it work the way you wanted it to, you should try casting (c-a) and (b-a) to either double or float:
progress = ((float)(c-a) / (float)(b-a)) * 100;
or
progress = ((double)(c-a) / (double)(b-a)) * 100;
a, b and c are ints. When you calculate ((c-a)/(b-a)), the result is also an int; the real value is a decimal (about 0.42), but an int can't hold a decimal number, so it truncates to 0, which is multiplied by 100 to get 0.
Because (c - a) / (b - a) is computed using integer math.
To fix, cast to a float before dividing:
progress = (int)((((float)(c - a)) / ((float)(b - a))) * 100);

Objective-C Integer Arithmetic

I'm trying to calculate some numbers in an iPhone application.
int i = 12;
int o = (60 / (i * 50)) * 1000;
I would expect o to be 100 (that's milliseconds) in this example, but it equals 0 as displayed by NSLog(@"%d", o).
This also equals 0.
int o = 60 / (i * 50) * 1000;
This equals 250,000, which is straight left-to-right math.
int o = 60 / i * 50 * 1000;
What's flying over my head here?
Thanks,
Nick
In Objective-C, / performs integer division on integer arguments, so 4/5 is truncated to 0, 3/2 is truncated to 1, and so on. You probably want to cast some of your numbers to floating-point forms before performing division.
You're also running into issues with precedence. In the expression
60 / (i * 50) * 1000
the term inside the parentheses is calculated first, so 60 is divided by 600 which produces the result 0. In
60 / i * 50 * 1000
the first operation is to divide 60 by 12 which gives the result 5 and then the multiplications are carried out.
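A compact way to see all three groupings next to each other, plus a floating-point fix:
int i = 12;
NSLog(@"%d", (60 / (i * 50)) * 1000);        // 60 / 600 = 0, so 0
NSLog(@"%d", 60 / (i * 50) * 1000);          // same grouping, still 0
NSLog(@"%d", 60 / i * 50 * 1000);            // 5 * 50 * 1000 = 250000
NSLog(@"%d", (int)(60.0 / (i * 50) * 1000)); // 0.1 * 1000 = 100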
An integer divided by an integer is an integer.
so 60/600 is not 0.1, it is 0.
Cast (or declare) some stuff as float instead.
It's doing integer math: 60 / (12 * 50) would be 0.1, which truncates to 0.
Should work if you force floating point and then cast back to an integer.
int o = (int)(60.0 / ((double)i * 50.0) * 1000.0);
Probably not really necessary to make everything a double.
Replace:
int o = (60 / (i * 50)) * 1000;
with:
int o = 1200/i;
By order of precedence, the operation:
60 / (12 * 50)
is performed before multiplying by 1000.
This value is less than 1, and integer division truncates it to 0. And 0 times anything is 0.
Use a float, or multiply by 1000 first, to make sure you don't end up propagating a 0 through your calculations.
All the operations in your expression are performed in integer arithmetic, meaning that the fractional part of each intermediate result is truncated. This means that if you divide a smaller integer by a larger integer you will always get 0.
To get the result you want you must either make sure the operations are performed in a particular order, or you must use floats. For example the result of
int o = (60.0 / (i * 50.0)) * 1000.0;
should be o = 100.
I think you need to use float here instead of int. It will work the way you want, and it will give you the answer in decimals as well.
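For instance, a sketch of the float version this answer suggests (i as in the question):
int i = 12;
float o = 60.0f / (i * 50) * 1000.0f;
NSLog(@"%f", o); // prints 100.000000, and keeps the decimals when i doesn't divide evenly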