GPSTrack from EXIF GPS

How would I convert the rational64u GPSTrack (from the EXIF data) into a direction?
with:
import piexif
exif_dict = piexif.load('./img/IMG_1146.jpg')
h = exif_dict['GPS'][piexif.GPSIFD.GPSTrack]
h
we get: (116001, 424)
with metapix the result is: GPSTrack 273.5872642. An example image here.
What must I do to (116001, 424) to get a bearing?

It is rational64u (as used in latitudes and longitudes, and in many places in EXIF) so a float is represented by two integers, a numerator and a denominator.
To get the float, you just divide the two:
degree = numerator / denominator
so
degree = numerator / denominator = 116001 / 424 = 273.587264150943
The usual warning applies: various computer languages will do an integer division if the two operands are integers, so you may need to cast them to floats before doing the division.
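For instance, a minimal Python sketch building on the piexif snippet from the question (the helper name rational_to_degrees is purely illustrative):
import piexif

def rational_to_degrees(rational):
    # rational64u is stored as a (numerator, denominator) pair of integers
    numerator, denominator = rational
    # Python 3's / is true division and always returns a float;
    # in Python 2 you would write float(numerator) / denominator
    return numerator / denominator

exif_dict = piexif.load('./img/IMG_1146.jpg')
track = exif_dict['GPS'][piexif.GPSIFD.GPSTrack]  # e.g. (116001, 424)
print(rational_to_degrees(track))                 # 273.587264150943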

Related

Short Rounds Up? [duplicate]

Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
// assume the == operator is overridden here to use an epsilon compare
if (x == 3.0)
    Console.WriteLine("Hello world");
Result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division by definition is an operation which produces a rational number, and the integers are only a very small subset of the rationals).
While it is common for new programmers to make the mistake of performing an integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you assume that people rarely use it, and that every time you do a division you will always need to remember to cast to floating point, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole number result, one would want to use the more efficient algorithm.
Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than a floating-point division of the number.
Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.
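As a small illustration of the base-change example mentioned above, here is a sketch in Python, where // is integer division and % gives the remainder (the function name to_base is purely illustrative):
def to_base(n, base):
    # peel off digits from least significant to most significant
    digits = []
    while n > 0:
        digits.append(n % base)  # remainder: the next digit
        n = n // base            # integer division: what is left to convert
    return list(reversed(digits)) or [0]

print(to_base(42, 2))  # [1, 0, 1, 0, 1, 0], i.e. 42 in binary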
See the C# specification. There are three types of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
I think the reason why C# uses this type of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.
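To see the "rounds towards zero" rule in action, here is a quick sketch (in Python for illustration; note that Python's own // operator floors towards negative infinity, so math.trunc of the true quotient is used to reproduce the C#-style result):
import math

print(7 // 2, -7 // 2)                        # 3 -4  (Python's // floors)
print(math.trunc(7 / 2), math.trunc(-7 / 2))  # 3 -3  (truncation towards zero, as in C#)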
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation and it will return an integer type. If you want floating-point division, you must cast one or both of the numbers to a floating-point type before dividing them. For instance:
int x = 13;
int y = 4;
float result = (float)x / (float)y;
or, if you are using literals:
float x = 13f / 4f;
Keep in mind that floating-point numbers are not exact. If you care about precision, use something like the decimal type instead.
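For instance, a quick sketch of the same point in Python, using Python's decimal module as a stand-in for C#'s decimal type:
from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                    # False: binary floating point is not exact
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True: decimal arithmetic represents 0.1 exactly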
Since you don't use any suffix, the literals 13 and 4 are interpreted as integers:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since 13 is declared as an integer, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so the result is rounded towards zero:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since a floating-point division (the / operator of 13f) results in a float, which cannot be cast to int implicitly.
If you declare the result as a float:
float x = 13 / 4;
you'll still be dividing integers; the integer result is then implicitly converted to float, so the result will be 3.0. To actually get a floating-point division, explicitly declare the operands as float using the f suffix (13f, 4f).
Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5
It's just a basic operation.
Remember when you learned to divide. In the beginning we solved 9/6 = 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The /-operator in combination with the %-operator are used to retrieve those values.
The result will always be of the type that has the greater range of the numerator and the denominator. The exceptions are byte and short, which produce an int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between floating-point types and the decimal type, so division between them is not allowed. You have to explicitly cast and decide which one you want (Decimal has more precision and a smaller range compared to floating-point types).
As a little trick to know what you are obtaining you can use var, so the compiler will tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
your compiler will tell you that result would be of type int here.

Why is only the numerator cast to float to obtain a float quotient while dividing two integers?

I am just starting out with Objective-C, and with C in general, so I suppose this is a C question as well. It's more of a why question rather than a how question.
I noticed that while dividing two integers, the decimal part is rounded down to 0 even though the result is a float. The sources I followed suggest the following approach to deal with this:
float result = (float) numerator / denominator;
I want to know now why this works. Two things especially. 1) If you have to cast the numerator, why don't you have to cast the denominator as well? 2) Why can't you just cast the whole thing? What I tried first was
float result = (float) (numerator / denominator);
But this again rounded the decimal part to 0. What is the reason?
By promoting the numerator to float, you require the denominator to be promoted, as well. However, when you place the two in parens, then you are telling the compiler to do the work as integers then cast to float.
See the "usual arithmetic conversions" in the C99 Standard.
Basically it says operands must be of the same type, and if they aren't they get automagically converted.
So, in 4 * 3.5; the 4 (type int) gets automagically converted to 4.0 (type double).
In (double)4 / 7; the 7 gets converted to 7.0.
In 4 / (double)7; the 4 gets converted to 4.0.
In (double)4 / (float)7; the 7.0F (type float) gets converted to 7.0 (type double).
...
Because in the first case, after the numerator is cast to float, the denominator is automatically promoted to float for the division operation. But in the second case, the result of the division is expected to be an integer, since both numerator and denominator are integers. Any decimal part is truncated in the second case, and that's done before you cast the result to float. Hence only the first one will produce the correct result, while the second will give you the truncated result.
float sum=5/2; //sum will be 2.000000,not 2.500000
float sum=5.0/2; //sum will be 2.500000
float sum=5/2.0; //sum will be 2.500000
float sum=(float)5/2; //sum will be 2.500000
float sum=5/(float)2; //sum will be 2.500000
float sum=(float)(5/2); //sum'll be 2.000000 as cast is after integer division
I believe you can also cast the denominator instead and you would get the expected result.
float result = numerator / (float) denominator;
This would work, but is unnecessary.
float result = (float) numerator / (float) denominator;
This performs the integer division first (both operands are integers, and the parentheses force the division to happen before the cast), then casts the result to a float.
float result = (float) (numerator / denominator);
As long as you cast either the numerator or denominator to a float (or both individually), then the division will be performed using floating point division instead of integer division.

calculations in Objective-C

Could anyone explain to me why this keeps returning 0 when it should return a value of 42? It works on paper, so I know the math is right; I'm just wondering why it isn't translating across.
int a = 60;
int b = 120;
int c = 85;
int progress;
progress = ((c-a)/(b-a))*100;
NSLog(@"Progress = %d %%", progress);
It's because your math is all using integers.
In particular, your inner expression is calculating 25 / 60, which in integer math is zero.
In effect you have over-parenthesised your expression, and the resulting order of evaluation is causing integer rounding problems.
It would have worked fine if you had just written the formula so:
progress = 100 * (c - a) / (b - a);
because the 100 * (c - a) would first evaluate to 2500, and would then be divided by 60 to give 41.
Alternatively, if any one (or more) of your variables a, b, or c were a float (or cast thereto), the equation would also work.
That's because an expression in which either operand is a float will cause the other (integer) operand to be promoted to a float as well, at which point the result of the expression will also be a float.
c - a will give you 25
b - a will give you 60
Since a, b, and c are all integers, they can't hold decimal values. Therefore, by doing (c-a)/(b-a), you will get 0 instead of 0.41666666, because in integer division anything after the decimal point gets cut off, leaving only the part before the decimal point.
To make it work the way you wanted it to, you should try casting (c-a) and (b-a) to either double or float:
progress = ((float)(c-a) / (float)(b-a)) * 100;
or
progress = ((double)(c-a) / (double)(b-a)) * 100;
a, b and c are ints. When you calculate ((c-a)/(b-a)), the result is also an int; the real value is a decimal (about 0.42), but an int can't hold a decimal number, so it truncates to 0, which is then multiplied by 100 to give 0.
Because (c - a) / (b - a) is computed using integer math.
To fix, cast to a float before dividing:
progress = (int)((((float)(c - a)) / ((float)(b - a))) * 100);

Returning a number less than 1

I am working on an app that needs to take a ratio of a given number and multiply that ratio by another number. The problem is that I can't get numbers less than 1 to give me the proper decimal ratio; instead I get zero (when it should be 0.5).
Example:
float number = 1/2; // This gives me zero
double number = 1/2; // This also gives me zero
If you don't specify decimal places, you're using integers, which means the calculation is performed with integer precision before the result is cast to the type on the LHS. You want to do the following when using hard-coded numbers in your code:
float number = 1.0f / 2.0f;
double number = 1.0 / 2.0;
If you're aiming to use integer variables for an operation, you'll want to cast them to the type that you want for your result.
Try this
float number = 1.0/2.0;
Remember that 1 is an int, so you are essentially taking
(int)1 / (int)2
which returns
(int)0
To cast variables that are ints, do
float number = (float)numerator / (float)denominator;

Objective c division of two ints

I'm trying to produce a float by dividing two ints in my program. Here is what I'd expect:
1 / 120 = 0.00833
Here is the code I'm using:
float a = 1 / 120;
However it doesn't give me the result I'd expect. When I print it out I get the following:
inf
Do the following:
float a = 1./120.;
You need to specify that you want to use floating point math.
There's a few ways to do this:
If you really are interested in dividing two constants, you can specify that you want floating point math by making the first constant a float (or double). All it takes is a decimal point.
float a = 1./120;
You don't need to make the second constant a float, though it doesn't hurt anything.
Frankly, this is pretty easy to miss so I'd suggest adding a trailing zero and some spacing.
float a = 1.0 / 120;
If you really want to do the math with an integer variable, you can type cast it:
float a = (float)i/120;
float a = 1/120;
float b = 1.0/120;
float c = 1.0/120.0;
float d = 1.0f/120.0f;
NSLog(@"Value of A:%f B:%f C:%f D:%f", a, b, c, d);
Output: Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333
For float variable a: int / int yields an int, which is then assigned to the float and printed, so you get 0.000000.
For float variable b: double / int yields a double, which is assigned to the float and printed, giving 0.008333.
For float variable c: double / double yields a double, so again 0.008333.
The last one uses explicit float literals; in the previous ones the literals are doubles: a floating-point literal is stored as a double unless it is followed by an 'f' to specify a float rather than a double.
In C (and therefore also in Objective-C), expressions are almost always evaluated without regard to the context in which they appear.
The expression 1 / 120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1 / 120 yields 0. The fact that the result is used to initialize a float object doesn't change the way 1 / 120 is evaluated.
This can be counterintuitive at times, especially if you're accustomed to the way calculators generally work (they usually store all results in floating-point).
As the other answers have said, to get a result close to 0.00833 (which can't be represented exactly, BTW), you need to do a floating-point division rather than an integer division, by making one or both of the operands floating-point. If one operand is floating-point and the other is an integer, the integer operand is converted to floating-point first; there is no direct floating-point by integer division operation.
Note that, as @0x8badf00d's comment says, the result should be 0. Something else must be going wrong for the printed result to be inf. If you can show us more code, preferably a small complete program, we can help figure that out.
(There are languages in which integer division yields a floating-point result. Even in those languages, the evaluation isn't necessarily affected by its context. Python version 3 is one such language; C, Objective-C, and Python version 2 are not.)
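For example, in Python 3 (Python 2 and the C-family languages would give 0 for the first expression):
print(1 / 120)   # 0.008333333333333333  (/ is always true division in Python 3)
print(1 // 120)  # 0                     (// is floor division, the integer-style operator)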