Why do some C math expressions require constants to be explicitly marked as floats? - objective-c

So I just found this bug in my code and I am wondering what rules I'm not understanding.
I have a float variable logDiff, that currently contains a very small number. I want to see if it's bigger than a constant expression (80% of a 12th). I read years ago in Code Complete to just leave calculated constants in their simplest form for readability, and the compiler (XCode 4.6.3) will inline them anyway. So I have,
if ( logDiff > 1/12 * .8 ) {
I'm assuming the .8 and the fraction all evaluates to the correct number. Looks legit:
(lldb) expr (float) 1/12 * .8
(double) $1 = 0.0666666686534882
(lldb) expr logDiff
(float) $2 = 0.000328541
But it always wrongly evaluates to true. Even when I mess with enclosing parens and stuff.
(lldb) expr logDiff > 1/12 * .8
(bool) $4 = true
(lldb) expr logDiff > (1/12 * .8)
(bool) $5 = true
(lldb) expr logDiff > (float)(1/12 * .8)
(bool) $6 = true
I found I have to explicitly spell at least one of them as floats to get the correct result,
(lldb) expr logDiff > (1.f/12.f * .8f)
(bool) $7 = false
(lldb) expr logDiff > (1/12.f * .8)
(bool) $8 = false
(lldb) expr logDiff > (1./12 * .8f)
(bool) $11 = false
(lldb) expr logDiff > (1./12 * .8)
(bool) $12 = false
but I recently read a popular style guide that explicitly eschews these fancier numeric literals, apparently sharing my assumption that the compiler would be smarter than me and Do What I Mean.
Should I always spell my numeric constants like 1.f if they might need to be a float? Sounds superstitious. Help me understand why and when it's necessary?

The expression 1/12 is an integer division, so the result is truncated to zero.
When you write (float) 1/12, you cast the 1 to a float, and the whole expression becomes a floating-point expression.
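To see the difference concretely, here is a minimal C sketch (the values in the comments are what a typical compiler prints):

#include <stdio.h>

int main(void) {
    printf("%d\n", 1 / 12);         // int / int: prints 0
    printf("%f\n", (float) 1 / 12); // the cast promotes the division: prints 0.083333
    printf("%f\n", 1 / 12 * .8);    // 0 * .8: prints 0.000000
    printf("%f\n", 1 / 12.0 * .8);  // floating-point throughout: prints 0.066667
    return 0;
}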

In C, int/int gives an int. If you don't explicitly tell the compiler to convert at least one operand to a float, it will do the integer division and truncate toward zero (in this case, to 0).
I note that the linked style guide actually says "Avoid making numbers a specific type unless necessary." In this case it is necessary, because what you want is for the compiler to do some type conversions.

An expression such as 1 / 4 is treated as integer division and hence has no decimal precision. In this specific case, the result will be 0. You can think of this as int / int implies int.
Should I always spell my numeric constants like 1.f if they might need to be a float? Sounds superstitious. Help me understand why and when it's necessary?
It's not superstitious: you are telling the compiler that these are typed literals (floats, in this example), and the compiler will treat all operations on them as such.
Moreover, you could cast an expression. Consider the following:
float result = ( float ) 1 / 4;
... I am casting 1 to be a float, and hence the result of float / int will be a float. See the usual arithmetic conversions (type promotion).

That is simple. By default, an unsuffixed numeric literal is interpreted as an int.
There are math expressions where that does not matter much. But in the case of division it can drive you crazy: (int) 1 / (int) 12 is not (float) 0.08333 but (int) 0.
1/12.0 would evaluate to 0.08333 (as a double).
Side note: when you switch to float where you used int before, there is one more trap waiting for you: comparing values for equality.
float f = 12/12.0f;
if (f == 1) ... // this may not work out. Never expect a float to equal a specific value exactly; results can vary slightly.
Better is:
if (fabs(f - 1) < 0.0001) ... // this way your comparison is fuzzy enough for the variances that float values may come with (fabs is from <math.h>).
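Here is a compilable version of that fuzzy comparison, as a sketch; fabsf from <math.h> is the float absolute value, and the 0.0001 tolerance is an arbitrary choice you should tune for your data:

#include <math.h>
#include <stdio.h>

int main(void) {
    float f = 12 / 12.0f;
    if (fabsf(f - 1.0f) < 0.0001f) { // tolerance compare instead of ==
        printf("f is close enough to 1\n");
    }
    return 0;
}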

Related

I want to write a function that rewrites a float as a continued fraction

I am trying to make a recursive function that can rewrite a float as a continued fraction, but I am getting an error message that I don't understand.
It seems like it can't store certain numbers exactly in binary, and I don't know how to compare them then. That's my current theory.
condition 'cfa_reg != -1' not met
let rec float2cfrac (x : float) : int list =
    if x - floor x = 0.0 then
        [int x]
    else
        [int x] @ float2cfrac (1.0 / (x - floor x))

printfn "%A" (float2cfrac 3.245) // list
When I run your code, I get a stack overflow.
That means that your condition x - floor x = 0.0 is never met.
Equality with floating point numbers is a tricky thing as there is always a small precision error involved in all calculations. Never use equality, instead calculate until the difference is less than an acceptable error:
abs(x - floor x) < 0.0000000001
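For illustration, here is a rough C sketch of the same algorithm with that tolerance-based stopping condition (this C float2cfrac and the 1e-10 tolerance are illustrative choices, not the asker's code):

#include <math.h>
#include <stdio.h>

// Print the continued-fraction terms of x, stopping once the
// fractional part falls below a tolerance instead of testing == 0.0.
void float2cfrac(double x, double eps) {
    for (;;) {
        double whole = floor(x);
        printf("%d ", (int) whole);
        double frac = x - whole;
        if (frac < eps)
            break;
        x = 1.0 / frac;
    }
    printf("\n");
}

int main(void) {
    float2cfrac(3.245, 1e-10); // terms of 3.245; exact output depends on rounding
    return 0;
}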

how to set precision in Objective-C

In C++, there is a std::setprecision function that can set float/double output precision.
How can I set precision in Objective-C? Also, look at this output:
(lldb) p 10/200
(int) $0 = 0
(lldb) p (float)10/200
(float) $1 = 0.0500000007
The result on line 3 is 0.0500000007. Why is there a '7' in the result? How can I get the result to be 0.05?
These are binary floating-point numbers, and 0.05 cannot be represented exactly as a binary floating-point number. The result cannot ever be exactly 0.05.
In addition, you are quite pointlessly using float instead of double. float has only six or seven digits of precision. Unless you have a very good reason that you can explain, use double, which gives you about 15 digits of precision. You still won't be able to get 0.05 exactly, but the error will be much smaller.
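A quick C sketch of that precision difference (the printed digits are typical for IEEE 754 float and double):

#include <stdio.h>

int main(void) {
    // 0.05 has no exact binary representation; float and double just
    // round it with different precision.
    printf("%.10f\n", 0.05f); // 0.0500000007
    printf("%.17f\n", 0.05);  // 0.05000000000000000
    return 0;
}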
You may use NSNumberFormatter to format numbers in a wide variety of ways, too numerous to list here; see the documentation available from Xcode, and also the Data Formatting Guide.
To get the desired result, place a precision specifier between the % and the f conversion character:
NSString *formattedNumber = [NSString stringWithFormat:@"%.02f", myFloat];
%.02f tells the formatter that you are formatting a float (%f) that should be rounded to two decimal places and padded with 0s.
Example:
%f = 25.000000 // default: six decimal places
%.f = 25 // no decimal places
%.02f = 25.00 // two decimal places
Use:
double A = 0.0500000007;
NSString *b = [NSString stringWithFormat:@"%.02f", A];
double B = [b doubleValue]; // B is 0.05
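The same display-only rounding works in plain C with printf; a small sketch showing that formatting changes only the text, not the stored value:

#include <stdio.h>

int main(void) {
    double a = 0.0500000007;
    printf("%.2f\n", a);  // prints 0.05
    printf("%.10f\n", a); // prints 0.0500000007, the value is unchanged
    return 0;
}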

Short Rounds Up? [duplicate]

Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
//== operator is overridden here to use epsilon compare
if (x == 3.0)
print 'Hello world';
Result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division, by definition, is an operation that produces a rational number, and the integers are a very small subset of the rationals).
While it is common for new programmers to make the mistake of performing integer division when they actually meant floating-point division, in practice integer division is a very common operation. If you assume that people rarely use it, and that every time you do division you'll always need to remember to cast to floating point, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole-number result, you'd want to use the more efficient operation.
Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with its remainder, rather than floating-point division (see the sketch after this answer).
Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.
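To make the base-change example concrete, here is a small sketch in C (print_in_base is a hypothetical helper; C#'s int division behaves the same way):

#include <stdio.h>

// Print n in the given base using integer division and remainder,
// most significant digit first.
void print_in_base(unsigned n, unsigned base) {
    if (n >= base)
        print_in_base(n / base, base); // integer division drops the last digit
    printf("%u", n % base);            // the remainder is the last digit
}

int main(void) {
    print_in_base(13, 2); // prints 1101
    printf("\n");
    return 0;
}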
See the C# specification. There are three types of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of
the result is the largest possible integer that is less than the
absolute value of the quotient of the two operands. The result is zero
or positive when the two operands have the same sign and zero or
negative when the two operands have opposite signs.
I think the reason C# uses this type of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.
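The "rounds toward zero" rule is easiest to see with a negative operand; this little C sketch behaves the same as C# here, since C99 also truncates toward zero:

#include <stdio.h>

int main(void) {
    printf("%d %d\n", 7 / 2, 7 % 2);   //  3  1
    printf("%d %d\n", -7 / 2, -7 % 2); // -3 -1 (truncated toward zero, not -4)
    return 0;
}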
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation and it will return an integer type. If you want floating point division, you must cast one or more of the number to floating point types before dividing them. For instance:
int x = 13;
int y = 4;
float q = (float)x / (float)y;
or, if you are using literals:
float x = 13f / 4f;
Keep in mind, floating points are not precise. If you care about precision, use something like the decimal type, instead.
Since you don't use any suffix, the literals 13 and 4 are interpreted as integer:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since you declare 13 as integer, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so rounding toward zero occurs:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since the floating-point division 13f / 4f results in a float, which cannot be implicitly converted to int.
If you instead merely declare the result as a float:
float x = 13 / 4;
Notice that you'll still divide integers; the int result 3 is then implicitly converted to float, so x will be 3.0. To explicitly make the operands float, use the f suffix (13f, 4f).
Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5
It's just the basic operation. Remember how you learned to divide: in the beginning, we solved 9/6 = 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The / operator, in combination with the % operator, is used to retrieve those values.
The result will always be of the type that has the greater range of the numerator and the denominator. The exceptions are byte and short, which produce int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between floating-point types and the decimal type, so division between them is not allowed. You have to explicitly cast and decide which one you want (Decimal has more precision and a smaller range compared to floating-point types).
As a little trick to know what you are obtaining you can use var, so the compiler will tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
Your compiler will tell you that result is of type int here.

Will dividing an NSUInteger by 2 result in a whole number?

I am trying to do this in Objective-C:
self.nsarray.count/2
If the count is equal to 5, will the result be 5/2 = 2.5 or 5/2 = 2?
I am NSLogging the answer and it only shows me 2. I'm not sure if that's the actual answer or if it's 2 because I am forced to use the %u format to log the answer. Please also explain the 'why' of this result.
Dividing two whole numbers in Objective-C always produces a whole number as the result; in your case it is an NSUInteger, and 2 is the correct result. To get a floating-point result, at least one of your operands must have a floating-point type, or be cast to float. Here are some options:
// Second part of division is float, so result is float as well
float result = self.array.count / 2.0;
// First part of division is float, so result is float as well
float result2 = (float)self.array.count / 2; // or ((float)self.array.count) / 2 for more clarity
Note that casting the result to float isn't valid in your case: in (float)(5 / 2) the division between the two ints happens first, so the result is a whole number of type float (2.0); you only cast the integer result to float.
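The operand-versus-result distinction in a two-line C sketch:

#include <stdio.h>

int main(void) {
    printf("%f\n", (float)(5 / 2)); // 2.000000: the int result 2 is cast
    printf("%f\n", (float) 5 / 2);  // 2.500000: the operand is cast first
    return 0;
}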
Floats are usually formatted in NSLog with %f or %g.

Objective c division of two ints

I'm trying to produce a float by dividing two ints in my program. Here is what I'd expect:
1 / 120 = 0.00833
Here is the code I'm using:
float a = 1 / 120;
However it doesn't give me the result I'd expect. When I print it out I get the following:
inf
Do the following:
float a = 1./120.;
You need to specify that you want to use floating point math.
There are a few ways to do this:
If you really are interested in dividing two constants, you can specify that you want floating point math by making the first constant a float (or double). All it takes is a decimal point.
float a = 1./120;
You don't need to make the second constant a float, though it doesn't hurt anything.
Frankly, this is pretty easy to miss, so I'd suggest adding a trailing zero and some spacing.
float a = 1.0 / 120;
If you really want to do the math with an integer variable, you can type cast it:
float a = (float)i/120;
float a = 1/120;
float b = 1.0/120;
float c = 1.0/120.0;
float d = 1.0f/120.0f;
NSLog(@"Value of A:%f B:%f C:%f D:%f", a, b, c, d);
Output: Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333
For float variable a: int / int yields an int, which you then assign to a float and print, hence 0.000000.
For float variable b: double / int yields a double, which you assign to a float and print, hence 0.008333.
For float variable c: double / double yields a double, hence 0.008333.
The last one is the only pure float division. The literals in the previous lines are of type double: a floating-point literal is stored as a double unless the value is followed by an 'f' to specifically make it a float.
In C (and therefore also in Objective-C), expressions are almost always evaluated without regard to the context in which they appear.
The expression 1 / 120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1 / 120 yields 0. The fact that the result is used to initialize a float object doesn't change the way 1 / 120 is evaluated.
This can be counterintuitive at times, especially if you're accustomed to the way calculators generally work (they usually store all results in floating-point).
As the other answers have said, to get a result close to 0.00833 (which can't be represented exactly, BTW), you need to do a floating-point division rather than an integer division, by making one or both of the operands floating-point. If one operand is floating-point and the other is an integer, the integer operand is converted to floating-point first; there is no direct floating-point by integer division operation.
Note that, as @0x8badf00d's comment says, the result should be 0. Something else must be going wrong for the printed result to be inf. If you can show us more code, preferably a small complete program, we can help figure that out.
(There are languages in which integer division yields a floating-point result. Even in those languages, the evaluation isn't necessarily affected by its context. Python version 3 is one such language; C, Objective-C, and Python version 2 are not.)