Objective-C - Get number of decimals of a double variable - objective-c

Is there a nice way to get the number of decimals of a double variable in Objective-C?
I am struggling for a while to find a way but with no success.
For example 231.44232000 should return 5.
Thanks in advance.

You could, in a loop, multiply by 10 until the fractional part (returned by modf()) is really close to zero. The number of iterations will be the answer you're after. Something like:
// Requires <math.h> for modf() and fabs().
int countDigits(double num) {
    int rv = 0;
    const int maxDigits = 8; // stop before representation noise shows up
    double intpart, fracpart;
    fracpart = modf(num, &intpart);
    while ((fabs(fracpart) > 1e-9) && (rv < maxDigits)) {
        num *= 10;
        rv++;
        fracpart = modf(num, &intpart);
    }
    return rv;
}

Is there a nice way to get the number of decimals of a double variable in Objective-C?
No. For starters, a double stores a number in binary, so there may not even be an exact binary representation corresponding to your decimal number. There's also no record of the number of significant decimal digits; if that's important, you'll need to track it separately.
You might want to look into using NSDecimalNumber if you need to store an exact representation of a decimal number. You could create your own subclass and add the ability to store and track significant digits.

Related

VB.NET This Program Output [duplicate]

for (float i = -1; i <= 1; i+=0.1f)
{
Console.WriteLine(i);
}
These are the results:
-1
-0.9
-0.8
-0.6999999
-0.5999999
-0.4999999
-0.3999999
-0.2999999
-0.1999999
-0.09999993
7.450581E-08
0.1000001
0.2000001
0.3000001
0.4000001
0.5000001
0.6000001
0.7000001
0.8000001
0.9000002
Because a float is not an exact decimal number but a binary floating point number. Use decimal instead.
See the Wikipedia article on floating point for reference.
Float and double are not able to represent decimal values exactly. Have a look at Wikipedia for how they're implemented.
You may want to use Decimal instead.
You need to read this:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Use integer numbers for indexing purposes. And if you need float values inside the loop, calculate it there:
for (int i = -10; i <= 10; i++)
{
Console.WriteLine(i / (float) 10);
}
float represents a 32-bit floating point number. It cannot accurately represent these values. Here is another must-read article about floating point, specifically for .NET: http://csharpindepth.com/Articles/General/FloatingPoint.aspx
This is because you're using floating point. Floating point calculations are not exact in decimal terms, because your computer works with binary numbers internally, not decimal ones. Good information about the problem is here: http://floating-point-gui.de/

Objective C, division between floats not giving an exact answer

Right now I have a line of code like this:
float x = (([self.machine micSensitivity] - 0.0075f) / 0.00025f);
Where [self.machine micSensitivity] is a float containing the value 0.010000
So,
0.01 - 0.0075 = 0.0025
0.0025 / 0.00025 = 10.0
But in this case, it keeps returning 9.999999
I'm assuming there's some kind of rounding error, but I can't seem to find a clean way of fixing it. micSensitivity is incremented/decremented by 0.00025, and that formula is meant to return a clean integer value for the user to reference, so I'd rather get the programming right than just add 0.000000000001.
Thanks.
that formula is meant to return a clean integer value for the user to reference
If that is really important to you, then why do you not multiply all the numbers in this story by 10000, coerce to int, and do integer arithmetic?
Or, if you know that the answer is arbitrarily close to an integer, round to that integer and present it.
Floating-point arithmetic is binary, not decimal. It will almost always give rounding errors, and you need to take that into account. float has about six digits of precision; double has about 15. By using float you throw away nine digits of precision for no reason.
Now think: What do you want to display? What do you want to display if the result of your calculation is 9.999999999? What would you want to display if the result is 9.538105712?
None of the numbers in your question, except 10.0, can be exactly represented in a float or a double on iOS. If you want to do float math with those numbers, you will have rounding errors.
You can round your result to the nearest integer easily enough:
float x = rintf((self.machine.micSensitivity - 0.0075f) / 0.00025f);
Or you can just multiply all your numbers, including the allowed values of micSensitivity, by 4000 (which is 1/0.00025), and thus work entirely with integers.
Or you can change the allowed values of micSensitivity so that its increment is a fraction whose denominator is a power of 2. For example, if you use an increment of 0.000244140625 (which is 2^-12), and change 0.0075 to 0.00732421875 (which is 30 * 2^-12), you should get exact results, as long as your micSensitivity is within the range ±4096 (since 4096 is 2^12 and a float has 24 bits of significand).
The code you have posted is correct and functioning properly. This is a known side effect of using floating point arithmetic. See the wiki on floating point accuracy problems for a full explanation as to why.
There are several ways to work around the problem depending on what you need to use the number for.
If you need to compare two floats, then most everything works OK: less than and greater than do what you would expect. The only trouble is testing if two floats are equal.
// If x and y are within a very small number from each other then they are equal.
if (fabs(x - y) < verySmallNumber) { // verySmallNumber is usually called epsilon.
// x and y are equal (or at least close enough)
}
If you want to print a float, then you can specify a precision to round to.
// Get a string of the x rounded to five digits of precision.
NSString *xAsAString = [NSString stringWithFormat:@"%.5f", x];
9.999999… (repeating) is equal to 10. Here is a proof:
Let x = 9.999999…; then 10x = 99.999999…, so 10x - x = 9x = 90, and x = 10.

Objective C float is not showing all decimals

I'm passing a float to a method but it's not showing all decimals. I have no idea why this is happening.
Here's an example:
[[LocationApiCliente sharedInstance] nearPlacesUsingLatitude:-58.3645248331830402 andLongitude:-34.6030467894227982];
Then:
- (BOOL)nearPlacesUsingLatitude:(double)latitude andLongitude:(double) longitude {
NSString *urlWithCoords = [NSString stringWithFormat:@"%@&lat=%f&long=%f", CountriesPath, latitude, longitude];
Printing urlWithCoords will result in:
format=json&lat=-58.364525&long=-34.603047
More on this: here's what I'm getting from the debugger:
(lldb) p -3.13419834918349f
(float) $4 = -3.1342
(lldb) p -3.13419834918349
(double) $5 = -3.1342
Any ideas?
Change the %fs in your formatting strings to specify the desired number of decimals, e.g., %.16f.
Note that the number of decimals shown does not guarantee that they are correct, but at least they won't be truncated.
Overall the problem is that floating point numbers carry no information about their precision and cannot represent some decimal values exactly, so formatting cannot, in the general case, "autodetect" the number of decimals. You just need to override the default by specifying the desired number, and accept that it isn't representative of location accuracy. Since you seem to be passing the values to another program via the URL, this shouldn't be a problem; a larger number of decimals is better.
It looks like CoreLocation uses doubles to represent degrees, so I'd be surprised if there's any more geographic precision to be found on the device.
But, in general, if you want to represent higher precision than double, you can use long double in Objective-C like this...
long double myPi = 3.141592653589793L;
NSLog(@"%16.16Lf", myPi);

What is the range of a double in Objective-C?

I have a program where I am calculating large distances. I then convert the distance (a double) to a string using NSNumberFormatter's stringFromNumber: to get the commas inserted (i.e. 1,234,567). This works fine as long as the distance is below 2,000,000,000. Any time I exceed 2 billion I get a negative number that is also not the correct distance. From checking the documentation, I seem to be well within the range of a double.
You can check your own float.h for DBL_MAX to find out. I got a value of 1.797693e+308 from:
#include <stdio.h>
#include <float.h>
int main ( void ) {
printf("Max double %e \n", DBL_MAX);
return 0;
}
A double can hold values from negative infinity to infinity, but it can only hold about 15 significant decimal digits accurately. Also, keep in mind that if you cast it to an int and the double holds something larger than 2,147,483,647 (the largest 32-bit int), the int will overflow to a negative number.

Having a hard time working with floats

I wasn't really sure what to name the title.
I'm checking whether the values of two floats are the same. If I use printf() or NSLog(), both values print as 0.750000. However, a line like if (value1 == value2) { return TRUE; } doesn't work. I assume that in reality the floats differ somewhere beyond the decimals that printf()/NSLog() print by default.
I tried googling for a way to cut a float down to fewer decimal places, or simply convert it to another data type, but I haven't had any luck so far.
You might want to peek at float.h (http://www.gnu.org/software/libc/manual/html_node/Floating-Point-Parameters.html) for a non-arbitrary definition of epsilon. In particular, FLT_EPSILON and FLT_DIG.
You can decide on an epsilon, a maximum difference under which two numbers are considered equal. Like:
#define EPSILON 0.0001
if (fabs(floatA - floatB) < EPSILON) { return TRUE; }
fabs(x) returns the absolute value of the double x.
You may also want to use the double data type instead of float (a double is twice the size of a float).
Whenever you compare floating point numbers you need to use a tolerance:
if (fabs(value1 - value2) < epsilon)
{
}
where epsilon is a small value such as 0.000001.