I'm converting some information I'm receiving as a string into a float so I can get the sum of all the values.
The problem is what happens when I convert the string to a float, for example:
NSString *myString = @"13502.63";
float f = [myString floatValue];
NSLog(@"Converted %f", f);
The result of "f" is 13502.629883
This is fine for some values, but when I have to add up a large number of these values, the extra decimals make the result incorrect.
Could anybody help me, please?
Thanks
If you want accuracy you should not use float. Use NSDecimalNumber.
NSString *myString = @"13502.63";
NSDecimalNumber *number = [NSDecimalNumber decimalNumberWithString:myString];
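A minimal sketch of summing several values this way (the strings and variable names here are just illustrative):

NSArray *strings = [NSArray arrayWithObjects:@"13502.63", @"2.37", @"100.00", nil];
NSDecimalNumber *sum = [NSDecimalNumber zero];
for (NSString *s in strings) {
    sum = [sum decimalNumberByAdding:[NSDecimalNumber decimalNumberWithString:s]];
}
// exact decimal arithmetic, so no binary rounding error accumulates in the sum
NSLog(@"Sum = %@", sum);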
Unfortunately all floating point types, in any language, will have this problem, because the value has to be converted into an underlying binary format that cannot represent most decimal fractions exactly.
Have you considered using NSDecimalNumber?
These will be much slower than a float, but if that is not a problem, then they are much more accurate for such calculations.
If you need speed for some reason, would a double or long-double be accurate enough?
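As a rough illustration of the difference in precision (the exact digits printed may vary slightly):

float f = 13502.63f;
double d = 13502.63;
NSLog(@"float:  %f", f);    // roughly 13502.629883
NSLog(@"double: %f", d);    // 13502.630000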
Most decimal values have no exact representation as a float; that is the reason why "13502.63" is converted to 13502.629883, the closest float to the original number.
So I don't think there is an easy solution with float. You should try NSDecimalNumber. I don't know about its performance, but it will give you an exact representation.
I'm having a serious dispute with NSNumberFormatter, and even after going through its extensive documentation, I haven't quite been able to wrap my head around a pretty straightforward issue that I encountered. I hope you guys can help me out.
What I have: an NSDecimalNumber representing a calculation result, displayed in a UITextField
What I need: Scientific notation of that result.
What I'm doing:
-(void)setScientificNotationForTextField:(UITextField*)tf Text:(NSString*)text {
    NSString* textBefore = text;

    // use scientific notation, i.e. NSNumberFormatterScientificStyle
    NSNumberFormatter* formatter = [[NSNumberFormatter alloc] init];
    //[formatter setGeneratesDecimalNumbers:YES];
    [formatter setNumberStyle:NSNumberFormatterScientificStyle];

    NSDecimalNumber* number = (NSDecimalNumber*)[formatter numberFromString:text];
    tf.text = [number descriptionWithLocale:[[Utilities sharedUtilities] USLocale]];
    NSString* textAfter = tf.text;

    // DEBUG
    NSLog(@"setScientificNotation | text before = %@, text after = %@", textBefore, textAfter);

    [formatter release];
}
What happens:
A certain result may be 0.0099. textBefore will hold that correct value. If I don't tell the formatter to generate decimal numbers (commented out in the snippet above), it creates a plain NSNumber instead of an NSDecimalNumber, which produces a false result and turns textAfter into 0.009900000000000001 - a rounding error due to the reduced precision of NSNumber compared to NSDecimalNumber.
If I do tell the formatter to generate decimals, it still produces the wrong result. What's more, where before it would use exponent notation (e.g. "1.23456e-10"), it now generates (and thus displays) the full decimal number, which is not what I want.
Again, I'd like the formatter to use NSDecimalNumber so it doesn't falsify results, plus have exponent notation where necessary.
Am I using the class wrong? Did I misinterpret the documentation? Can someone explain why this happens and how I can create the behavior I want? I will of course continue researching and update if I find anything.
You can't just cast an NSNumber to an NSDecimalNumber and expect it to work. If your number is not too complex, you can ditch NSNumberFormatter and try using this instead:
NSDecimalNumber* number = [NSDecimalNumber decimalNumberWithString:text];
That will give you an actual NSDecimalNumber instance, with its precision.
Unfortunately, setGeneratesDecimalNumbers: doesn't work properly. It's a known bug.
If your number is too complex to work with decimalNumberWithString:, you're probably out of luck with Apple's APIs. Your only options are either parsing the string manually into something NSDecimalNumber can understand or performing some post-processing on the imprecise value given to you by NSNumberFormatter.
Finally, if you really want a number in scientific notation, why not keep using the number formatter you already created? Just call stringFromNumber: to get the formatted value.
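A minimal sketch of that two-step approach, reusing text and tf from the question (the exact output depends on the formatter's defaults):

NSDecimalNumber *number = [NSDecimalNumber decimalNumberWithString:text];
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
[formatter setNumberStyle:NSNumberFormatterScientificStyle];
// something like "9.9E-3" for 0.0099
tf.text = [formatter stringFromNumber:number];
[formatter release];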
I have a value being stored as an NSDecimalNumber and when I convert it to a double it's losing precision.
For the current piece of data I'm debugging against, the value is 0.2676655. When I send it a doubleValue message, I get 0.267665. It's truncating instead of rounding and this is wreaking havoc with some code that uses hashes to detect data changes for a syncing operation.
The NSDecimalNumber instance comes from a third-party framework, so I can't just replace it with a primitive double. Ultimately it gets inserted into an NSMutableString, so I'm after a string representation; however, it needs to be passed through a format specifier of "%.6lf". Basically, I need six digits after the decimal so it looks like 0.267666.
How can I accomplish this without losing precision? If there's a good way to format the NSDecimalNumber without converting to a double that will work as well.
The NSDecimalNumber instance comes from a third-party framework so I
can't just replace it with a primitive double.
Yes you can. NSDecimalNumber is an immutable subclass of NSNumber, which makes the conversion almost too easy:
double myDub = [[NSDecimalNumber decimalNumberWithDecimal:[[NSNumber numberWithDouble:0.2676655] decimalValue]] doubleValue];
Ultimately it gets inserted into an NSMutableString so I'm after a
string representation, however it needs to be passed through a format
specifier of "%.6lf", basically I need six digits after the decimal so
it looks like 0.267666.
Converting to double unfortunately doesn't round the way you want, but getting a string value that's off by one millionth is not that big of a deal (I hope):
NSDecimalNumber *num = [NSDecimalNumber decimalNumberWithDecimal:[[NSNumber numberWithDouble:((double)0.2676655)] decimalValue]];
NSString *numString = [NSString stringWithFormat:@"%.6lf", [num doubleValue]];
NSLog(@"%@", numString);
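If you'd rather avoid the double round-trip entirely, NSDecimalNumber can round itself to six places before you build the string; a rough sketch using NSDecimalNumberHandler (a different approach from the snippet above):

NSDecimalNumberHandler *handler =
    [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundPlain
                                                            scale:6
                                                 raiseOnExactness:NO
                                                  raiseOnOverflow:NO
                                                 raiseOnUnderflow:NO
                                              raiseOnDivideByZero:NO];
// 0.2676655 rounds to 0.267666 with NSRoundPlain
NSDecimalNumber *rounded = [num decimalNumberByRoundingAccordingToBehavior:handler];
NSString *numString = [rounded stringValue];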
I think that you are on the wrong path and got a bit lost along the way.
First of all, keep in mind that long double is not supported in Objective-C, so you might be better off using %f instead of %lf.
(See "Type Encodings" in the Objective-C Runtime Programming Guide in the documentation library.)
I would also rather expect the value to be shown as truncated, since doubleValue returns an approximate value, even though the range you are using is still well within the correct range.
You should use a simple formatter instead of moving numbers around, like:
// first line as an example for your real value
NSDecimalNumber *value = [NSDecimalNumber decimalNumberWithString:@"0.2676655"];
NSNumberFormatter *numFmt = [[NSNumberFormatter alloc] init];
[numFmt setMaximumFractionDigits:6];
[numFmt setMinimumFractionDigits:6];
[numFmt setMinimumIntegerDigits:1];
NSLog(@"Formatted number %@", [numFmt stringFromNumber:value]);
This has the added benefit of being locale aware if desired. The output of the number formatter is the string you want.
I have a float value that I would like to show in a format string: if it corresponds to an int, show the integer; if not, show it with one decimal.
Like this:
3.1
3
2.9
2.8
For now I'm stuck; conceptually, I'd do something like this:
float myFloat = 3.1;
float mySecondFloat = 3;
[NSString stringWithFormat:@"%g %g", myFloat, mySecondFloat];
My question is:
1/ The "%g" format works in most cases, but sometimes I get a result like "0.600001" when in reality it should be 0.6, because all I do is 0.7 - 0.1.
Is there a kind of cast for a float to one decimal, or maybe a bitwise operation to get rid of the final imprecision, or another way to make it work in 100% of cases?
Thanks for your answers.
If you need absolute precision when working with decimal numbers, you may consider using the NSDecimalNumber class.
Number and Value Programming Topics: Using Decimal Numbers
Otherwise, the %.1g format specifier will be OK.
You have to use this code:
float myFloat = 3.1;
float mySecondFloat = 3;
[NSString stringWithFormat:@"%.1f %.1f", myFloat, mySecondFloat];
EDIT:
If I really want to print the integer value of a float, I would do it this way (ugly code):
int intValue = myFloatNumber / 1;
NSString *string;
if (myFloatNumber == intValue)
{
    string = [NSString stringWithFormat:@"%.0f", myFloatNumber];
}
else
{
    string = [NSString stringWithFormat:@"%.1f", myFloatNumber];
}
Assigning the result of the division by 1 to an int automatically truncates your float to an int.
Once you have the NSString *string, you can concatenate it with your other string.
Seeing the other answers, it seems there is no standard C format specifier to achieve this.
I went with Nicolas's answer (NSDecimalNumber), hence the accepted flag. It worked fine indeed, but in my (very simple) case it might be overkill. Giving it a second thought, if I had to do it again, I think I would only use an NSNumberFormatter on an NSNumber (created with numberWithFloat:), as sketched below.
If it can help someone..
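For reference, a rough sketch of that NSNumberFormatter approach (names and values are just illustrative):

NSNumberFormatter *fmt = [[NSNumberFormatter alloc] init];
[fmt setNumberStyle:NSNumberFormatterDecimalStyle];
[fmt setMinimumFractionDigits:0];   // 3 is shown as "3"
[fmt setMaximumFractionDigits:1];   // 3.1 is shown as "3.1", 0.7 - 0.1 as "0.6"
NSString *first = [fmt stringFromNumber:[NSNumber numberWithFloat:3.1f]];
NSString *second = [fmt stringFromNumber:[NSNumber numberWithFloat:3.0f]];
[fmt release];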
I'm using the following code in Xcode to work out how much to initially zoom in on a UIImage in a scroll view, but it keeps returning zero.
int heightRatio = self.scrollView.bounds.size.height / self.imageView.image.size.height;
The strange thing is that if I do a subtraction it works, just not division.
Xcode describes both height variables as a "GLuint", some graphics version of int, I think.
I've NSLogged both numbers on the right-hand side, and neither is zero.
I've tried defining heightRatio as a GLuint as well, with no joy, and also converting to NSNumber but it's all just getting a bit messy.
Am I missing something?
Thanks for any feedback.
You should declare heightRatio as a double (and round later if you want to have an int).
Both self.scrollView.bounds.size.height and self.imageView.image.size.height are of type CGFloat. If you declare heightRatio as CGFloat as well, you would correctly capture division results between 0 and 1. If you need to make it an integer later, use ceil(), trunc(), or round() to round up, down, or to the nearest whole number.
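For example (the sizes in the comment are hypothetical):

CGFloat heightRatio = self.scrollView.bounds.size.height / self.imageView.image.size.height;
// e.g. 480.0 / 1024.0 gives 0.46875 instead of 0
NSInteger wholeRatio = (NSInteger)round(heightRatio);   // only if you really need an integer later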
Sorry, but a double may not be enough; the only way to do the division without losing data is to use NSDecimalNumber. Here is some sample code that shows the percentage of progress in a loop:
NSDecimalNumber *Percentuale =
    [[[NSDecimalNumber decimalNumberWithString:[NSString stringWithFormat:@"%i", ClienteNr]]
        decimalNumberByDividingBy:[NSDecimalNumber decimalNumberWithString:
            [NSString stringWithFormat:@"%i", [[Globals ClientiDaMostrareMappa] count]]]]
        decimalNumberByMultiplyingBy:[NSDecimalNumber decimalNumberWithString:@"100"]];
[Globals.WaitAlert setTitle:[NSString stringWithFormat:@"Localizzazione Clienti!\nAttendere... %i%%", [Percentuale intValue]]];
I am doing a simple:
float x = 151.185436;
printf("x=%f",x);
and the result is
x=151.185440
What's wrong here? I want to retain and print my original value of 151.185436.
Thanks
Amarsh
floats just aren't very accurate. Try double. And read this: http://docs.sun.com/source/806-3568/ncg_goldberg.html
A float can only hold 32 bits (4 bytes) of information about your number - it can't store as many decimal places as you need it to. 151.18544 is as close to your value as the float could get without running out of bits.
For the precision you want, you need to use a double instead of a float.
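For example, the same snippet with a double keeps enough precision for this value:

double x = 151.185436;
printf("x=%f\n", x);    // prints x=151.185436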
Floats are inaccurate; use doubles. Also, since you're using Objective-C and not straight C, it might be better to use Objective-C idioms for this:
NSNumber *myNumber = [NSNumber numberWithDouble:151.185436];
NSLog(@"myNumber = %@", myNumber);