Take the following piece of code:
NSError *error;
NSString *myJSONString = @"{ \"foo\" : 0.1}";
NSData *jsonData = [myJSONString dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *results = [NSJSONSerialization JSONObjectWithData:jsonData options:0 error:&error];
My question is, is results[@"foo"] an NSDecimalNumber, or something with finite binary precision like a double or float? Basically, I have an application that requires the lossless accuracy that comes with an NSDecimalNumber, and I need to ensure that the JSON deserialization doesn't introduce rounding because of doubles/floats and so on.
E.g. if it was interpreted as a float, I'd run into problems like this with precision:
float baz = 0.1;
NSLog(#"baz: %.20f", baz);
// prints baz: 0.10000000149011611938
I've tried interpreting foo as an NSDecimalNumber and printing the result:
NSDecimalNumber *fooAsDecimal = results[@"foo"];
NSLog(@"fooAsDecimal: %@", [fooAsDecimal stringValue]);
// prints fooAsDecimal: 0.1
But then I found that calling stringValue on an NSDecimalNumber doesn't print all significant digits anyway, e.g...
NSDecimalNumber *barDecimal = [NSDecimalNumber decimalNumberWithString:@"0.1000000000000000000000000000000000000000000011"];
NSLog(@"barDecimal: %@", barDecimal);
// prints barDecimal: 0.1
...so printing fooAsDecimal doesn't tell me whether results[@"foo"] was at some point rounded to finite precision by the JSON parser or not.
To be clear, I realise I could use a string rather than a number in the JSON representation to store the value of foo, i.e. "0.1" instead of 0.1, and then use [NSDecimalNumber decimalNumberWithString:results[@"foo"]]. But, what I'm interested in is how the NSJSONSerialization class deserializes JSON numbers, so I know whether this is really necessary or not.
NSJSONSerialization (and JSONSerialization in Swift) follow the general pattern:
If a number has only an integer part (no decimal or exponent), attempt to parse it as a long long. If that doesn't overflow, return an NSNumber with long long.
Attempt to parse a double with strtod_l. If it doesn't overflow, return an NSNumber with double.
In all other cases, attempt to use NSDecimalNumber which supports a much larger range of values, specifically a mantissa up to 38 digits and exponent between -128...127.
If you look at other examples people have posted you can see that when the value exceeds the range or precision of a double you get an NSDecimalNumber back.
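As a quick, hedged check (the behaviour is undocumented and may vary by OS version), you can parse a literal whose precision exceeds a double and inspect the class that comes back:
NSString *precise = @"{ \"foo\" : 0.1000000000000000000000000000000000000000000011 }";
NSData *preciseData = [precise dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *parsed = [NSJSONSerialization JSONObjectWithData:preciseData options:0 error:NULL];
// On recent SDKs this has been observed to print NSDecimalNumber, but do not rely on it.
NSLog(@"%@", [parsed[@"foo"] class]);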
The short answer is that you should not serialize to JSON if you require NSDecimalNumber levels of precision. JSON has only one number format: double, which has inferior precision to NSDecimalNumber.
The long answer, which is of academic interest only, because the short answer is also the right answer, is "Not necessarily." NSJSONSerialization does sometimes deserialize as NSDecimalNumber, but this is undocumented, and I have not determined the exact set of circumstances under which it does. For instance:
BOOL boolYes = YES;
int16_t int16 = 12345;
int32_t int32 = 2134567890;
uint32_t uint32 = 3124141341;
unsigned long long ull = 312414134131241413ull;
double dlrep = 1.5;
double dlmayrep = 1.1234567891011127;
float fl = 3124134134678.13;
double dl = 13421331.72348729 * 1000000000000000000000000000000000000000000000000000.0;
long long negLong = -632414314135135234;
unsigned long long unrepresentable = 10765432100123456789ull;
dict[#"bool"] = #(boolYes);
dict[#"int16"] = #(int16);
dict[#"int32"] = #(int32);
dict[#"dlrep"] = #(dlrep);
dict[#"dlmayrep"] = #(dlmayrep);
dict[#"fl"] = #(fl);
dict[#"dl"] = #(dl);
dict[#"uint32"] = #(uint32);
dict[#"ull"] = #(ull);
dict[#"negLong"] = #(negLong);
dict[#"unrepresentable"] = #(unrepresentable);
NSData *data = [NSJSONSerialization dataWithJSONObject:dict options:NSJSONWritingPrettyPrinted error:nil];
NSDictionary *dict_back = (NSDictionary *)[NSJSONSerialization JSONObjectWithData:data options:NSJSONReadingMutableContainers error:nil];
and in the debugger:
(lldb) po [dict_back[#"bool"] class]
__NSCFBoolean
(lldb) po [dict_back[#"int16"] class]
__NSCFNumber
(lldb) po [dict_back[#"int32"] class]
__NSCFNumber
(lldb) po [dict_back[#"ull"] class]
__NSCFNumber
(lldb) po [dict_back[#"fl"] class]
NSDecimalNumber
(lldb) po [dict_back[#"dl"] class]
NSDecimalNumber
(lldb) po [dict_back[#"dlrep"] class]
__NSCFNumber
(lldb) po [dict_back[#"dlmayrep"] class]
__NSCFNumber
(lldb) po [dict_back[#"negLong"] class]
__NSCFNumber
(lldb) po [dict_back[#"unrepresentable"] class]
NSDecimalNumber
So make of that what you will. You should definitely not assume that if you serialize an NSDecimalNumber to JSON you will get an NSDecimalNumber back out.
But, again, you should not store NSDecimalNumbers in JSON.
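If you do need to round-trip exact decimal values through JSON anyway, the string approach mentioned in the question is the reliable one; here is a minimal sketch of that idea:
NSDictionary *payload = @{ @"foo" : @"0.1" };
NSData *json = [NSJSONSerialization dataWithJSONObject:payload options:0 error:NULL];
NSDictionary *decoded = [NSJSONSerialization JSONObjectWithData:json options:0 error:NULL];
// The value survives as a string, so NSDecimalNumber parses it losslessly
// (within its 38-digit mantissa limit).
NSDecimalNumber *foo = [NSDecimalNumber decimalNumberWithString:decoded[@"foo"]];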
I had the same problem, except I'm using Swift 3. I made a patched version of the JSONSerialization class that parses all numbers as Decimals. It can only parse/deserialize JSON and does not have any serialization code. It's based on Apple's open-source re-implementation of Foundation in Swift.
To answer the question in the title: No, it doesn't, it creates NSNumber objects. You can easily test this:
NSArray *a = @[[NSDecimalNumber decimalNumberWithString:@"0.1"]];
NSData *data = [NSJSONSerialization dataWithJSONObject:a options:0 error:NULL];
a = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
NSLog(@"%@", [a[0] class]);
will print __NSCFNumber.
You can convert that NSNumber object to an NSDecimalNumber with [NSDecimalNumber decimalNumberWithDecimal:[number decimalValue]], but according to the docs for decimalValue
The value returned isn't guaranteed to be exact for float and double values.
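For completeness, that conversion looks like this (a small sketch reusing the array a from the snippet above):
NSNumber *number = a[0];
NSDecimalNumber *asDecimal = [NSDecimalNumber decimalNumberWithDecimal:[number decimalValue]];
NSLog(@"%@", asDecimal);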
I am trying to display a picture from a byte-array produced by a web service. Printing out a description it looks like this:
("-119",80,78,71,13,10,26,10,0,0,0,13,3 ... )
From the header it is clear that it's a png encoded in signed integers. It is an __NSCFArray having __NSCFNumber elements.
My code in Objective-C (based on much googling):
NSData *data = [NSData dataWithBytes:(const void *)myImageArray length:[myImageArray count]];
UIImage *arrayImage = [UIImage imageWithData:data];
I receive a null UIImage pointer.
I also tried to converting it to unsigned NSNumbers first and then passing it to NSData, though perhaps I did not do this correctly. What am I doing wrong?
You cannot simply cast an NSArray of NSNumber into binary data. Both NSArray and NSNumber are objects; they have their own headers and internal structure that is not the same as the original string of bytes. You'll need to convert it byte-by-byte with something along these lines:
NSArray *bytes = @[@1, @2, @3];
NSMutableData *data = [NSMutableData dataWithLength:bytes.count];
for (NSUInteger i = 0; i < bytes.count; i++) {
    char value = [bytes[i] charValue];
    [data replaceBytesInRange:NSMakeRange(i, 1) withBytes:&value];
}
char is a signed int8_t, which appears to be the kind of data you're working with. It is often used to mean "an ASCII character," but in C it is commonly also used to mean "byte."
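Once the bytes are reassembled this way, the NSData can go straight to UIImage, as in the question (assuming the array really does hold valid PNG bytes):
UIImage *image = [UIImage imageWithData:data];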
I've been using the following method to parse NSString's into NSNumber's:
// (a category method on NSString)
-(NSNumber*) tryParseAsNumber {
    NSNumberFormatter* formatter = [NSNumberFormatter new];
    [formatter setNumberStyle:NSNumberFormatterDecimalStyle];
    return [formatter numberFromString:self];
}
And I had tests verifying that this was working correctly:
test(#"".tryParseAsNumber == nil);
...
test([@(NSUIntegerMax).description.tryParseAsNumber isEqual:@(NSUIntegerMax)]);
...
The max-value test started failing when I switched to testing on an iPhone 6, probably because NSUInteger is now 64 bits instead of 32 bits. The value returned by the formatter is now the double 1.844674407370955e+19 instead of the uint64_t 18446744073709551615.
Is there a built-in method that succeeds exactly for all int64s and unsigned int64s, or do I have to implement one myself?
+ [NSNumber numberWithLongLong:]
+ [NSNumber numberWithUnsignedLongLong:]
Have you tried these?
EDIT
I'm not at all certain what it is you'd ultimately do with your instances of NSNumber, but consider that NSDecimalNumber seems to do exactly what you want:
NSDecimalNumber *decNum = [NSDecimalNumber decimalNumberWithString:@"18446744073709551615"];
NSLog(@"%@", decNum);
which yields:
2014-09-21 15:11:25.472 Test[1138:812724] 18446744073709551615
Here's another thing to consider: NSDecimalNumber "is a" NSNumber, as it's a subclass of the latter. So it would appear that, whatever you can do with NSNumber, you can do with NSDecimalNumber.
trudyscousin's answer allowed me to figure it out.
NSDecimalNumber's decimalNumberWithString: is capable of parsing with full precision, but it lets some bad inputs through (e.g. "88ffhih" gets parsed as 88). On the other hand, NSNumberFormatter's numberFromString: always detects bad inputs but loses precision. They have opposite weaknesses.
So... just do both. For example, here's a method that should parse representable NSUIntegers but nothing else:
+(NSNumber*) parseAsNSUIntegerElseNil:(NSString*)decimalText {
    // NSNumberFormatter.numberFromString is good at noticing bad inputs, but loses precision for large values
    // NSDecimalNumber.decimalNumberWithString has perfect precision, but lets bad inputs through sometimes (e.g. "88ffhih" -> 88)
    // We use both to get both accuracy and detection of bad inputs
    NSNumberFormatter* formatter = [NSNumberFormatter new];
    [formatter setNumberStyle:NSNumberFormatterDecimalStyle];
    if ([formatter numberFromString:decimalText] == nil) {
        return nil;
    }
    NSNumber* value = [NSDecimalNumber decimalNumberWithString:decimalText];
    // Discard values not representable by NSUInteger
    if (![value isEqual:@(value.unsignedIntegerValue)]) {
        return nil;
    }
    return value;
}
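A quick usage sketch (the class that hosts this method isn't shown above, so NumberParsing below is just a placeholder name):
NSNumber *max  = [NumberParsing parseAsNSUIntegerElseNil:@"18446744073709551615"]; // full 64-bit value, no precision loss
NSNumber *junk = [NumberParsing parseAsNSUIntegerElseNil:@"88ffhih"];              // nil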
I'm having problems with converting NSNumber to string and string to NSNumber.
Here's a sample problem:
NSString *stringValue = @"9.2";
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
NSLog(@"stringvalue:%@", [[formatter numberFromString:stringValue] stringValue]);
Output will be:
stringvalue:9.199999999999999
I need to retrieve the original value, where, in the example should be 9.2.
By contrast, when the original string is 9.4, the output is still 9.4.
Do you have any idea how to retrieve the original string value without NSNumber doing anything about it?
You are discovering that floating point numbers can't always be represented exactly. There are numerous posts about such issues.
If you need to get back to the original string, then keep the original string as your data and only convert to a number when you need to perform a calculation.
You may want to look into NSDecimalNumber. This may better fit your needs.
NSString *numStr = @"9.2";
NSDecimalNumber *decNum = [NSDecimalNumber decimalNumberWithString:numStr];
NSString *newStr = [decNum stringValue];
NSLog(@"decNum = %@, newStr = %@", decNum, newStr);
This gives 9.2 for both values.
If I do the following in Objective-C:
NSString *result = [NSString stringWithFormat:@"%1.1f", -0.01];
It will give the result @"-0.0".
Does anybody know how I can force a result of @"0.0" (without the "-") in this case?
EDIT:
I tried using NSNumberFormatter, but it has the same issue. The following also produces @"-0.0":
double value = -0.01;
NSNumberFormatter *numberFormatter = [[NSNumberFormatter alloc] init];
[numberFormatter setNumberStyle:NSNumberFormatterDecimalStyle];
[numberFormatter setMaximumFractionDigits:1];
[numberFormatter setMinimumFractionDigits:1];
NSString *result = [numberFormatter stringFromNumber:[NSNumber numberWithDouble:value]];
I wanted a general solution, independent of the configuration of the number formatter.
I've used a category to add the functionality to NSNumberFormatter:
@interface NSNumberFormatter (PreventNegativeZero)
- (NSString *)stringFromNumberWithoutNegativeZero:(NSNumber *)number;
@end
With the implementation:
@implementation NSNumberFormatter (PreventNegativeZero)
- (NSString *)stringFromNumberWithoutNegativeZero:(NSNumber *)number
{
    NSString *const string = [self stringFromNumber: number];
    NSString *const negZeroString = [self stringFromNumber: [NSNumber numberWithFloat: -0.0f]];
    if ([string isEqualToString: negZeroString])
    {
        NSString *const posZeroString = [self stringFromNumber: [NSNumber numberWithFloat: 0.0]];
        return posZeroString;
    }
    return string;
}
@end
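A brief usage sketch (the fraction-digit settings are just an example configuration):
NSNumberFormatter *formatter = [NSNumberFormatter new];
formatter.minimumFractionDigits = 1;
formatter.maximumFractionDigits = 1;
// Would otherwise format as @"-0.0"; the category returns @"0.0" instead.
NSLog(@"%@", [formatter stringFromNumberWithoutNegativeZero:@(-0.01)]);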
How it works
The key feature is to ask the number formatter how it will format -0.0f (i.e., floating point minus zero) as an NSString so that we can detect this and take remedial action.
Why do this? Depending on the formatter configuration, -0.0f could be formatted as: @"-0", @"-0.0", @"-000", @"-0ºC", @"£-0.00", @"----0.0", @"(0.0)", or really pretty much anything. So, we ask the formatter how it would format -0.0f using the line: NSString *const negZeroString = [self stringFromNumber: [NSNumber numberWithFloat: -0.0f]];
Armed with the undesired -0.0f string, when an arbitrary input number is formatted, it can be tested to see if it matches the undesirable -0.0f string.
The second important feature is that the number formatter is also asked to supply the replacement positive zero string. This is necessary so that as before, its formatting is respected. This is done with the line: [self stringFromNumber: [NSNumber numberWithFloat: 0.0]]
An optimisation that doesn't work
It's tempting to perform a numerical test yourself for whether the input number will be formatted as the -0.0f string, but this is extremely non-trivial (i.e., basically impossible in general), because the set of numbers that format to the -0.0f string depends on the configuration of the formatter. If it happens to be rounding to the nearest million, then an input of -5000.0f would be formatted as the -0.0f string.
An implementation error to avoid
When input that formats to the -0.0f string is detected, a positive zero equivalent output string is generated using [self stringFromNumber: [NSNumber numberWithFloat: 0.0]]. Note that, specifically:
The code formats the float literal 0.0f and returns it.
The code does not use the negation of the input.
Negating an input of -0.1f would result in formatting 0.1f. Depending on the formatter behaviour, this could be rounded up and result in @"1,000", which you don't want.
Final Note
For what it's worth, the approach / pattern / algorithm used here will translate to other languages and different string formatting APIs.
Use an NSNumberFormatter. In general, NSString formatting should not be used to present data to the user.
EDIT:
As stated in the question, this is not the correct answer. There are a number of solutions. It's easy to check for negative zero itself, because it is defined to be equal to any zero (0.0f == -0.0f), but the actual problem is that a number of other values can be rounded to negative zero. Instead of catching such values, I suggest post-processing: a function that checks whether the result contains only zero digits (skipping other characters) and, if so, removes the leading minus sign.
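A rough sketch of that post-processing idea (the function name and exact character handling are assumptions):
static NSString *StripNegativeZero(NSString *formatted) {
    // If no digit 1-9 appears, the number formatted to some flavour of zero.
    BOOL allZero = YES;
    for (NSUInteger i = 0; i < formatted.length; i++) {
        unichar c = [formatted characterAtIndex:i];
        if (c >= '1' && c <= '9') { allZero = NO; break; }
    }
    if (allZero && [formatted hasPrefix:@"-"]) {
        return [formatted substringFromIndex:1];
    }
    return formatted;
}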
NSString *result = [NSString stringWithFormat:@"%1.1f", -0.01 * -1];
If you pass a variable instead of a literal value, you can check:
float myFloat = -0.01;
NSString *result = [NSString stringWithFormat:@"%1.1f", (myFloat < 0 ? myFloat * -1 : myFloat)];
Edit:
If you just want 0.0 as positive value:
NSString *result = [NSString stringWithFormat:@"%1.1f", (int)(myFloat * 10) < 0 ? myFloat : myFloat * -1];
Convert the number to NSString by taking the float or double value.
Convert the string back to NSNumber.
NSDecimalNumber *num = [NSDecimalNumber decimalNumberWithString:@"-0.00000000008"];
NSString *st2 = [NSString stringWithFormat:@"%0.2f", [num floatValue]];
NSDecimalNumber *result = [NSDecimalNumber decimalNumberWithString:st2]; // returns 0
NSNumberFormatter has two methods: one converts from Number to String, the other from String back to Number. What if we format the number, parse the result back, and then format it again?
public extension NumberFormatter {
    func stringWithoutNegativeZero(from number: NSNumber) -> String? {
        string(from: number)
            .flatMap { [weak self] string in self?.number(from: string) }
            .flatMap { [weak self] number in self?.string(from: number) }
    }
}
Now, this must be easy, but how can I sum two NSNumbers? Is it like:
[one floatValue] + [two floatValue]
or is there a better way?
There is not really a better way, but you really should not be doing this if you can avoid it. NSNumber exists as a wrapper around scalar numbers so you can store them in collections and pass them polymorphically with other NSObjects. It is not really meant for doing actual math. Doing math on NSNumbers is much slower than performing the operation on the scalars themselves, which is probably why there are no convenience methods for it.
For example:
NSNumber *sum = [NSNumber numberWithFloat:([one floatValue] + [two floatValue])];
is blowing, at a minimum, 21 instructions on message dispatches, plus however much code the methods take to unbox and rebox the values (probably a few hundred instructions), to do 1 instruction worth of math.
So if you need to store numbers in dicts use an NSNumber, if you need to pass something that might be a number or string into a function use an NSNumber, but if you just want to do math stick with scalar C types.
NSDecimalNumber (subclass of NSNumber) has all the goodies you are looking for:
- decimalNumberByAdding:
- decimalNumberBySubtracting:
- decimalNumberByMultiplyingBy:
- decimalNumberByDividingBy:
- decimalNumberByRaisingToPower:
...
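For instance, a small sketch of exact decimal addition with these methods:
NSDecimalNumber *a = [NSDecimalNumber decimalNumberWithString:@"0.1"];
NSDecimalNumber *b = [NSDecimalNumber decimalNumberWithString:@"0.2"];
// Base-10 arithmetic: prints 0.3 exactly, with no binary rounding error.
NSLog(@"%@", [a decimalNumberByAdding:b]);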
If computing performance is of interest, then convert to a C++ std::vector or the like.
I never use C arrays anymore; it is too easy to crash using a wrong index or pointer, and it is tedious to pair every new [] with a delete[].
You can use
NSNumber *sum = @([first integerValue] + [second integerValue]);
Edit:
As observed by ohho, this example is for adding up two NSNumber instances that hold integer values. If you want to add up two NSNumbers that hold floating-point values, you should do the following:
NSNumber *sum = @([first floatValue] + [second floatValue]);
The current top-voted answer is going to lead to hard-to-diagnose bugs and loss of precision due to the use of floats. If you're doing number operations on NSNumber values, you should convert to NSDecimalNumber first and perform operations with those objects instead.
From the documentation:
NSDecimalNumber, an immutable subclass of NSNumber, provides an object-oriented wrapper for doing base-10 arithmetic. An instance can represent any number that can be expressed as mantissa x 10^exponent where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from -128 through 127.
Therefore, you should convert your NSNumber instances to NSDecimalNumbers by way of [NSNumber decimalValue], perform whatever arithmetic you want to, then assign back to an NSNumber when you're done.
In Objective-C:
NSDecimalNumber *a = [NSDecimalNumber decimalNumberWithDecimal:one.decimalValue];
NSDecimalNumber *b = [NSDecimalNumber decimalNumberWithDecimal:two.decimalValue];
NSNumber *result = [a decimalNumberByAdding:b];
In Swift 3:
let a = NSDecimalNumber(decimal: one.decimalValue)
let b = NSDecimalNumber(decimal: two.decimalValue)
let result: NSNumber = a.adding(b)
Why not use NSExpression?
NSNumber *x = @(4.5), *y = @(-2);
NSExpression *ex = [NSExpression expressionWithFormat:@"(%@ + %@)", x, y];
NSNumber *result = [ex expressionValueWithObject:nil context:nil];
NSLog(@"%@", result); // will print out "2.5"
You can also build an NSExpression that can be reused to evaluate with different arguments, like this:
NSExpression *expr = [NSExpression expressionWithFormat:@"(X+Y)"];
NSDictionary *parameters = [NSDictionary dictionaryWithObjectsAndKeys:x, @"X", y, @"Y", nil];
NSLog(@"%@", [expr expressionValueWithObject:parameters context:nil]);
For instance, we can loop evaluating the same parsed expression, each time with a different "Y" value:
for (float f = 20; f < 30; f += 2.0) {
    NSDictionary *parameters = [NSDictionary dictionaryWithObjectsAndKeys:x, @"X", @(f), @"Y", nil];
    NSLog(@"%@", [expr expressionValueWithObject:parameters context:nil]);
}
In Swift you can get this functionality by using the Bolt_Swift library https://github.com/williamFalcon/Bolt_Swift.
Example:
var num1 = NSNumber(integer: 20)
var num2 = NSNumber(integer: 25)
print(num1+num2) //prints 45