NSUInteger should not be used in format strings?

Here's my code in all its glory:
[NSString stringWithFormat:@"Total Properties: %d", (int)[inArray count]];
Which gets me an Xcode 5.1 warning:
Values of type 'NSUInteger' should not be used as format arguments; add an explicit cast to 'unsigned long' instead
Ok so I'm confused. The value really is a 32-bit int, and I cast it to a 32-bit int. So what is this NSUInteger it's complaining about (the count I assume) and why doesn't this cast fix it?

NSUInteger and NSInteger are different widths on 32-bit (int) and 64-bit (long) architectures. In order for one format specifier to work for both architectures, you must use a long specifier and cast the value to long:
Type         Format Specifier   Cast
----         ----------------   ----
NSInteger    %ld                long
NSUInteger   %lu                unsigned long
So, for example, your code becomes:
[NSString stringWithFormat:@"Total Properties: %lu", (unsigned long)[inArray count]];
There is very little work to do, really, because Xcode's Fix-It feature will do this for you automatically.

It is also possible to use the "z" and "t" modifiers for CPU-independent format strings, e.g.
NSInteger x = -1;
NSUInteger y = 99;
NSString *foo = [NSString stringWithFormat:@"NSInteger: %zd, NSUInteger: %tu", x, y];

The underlying type of NSUInteger changes based on the platform: it is a 32-bit unsigned integer on 32-bit platforms, and a 64-bit unsigned integer on 64-bit platforms.
In the Platform Dependencies section of the String Programming Guide, Apple suggests that you do the following:
To avoid the need to use different printf-style type specifiers depending on the platform, you can use the specifiers shown in Table 3. Note that in some cases you may have to cast the value.
For NSUInteger use format %lu or %lx, and cast the value to unsigned long.
Hence your code needs to be changed as follows to avoid the warning:
[NSString stringWithFormat:@"Total Properties: %lu", (unsigned long)[inArray count]];

You could also try using NSNumber methods:
[NSString stringWithFormat:@"Total Properties: %@", [[NSNumber numberWithUnsignedInteger:[inArray count]] stringValue]];

Related

NSInteger integer initialisation

Is there no need to declare an NSInteger using alloc and init? And why?
I was trying NSInteger *choice = [[NSInteger alloc] init]; but then I found direct assignment being used instead.
NSInteger is just a typedef of int or long (depending on platform).
So you can initialize it like this:
NSInteger i = 10;
Quote from SDK documentation
NSInteger
Used to describe an integer.
typedef long NSInteger;
When building 32-bit applications, NSInteger is a 32-bit integer. A 64-bit application treats NSInteger as a 64-bit integer.
https://developer.apple.com/library/ios/documentation/cocoa/reference/foundation/Miscellaneous/Foundation_DataTypes/Reference/reference.html
NSInteger is not an Objective-C object. It's a datatype (much like C's int or char types). You do not need to call "alloc" (to explicitly allocate memory space) for it.
You can read up on the other Objective-C data types (like NSRange, which you'll use a lot of once you get into NSString objects) in this Apple documentation.

Objective-C implicit conversion loses integer precision 'NSUInteger' (aka 'unsigned long') to 'int' warning

I'm working through some exercises and have got a warning that states:
Implicit conversion loses integer precision: 'NSUInteger' (aka 'unsigned long') to 'int'
#import <Foundation/Foundation.h>

int main (int argc, const char * argv[])
{
    @autoreleasepool {
        NSArray *myColors;
        int i;
        int count;
        myColors = @[@"Red", @"Green", @"Blue", @"Yellow"];
        count = myColors.count; // <<< issue warning here
        for (i = 0; i < count; i++)
            NSLog (@"Element %i = %@", i, [myColors objectAtIndex: i]);
    }
    return 0;
}
The count method of NSArray returns an NSUInteger, and on the 64-bit OS X platform
NSUInteger is defined as unsigned long, and
unsigned long is a 64-bit unsigned integer.
int is a 32-bit integer.
So int is a "smaller" datatype than NSUInteger, therefore the compiler warning.
See also NSUInteger in the "Foundation Data Types Reference":
When building 32-bit applications, NSUInteger is a 32-bit unsigned
integer. A 64-bit application treats NSUInteger as a 64-bit unsigned
integer.
To fix that compiler warning, you can either declare the local count variable as
NSUInteger count;
or (if you are sure that your array will never contain more than 2^31-1 elements!),
add an explicit cast:
int count = (int)[myColors count];
Contrary to Martin's answer, casting to int (or ignoring the warning) isn't always safe even if you know your array doesn't have more than 2^31-1 elements. Not when compiling for 64-bit.
For example:
NSArray *array = @[@"a", @"b", @"c"];
int i = (int) [array indexOfObject:@"d"];
// indexOfObject returned NSNotFound, which is NSIntegerMax, which is LONG_MAX in 64 bit.
// We cast this to int and got -1.
// But -1 != NSNotFound. Trouble ahead!
if (i == NSNotFound) {
    // thought we'd get here, but we don't
    NSLog(@"it's not here");
}
else {
    // this is what actually happens
    NSLog(@"it's here: %d", i);
    // **** crash horribly ****
    NSLog(@"the object is %@", array[i]);
}
Change this key in Project > Build Settings:
"Typecheck Calls to printf/scanf: NO"
Explanation (how it works): this setting checks calls to printf and scanf, etc., to make sure that the arguments supplied have types appropriate to the format string specified, and that the conversions specified in the format string make sense.
Hope it works.
For the other warning,
objective c implicit conversion loses integer precision 'NSUInteger' (aka 'unsigned long') to 'int'
change the key "implicit conversion to 32Bits Type > Debug > *64 architecture: No".
[Caution: it may silence other 64-bit architecture conversion warnings.]
Doing an explicit cast to int solved the problem in my case. I had the same issue. So:
int count = (int)[myColors count];

Why is longLongValue returning the incorrect value

I have an NSDictionary that contains a key with a value of 4937446359977427944. I try to get the value as a long long and get 4937446359977427968 back. Why?
NSLog(@"value1 = %@", [dict objectForKey:@"MyKey"]); // prints 4937446359977427944
long long lv = [[dict objectForKey:@"MyKey"] longLongValue];
NSLog(@"value2 = %lld", lv); // prints 4937446359977427968
Doing:
NSLog(@"%lld", [@"4937446359977427944" longLongValue]); // prints 4937446359977427944
I'm assuming it is some kind of round off issue since the lower bits seems to be cleared, I just don't know how to stop it (or why it's happening).
The dictionary is being created using NSJSONSerialization. The JSON does (correctly) contain a "MyKey": 4937446359977427944 entry, and the dict object is correct.
The value being held in the NSDictionary is an NSDecimalNumber.
Is something being convert to a float behind the scenes?
An NSDecimalNumber is not stored as a double: it's stored as an unsigned integer mantissa (up to 38 decimal digits, held in 128 bits), an 8-bit signed base-10 exponent, and a sign bit.
The problem is that the exact value of an NSDecimalNumber is only representable as ... an NSDecimalNumber.
You can get an approximate 64-bit IEEE 754 value with the doubleValue method.
When you use longLongValue, you effectively get the result of casting that approximate IEEE 754 value to a long long int.
You may or may not consider this a bug in the implementation of NSDecimalNumber (and eventually file a radar asking Apple to use a different conversion routine). But strictly speaking it is not a bug: it's a design decision.
You should think of NSDecimalNumber as a sort of floating-point decimal. In fact it's very similar to a software implementation of what IEEE 754 would call an extended-precision decimal floating-point number, except that it does not conform to that definition (it does not support an exponent range of at least −6143 to +6144, and it has no NaNs or infinities).
In other words, it's not an extended implementation of an integer; it's an extended (but lacking NaNs and infinities) implementation of a double. The fact that Apple natively provides only an approximate conversion to double (implying that the conversion to long long int may or may not be exact for any value exceeding 53 bits of precision) is not a bug.
You may or may not want to implement a different conversion yourself (with a category).
Another possible point of view is to consider the problem a bug in the JSON implementation you used. But this is also highly debatable: it gave you an NSDecimalNumber, and that's arguably a correct representation. Either you operate on the NSDecimalNumber directly, or you are responsible for any conversion of it.
I'm not sure if you are interested in a simple solution or just looking into the details of why the loss of precision takes place.
If you are interested in a simple answer: -[NSDecimalNumber description] produces a string with the exact value, and -[NSString longLongValue] converts a string into a long long:
NSDecimalNumber *decimalNumber = [NSDecimalNumber decimalNumberWithString:@"4937446359977427944"];
long long longLongNumber = [[decimalNumber description] longLongValue];
NSLog(@"decimalNumber %@ -- longLongNumber %lld", decimalNumber, longLongNumber);
outputs
2014-04-16 08:51:21.221 APP_NAME[30458:60b] decimalNumber 4937446359977427944 -- longLongNumber 4937446359977427944
Final note
[decimalNumber descriptionWithLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"]] may be more reliable if your app supports multiple locales.
For anyone interested in a quick solution to the problem, as per Analog File's answer:
long long someNumber = 8204064638523577098;
NSLog(@"some number lld: %lld", someNumber);

NSNumber *snNSNumber = [NSNumber numberWithLongLong:someNumber];
NSLog(@"some number NSNumber: %@", snNSNumber);

NSString *someJson = @"{\"someValue\":8204064638523577098}";
NSDictionary *dict = [NSJSONSerialization
                      JSONObjectWithData:[someJson dataUsingEncoding:NSUTF8StringEncoding]
                      options:0
                      error:nil];
NSLog(@"Dict: %@", dict);
NSLog(@"Some digit out of dict: %@", [dict objectForKey:@"someValue"]);
NSLog(@"Some digit out of dict as lld: %lld", [[dict objectForKey:@"someValue"] longLongValue]);

long long someNumberParsed;
sscanf([[[dict objectForKey:@"someValue"] stringValue] UTF8String], "%lld", &someNumberParsed);
NSLog(@"Properly parsed lld: %lld", someNumberParsed);
Results in:
2014-04-16 14:22:02.997 Tutorial4[97950:303] some number lld: 8204064638523577098
2014-04-16 14:22:02.998 Tutorial4[97950:303] some number NSNumber: 8204064638523577098
2014-04-16 14:22:02.998 Tutorial4[97950:303] Dict: { someValue = 8204064638523577098; }
2014-04-16 14:22:02.998 Tutorial4[97950:303] Some digit out of dict: 8204064638523577098
2014-04-16 14:22:02.998 Tutorial4[97950:303] Some digit out of dict as lld: 8204064638523577344
2014-04-16 14:22:02.999 Tutorial4[97950:303] Properly parsed lld: 8204064638523577098

About the formation in NSLog in Objective-C

const char *string = "Hi there, this is a C string";
NSData *data = [NSData dataWithBytes:string length:strlen(string)+1];
NSLog(@"data is %@", data);
NSLog(@"%lu byte string is '%s'", [data length], [data bytes]);
This compiles and runs successfully. But if the last line is:
NSLog(@"%d byte string is '%s'", [data length], [data bytes]);
it warns that the conversion specifies type 'int' but the argument has type 'NSUInteger' (aka 'unsigned long').
Why doesn't %d work?
NSUInteger is basically an unsigned long, so use %lu instead.
%d means 'int'. NSUInteger is not an 'int', so %d won't work. You have to match format specifiers with the type. If you specify the wrong type, your program can crash or more likely, it'll print garbage.

Cannot implicitly convert type 'integer' to 'NSString'

I want to ask a very simple thing about conversion between types.
int theinteger = 75;
NSLog(@"Before theinteger is: %@", theinteger);
NSString *string = [NSString stringWithFormat:@"%d", theinteger];
NSLog(@"After theinteger is: %@", string);
Output is like
Before theinteger: 75
After theinteger: 114503696
What does this mean? Why, after conversion, is my data changed to 114503696?
When I tried
int theinteger = -1;
It's OK
Before theinteger: -1
After theinteger: -1
Am I missing something?
Without seeing any evidence of it, I assume theinteger is, in fact, of type NSInteger or int.
When using a specifier, try using %i:
NSInteger theinteger = 75;
NSLog(@"theinteger is: %i", theinteger);
Output
theinteger is: 75
Here's a list of specifiers.
You shouldn't be using %@ as a format specifier for an integer, as in your first log statement. However, my guess is the code you've posted here isn't actually the code you're using, because the line NSLog(@"Before theinteger is: %@", the integer); won't actually compile due to the space in "the integer". Can you copy/paste your actual code?
Anyway, %@ is the format specifier for Objective-C objects. When NSLog() sees a %@, it substitutes it with the NSString returned by calling -(NSString *)description on the corresponding object in the argument list. In your case, NSLog() sees the %@ and assumes theinteger is an Objective-C object, which it is not.
If you want to print an integer, you should use a format specifier of %i (or another of the several integer format specifiers):
NSLog(@"Before theinteger is: %i", theinteger);