I am taking an integer, in this case 192, and left shifting it 24 spaces. The leading 1 is causing it to become negative, it seems.
unsigned int i = 192;
unsigned int newnumber = i << 24;
NSLog(@"newnumber is %d", newnumber);
I am expecting 3,221,225,472 but I get -1,073,741,824 (commas added for clarity)
An unsigned integer shouldn't be negative right?
Because NSLog reinterprets it as a signed integer. You should use %u to see the unsigned value.
There is no way for a function with variable number of arguments to know with certainty the type of the value that you pass. That is why NSLog relies on the format string to learn how many parameters you passed, and what their types are. If you pass a type that does not match the corresponding format specifier, NSLog will trust the specifier and interpret your data according to it. Modern compilers may even warn you about it.
You want to do NSLog(@"newnumber is %u", newnumber);
%d interprets the bits as a signed int.
%d means "signed integer"; use %u for "unsigned integer".
Related
Now, when I try to NSLog my array count using self.array.count with %d, I receive this message: "Values of type 'NSUInteger' should not be used as format arguments", and it suggests that I fix it with %lu instead. Is this documented anywhere?
NSArray's count method returns an NSUInteger, as documented. If you're using a 64-bit environment, it's also documented that those require the format %lu.
If you were using a signed NSInteger instead, you would need %ld in a 64-bit environment, or %d in a 32-bit environment.
You are probably seeing the message now because you have updated your compiler: Xcode is doing more checking and issuing more warnings.
The issue here is that the size of NSUInteger (and NSInteger) differs between platforms. Rather than add a new format specifier to handle this, Apple chose to recommend using the format specifier for the largest possible type, together with a cast. So:
For NSUInteger use the format specifier %lu and cast the value to unsigned long; and
For NSInteger use the format specifier %ld and cast the value to long.
Doing this will produce correct results on both 32-bit and 64-bit platforms.
See Platform Dependencies in String Format Specifiers.
I want to write a method that takes a given float, a precision, and formats it such that it displays the given precision number of decimal places.
Of course, this method also does a bunch of other things.
Right now, I have
[NSString stringWithFormat:@"0%.3f", s]; // s is the float argument
Which gives the string of the float to 3 decimal places. Now, I want to make it such that it has n decimal places. In effect, I want something like:
[NSString stringWithFormat:@"0%.%if", n, s];
//n is the precision argument, s is the float argument
But for obvious reasons, it does not work.
Is there a workaround?
You can use …"%.*f", n, s. For more information, stringWithFormat: follows the IEEE printf spec.
Not sure what the 0 prefix is for in your original format string. You certainly don't need it as part of the format specification. If it's for zero padding up to, say, two digits, you should do something like "%0*.*f", 3 + n, n, s.
You can pass a star (*) as the precision in the format string. The star signifies that the precision will be supplied as an int in the variable arguments (before the value to be printed). So
[NSString stringWithFormat:@"0%.*f", n, s];
where n is an integer giving the precision, and s is the floating point value.
I always thought that sending a message to a nil pointer would normally return 0, so the same would apply to properties. But this code snippet seems to contradict my assumption:
NSArray *testArray;
NSInteger i = 0;
NSLog(@"testArray.count-1=%ld", testArray.count-1);
NSLog(@"i<testArray.count-1=%d", i<testArray.count-1);
The output is
2013-05-22 11:10:24.009 LoopTest[45413:303] testArray.count-1=-1
2013-05-22 11:10:24.009 LoopTest[45413:303] i<testArray.count-1=1
While the first line makes sense, the second does not. What am I missing?
EDIT: thanks to @JoachimIsaksson and @Monolo for pointing (pun intended) me in the right direction. The problem is actually signed v. unsigned, and the following code shows it:
NSArray *testArray;
NSInteger i = 0;
unsigned ucount = 0;
int count = 0;
NSLog(@"testArray.count-1=%ld", testArray.count-1);
NSLog(@"i<testArray.count-1=%d", i<testArray.count-1);
NSLog(@"i<ucount-1=%d", i<ucount-1);
NSLog(@"i<count-1=%d", i<count-1);
And the output is
2013-05-22 11:26:14.443 LoopTest[45496:303] testArray.count-1=-1
2013-05-22 11:26:14.444 LoopTest[45496:303] i<testArray.count-1=1
2013-05-22 11:26:14.444 LoopTest[45496:303] i<ucount-1=1
2013-05-22 11:26:14.445 LoopTest[45496:303] i<count-1=0
Return values from a nil receiver
When accessing properties or reading return values from a nil object, you will get their default value, which is usually 0 for any numeric return type. So taking the count of a nil array yields 0. Other possible values from nil receivers are NO for BOOLs, and nil for object return types. Returned structures have all members initialized to zero.
Array counts are unsigned...
Now, you need to remember that an array count returns an NSUInteger. With this being unsigned, if you subtract from 0, you will underflow, and get a very large number.
Why does NSLog print -1 for the first statement then?
It's because you have used @"%ld", which specifies a signed long integer. As such, the value is interpreted as signed, and this results in -1. The type of the variable is actually unsigned long, whose format specifier is @"%lu". When using this, it results in 18446744073709551615 for me (it could vary for you, depending on platform).
How does this affect the second NSLog statement?
Taking into account what's going on in the first statement, the second statement may now make more sense. You may have thought it was comparing 0 < -1, which results in NO, and shouldn't produce a result of 1. What's actually being compared, is 0 < 18446744073709551615, which results in YES. This is why you're getting a result of 1.
It all boils down to using the incorrect format identifier in NSLog, which caused confusion on how to interpret the value.
It is always some version of nothing: nil, zero, NO.
The return type of count is NSUInteger as also pointed out by Joachim Isaksson in the comments. testArray.count-1 is expected to be -1, which in binary is encoded ...111111 (the exact number of bits depends on the platform, and whether the code is compiled as 32-bit or 64-bit). However, since the expression is unsigned it will be interpreted as a very large number instead - in fact the largest possible number that can be represented as an unsigned integer.
This very large number is much bigger than the variable i. Hence the output.
I have a person object of class Person with properties address and age. When I check the properties, I print address on screen with:
NSLog(@"Add: %@, length: %i", person.address, [person.address length]);
The result is: Add: nil, length: 6
Can you explain for me why the string is null but its length shows as 6?
Thanks.
You are probably getting the string value of "(null)" in the field somehow, which is 6 characters long.
What type is address? NSString, I assume? Use %d or %lu for length (and cast to unsigned long if using %lu) instead of %i; that should give the correct result.
length gives an NSUInteger as a result, so the correct string format specifier would be %lu, although %d should also work (thinking of it, I always use %d, never %i)...
NSLog(@"%lu", (unsigned long)string.length);
Look at the format specifiers:
https://developer.apple.com/library/mac/ipad/#documentation/Cocoa/Conceptual/Strings/Articles/formatSpecifiers.html
The answer you're probably looking for is: you're using the wrong string format specifier. %i is equivalent to %d and is for signed integers; %u is for unsigned, so you need:
NSLog(@"Add: %@, length: %u", person.address, [person.address length]);
But... the length is possibly not returning what you think it is.
If person.address is nil, then sending the message length to it won't return the length (what is the length of "nothing"?). As it happens, the Objective-C runtime will return zero, so your example "works" but is arguably not correct.
I need to convert values like 1393443048683555715 to hex. But, first of all, I can't even display it as decimal using NSLog(), for example.
Ok, it works:
NSLog(@"%qu", 1393443048683555706);
But what about converting to hex? What type do I have to use to store this big value?
NSLog([NSString stringWithFormat:@"%x", 1393443048683555706]);
// result: eb854b7a. It's an incorrect result!
But I forgot to say that this big number is represented as the string @"1393443048683555706" (not an int).
You can use the %qi and %qu format specifiers with NSLog to display 64-bit integers. Your constant appears to fit in a 64-bit signed number, with the limits of:
[−9223372036854775808 to 9223372036854775807]
The "x" format specifier is for 32-bit numbers; you need to use either "qx" or "qX" (depending on whether you want the letter values to be uppercase or not). These are the formatters for unsigned long long values, see:
https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/Strings/Articles/formatSpecifiers.html#//apple_ref/doc/uid/TP40004265-SW1
Next, you should not pass a string directly to NSLog as you have done above; this can cause a crash.
NSLog(string); // bad!!
NSLog(@"%@", string); // good
So if your value comes as a string, you'll want to do this:
NSString *longNumber = @"1393443048683555706";
NSLog(@"%qx", [longNumber longLongValue]);
If the string value can't be coerced to a number, longLongValue will return 0. I'll leave it to you to handle the error (and bounds) checking; see NSString for details.
If you want to save the hex value as a string, do this:
NSString *hexRepresentation = [NSString stringWithFormat:@"%qx", [longNumber longLongValue]];
Again, best to take care for error handling.