In Cocoa do you prefer NSInteger or int, and why? - objective-c

NSInteger/NSUInteger are Cocoa-defined replacements for the regular built-in types.
Is there any benefit to using the NS* types over the built-ins? Which do you prefer and why? Are NSInteger and int the same width on 32-bit / 64-bit platforms?

The way I understand it, NSInteger et al. are architecture-safe versions of the corresponding C types. Basically, their sizes vary depending on the architecture, but NSInteger, for example, is guaranteed to be large enough to hold any valid pointer for the current architecture.
Apple recommends that you use these to work with OS X 10.5 and onwards, and Apple's APIs use them, so it's definitely a good idea to get into the habit of using them. They require a little more typing, but apart from that there doesn't seem to be any reason not to use them.

Quantisation issues for 64-bit runtime
In some situations there may be good reason to use standard types instead of NSInteger: "unexpected" memory bloat in a 64-bit system.
Clearly if an integer is 8 instead of 4 bytes, the amount of memory taken by values is doubled. Given that not every value is an integer, though, you should typically not expect the memory footprint of your application to double. However, the way that Mac OS X allocates memory changes depending on the amount of memory requested.
Currently, if you ask for 512 bytes or fewer, malloc rounds up to the next multiple of 16 bytes. If you ask for more than 512 bytes, however, malloc rounds up to the next multiple of 512 (at least 1024 bytes). Suppose then that you define a class that, amongst others, declares five NSInteger instance variables, and that on a 32-bit system each instance occupies, say, 272 bytes. On a 64-bit system, instances would in theory require 544 bytes. But because of the memory allocation strategy, each will actually occupy 1024 bytes (an almost fourfold increase). If you use a large number of these objects, the memory footprint of your application may be considerably greater than you might otherwise expect. If you replaced the NSInteger variables with int32_t variables, you would only use 512 bytes.
When you're choosing which scalar to use, therefore, make sure you choose something sensible. Is there any reason you need a wider value than you needed in your 32-bit application? Using a 64-bit integer to count a number of seconds is unlikely to be necessary...
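As an illustrative sketch of this quantisation (malloc_size reports the usable size of an allocation; the exact rounding thresholds vary between OS releases, so the printed values follow the behaviour described above):
#include <malloc/malloc.h>
#include <stdio.h>
#include <stdlib.h>

void *big = malloc(544);             // more than 512 bytes requested...
printf("%zu\n", malloc_size(big));   // ...rounded up to 1024 on the allocator described above
void *small = malloc(272);           // 512 bytes or fewer requested...
printf("%zu\n", malloc_size(small)); // ...rounded up only to a multiple of 16 (272)
free(big);
free(small);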

64-bit is actually the raison d'être for NSInteger and NSUInteger; before 10.5, those did not exist. The two are simply defined as longs in 64-bit, and as ints in 32-bit:
#if __LP64__ || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Thus, use them in place of the more basic C types when you want the 'bit-native' size.
CocoaDev has some more info.

I prefer the standard C-style declarations, but only because I switch between several languages and don't have to think too much about it. It sounds like I should start looking at NSInteger, though.

For importing and exporting data to files or over the net, I use UInt32, SInt64, etc.
These are guaranteed to be of a certain size regardless of the architecture and help in porting code to other platforms and languages which also share those types.
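For example (a minimal sketch; the struct and field names are hypothetical), a record laid out with fixed-width types has the same layout on every architecture:
#include <stdint.h>
// Fixed-width types (the <stdint.h> equivalents of UInt32, SInt64, etc.)
// keep the on-disk/wire layout identical on every architecture.
// Remember to also pick a byte order when writing these out.
typedef struct {
    uint32_t version;    // always 4 bytes
    uint32_t flags;      // always 4 bytes
    int64_t  timestamp;  // always 8 bytes
} FileRecord;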

Why NSNotFound isn't -1 but NSIntegerMax

NSNotFound is defined as NSIntegerMax, which has different values on 32-bit and 64-bit; this brings a lot of inconvenience in persistence and distributed environments. Why not define it as -1?
P.S. In Objective-C and Cocoa, some indexOf... methods return NSNotFound but some return -1; is there a reason these results are inconsistent?
This note from the docs is probably relevant:
Special Considerations
Prior to Mac OS X v10.5, NSNotFound was defined as 0x7fffffff. For 32-bit systems, this was effectively the same as NSIntegerMax. To support 64-bit environments, NSNotFound is now formally defined as NSIntegerMax. This means, however, that the value is different in 32-bit and 64-bit environments.
Well, the answer probably isn't that it is sometimes returned as an unsigned value: in Objective-C, conversions between signed and unsigned are effectively bit copies, so you can compare an unsigned value to -1 and get the answer you expect (the value is all ones: -1 as signed, the maximum integer as unsigned). See the answer to this question.
So we come to the second part of your question: why the inconsistency? Well, variety is the spice of life, or put another way, there's nowt so queer as folk - people are just inconsistent, no more complicated than that!
Almost certainly because some of the methods that return it return an unsigned value, such as NSArray's indexOfObject: method:
- (NSUInteger)indexOfObject:(id)anObject
// ^
// Note this!
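The practical upshot (a small sketch): always compare against the NSNotFound constant itself rather than a hard-coded value, and the check is correct on both 32-bit and 64-bit:
NSArray *array = @[@"apple", @"banana"];
NSUInteger index = [array indexOfObject:@"cherry"];
if (index == NSNotFound) {
    // Compare against the constant, never against -1 or 0x7fffffff.
    NSLog(@"not found");
}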

In which cases do I have to use NSInteger, and when should I use a simple int? [duplicate]

When should I be using NSInteger vs. int when developing for iOS? I see in the Apple sample code they use NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
- (NSInteger)someFunc;...
- (void)someFuncWithInt:(NSInteger)value;...
But within a function they're just using int to track a value
for (int i = 0; i < something; i++)
...
int something;
something += somethingElseThatsAnInt;
...
I've read (been told) that NSInteger is a safe way to reference an integer in either a 64-bit or 32-bit environment so why use int at all?
You usually want to use NSInteger when you don't know what kind of processor architecture your code might run on and you want the 'native' word-sized integer type, which on 32-bit systems is just an int, while on a 64-bit system it's a long.
I'd stick with using NSInteger instead of int/long unless you specifically require them.
NSInteger/NSUInteger are defined as *dynamic typedef*s to one of these types, and they are defined like this:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
With regard to the correct format specifier you should use for each of these types, see the String Programming Guide's section on Platform Dependencies
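For instance (a minimal sketch), the usual convention is to cast to long/unsigned long and use %ld/%lu, so one specifier is correct on both architectures:
NSInteger count = 42;
NSUInteger index = 7;
// The casts make %ld/%lu correct whether NSInteger is 32 or 64 bits wide:
NSLog(@"count = %ld, index = %lu", (long)count, (unsigned long)index);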
Why use int at all?
Apple uses int because for a loop control variable (which is only used to control the loop iterations) the int datatype is fine, both in its size and in the values it can hold for your loop. There is no need for a platform-dependent datatype here; even a 16-bit int would do most of the time.
Apple uses NSInteger for a function return value or argument because in this case the datatype size matters: with a function you are communicating/passing data to other programs or to other pieces of code. See the quote in your question itself: they [Apple] use NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
OS X is "LP64". This means that:
int is always 32 bits.
long long is always 64 bits.
NSInteger and long are always pointer-sized. That means they're 32 bits on 32-bit systems and 64 bits on 64-bit systems.
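These rules can even be checked at compile time; a small sketch using C11's _Static_assert:
#import <Foundation/Foundation.h>
// Compile-time checks of the LP64 rules above (they hold on both
// 32-bit and 64-bit Apple platforms):
_Static_assert(sizeof(int) == 4, "int is always 32 bits");
_Static_assert(sizeof(long long) == 8, "long long is always 64 bits");
_Static_assert(sizeof(NSInteger) == sizeof(void *), "NSInteger is pointer-sized");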
The reason NSInteger exists is because many legacy APIs incorrectly used int instead of long to hold pointer-sized variables, which meant that the APIs had to change from int to long in their 64-bit versions. In other words, an API would have different function signatures depending on whether you're compiling for 32-bit or 64-bit architectures. NSInteger intends to mask this problem with these legacy APIs.
In your new code, use int if you need a 32-bit variable, long long if you need a 64-bit integer, and long or NSInteger if you need a pointer-sized variable.
If you dig into NSInteger's implementation:
#if __LP64__
typedef long NSInteger;
#else
typedef int NSInteger;
#endif
Simply put, the NSInteger typedef does a step for you: if the architecture is 32-bit, it uses int; if it is 64-bit, it uses long. Using NSInteger, you don't need to worry about the architecture that the program is running on.
You should use NSInteger when you need to compare a value against constants such as NSNotFound or NSIntegerMax, as these values differ on 32-bit and 64-bit systems. For index values, counts, and the like, use NSInteger or NSUInteger.
It doesn't hurt to use NSInteger in most circumstances, except that it takes up twice as much memory on 64-bit systems. The memory impact is very small, but if you have a huge number of integers in memory at any one time, it might make a difference to use int.
If you do use NSInteger or NSUInteger, you will want to cast them to long or unsigned long when using format strings, as newer versions of Xcode warn if you try to log an NSInteger as if it had a known width. You should similarly be careful when assigning them to variables or arguments that are typed as int, since you may lose some precision in the process.
On the whole, if you're not expecting to have hundreds of thousands of them in memory at once, it's easier to use NSInteger than constantly worry about the difference between the two.
On iOS, it currently does not matter if you use int or NSInteger. It will matter more if/when iOS moves to 64-bits.
Simply put, NSInteger is an int in 32-bit code (and thus 32 bits wide) and a long in 64-bit code (a long is 64 bits wide in 64-bit code, but 32 bits wide in 32-bit code). The most likely reason for using NSInteger instead of long is to not break existing 32-bit code (which uses ints).
CGFloat has the same issue: on 32-bit (at least on OS X), it's float; on 64-bit, it's double.
Update: With the introduction of the iPhone 5s, iPad Air, iPad Mini with Retina, and iOS 7, you can now build 64-bit code on iOS.
Update 2: Also, using NSIntegers helps with Swift code interoperability.
Currently (September 2014), I would recommend using NSInteger/CGFloat when interacting with iOS APIs etc. if you are also building your app for arm64.
This is because you will likely get unexpected results when you use the plain float, long, and int types.
EXAMPLE: FLOAT/DOUBLE vs CGFLOAT
As an example we take the UITableView delegate method tableView:heightForRowAtIndexPath:.
In a 32-bit only application it will work fine if it is written like this:
-(float)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 44;
}
float is a 32-bit type, and the 44 you are returning is a 32-bit value.
However, if we compile and run this same piece of code on the 64-bit arm64 architecture, CGFloat is a 64-bit double, so the caller expects a 64-bit value to be returned. Returning a 32-bit float when a 64-bit double is expected will give an unexpected row height.
You can solve this issue by using the CGFloat type
-(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 44;
}
This type represents a 32-bit float in a 32-bit environment and a 64-bit double in a 64-bit environment. Therefore when using this type the method will always receive the expected type regardless of compile/runtime environment.
The same is true for methods that expect integers.
Such methods expect a 32-bit int value in a 32-bit environment and a 64-bit long in a 64-bit environment. You can handle this case by using the type NSInteger, which serves as an int or a long based on the compile/runtime environment, as in the sketch below.
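For instance, UITableView's data source method numberOfRowsInSection: is declared with NSInteger on both sides, so matching the declared types keeps the method correct under both architectures:
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    // NSInteger matches the declared signature, so the return value
    // has the width the caller expects on both 32-bit and arm64.
    return 10;
}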
int = 4 bytes (a fixed size, irrespective of the architecture)
NSInteger = depends on the architecture (e.g. 4 bytes on a 32-bit architecture, 8 bytes on a 64-bit architecture)
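You can verify this with a quick log (a sketch; %zu matches the size_t that sizeof yields):
NSLog(@"sizeof(int) = %zu, sizeof(NSInteger) = %zu",
      sizeof(int), sizeof(NSInteger));
// 32-bit: sizeof(int) = 4, sizeof(NSInteger) = 4
// 64-bit: sizeof(int) = 4, sizeof(NSInteger) = 8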

What is Objective C equivalent for 'long' in Java

Equivalent to 'long' in Java what do we have in Objective C, NSInteger?
In Java, a long is always 64 bits. In C and Objective-C, a long might be 64 bits, or it might be 32 bits, or (in less common cases) it might be something else entirely; the C standard doesn't specify an exact bit width.
On OS X, an NSInteger is 64 bits on 64-bit platforms, and 32 bits on 32-bit platforms. 32-bit Mac platforms are increasingly rare, so you can probably use NSInteger and be fine.
However, if you always want a 64-bit integer, you'll probably want to use the int64_t data type defined in stdint.h.
A Java long is defined as a signed 64-bit value; neither long nor NSInteger guarantees this in Objective-C. For example, on 32-bit platforms, NSInteger and long are 32-bit signed values. If your platform comes with C99 headers (for example, when your compiler is gcc-based), then you should have stdint.h, which has platform-independent definitions for integer types with guaranteed sizes. The 64-bit signed type is named int64_t.
#include <stdint.h>
int64_t someVariable; // 64 bit signed integer, like Java's long
You didn't ask, but int32_t is the analogue to Java's int type (a 32-bit integer).

How can I reverse the byte order of an NSInteger or NSUInteger in objective-c

This is somewhat of a follow-up to this posting, but with a different question, so I felt I should ask in a separate thread.
I am at the point where I have four consecutive bytes in memory that I have read in from a file. I'd like to store these as a bit array (the actual int value of them does not matter until later). When I print out what is in my int, I notice that it seems to be stored in reverse order (little endian).
Does anyone have a good method for reversing the order of the bytes? And once reversed, for picking out consecutive bits spanning two bytes and converting them back to an int?
unsigned char data[] = { 0x00, 0x02, 0x45, 0x28 };
NSInteger intData = *((NSInteger *)data);
NSLog(#"data:%08x", intData); // data:28450200
Cocoa (or to be exact the Foundation framework) has functions to swap the endianness of bytes: NSSwapInt, NSSwapShort, NSSwapLong, and NSSwapLongLong. These swap around the bytes no matter what - they make big-endian integers from small-endian integers and vice versa.
If you know which format you have, there are other functions that swap it to the native endianness: NSSwapLittleIntToHost and NSSwapBigIntToHost. There are also the reverse functions, which swap from the native format to little- or big-endian format: NSSwapHostIntToLittle and NSSwapHostIntToBig. These are available for the other integer types and the floating-point types as well. What they do is call the primitive swap functions on the values if necessary; so on a little-endian machine, NSSwapLittleIntToHost doesn't do anything, while NSSwapBigIntToHost returns the result of NSSwapInt.
Note that these take parameters of the compiler's integer types, not the NSInteger type. So depending on whether you're generating 32-bit or 64-bit code, you have to use different functions if you are using NSInteger.
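A brief sketch of those swap functions in use (the literal values follow the bytes from the question, as read on a little-endian machine):
uint32_t raw = 0x28450200;               // the value as read from the byte array on a little-endian host
uint32_t swapped = NSSwapInt(raw);       // unconditionally swapped: 0x00024528
uint32_t host = NSSwapBigIntToHost(raw); // treats raw's bytes as big-endian: 0x00024528 here, a no-op on big-endian hosts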
You also should not cast your byte array to an integer pointer and dereference it. It would be better to assemble the integer using bit-shift operations, as sketched below. Your code will only work if NSInteger is 32 bits wide; if it is 64 bits wide, your number will be garbage, or your program might even crash. But even if you are using an integer type that is always 32 bits wide (int32_t from the C99 <stdint.h> header, for example), this might not work as expected.
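A minimal sketch of that bit-shift approach, using the bytes from the question; it is independent of both host endianness and the width of NSInteger:
#include <stdint.h>
unsigned char data[] = { 0x00, 0x02, 0x45, 0x28 };
// Assemble a 32-bit value from big-endian bytes explicitly:
uint32_t value = ((uint32_t)data[0] << 24) |
                 ((uint32_t)data[1] << 16) |
                 ((uint32_t)data[2] <<  8) |
                  (uint32_t)data[3];
NSLog(@"value: %08x", value); // value: 00024528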