Why NSNotFound isn't -1 but NSIntegerMax - objective-c

NSNotFound is defined as NSIntegerMax, which has different values in 32-bit and 64-bit environments; this causes a lot of inconvenience for persistence and distributed environments. Why not define it as -1?
PS: In Objective-C and Cocoa, some indexOf... methods return NSNotFound while others return -1. Is there a reason these results are inconsistent?

This note from the docs is probably relevant:
Special Considerations
Prior to Mac OS X v10.5, NSNotFound was defined as 0x7fffffff. For 32-bit systems, this was effectively the same as NSIntegerMax. To support 64-bit environments, NSNotFound is now formally defined as NSIntegerMax. This means, however, that the value is different in 32-bit and 64-bit environments.

Well, the answer probably isn't that it is sometimes returned as an unsigned value - in Objective-C, as in C, conversions between signed and unsigned are effectively bit copies, so you can compare an unsigned value to -1 and get the answer you expect (the value is all ones: -1 as signed, the maximum integer as unsigned). See the answer to this question.
So we come to the second part of your question: why the inconsistency? Well, variety is the spice of life - or, put another way, there's nowt so queer as folk. People are just inconsistent; it's no more complicated than that!

Almost certainly because some of the methods that return it have an unsigned return type, such as indexOfObject:
- (NSUInteger)indexOfObject:(id)anObject
// ^
// Note this!

Related

Objective-C: Type casting object to integer_t

integer_t is a typedef of int32_t as defined here, and after some checking, integer_t has a size of 4 bytes, and so does int (intValue), as mentioned in this doc. My question is: does casting like this produce a valid result?
integer_t value = 100;
id anObject = @(value);
integer_t aValue = [anObject intValue];
Is aValue always equal to value? Will this cause any issue in the long run? Should I do long value = [anObject longValue] instead? Thanks in advance.
Short and specific answer: YES, those values are equal, since integer_t and int both (according to you - here's the catch) have the same size AND the same signedness. If one were, e.g., some type of unsigned int, it would not work. Neither would it work if one were, e.g., 8 bytes (long) and the other 4 (int).
The long and general answer is: it depends. Yes, here you think they are equal, but there are always funny cases to watch out for. I already mentioned size and signedness, but the real trip-up can be the system architecture. You might assume the types are the same, and then one day you compile for a 64-bit architecture and everything breaks down because, say, int is now 8 bytes wide while integer_t is still 4. You could also run into endianness trouble: if you get a bunch of ints from a mainframe, they could be stored BADC, where A, B, C and D are the 4 bytes of the int.
As you can see, it is easy to scare anybody working with these, and in practice that is why there are things such as NSInteger - Objective-C's attempt to protect you from all this. But don't be scared: these are toothless monsters unless you work at a low level, and then working with them is simply your job. Doesn't that sound poetic?
Back to the code: don't worry too much about this. If you work in Objective-C, try to use the NSInteger and NSUInteger types for now. If you store these values and need to load them again later, you need to consider the possibility that you store from a 32-bit architecture and restore on a 64-bit one, and work around that somehow.

Casting between uint8_t * and char *. What happens?

What happens when casting between these two in relation to the termination character? In C99 Objective-C.
In this answer I assume that char is 8 bits wide on your system.
If your architecture uses unsigned char as its char type, then absolutely nothing will happen.
If your architecture uses signed char as its char type, then byte values above 127 will read back as negative char values, possibly causing unexpected results. This will never happen to the terminating null character, however.
Please note, by "casting" nothing really happens, you just tell the compiler to interpret a certain location in the memory differently. This difference in interpretation would create the actual (side)effects of the cast.
If char and uint8_t are compatible types (they should be on most current desktop computers), pointers to objects of that type have the same representation and alignment requirements, so there should be no problem converting (implicitly, or explicitly with a cast) one to the other.
The values pointed to, again, if they are compatible, should be treated equally no matter what type they are interpreted as.
Note: I am not 100% certain that uint8_t and char are compatible on an implementation with signed chars.
If the types are not compatible you invoke Undefined Behaviour and anything can happen: a very likely outcome is that everything "works" as you expect, but there is no guarantee it will always work the same.

In which case I have to use NSInteger and when to use simple int? [duplicate]

When should I be using NSInteger vs. int when developing for iOS? I see in the Apple sample code they use NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
- (NSInteger)someFunc;...
- (void)someFuncWithInt:(NSInteger)value;...
But within a function they're just using int to track a value
for (int i = 0; i < something; i++)
...
int something;
something += somethingElseThatsAnInt;
...
I've read (been told) that NSInteger is a safe way to reference an integer in either a 64-bit or 32-bit environment so why use int at all?
You usually want to use NSInteger when you don't know what kind of processor architecture your code might run on and want the "natural" integer type for the platform, which on 32-bit systems is just an int, while on a 64-bit system it's a long.
I'd stick with using NSInteger instead of int/long unless you specifically require them.
NSInteger/NSUInteger are defined as *dynamic typedefs* to one of these underlying types; they are defined like this:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
With regard to the correct format specifier you should use for each of these types, see the String Programming Guide's section on Platform Dependencies
Why use int at all?
Apple uses int because for a loop control variable (which is only used to control loop iterations) the int datatype is fine, both in size and in the values it can hold for your loop. No need for a platform-dependent datatype here. For a loop control variable, even a 16-bit int will do most of the time.
Apple uses NSInteger for a function return value or a function argument because in that case the datatype size matters: a function communicates/passes data to other programs or other pieces of code; see the answer to "When should I be using NSInteger vs int?" in your question itself...
they [Apple] use NSInteger (or NSUInteger) when passing a value as an
argument to a function or returning a value from a function.
OS X is "LP64". This means that:
int is always 32-bits.
long long is always 64-bits.
NSInteger and long are always pointer-sized. That means they're 32-bits on 32-bit systems, and 64 bits on 64-bit systems.
The reason NSInteger exists is because many legacy APIs incorrectly used int instead of long to hold pointer-sized variables, which meant that the APIs had to change from int to long in their 64-bit versions. In other words, an API would have different function signatures depending on whether you're compiling for 32-bit or 64-bit architectures. NSInteger intends to mask this problem with these legacy APIs.
In your new code, use int if you need a 32-bit variable, long long if you need a 64-bit integer, and long or NSInteger if you need a pointer-sized variable.
If you dig into NSInteger's implementation:
#if __LP64__
typedef long NSInteger;
#else
typedef int NSInteger;
#endif
Simply put, the NSInteger typedef does this step for you: if the architecture is 32-bit, it uses int; if it is 64-bit, it uses long. With NSInteger, you don't need to worry about the architecture the program runs on.
You should use NSInteger if you need to compare values against constants such as NSNotFound or NSIntegerMax, as these values differ on 32-bit and 64-bit systems; so for index values, counts and the like, use NSInteger or NSUInteger.
It doesn't hurt to use NSInteger in most circumstances, except that on 64-bit systems it takes up twice as much memory as an int. The memory impact is very small, but if you have a huge number of integers in memory at any one time, it might make a difference to use ints.
If you DO use NSInteger or NSUInteger, you will want to cast them to long or unsigned long when using format strings, as newer Xcode versions warn if you try to log an NSInteger as if it had a known length. You should similarly be careful when assigning them to variables or arguments typed as int, since you may lose some precision in the process.
On the whole, if you're not expecting to have hundreds of thousands of them in memory at once, it's easier to use NSInteger than constantly worry about the difference between the two.
On iOS, it currently does not matter if you use int or NSInteger. It will matter more if/when iOS moves to 64-bits.
Simply put, NSInteger is int in 32-bit code (and thus 32 bits wide) and long in 64-bit code (long is 64 bits wide in 64-bit code, but 32 bits in 32-bit code). The most likely reason for using NSInteger instead of long is to avoid breaking existing 32-bit code (which uses int).
CGFloat has the same issue: on 32-bit (at least on OS X), it's float; on 64-bit, it's double.
Update: With the introduction of the iPhone 5s, iPad Air, iPad Mini with Retina, and iOS 7, you can now build 64-bit code on iOS.
Update 2: Also, using NSIntegers helps with Swift code interoperability.
Currently (September 2014), I would recommend using NSInteger/CGFloat when interacting with iOS APIs etc. if you are also building your app for arm64.
This is because you will likely get unexpected results when you use the float, long and int types.
EXAMPLE: FLOAT/DOUBLE vs CGFLOAT
As an example we take the UITableView delegate method tableView:heightForRowAtIndexPath:.
In a 32-bit only application it will work fine if it is written like this:
-(float)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 44;
}
float is a 32-bit value, and the 44 you return fits in it. However, if we compile/run this same piece of code on the 64-bit arm64 architecture, the caller expects a CGFloat, which there is a 64-bit double. Returning a 32-bit float where a 64-bit value is expected will give an unexpected row height.
You can solve this issue by using the CGFloat type
-(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 44;
}
This type represents a 32-bit float in a 32-bit environment and a 64-bit double in a 64-bit environment. Therefore when using this type the method will always receive the expected type regardless of compile/runtime environment.
The same is true for methods that expect integers.
Such methods will expect a 32-bit int value in a 32-bit environment and a 64-bit long in a 64-bit environment. You can solve this case by using the type NSInteger, which serves as an int or a long based on the compile/runtime environment.
int = 4 bytes (fixed size, irrespective of architecture)
NSInteger = depends on the architecture (e.g. on a 4-byte architecture, NSInteger is 4 bytes)

In Cocoa do you prefer NSInteger or int, and why?

NSInteger/NSUInteger are Cocoa-defined replacements for the regular built-in types.
Is there any benefit to using the NS* types over the built-ins? Which do you prefer and why? Are NSInteger and int the same width on 32-bit / 64-bit platforms?
The way I understand it, NSInteger et al. are architecture-safe versions of the corresponding C types. Basically their sizes vary with the architecture, but NSInteger, for example, is guaranteed to be big enough to hold any valid pointer for the current architecture.
Apple recommends using them when working with OS X 10.5 and onwards, and Apple's APIs use them, so it's definitely a good idea to get into the habit of using them. They require a little more typing, but apart from that there doesn't seem to be any reason not to use them.
Quantisation issues for 64-bit runtime
In some situations there may be good reason to use standard types instead of NSInteger: "unexpected" memory bloat in a 64-bit system.
Clearly if an integer is 8 instead of 4 bytes, the amount of memory taken by values is doubled. Given that not every value is an integer, though, you should typically not expect the memory footprint of your application to double. However, the way that Mac OS X allocates memory changes depending on the amount of memory requested.
Currently, if you ask for 512 bytes or fewer, malloc rounds up to the next multiple of 16 bytes. If you ask for more than 512 bytes, however, malloc rounds up to the next multiple of 512 (so at least 1024 bytes). Suppose then that you define a class that, amongst other things, declares five NSInteger instance variables, and that on a 32-bit system each instance occupies, say, 272 bytes. On a 64-bit system, instances would in theory require 544 bytes. But, because of the memory allocation strategy, each will actually occupy 1024 bytes (an almost fourfold increase). If you use a large number of these objects, the memory footprint of your application may be considerably greater than you might otherwise expect. If you replaced the NSInteger variables with int32_t variables, you would only use 512 bytes.
When you're choosing what scalar to use, therefore, make sure you choose something sensible. Is there any reason why you need a value greater than you needed in your 32-bit application? Using a 64-bit integer to count a number of seconds is unlikely to be necessary...
64-bit is actually the raison d'ĂȘtre for NSInteger and NSUInteger; before 10.5, those did not exist. The two are simply defined as longs in 64-bit, and as ints in 32-bit:
#if __LP64__ || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Thus, use them in place of the more basic C types when you want the "bit-native" size.
CocoaDev has some more info.
I prefer the standard C-style declarations, but only because I switch between several languages and I don't have to think too much about it - but it sounds like I should start looking at NSInteger.
For importing and exporting data to files or over the net I use UInt32, SInt64 etc...
These are guaranteed to be of a certain size regardless of the architecture and help in porting code to other platforms and languages which also share those types.