NSNumber vs. NSInteger vs. int for NSObject property - objective-c

We've got a model in our iOS app with an ID property. Here's what we're currently using (this is iOS 5, by the way).
@property (nonatomic, assign) int userID;
Seems to be working fine so far. I'm wondering if this will cause any problems going forward.
Example: I understand that this means the ID property itself could not be stored in a plist. However, this is a property of an NSObject. If we were storing anything into a file/Core Data/NSUserDefaults/whatever, it would likely be the entire object and not just this property.
I guess my question is ... are we going to cause ourselves any problems by storing this as an int as opposed to an NSNumber?
Secondly, what would be the difference in storing this as an NSInteger instead? I understand that it's just a typedef to either long or int depending on the architecture. Since we're only targeting iPhone, does it matter that it's just set to int? It doesn't seem like it would make any difference in that case.

I guess my question is ... are we going to cause ourselves any problems by storing this as an int as opposed to an NSNumber?
That really depends on what you're going to do with this value. If you want to treat it as an object (for example, so that you can store it in an NSArray or NSDictionary), NSNumber may be convenient. If you just want to keep track of the value, and an int works for you, then it's fine to use int.
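For instance, the moment the ID needs to go into a collection, you can box it on the spot; a minimal sketch (the payload dictionary is just an illustration, using the userID property from the question):
NSDictionary *payload = @{ @"userID" : @(self.userID) }; // box the int into an NSNumber
int userID = [payload[@"userID"] intValue]; // unbox it again on the way out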
Secondly, what would be the difference in storing this as an NSInteger instead? I understand that it's just a typedef to either long or int depending on the architecture. Since we're only targeting iPhone, does it matter that it's just set to int?
I'd go with NSInteger (and NSUInteger). Using those types, your code will automatically use the appropriate size for the architecture you're compiling for. You may only be targeting iOS, but you're probably running your code on the iOS simulator, which runs on Mac OS X. So that's two architectures right there -- and you don't know what may happen with iOS in the future.
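One practical habit that follows from this (my illustration, not part of the original answer): since NSInteger is int on 32-bit and long on 64-bit, cast to long when logging so the format specifier is correct on both architectures:
NSInteger userID = 42;
// %ld plus a cast to long is correct whether NSInteger is int or long
NSLog(@"userID = %ld", (long)userID);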

The only limitation I can think of is if 0 is a valid value, or if the property not having a value at all (null) is an important use case.
Since ints are always initialised to 0, you won't be able to check for the non-existence of that property.
In your case, say you want to test whether userID is present or not: that's not possible with a primitive type like int, since it will always have a value.
In other scenarios 0 could be a valid value (even in yours - many pop culture references joke that Steve Jobs was Employee Number 0 at Apple). In that case, int being initialized to 0 every time might be an unwanted side effect you have to deal with.
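To make that concrete, a sketch of the NSNumber variant, where nil can stand in for "no value yet" (the user object is hypothetical):
@property (nonatomic, strong) NSNumber *userID;

// nil distinguishes "never set" from "set to 0":
if (user.userID == nil) {
    // no ID has been assigned yet
} else {
    int rawID = [user.userID intValue];
}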

It is quite normal to use an int as a property in a subclass of NSObject.
Depending on your platform, NSInteger could be an int or a long, but other than that it doesn't matter whether you use int or NSInteger; they can be used interchangeably as long as the value doesn't exceed the limits of an int.

Objective-C: Type casting object to integer_t

integer_t is a typedef of int32_t as defined here, and after some checking, integer_t has a size of 4 bytes, and so does int (intValue), as mentioned in this doc. My question is, does casting like this produce a valid result?
integer_t value = 100;
id anObject = @(value);
integer_t aValue = [anObject intValue];
Is aValue always equal to value? Will this cause any issue in the long run? Should I do long value = [anObject longValue] instead? Thanks in advance.
Short and specific answer - YES, those values are equal, since integer_t and int both (according to you - here's the catch) have the same size AND the same signedness. If one were, e.g., some kind of unsigned int, it would not work. Neither would it work if one were, e.g., 8 bytes (long) and the other 4 (int).
The long and general answer is - it depends. Yes, here you think they are equal, but there are always funny cases to watch out for. I already mentioned size and signedness, but the real trap can be the system architecture. You might assume they are the same, and then one day you compile for a 64-bit arch and everything breaks down because, say, int is 8 bytes there while integer_t is still 4. You could also run into endianness trouble: if you get a bunch of ints from a mainframe, they could be stored BADC, where A, B, C and D are the 4 bytes of the int.
As you can see, it is easy to scare anybody working with these, and in practice that is why there are things such as NSInteger - Objective-C's attempt to protect you from all this. But don't be scared: these are toothless monsters unless you work at a low level, and then your work will be to work with them. Doesn't that sound poetic.
Back to the code - don't worry too much about these. If you work in Objective-C, try to use the NSInteger and NSUInteger types for now. If you store these values and need to load them again later, you need to think about the possibility that you stored them on a 32-bit arch and are restoring them on a 64-bit arch, and work around that somehow.
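One way to sidestep that storage concern (an illustration, not from the answer above) is to pin the width explicitly at the serialization boundary with a fixed-size type:
#include <stdint.h>

int32_t value = 100; // fixed 32-bit width on every architecture
NSNumber *boxed = @(value);
// intValue returns an int, which is 32 bits on all current Apple
// platforms, so this round trip does not depend on the architecture:
int32_t restored = (int32_t)[boxed intValue];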

Having issues when trying to convert string input to an integer value that can be set as an entities int32 attribute

I am still fairly new to Objective-C and iOS development. I am able to make an app run fine with Core Data when all the attributes are strings. My problem occurs when I have an entity (I made a test one to show as an example) with an attribute of type Integer 16 (though I have tried both Integer 16 and Integer 64 and get the exact same errors): I cannot seem to understand how I am supposed to convert string input from the user to a format that will be accepted as a value to set. I keep getting the same two error messages: "Implicit conversion of 'NSInteger' (aka 'long') to 'id _Nullable' is disallowed with ARC" and "Incompatible integer to pointer conversion sending 'NSInteger' (aka 'long') to parameter of type 'id _Nullable'".
ex 1:
NSManagedObjectContext *context = [self managedObjectContext];
NSManagedObject *newEntity = [NSEntityDescription insertNewObjectForEntityForName:@"TestEntity" inManagedObjectContext:context];
int valueOne = [self.valueOneIn.text intValue];
[newEntity setValue:valueOne forKey:@"value1"]; // !! 2 errors listed above
ex 2 (above ex edited):
NSInteger valueOne = [self.valueOneIn.text intValue];
[newEntity setValue:valueOne forKey:@"value1"]; // !! 2 errors listed above
ex 3 (ex1, just edited):
NSInteger *valueOne = [self.valueOneIn.text intValue]; // !! error
[newEntity setValue:valueOne forKey:@"value1"]; // !! 2 errors listed above
I have attached two photos showing simple examples of the errors that I am getting. I have spent the past couple of days looking up videos, online courses, and even reading some possible solutions on Stack Overflow, but none seem to remedy the situation (my examples above were made in my attempts to use the potential solutions I had found, but most cover using Core Data with string values or NSDate values). Any help or nudge in the right direction would really (I cannot stress this enough, I mean really) be appreciated.
The setValue:forKey: method wants an object as the value. But int and NSInteger are primitive numeric types, not objects. That's why the first two examples don't work. The third one doesn't work because a pointer to NSInteger is still not an object.
Assuming that valueOneIn is a text field, you should do something like:
NSInteger valueOne = [self.valueOneIn.text integerValue];
[newEntity setValue:@(valueOne) forKey:@"value1"];
The @(valueOne) syntax tells the compiler to convert the NSInteger variable valueOne to an instance of NSNumber. That's a class designed to wrap numeric values when objects are required, so it's what you need for setValue:forKey:. Also, note that the code uses integerValue instead of intValue, which is better because the compiler will use the correct integer size for the platform you're targeting.
It would be better to use custom NSManagedObject subclasses for your entities than to use NSManagedObject directly. One major weakness of plain setValue:forKey: is that it will accept any object as the value; subclasses tell the compiler which object types are acceptable, so the compiler can verify that you're using the correct types.
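As a sketch of that suggestion, a subclass for the TestEntity from the question might look like this (Core Data exposes integer attributes as NSNumber by default; context and valueOneIn are taken from the examples above):
@interface TestEntity : NSManagedObject
@property (nonatomic, strong) NSNumber *value1;
@end

@implementation TestEntity
@dynamic value1; // Core Data supplies the accessors at runtime
@end

// Usage - the compiler now checks the type for you:
TestEntity *entity = [NSEntityDescription insertNewObjectForEntityForName:@"TestEntity" inManagedObjectContext:context];
entity.value1 = @([self.valueOneIn.text integerValue]);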

Can you create an NSValue from a C struct with bitfields?

I'm trying to do the following, but NSValue's creation method returns nil.
Are C bitfields in structs not supported?
struct MyThingType {
    BOOL isActive : 1;
    uint count : 7;
} myThing = {
    .isActive = YES,
    .count = 3,
};
NSValue *value = [NSValue valueWithBytes:&myThing objCType:@encode(struct MyThingType)];
// value is nil here
First and foremost, claptrap makes a very good point in his comment: why bother using bitfield specifiers (which are mainly used either for micro-optimization or to manually add padding bits where you need them), only to then wrap it all up in an instance of NSValue?
It's like buying a castle, but then living in the kitchen so as not to wear out the carpets...
I don't think it's supported; a quick canter through the Apple dev-docs came up with this... there are indeed several issues to take into account when it comes to bit fields.
I've also just found this, which explains why bit-fields + NSValue don't really play well together.
Especially in cases where the sizeof of a struct can lead to NSValue reading the data in an... shall we say erratic manner:
The struct you've created is padded to 8 bits. Now those bits could be read as 2 ints, or 1 long, or something else... From what I've read on the linked page, it's not unlikely that this is what is happening.
So, basically, NSValue is incapable of determining the actual types when you're using bit fields. In case of ambiguity, an int (width 4 in most cases) is assumed, under/overflow occurs, and you have a mess on your hands.
Since the compiler still has some liberty as to where each member is actually stored, it doesn't quite suffice to pass the stringified typedef sort of thing (objCType: @encode(struct YourStruct)), because there is a good chance that you won't be able to make sense of the actual struct itself, owing to compiler optimizations and such...
I'd suggest you simply drop the bit field specifiers, because structs should be supported... at least, last time I tried, a struct with simple primitive types worked just fine.
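For illustration, here is the same struct with the bitfield specifiers dropped, which NSValue should handle fine (the widened field types are my choice):
struct MyPlainThingType {
    BOOL isActive; // was isActive : 1
    uint count;    // was count : 7
};

struct MyPlainThingType myThing = { .isActive = YES, .count = 3 };
NSValue *value = [NSValue valueWithBytes:&myThing objCType:@encode(struct MyPlainThingType)];

// Reading it back out later:
struct MyPlainThingType restored;
[value getValue:&restored];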
You can solve this with a union. Simply put the structure into a union that has another member with a type supported by NSValue and a size larger than your structure. In your case, long is the obvious choice.
union _bitfield_word_union
{
    yourstructuretype bitfield;
    long plain;
};
You can make it more robust against resizing the structure by using an array whose size is calculated at compile time. (Please remember that sizeof() is a compile-time operator, too.)
char plain[sizeof(yourstructuretype)/sizeof(char)];
Then you can store the structure with the bitfield into the union and read the plain member out.
union _bitfield_word_union converter = { .bitfield = yourstructuretypevalue };
long plain = converter.plain;
Use this value for the NSValue instance creation. When reading it back out, you have to go the inverse way.
I'm pretty sure that through a technical corrigendum of C99 this became standard-conforming (it's called type punning), because you can expect that reading one member's value (bitfield) through another member's value (plain) and storing it back is defined, as long as the member being read is at least as big as the member being written. (There might be undefined bits 9-31/63 in plain, but you do not have to care about them.) In any case, it is real-world conforming.
Dirty hack? Maybe. One might call it C99. However, using bitfields in combination with NSValue sounds like using dirty hacks.
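Putting the pieces together, the full round trip under this approach might look like this (a sketch; the struct is the one from the question and the union follows the answer above):
struct MyThingType {
    BOOL isActive : 1;
    uint count : 7;
};

union _bitfield_word_union {
    struct MyThingType bitfield;
    long plain;
};

// Store: view the bitfield struct through the long member.
// (Bits beyond the struct are unspecified, per the caveat above.)
union _bitfield_word_union packed = { .bitfield = { .isActive = YES, .count = 3 } };
NSValue *value = [NSValue valueWithBytes:&packed.plain objCType:@encode(long)];

// Load: read the long back, then view it as the struct again.
union _bitfield_word_union unpacked;
[value getValue:&unpacked.plain];
BOOL isActive = unpacked.bitfield.isActive;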

Why is the row property of NSIndexPath a signed integer?

Why is the row property of NSIndexPath a signed integer?
Could it ever take on a "valid" negative value?
edit
I haven't thought about this until today when I set LLVM to check sign comparison. This made the compiler spew out warnings whenever there was indexPath.row <= [someArray count] or similar.
What happens if you use negative numbers?
It isn't wise to use negative values; if you do, you'll get crazy results:
NSIndexPath* path = [NSIndexPath indexPathForRow:-2 inSection:0];
The above results in a section of 0, and a row of 4294967294 (which looks like underflow of an NSUInteger to me!). Be safe in the knowledge that this only occurs within the UIKit Additions category, and not within NSIndexPath itself. Looking at the concept behind NSIndexPath, it really doesn't make sense for it to hold negative values. So why is it signed?
(Possible) Reason for why it is so
The core NSIndexPath object from OS X uses NSUIntegers for its indices, but the UIKit Additions category uses NSInteger. The category only builds on top of the core object, and its use of NSInteger over NSUInteger doesn't provide any extra capabilities.
Why it works this way, I have no idea. My guess (and I emphasise, a guess) is that it was a naive API slip-up when first launching iOS. When UITableView was released in iOS 2, it used NSInteger for a variety of things (such as numberOfSections). Think about it: this conceptually doesn't make sense; you can't have a negative number of sections. Even in iOS 6 it still uses NSInteger, so as not to break compatibility with existing table view code.
Alongside UITableView, we have the additions to NSIndexPath, which are used in conjunction with the table view for accessing its rows and such. Because they have to work together, they need compatible types (in this case NSInteger).
Changing the type to NSUInteger across the board would break a lot of things, and for safe API design everything would need to be renamed so that the NSInteger and NSUInteger counterparts could work safely side by side. Apple probably don't want this hassle (and neither do the developers!), and as such they have kept it as NSInteger.
One possible reason is that unsigned types underflow very easily. As an example, I had an NSUInteger variable for a stroke width in my code. I needed to create an "envelope" around a point painted with this stroke, hence this code:
NSUInteger width = 3;
CGRect envelope = CGRectInset(CGRectZero, -width, -width);
NSLog(@"%@", NSStringFromCGRect(envelope));
With an unsigned type this outputs {{inf, inf}, {0, 0}}; with a signed integer you get {{-3, -3}, {6, 6}}. The reason is that the unary minus before the width variable causes an underflow. This might be obvious to somebody, but it will surprise a lot of programmers:
NSUInteger a = -1;
NSUInteger b = 1;
NSLog(@"a: %u, b: %u", a, -b); // a: 4294967295, b: 4294967295
So even in situations where it doesn't make sense for the value itself to be negative (a stroke width can't be negative), it can make sense to use the value in a negative context, causing an underflow. Switching to a signed type leads to fewer surprises while still keeping the range reasonably high. Sounds like a nice compromise.
I think the UIKit Additions on NSIndexPath use the NSInteger type intentionally. If for some reason a negative row were passed as a parameter to one of these methods (I see no reason to at the moment, though...), it would not be silently cast to a huge unsigned value, and no object would go looking for a ridiculously large index that does not exist. Still, there are other ways to prevent this, so it might be just a matter of taste.
I, for example, would not take NSUInteger parameters in the NSIndexPath class, but rather NSInteger; I would then check the sign and not create the NSIndexPath at all if any parameter was negative.
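A sketch of that last idea as a small guard (the SafeIndexPath helper is hypothetical, not part of UIKit):
// Returns nil instead of letting a negative value wrap around.
static NSIndexPath *SafeIndexPath(NSInteger row, NSInteger section)
{
    if (row < 0 || section < 0) {
        return nil;
    }
    return [NSIndexPath indexPathForRow:row inSection:section];
}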

Do NSDouble, NSFloat, or other types than NSInteger exist?

Over at "In Cocoa do you prefer NSInteger or just regular int, and why?", there was mention of NSDouble and NSFloat, but I can't see a reference for those in any documentation. If NSInteger's purpose is architectural safety, what about other types such as double or float?
NSInteger exists because the natural integer size varies between 32-bit and 64-bit systems: it's a typedef for int on 32-bit and for long on 64-bit. float and double don't vary in size the same way, so there's no need for wrapper types for them.
There is no NSFloat, but the Core Graphics API did eventually change from float to CGFloat so that it could use a double on some architectures.
It is best to use the exact types that API headers declare. This makes type changes automatic if you ever recompile your code for a different target.
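For reference, this is roughly how the headers pick the sizes (simplified; the real Foundation and Core Graphics headers use a few more guards):
#if __LP64__
typedef long NSInteger;
typedef unsigned long NSUInteger;
typedef double CGFloat;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
typedef float CGFloat;
#endif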
It's also about conventions.
A typedef of int carries more meaning than a plain int itself. Example: pid_t is of type int, but passing around a plain int where a pid_t is expected obscures the intent. Why? Because you want to be sure that when you cross API boundaries, everyone knows what the code expects.
There are float and double typedefs too, e.g. NSTimeInterval (a double). It's not really about the underlying type, but about the convention to adhere to.
If you declare a local int as a loop counter and you do not plan to pass it to a well-defined API, it's fine to call an int an int.