I have a CGSize property in a class and I need to check whether it has been initialized. I know that a CGSize isn't an object, but generally speaking it's the same idea as checking whether an object is different from nil. How do I do that?
You can compare it to CGSizeZero or an arbitrary size that you consider invalid.
if (!CGSizeEqualToSize(CGSizeZero, mySize)) {
// do something
}
It depends on what you mean by "in a class". If this is an instance variable, your problems are over, because you are guaranteed that an instance variable will be auto initialized to some form of zero (i.e. CGSizeZero). But if you just mean "in my code somewhere", e.g. an automatic variable, then there is no such test; it is entirely up to you to initialize before use, and until you do, the value could be anything at all (sorry, but that's how C works).
On the whole, your question is itself a "bad smell". If it matters to you at some point in your code whether this value has been initialized, then you are doing it wrong. It's your value; it was up to you to initialize it (e.g. when your overall object was initialized). Or, if for some reason you need to know whether your setter has ever been called, then you need to add a boolean to your setter that tells whether it has ever been called.
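If you go the "track whether the setter was called" route, a minimal sketch (the class and property names here are made up) looks like this:

#import <UIKit/UIKit.h>

@interface MyView : UIView {
    CGSize _thumbnailSize;
    BOOL _thumbnailSizeWasSet;
}
- (void)setThumbnailSize:(CGSize)size;
- (BOOL)thumbnailSizeWasSet;
@end

@implementation MyView

- (void)setThumbnailSize:(CGSize)size
{
    _thumbnailSize = size;
    _thumbnailSizeWasSet = YES;   // records that the setter has been called at least once
}

- (BOOL)thumbnailSizeWasSet
{
    return _thumbnailSizeWasSet;
}

@end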
CGSize is a C struct. With few exceptions (such as when it is used as an ivar), there is no guarantee about its initial value. It could be anything, especially when created on the stack.
Thus, you are responsible for initializing it properly, and since "zeros" are valid values, there is no guaranteed way to tell whether it was set to zero on purpose or is simply uninitialized.
Do I need to explicitly zero primitives, i.e., set BOOLs to NO, set ints to 0?
Do I need to explicitly assign an NSString* to nil or @""?
I know that pointers must be explicitly set to nil, otherwise they may be filled with garbage. (Or is that only for Objective-C++?)
It depends on what kind of variable you're talking about. Globals, static variables and instance variables are already guaranteed to be initialized to 0.
Local variables are a different story. They are never initialized at all by default, so you shouldn't read their values until you initialize or set them. It isn't strictly necessary to initialize them to 0 specifically. For example, the following code is very redundant:
Controller *controller = nil;
int countOfThings = 0;
controller = [Controller sharedInstance];
countOfThings = controller.totalThings - controller.thingsUsed;
Instead, you should initialize variables to the values you actually want:
Controller *controller = [Controller sharedInstance];
int countOfThings = controller.totalThings - controller.thingsUsed;
It's always good programming practice to initialize primitives, but strictly speaking it's only required if there's a chance you'll read the variable before it has been set to anything other than garbage memory.
I believe the compiler still warns about "uninitialized variables", but if not, there's definitely a compiler checkbox in Xcode for that.
The compiler flag for this is -Wuninitialized, by the way.
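For instance, this (hypothetical) snippet is exactly what that flag is meant to catch:

int total;                     // local variable: not zero-initialized, contains garbage
NSLog(@"total = %d", total);   // clang warns here that 'total' is used uninitialized
total = 42;                    // fine from this point on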
Let's imagine I have a Fraction class. So, the correct way to create an instance of it is:
Fraction *myFraction;
or
myFraction = Fraction;
or
myFraction = [Fraction new];
or something else?
In the book I'm studying, the correct one is the first, but that looks unreasonable to me. Why do we have to create a pointer to it? Why don't we make the real instance?
That first expression means - give me a pointer to the new instance of Fraction class, doesn't it?
The first declares a variable named myFraction of type Fraction *, but doesn't create anything, nor initialize myFraction. The second isn't valid. The third creates a new Fraction and assigns it to a previously declared variable named myFraction. Often in Objective-C, you'll declare and initialize a variable in a single statement:
Fraction *myFraction = [[Fraction alloc] init];
As for whether to use new or alloc followed by init, it's largely a matter of taste.
Variables for storing objects are pointers in part because Objective-C inherited C's call-by-value semantics. When one variable is assigned to another (such as when passing it to a function), the value will be copied. At best, this is inefficient for immutable objects. At worst, it leads to incorrect behavior. Pointers are a way around call-by-value and the copy-on-assign semantics: the value of a variable with pointer type is just the pointer. It can be copied without touching the target object. The cost for this is you need some form of memory management.
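A small illustration of that point (a sketch, assuming Foundation and CoreGraphics are available): assigning one object variable to another copies only the pointer, so both names refer to the same object, whereas assigning a plain C struct copies the value.

NSMutableString *a = [NSMutableString stringWithString:@"Hello"];
NSMutableString *b = a;            // copies the pointer, not the string
[b appendString:@", world"];
NSLog(@"%@", a);                   // prints "Hello, world" - a and b point to the same object

CGSize s1 = CGSizeMake(10, 20);
CGSize s2 = s1;                    // copies the whole struct
s2.width = 99;                     // does not affect s1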
It would be a good idea to read Kernighan and Ritchie's "The C Programming Language" so you can get an idea about how variables are declared.
There are two modes of allocation in C, Objective-C and C++: manual and automatic.
Integers, floats, characters and the like are generally allocated automatically. They are created when execution reaches their declaration (e.g. int i), and destroyed when the scope they were created in goes away, i.e. when you exit the block in which they were declared. They're called automatics. (It's also possible to declare them "static", but for the purposes of this discussion regarding allocation, these are the same.)
Objects are too complicated to pass around to functions, as function parameters are "pass by value", meaning that the parameter gets a copy of the value being passed in, instead of the variable itself. It'd take a huge amount of time to copy a whole object all the time.
For this reason, you want to just tell the various functions where they can find the object. Instead of handing off a copy of the object, you hand off a copy of the address of the object. The address is stored in an automatic with a pointer type. (This is really just an integer, but its size is dictated by the hardware and OS, so it needs to be a special type.)
The declaration Fraction *myFraction; means "myFraction is a pointer, and just so you know, it's going to point to a Fraction later."
This will automatically allocate the pointer, but not the whole Fraction. For that to happen, you must call alloc and init.
The big reason why you have this two step process is that since we typically want objects to stick around for a while, we don't want the system automatically killing them at the end of a function. We need them to persist. We create places to hang the object in our functions, but those hangers go away when they aren't needed. We don't want them taking the object with them.
Ultimately, you might make declarations like this:
Fraction *myFraction = [[Fraction alloc] initWithNumerator: 2 Denominator: 3];
which says: "Make me a Fraction, and set it to be 2/3, and then put the address of that Fraction into 'myFraction'."
Why do we have to create a pointer for it? Why don't we make the real instance?
In Objective-C, every object is accessed through a pointer. So you need to use either new or alloc/init.
Fraction *myFraction = [ Fraction new ] ;
or
Fraction *myFraction = [ [Fraction alloc] init ] ;
And myFraction needs to be released.
That first expression means - give me a pointer to the new instance of Fraction class, doesn't it?
No, you are just declaring a pointer to a Fraction. And the second statement is not even valid.
So there 's NULL, which is used for pointers in general, and nil, which is used for object pointers.
Now I see there's also Nil, which is used by lower-level Obj-C runtime functions like class_getProperty.
Is this somehow different from nil philosophically? (yes, I know they're all actually 0)
Why was it even introduced? Or, if Nil was first (which is likely), why was nil introduced?
Googling "Nil vs nil" found this post http://numbergrinder.com/node/49, which states:
All three of these values represent null, or zero pointer, values. The difference is that while NULL represents zero for any pointer, nil is specific to objects (e.g., id) and Nil is specific to class pointers. It should be considered a best practice of sorts to use the right null object in the right circumstance for documentation purposes, even though there is nothing stopping someone from mixing and matching as they go along.
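In code, the convention looks like this (just a sketch):

NSString *name = nil;    // nil:  null pointer to an object
Class cls = Nil;         // Nil:  null pointer to a class
char *buffer = NULL;     // NULL: null C pointer
// All three are zero; each one just documents what the variable is meant to point to.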
As a Java developer who is reading Apple's Objective-C 2.0 documentation: I wonder what "sending a message to nil" means - let alone how it is actually useful. Taking an excerpt from the documentation:
There are several patterns in Cocoa that take advantage of this fact. The value returned from a message to nil may also be valid:
If the method returns an object, any pointer type, any integer scalar of size less than or equal to sizeof(void*), a float, a double, a long double, or a long long, then a message sent to nil returns 0.
If the method returns a struct, as defined by the Mac OS X ABI Function Call Guide to be returned in registers, then a message sent to nil returns 0.0 for every field in the data structure. Other struct data types will not be filled with zeros.
If the method returns anything other than the aforementioned value types, the return value of a message sent to nil is undefined.
Has Java rendered my brain incapable of grokking the explanation above? Or is there something that I am missing that would make this as clear as glass?
I do get the idea of messages/receivers in Objective-C, I am simply confused about a receiver that happens to be nil.
Well, I think it can be described using a very contrived example. Let's say you have a method in Java which prints out all of the elements in an ArrayList:
void foo(ArrayList list)
{
    for (int i = 0; i < list.size(); ++i) {
        System.out.println(list.get(i).toString());
    }
}
Now, if you call that method like so: someObject.foo(null); you're probably going to get a NullPointerException when it tries to access list, in this case in the call to list.size(). Now, you'd probably never call someObject.foo(null) with the null value spelled out like that. However, you may have gotten your ArrayList from a method which returns null if it runs into some error generating the ArrayList, as in someObject.foo(otherObject.getArrayList());
Of course, you'll also have problems if you do something like this:
ArrayList list = null;
list.size();
Now, in Objective-C, we have the equivalent method:
- (void)foo:(NSArray *)anArray
{
    int i;
    for (i = 0; i < [anArray count]; ++i) {
        NSLog(@"%@", [[anArray objectAtIndex:i] stringValue]);
    }
}
Now, if we have the following code:
[someObject foo:nil];
we have the same situation in which Java would produce a NullPointerException. The nil object is accessed first at [anArray count]. However, instead of throwing a NullPointerException, Objective-C simply returns 0 in accordance with the rules above, so the loop never runs. If, on the other hand, we set the loop to run a fixed number of times, then we're first sending a message to anArray at [anArray objectAtIndex:i]. This also returns 0, but since objectAtIndex: returns a pointer, and a pointer whose value is 0 is nil/NULL, NSLog will be passed nil each time through the loop. (Although NSLog is a function and not a method, it prints out (null) if passed a nil NSString.)
In some cases it's nicer to have a NullPointerException, since you can tell right away that something is wrong with the program, but unless you catch the exception, the program will crash. (In C, trying to dereference NULL in this way causes the program to crash.) In Objective-C, it instead just causes possibly incorrect run-time behavior. However, if you have a method that doesn't break if it returns 0/nil/NULL/a zeroed struct, then this saves you from having to check to make sure the object or parameters are nil.
A message to nil does nothing and returns nil, Nil, NULL, 0, or 0.0.
All of the other posts are correct, but maybe it's the concept that's the thing important here.
In Objective-C method calls, any object reference that can accept a selector is a valid target for that selector.
This saves a LOT of "is the target object of type X?" code - as long as the receiving object implements the selector, it makes absolutely no difference what class it is! nil accepts any selector - it just doesn't do anything. This eliminates a lot of "check for nil, don't send the message if true" code as well. (The "if it accepts it, it implements it" concept is also what allows you to create protocols, which are sorta kinda like Java interfaces: a declaration that if a class implements the stated methods, then it conforms to the protocol.)
The reason for this is to eliminate monkey code that doesn't do anything except keep the compiler happy. Yes, you get the overhead of one more method call, but you save programmer time, which is a far more expensive resource than CPU time. In addition, you're eliminating more code and more conditional complexity from your application.
Clarifying for downvoters: you may think this is not a good way to go, but it's how the language is implemented, and it's the recommended programming idiom in Objective-C (see the Stanford iPhone programming lectures).
What it means is that the runtime doesn't produce an error when objc_msgSend is called on the nil pointer; instead it returns some (often useful) value. Messages that might have a side effect do nothing.
It's useful because most of the default values are more appropriate than an error. For example:
[someNullNSArrayReference count] => 0
I.e., nil appears to be the empty array. Hiding a nil NSView reference does nothing. Handy, eh?
In the quotation from the documentation, there are two separate concepts -- perhaps it might be better if the documentation made that more clear:
There are several patterns in Cocoa that take advantage of this fact.
The value returned from a message to nil may also be valid:
The former is probably more relevant here: typically being able to send messages to nil makes code more straightforward -- you don't have to check for null values everywhere. The canonical example is probably the accessor method:
- (void)setValue:(MyClass *)newValue {
    if (value != newValue) {
        [value release];
        value = [newValue retain];
    }
}
If sending messages to nil were not valid, this method would be more complex -- you'd have to have two additional checks to ensure value and newValue are not nil before sending them messages.
The latter point (that values returned from a message to nil are also typically valid), though, adds a multiplier effect to the former. For example:
if ([myArray count] > 0) {
// do something...
}
This code again doesn't require a check for nil values, and flows naturally...
All this said, the additional flexibility of being able to send messages to nil does come at some cost. There is the possibility that you will at some stage write code that fails in a peculiar way because you didn't take into account the possibility that a value might be nil.
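A typical example of that failure mode (a contrived sketch with made-up names): if a mutable array is never actually created, every addObject: silently does nothing, and the bug only shows up later as a mysteriously empty collection rather than a crash.

#import <Foundation/Foundation.h>

@interface Logger : NSObject {
    NSMutableArray *_messages;   // oops: never initialized in -init, so it stays nil
}
- (void)log:(NSString *)message;
@end

@implementation Logger

- (void)log:(NSString *)message
{
    [_messages addObject:message];   // message to nil: silently does nothing
}

@end
// Much later, the log turns out to be empty (actually nil) and nothing ever crashed.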
From Greg Parker's site:
If running LLVM Compiler 3.0 (Xcode 4.2) or later
Messages to nil with return type | return
Integers up to 64 bits | 0
Floating-point up to long double | 0.0
Pointers | nil
Structs | {0}
Any _Complex type | {0, 0}
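In code, those rules look roughly like this (a sketch, assuming UIKit is available for the struct-returning example):

NSArray *array = nil;
NSUInteger n = [array count];      // integer return type: 0
id first = [array firstObject];    // pointer return type: nil

NSValue *value = nil;
CGRect r = [value CGRectValue];    // struct return type: {{0, 0}, {0, 0}} on modern runtimes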
It means often not having to check for nil objects everywhere for safety - particularly:
[someVariable release];
or, as noted, various count and length methods all return 0 when you've got a nil value, so you do not have to add extra checks for nil all over:
if ( [myString length] > 0 )
or this:
return [myArray count]; // say for number of rows in a table
Don't think about "the receiver being nil"; I agree, that is pretty weird. If you're sending a message to nil, there is no receiver. You're just sending a message to nothing.
How to deal with that is a philosophical difference between Java and Objective-C: in Java, that's an error; in Objective-C, it is a no-op.
ObjC messages which are sent to nil and whose return values have a size larger than sizeof(void*) produce undefined values on PowerPC processors. In addition, such messages cause undefined values to be returned in fields of structs whose size is larger than 8 bytes on Intel processors as well. Vincent Gable has described this nicely in his blog post.
I don't think any of the other answers have mentioned this clearly: if you're used to Java, you should keep in mind that while Objective-C on Mac OS X has exception handling support, it's an optional language feature that can be turned on/off with a compiler flag. My guess is that this design of "sending messages to nil is safe" predates the inclusion of exception handling support in the language and was done with a similar goal in mind: methods can return nil to indicate errors, and since sending a message to nil usually returns nil in turn, this allows the error indication to propagate through your code so you don't have to check for it at every single message. You only have to check for it at the points where it matters. I personally think exception propagation and handling is a better way to address this goal, but not everyone may agree with that. (On the other hand, I for example don't like Java's requirement that a method declare which exceptions it may throw, which often forces you to syntactically propagate exception declarations throughout your code; but that's another discussion.)
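A sketch of that propagation style (the method and keys here are hypothetical): intermediate nil results simply flow through the chain, and you check only the final value.

NSDictionary *response = [self fetchResponse];   // hypothetical method; may return nil on error
NSString *name = [[[response objectForKey:@"user"]
                             objectForKey:@"profile"]
                             objectForKey:@"name"];
if (name == nil) {
    // handle the failure once, here, instead of after every single message
}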
I've posted a similar, but longer, answer to the related question "Is asserting that every object creation succeeded necessary in Objective C?" if you want more details.
C represents nothing as 0 for primitive values, and NULL for pointers (which is equivalent to 0 in a pointer context).
Objective-C builds on C's representation of nothing by adding nil. nil is an object pointer to nothing. Although semantically distinct from NULL, they are technically equivalent to one another.
Newly-alloc'd NSObjects start life with their contents set to 0. This means that all pointers that object has to other objects begin as nil, so it's unnecessary to, for instance, set self.(association) = nil in init methods.
The most notable behavior of nil, though, is that it can have messages sent to it.
In other languages, like C++ (or Java), this would crash your program, but in Objective-C, invoking a method on nil returns a zero value. This greatly simplifies expressions, as it obviates the need to check for nil before doing anything:
// For example, this expression...
if (name != nil && [name isEqualToString:@"Steve"]) { ... }
// ...can be simplified to:
if ([name isEqualToString:@"Steve"]) { ... }
Being aware of how nil works in Objective-C allows this convenience to be a feature, and not a lurking bug in your application. Make sure to guard against cases where nil values are unwanted, either by checking and returning early to fail silently, or adding a NSParameterAssert to throw an exception.
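For example (a minimal sketch), a method that must not silently accept nil can assert up front:

- (void)greetUser:(NSString *)name
{
    NSParameterAssert(name != nil);   // raises in debug builds (assertions are usually compiled out of release builds)
    NSLog(@"Hello, %@", name);
}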
Source:
http://nshipster.com/nil/
https://developer.apple.com/library/ios/#documentation/cocoa/conceptual/objectivec/Chapters/ocObjectsClasses.html (Sending Message to nil).