Objective-C [on OS X Leopard] Garbage Collection, nil Question

I have a question about garbage collection in Objective-C.
If I have an object, let's call it 'A', and 'A' contains instance variables that point to multiple other objects: if I set the pointer to 'A' to nil, will the garbage collector understand that everything contained in 'A' is also now unused and handle the cleanup? Or do I need to also explicitly set all the instance variables in 'A' to nil for memory cleanup to occur?

Yes, it just works; the collector knows that a sub-graph of objects, potentially complexly inter-connected, that no longer has any connections from the live objects is garbage.
The collector does full cycle detection, too.
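To make the scenario concrete, here is a minimal sketch for a GC-enabled (Leopard-era) target; the class, its ivars, and the demo function are purely illustrative:
#import <Foundation/Foundation.h>
// 'A' owns further objects through its instance variables.
@interface A : NSObject {
    NSMutableArray *children;
    NSString *name;
}
@end
@implementation A
@end

void demo(void) {
    A *a = [[A alloc] init];
    // ... use a ...
    a = nil; // the collector traces reachability, so the objects referenced by
             // a's ivars become garbage too; there is no need to nil them out first
}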

Yes, absolutely, it will work.
HOWEVER, note that garbage collection is non-deterministic, that is, there's no telling when it will run.
Therefore, any destructors you need called won't be called immediately when you nil the pointer.
If the object 'A' is, or holds references to, file objects, database objects, connection objects, etc. then you will need to use reference counting to ensure that these are freed immediately.
Otherwise, use GC; it's a lot less painful.
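As a hedged sketch of what that deterministic cleanup looks like under manual reference counting (a non-GC build), where path stands in for a real file path:
NSFileHandle *handle = [[NSFileHandle fileHandleForWritingAtPath:path] retain];
// ... write data ...
[handle closeFile];  // explicitly free the scarce resource (the file descriptor)
[handle release];    // under reference counting, dealloc runs immediately at zero
handle = nil;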

Related

objective-c memory management--how long is an object guaranteed to exist?

I have ARC code of the following form:
NSMutableData* someData = [NSMutableData dataWithLength:123];
...
CTRunGetGlyphs(run, CFRangeMake(0, 0), someData.mutableBytes);
...
const CGGlyph *glyphs = [someData mutableBytes];
...
...followed by code that reads memory from glyphs but does nothing with someData, which isn't referenced anymore. Note that CGGlyph is not an object type but an unsigned integer.
Do I have to worry that the memory in someData might get freed before I am done with glyphs (which is actually just pointing inside someData)?
All this code is WITHIN the same scope (i.e., a single selector), and glyphs and someData both fall out of scope at the same time.
PS In an earlier draft of this question I referred to 'garbage collection', which didn't really apply to my project. That's why some answers below give it equal treatment with what happens under ARC.
You are potentially in trouble whether you use GC or, as others have recommended instead, ARC. What you are dealing with is an internal pointer which is not considered an owning reference in either GC or ARC in general - unless the implementation has special-cased NSData. Without that owning reference either GC or ARC might remove the object. The problem you face is peculiar to internal pointers.
As you describe your situation, the safest thing to do is to hang onto the real reference. You could do this by assigning the NSData reference to either an instance variable or a static (method local if you wish) variable and then assigning nil to that variable when you're done with the internal pointer. In the case of a static, be careful about concurrency!
In practice your code will probably work in both GC and ARC, probably more likely in ARC, but either could conceivably bite you especially as compilers change. For the cost of one variable declaration and one extra assignment you avoid the problem, cheap insurance.
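A sketch of the suggestion above, using a method-local static as the owning reference (run is assumed to be an existing CTRunRef from the surrounding code, and this is not thread-safe):
CFIndex glyphCount = CTRunGetGlyphCount(run);
static NSMutableData *keepAlive = nil;  // the owning reference (could be an ivar instead)
keepAlive = [NSMutableData dataWithLength:glyphCount * sizeof(CGGlyph)];
CTRunGetGlyphs(run, CFRangeMake(0, 0), keepAlive.mutableBytes);  // (0, 0) means the whole run
const CGGlyph *glyphs = keepAlive.mutableBytes;  // internal pointer into keepAlive
// ... read from glyphs ...
keepAlive = nil;  // drop the owning reference only once glyphs is no longer needed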
[See this discussion as an example of short lifetime under ARC.]
Under actual, real garbage collection that code is potentially a problem. Objects may be released as soon as there is no longer any reference to them and the compiler may discard the reference at any time if you never use it again. For optimisation purposes scope is just a way of putting an upper limit on that sort of thing, not a way of dictating it absolutely.
You can use NSAllocateCollectable to attach lifecycle calculation to C primitive pointers, though it's messy and slightly convoluted.
Garbage collection was never implemented in iOS and is now deprecated on the Mac (as referenced at the very bottom of this FAQ), in both cases in favour of automatic reference counting (ARC). ARC adds retains and releases where it can see that they're implicitly needed. Sadly it can perform some neat tricks that weren't previously possible, such as retrieving objects from the autorelease pool if they've been used as return results. So that has the same net effect as the garbage collection approach — the object may be released at any point after the final reference to it vanishes.
A workaround would be to create a class like:
@interface PFDoNothing : NSObject
+ (void)doNothingWith:(id)object;
@end
Which is implemented to do nothing. Post your autoreleased object to it after you've finished using the internal memory. Objective-C's dynamic dispatch means that it isn't safe for the compiler to optimise the call away — it has no way of knowing you (or the KVO mechanisms or whatever other actor) haven't done something like a method swizzle at runtime.
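A sketch of how the do-nothing class might be implemented and used (PFDoNothing and someData follow the naming already used above):
@implementation PFDoNothing
+ (void)doNothingWith:(id)object
{
    // Intentionally empty: the call merely gives the compiler a visible use
    // of `object` that dynamic dispatch prevents it from optimising away.
}
@end

// Usage, after the last read through the internal pointer:
// ... read from someData.mutableBytes ...
[PFDoNothing doNothingWith:someData];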
EDIT: NSData is a special case because it offers direct C-level access to object-held memory, so it's not difficult to find explicit discussions of at least the GC situation. See this thread on Cocoabuilder for a pretty good one, though the same caveat as above applies, i.e. garbage collection is deprecated and automatic reference counting acts differently.
The following is a generic answer that does not necessarily reflect Objective-C GC support. However, various GC implementations, including ref-counting, can be thought of in terms of Reachability, quirks aside.
In a GC language, an object is guaranteed to exist as long as it is Strongly-Reachable; the "roots" of these Strong-Reachability graphs can vary by language and executing environment. The exact meaning of "Strongly" also varies, but generally means that the edges are Strong-References. (In a manual ref-counting scenario each edge can be thought of as an unmatched "retain" from a given "owner".)
C# on the CLR/.NET is one such implementation where a variable can remain in scope and yet not function as a "root" for a reachability-graph. See the System.Timers.Timer class and look for GC.KeepAlive:
If the timer is declared in a long-running method, use KeepAlive to prevent garbage collection from occurring [on the timer object] before the method ends.
As of summer 2012, things are in the process of change for Apple objects that return inner pointers of non-object type. In the release notes for Mountain Lion, Apple says:
NS_RETURNS_INNER_POINTER
Methods which return pointers (other than Objective C object type) have been decorated with the clang compiler attribute objc_returns_inner_pointer (when compiling with clang) to prevent the compiler from aggressively releasing the receiver expression of those messages, which no longer appear to be referenced, while the returned pointer may still be in use.
Inspection of the NSData.h header file shows that this also applies from iOS 6 onward.
Also note that NS_RETURNS_INNER_POINTER is defined as __attribute__((objc_returns_inner_pointer)) in the clang specification, which makes it such that the object's lifetime will be extended until at least the earliest of: the last use of the returned pointer, or any pointer derived from it, in the calling function; or the autorelease pool is restored to a previous state.
Caveats:
If you're targeting anything older than Mountain Lion or iOS 6 you will still need to use one of the methods discussed here (e.g., __attribute__((objc_precise_lifetime))) when declaring your local NSData or NSMutableData objects.
Also, even with the newest compiler and Apple libraries, if you use older or third-party libraries with objects that do not decorate their inner-pointer-returning methods with __attribute__((objc_returns_inner_pointer)), you will need to decorate your local variable declarations of such objects with __attribute__((objc_precise_lifetime)) or use one of the other methods discussed in the answers.
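For completeness, a minimal sketch of the objc_precise_lifetime variant mentioned above (run is again assumed to be an existing CTRunRef; Foundation also wraps this attribute as the NS_VALID_UNTIL_END_OF_SCOPE macro):
__attribute__((objc_precise_lifetime)) NSMutableData *someData =
    [NSMutableData dataWithLength:CTRunGetGlyphCount(run) * sizeof(CGGlyph)];
CTRunGetGlyphs(run, CFRangeMake(0, 0), someData.mutableBytes);
const CGGlyph *glyphs = someData.mutableBytes;
// someData is now kept alive until the end of its lexical scope, so reading
// through glyphs anywhere in this scope is safe under ARC.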

Destroy Lua object by its own method

I want to destroy a class instance by its own method. For example:
obj = Object()
obj:destroy()
type(obj) == nil
Object is implemented in C. Is it possible?
If it's not possible, the second way is:
_G["obj"] = nil
collectgarbage()
Thanks!
I want to destroy a class instance by its own method.
You should avoid this at all costs. Only expose an explicit destructor routine in Lua if you absolutely need to.
The correct way to handle this is to give your Lua C object a metatable with an __gc metamethod. This metamethod will be called right before Lua garbage collects the object.
If you absolutely must use an explicit destructor function (because you want the user to be able to release expensive resources immediately when they're done, without waiting for garbage collection), then you need to do two things:
Do not require the user to explicitly destroy the object. That is, the object should be able to be destroyed either via destructor or via garbage collection.
Do not break the object when it is explicitly destroyed. Every function that takes this object (member functions or free functions) needs to still work if the user called the explicit destruction function. Those functions may do nothing, which is fine. But the program shouldn't crash.
Basically, your object needs to still be in an "alive" state when it was explicitly destroyed. You need to have the object be a zombie: alive, but not very useful. That way, your program will still function even if it doesn't do the right thing.
A simple obj = nil in your example is enough. Note that you do not destroy the content of the object; you delete the reference that was in the variable obj, so the real object somewhere in memory has one less reference and, if it has reached 0 references, becomes unreferenced and eligible for GC.
If your object doesn't have some external task to perform on destruction, that's pretty much all you need. Just lose all references by letting them go out of scope, or by overwriting variables/table members that contain those references with something else or nil. Otherwise you'd need to call the object-specific destructor first and only then remove the references.
It is not possible to make such a destructor automatically remove all references from everywhere, but at least it can clear the object's internal state and set some internal flag indicating that the object is no longer usable or is ready to be re-initialized.
It is possible, to some degree. You can create a subtable within the object as a private data store. That subtable is managed solely by the object and therefore can only have one reference. If you define a destructor method for the object, then it would delete the respective subtable, making it eligible for garbage collection. Of course, the parent table would still exist, leaving only the methods which do not occupy any significant resources.
Whether this is "good design" is subjective. I am merely offering a solution for the question asked.

Objective-C: Setting autoreleased objects to nil

Is it safe to set an autoreleased object to nil? I know I don't need to release an autoreleased object, but if I want the object to be released immediately to minimize memory use, can I set the object to nil?
I may have read this somewhere a while back, but I can't remember where and I want to make sure that it is safe to do that.
I think you're missing something pretty fundamental. Setting an object to nil does nothing for you in terms of memory management. Here's what's going on:
(Diagram: a local variable on the stack pointing to an object on the heap; http://gallery.me.com/davedelong/100084/Pointers1/web.png?ver=12783505480001)
In this image, you have the stack (where your local variables live; it's more or less synonymous with where you currently are in your code while executing). If you declare something like int bar = 42;, then bar lives on the stack. On the right you have the heap. This is global memory space that belongs to your application. The heap was invented to solve the problem of scope: how to make useful information live beyond the scope of the current function (or method). When we malloc space, we are assigned a slot of memory on the heap. When you alloc/init an object, that object lives on the heap. In Objective-C, all objects live on the heap.* So let's consider this line:
MyObject * foo = [[MyObject alloc] init];
We really have 2 things going on. The first is that we have allocated (alloc) a new chunk of space on the heap that's large enough to hold a MyObject structure. Then we've taken the location of that chunk o' memory and assigned it into a local variable called foo. In the image above, the reddish circle on the left is foo, and the reddish blob on the right is the actual MyObject (along with all of its data). Make sense so far?
Here's what happens when you "set an object to nil" (foo = nil;)
(Diagram: foo now points to nil while the object remains on the heap; http://gallery.me.com/davedelong/100084/Pointers2/web.png?ver=12783505490001)
You'll see that the object still lives on the heap. In fact the only thing that has changed is that your local variable foo no longer points to the chunk of memory on the heap. It points to 0 (nil, NULL, whatever you want to call it), which is how we indicate that "this doesn't point to anything relevant any more".
In a nutshell: setting a variable to nil has nothing to do with memory management. If you want to get rid of an object immediately, then use release and not autorelease. However, even then that's not a guaranteed "destroy immediately", since something else might be retaining the object (which is the whole point of using the retain-release memory management model).
Beyond this, once you're done with an object (and after you've invoked either release or autorelease), it's still a good idea to set the variable to nil, just to avoid potential problems. In Objective-C we can safely send messages to nil without things blowing up in our faces (unlike Java). However, if you don't "nil out" a variable, bad things can happen. Let's say foo is pointing to a MyObject instance, and then the MyObject instance is destroyed (you released it, but didn't set it to nil). If you try to invoke a method on foo again, your app will crash. If you do set foo to nil, then your app will continue on its merry way. It might not do what you were hoping, but that's a different problem entirely.
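A short manual-retain-release sketch of that point (MyObject and -doSomething are illustrative):
MyObject *foo = [[MyObject alloc] init];
[foo doSomething];
[foo release];      // the object may be deallocated right here
foo = nil;          // defensive: messaging nil is a safe no-op in Objective-C...
[foo doSomething];  // ...so this silently does nothing instead of crashing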
Welcome to the wonderful world of Objective-C memory management.
* except for local blocks, but then only until they're copied. There are some other caveats to this as well, but that's getting into the arcane.
Yes, this is good practice, as it makes it explicit that the object is no longer used, i.e. that the variable no longer references any data.
For what it's worth, you'll see it a lot in the sample code from Apple as well.

Does assigning to Nothing cause Dispose to be invoked?

I recently saw some VB .NET code as follows:
Dim service = ...
Try
    ...
    service.Close()
Finally
    service = Nothing
End Try
Does assigning Nothing to service do anything? If it is a garbage collection issue, I am assuming that when "service" went out of scope the referenced object would be garbage collected and the dispose method called on the object.
It seems to me that assigning this variable Nothing can't really do anything, as there could be another reference to the object around, so the reference counts have to be checked anyway.
It only releases the reference, which may mean that the object is available for garbage collection (there could still be other variables referencing the same object). If the object implements IDisposable, you need to call Dispose explicitly, otherwise you may have a resource leak.
NO!
You're seeing old VB6 code, where assigning Nothing reduced the reference count on COM objects.
In most situations assigning null (Nothing) to a reference makes no difference to garbage collection whatsoever.
The garbage collector doesn't care about scope, it only cares about usage. After the point in the code where the object is used the last time, the garbage collector knows that it can collect it because it won't be used any more.
Assigning null to the reference doesn't count as using the object, so the last point of usage is before that code. That means that when you clear the reference the garbage collector may already have collected the object.
(In debug mode, though, the usage of a variable is expanded to its scope, so that the debugger can show the value of the variable throughout its scope.)
Assigning NULL to a reference in .NET does not help to clean the object away. It might help the garbage collector run a little quicker in some corner cases, but that's not important at all. It does not call Dispose either (when dealing with a disposable object).
I love to assign NULL anyway, to explicitly state that I won't use that other object anymore. So it has much more to do with catching bugs (you'll get a NullReferenceException instead of possibly calling into some other object, which might fail or even silently create some side effects).
So assigning NULL after closing another object (a File or whatever) is a "code cleanliness" thing that eases debugging; it's not a help to the garbage collector (except in some really strange corner cases, but when you need to care about that you WILL know more about the garbage collector than you ever wanted to know anyway ...)
As everybody has already said, setting a variable to Nothing does not force garbage collection. If you want cleanup to happen deterministically, you are far better off using the Using keyword:
Using objA As A = New A()
    objA.DoSomething()
End Using
You still don't need to set it to Nothing: End Using calls Dispose on the object, and after that point the garbage collector knows the object is no longer used.
It's important to understand in .NET (or Java) that a variable, field, or other storage location of class type Foo doesn't hold a Foo. It holds a reference to a Foo. Likewise, a List<Foo> doesn't hold Foos; it holds references to Foos. In many cases, a variable will be known by the programmer to hold the only extant reference to some particular Foo. Unfortunately, the compiler has no general means of knowing whether a storage location holds the only extant reference to an object, or whether it holds one of many.
The main rule about IDisposable is that objects which implement IDisposable should be told they are no longer needed sometime between the moment they are in fact no longer needed and the time that all references to them are abandoned. If an object hasn't been Disposed, and code is about to overwrite the only extant reference to it (either by storing null, or by storing a reference to something else), the object should have its Dispose method called. If there exist other references to the object, and the holders of those references expect to keep using it, Dispose should not be called. Since the compiler can't tell which situation applies, it doesn't call Dispose but leaves that to the programmer (who hopefully has a better idea of whether or not to call it).

How do I find out if I need to retain or assign a property?

Are there any good rules to learn when I should use retain, and when assign?
Assign is for primitive values like BOOL, NSInteger or double. For objects use retain or copy, depending on whether you want to keep a reference to the original object or make a copy of it.
The only common exception is weak references, where you want to keep a pointer to an object but can't retain it because of reference cycles. An example of this is the delegate pattern, where an object (for example a table view) keeps a pointer to its delegate. Since the delegate object retains the table view, having the table view retain the delegate would mean neither one will ever be released. A weak reference is used in this case instead. In this situation you would use assign when you create your property.
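A sketch of how those choices look as manual-reference-counting property declarations (the class and property names are illustrative):
@interface Person : NSObject
@property (nonatomic, retain) NSArray *friends;   // object you want to keep: retain
@property (nonatomic, copy) NSString *name;       // object you want your own snapshot of: copy
@property (nonatomic, assign) NSInteger age;      // primitive value: assign
@property (nonatomic, assign) id delegate;        // weak back-reference: assign, to avoid a retain cycle
@end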
I would think that when working with objects you would almost always use retain instead of assign and when working with primitive types, structs, etc, you would use assign (since you can't retain non-objects). That's because you want the object with the property deciding when it is done with the object, not something else. Apple's Memory Management Guide states this:
There are times when you don’t want a received object to be disposed of; for example, you may need to cache the object in an instance variable. In this case, only you know when the object is no longer needed, so you need the power to ensure that the object is not disposed of while you are still using it. You do this with a retain message, which stays the effect of a pending autorelease (or preempts a later release or autorelease message). By retaining an object you ensure that it won’t be deallocated until you are done with it.
For discussion around using copy vs retain, see this SO question.
I know this was an old question, but I found these guidelines from the uber guru Matt Gallagher, super useful: http://cocoawithlove.com/2009/07/rules-to-avoid-retain-cycles.html. In my case, I had a "retain hell" of my own making for having a hard reference to a parent object.
If you intend to keep the object and use it, use retain. Otherwise, it may be released and you'll end up with errors with your code.