Is it a legitimate practice to rely on a deterministic dealloc (e.g. for clean-up)?
Since ARC, and even manual reference counting, are inherently deterministic, I was wondering what other people thought about relying on dealloc getting called immediately (relatively speaking, allowing for autorelease pools).
In other modern programming languages, like C#, a dispose-like pattern is employed when you need deterministic clean-up. And I would imagine Obj-C with garbage collection encourages this behavior, as well.
So, with that said, an example would be a UIViewController which cancels outstanding operations in dealloc, rather than trying to program around the sometimes frustrating semantics of viewDidDisappear.
Another example would be a stream object that implicitly opens and closes in init and dealloc, respectively, rather than requiring open or close to be called.
Since Apple has deprecated GC, I would imagine that these sorts of patterns won't be broken anytime soon, and they are incredibly handy, though I can't find any documentation on whether this should be encouraged.
You are absolutely correct: you can rely on dealloc being called relatively soon after the last reference is released (manually or through ARC, it does not matter). Unlike GC, where the finalizer is called whenever the system has some free time, or in some cases is never called at all, dealloc gets called very reliably. Apple allows and even encourages this pattern by suggesting that we perform all of our resource clean-up tasks inside dealloc.
This does not mean, however, that you should rely on dealloc exclusively. For example, take a look at the NSStream class: it offers an explicit close method, letting you close the stream at will, without waiting for dealloc to happen. This is a very good pattern to follow when the resource is expensive (file handles, semaphores, etc.): the primary mechanism for releasing such resources should be a separate close method. The dealloc method should release the resource as well, but it should also issue a warning informing you that you missed a call to close.
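To make that concrete, here is a minimal sketch of the pattern under manual reference counting (FileWrapper and all of its methods are hypothetical, not an Apple API): the explicit close is the primary clean-up mechanism, and dealloc is only a warning safety net.

@interface FileWrapper : NSObject {
    FILE *_file;
    BOOL _closed;
}
- (instancetype)initWithPath:(NSString *)path;
- (void)close;
@end

@implementation FileWrapper
- (instancetype)initWithPath:(NSString *)path {
    if ((self = [super init])) {
        _file = fopen([path fileSystemRepresentation], "r");
    }
    return self;
}

// The primary clean-up mechanism: callers should invoke this explicitly.
- (void)close {
    if (!_closed) {
        if (_file) fclose(_file);
        _file = NULL;
        _closed = YES;
    }
}

// The safety net: release the resource anyway, but warn that -close was missed.
- (void)dealloc {
    if (!_closed) {
        NSLog(@"Warning: FileWrapper deallocated without -close being called");
        [self close];
    }
    [super dealloc]; // omit under ARC
}
@end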
Regardless of your memory management system, tying expensive resources (e.g. files & sockets, images, views, large memory allocations, etc) to object lifetimes is risky. Even if you're manually retaining & releasing, you might unwittingly retain an object somewhere and forget about it (or otherwise delay its release unnecessarily). ARC makes it even more likely that these things will happen, since it's much less apparent where the retains come from, and when the corresponding releases take effect. And of course GC makes it all completely indefinite.
Generally for expensive and/or limited resources you should try to follow a sole-ownership pattern. Yes, you can still retain/release as normal to prevent premature deallocation, but the dominant owner should be well defined and responsible for clearing out the object when it's done - e.g. calling invalidate, or close, or whatever is appropriate. This makes perfect sense in a lot of the common cases - you generally know when you're done with a file or a socket, for example, so even if they happen to be encapsulated inside some wrapper class, you should just close them explicitly.
In some cases this can also help find bugs that would otherwise remain hidden, or be hard to track down. For example, if your file wrapper class raises an exception when read or write are called after the file is closed, you'll soon catch those cases. Whereas if you didn't close the file when you thought you were done with it, the reads and writes would just happen as usual and you might not notice that your file has unexpected data in it.
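Continuing the hypothetical FileWrapper sketch from above, failing fast on use-after-close turns a silent logic error into an immediate, debuggable failure:

// Reads fail loudly once the file has been closed (a sketch, not an API).
- (NSData *)readBytes:(NSUInteger)count {
    NSAssert(!_closed, @"Attempt to read from a FileWrapper after -close");
    void *buffer = malloc(count);
    size_t got = fread(buffer, 1, count, _file);
    return [NSData dataWithBytesNoCopy:buffer length:got freeWhenDone:YES];
}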
You can also use the same principles to break retain cycles.
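For instance, a sketch of how explicit invalidation breaks one common cycle (updateTimer is a hypothetical ivar): an NSTimer retains its target, so an object that schedules a repeating timer on itself can never reach dealloc until the cycle is broken from outside.

- (void)invalidate {
    [updateTimer invalidate]; // the run loop releases the timer, and the
                              // timer releases its retained target (self)
    [updateTimer release];    // assuming the ivar held its own retain
    updateTimer = nil;
}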
Related
I have read Apple's memory management guide, and think I understand the practices that should be followed to ensure proper memory management in my application.
At present it looks like there are no memory leaks in my code. But as my code grows more complex, I wonder whether there is any particular pattern I should follow to keep track of allocations and deallocations of objects.
Does it make sense to create some kind of global object that is present throughout the execution of the application which contains a count of the number of active objects of a type? Each object could increment the count of their type in their init method, and decrement it in dealloc. The global object could verify at appropriate times if the count of a particular type is zero or not.
EDIT: I am aware of how to use the Leaks tool, as well as how to analyze the project using Xcode. The reason for this post is to keep track of cases which may not be detected through Leaks or Analyze easily.
EDIT: Also, it seems to make sense to have something like this so that leaks can be detected early in builds by running unit tests that check the global object. I guess that as an inexperienced Objective-C programmer I would benefit from the views of others on this.
Each object could increment the count of their type in their init method, and decrement it in dealloc.
To do that right, you'll have to do one of the following: 1) override behavior at some common point, such as NSObject's -init, or 2) add the appropriate code to the designated initializer of every single class. Neither seems simple.
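For what it's worth, a debug-only version of option 2 might look something like the following sketch (InstanceCounter is a hypothetical helper; every class you care about would have to call it from its designated initializer and from -dealloc, which is exactly the tedium being pointed out here).

@interface InstanceCounter : NSObject
+ (void)noteInitOfClass:(Class)cls;
+ (void)noteDeallocOfClass:(Class)cls;
+ (NSUInteger)liveInstancesOfClass:(Class)cls;
@end

@implementation InstanceCounter
static NSCountedSet *liveInstances;

+ (void)initialize {
    if (self == [InstanceCounter class]) {
        liveInstances = [[NSCountedSet alloc] init];
    }
}
+ (void)noteInitOfClass:(Class)cls {
    @synchronized(liveInstances) { [liveInstances addObject:NSStringFromClass(cls)]; }
}
+ (void)noteDeallocOfClass:(Class)cls {
    @synchronized(liveInstances) { [liveInstances removeObject:NSStringFromClass(cls)]; }
}
+ (NSUInteger)liveInstancesOfClass:(Class)cls {
    @synchronized(liveInstances) { return [liveInstances countForObject:NSStringFromClass(cls)]; }
}
@end

A unit test could then assert that +liveInstancesOfClass: returns zero for the classes it exercises, once all of their instances should be gone.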
The global object could verify at appropriate times if the count of a particular type is zero or not.
Sounds good, but can you elaborate a bit on "appropriate times"? How would you know at any given point in the life of your program which classes should have zero instances? You'd have a pretty good idea that there should be no objects at the end of the program, but Instruments could tell you the same thing in that case.
Objective-C has taken several steps to make memory management much simpler. Use properties and synthesized accessors where you can, as they essentially manage your objects for you. A more recent improvement is ARC, which goes even further toward automating most memory management tasks. You basically let the compiler figure out where to put the memory management calls -- it's like garbage collection without the garbage collector. Learn to use those tools well before you try to invent new ones.
Don't go that route... it's a pain given single inheritance. Most importantly, there are excellent tools at your disposal which you should master before thinking you must create some global counter. The global counter already exists in a few tools -- learn them!
The way you combat it is to learn how to balance and manage everything correctly when it's written. It's really very simple in hindsight.
ARC is another option -- really that just postpones your understanding.
The first "design pattern" I recommend is to use release instead of autorelease where possible (although that habit is generally more useful for catching over-releases).
Next, run the leaks instrument/util regularly and fix all leaks/zombies immediately.
Third, learn the existing tools as you go! These tools can do really crazy stuff, like record the backtrace of every allocation and every reference count. You can pause your program's execution and view what allocations exist, alloc counts, backtraces, and all sorts of other stats.
I'm still trying to understand this piece of code that I found in a project I'm working on where the guy that created it left the company before I could ask.
This is the code:
- (void)releaseMySelf {
    for (int i = myRetainCount; i > 1; i--) {
        [self release];
    }
    [self autorelease];
}
As far as I know, in the Objective-C memory management model, the first rule is that the object that allocates another object is also responsible for releasing it in the future. That's the reason I don't understand the meaning of this code. Is there any meaning?
The author is trying to work around not understanding memory management. He assumes that an object has a retain count that is increased by each retain, and so tries to decrease it by calling that number of releases. Probably he has not implemented the "is also responsible for releasing it in the future" part of your understanding.
However, see the many answers to similar questions, e.g. here, here, and here.
Read Apple's memory management concepts.
The first link includes a quote from Apple:
The retainCount method does not account for any pending autorelease messages sent to the receiver.

Important: This method is typically of no value in debugging memory management issues. Because any number of framework objects may have retained an object in order to hold references to it, while at the same time autorelease pools may be holding any number of deferred releases on an object, it is very unlikely that you can get useful information from this method. To understand the fundamental rules of memory management that you must abide by, read "Memory Management Rules". To diagnose memory management problems, use a suitable tool: the LLVM/Clang static analyzer can typically find memory management problems even before you run your program; the Object Alloc instrument in the Instruments application (see Instruments User Guide) can track object allocation and destruction; Shark (see Shark User Guide) also profiles memory allocations (amongst numerous other aspects of your program).
Since all answers seem to misread myRetainCount as [self retainCount], let me offer a reason why this code could have been written: It could be that this code is somehow spawning threads or otherwise having clients register with it, and that myRetainCount is effectively the number of those clients, kept separately from the actual OS retain count. However, each of the clients might get its own ObjC-style retain as well.
So this function might be called in a case where a request is aborted, and could just dispose of all the clients at once, and afterwards perform all the releases. It's not a good design, but if that's how the code works, (and you didn't leave out an int myRetainCount = [self retainCount], or overrides of retain/release) at least it's not necessarily buggy.
It is, however, very likely a bad distribution of responsibilities, or a kludgey and hackneyed attempt at avoiding retain cycles without really improving anything.
This is a dirty hack to force a memory release: if the rest of your program is written correctly, you never need to do anything like this. Normally, your retains and releases are in balance, so you never need to look at the retain count. What this piece of code says is "I don't know who retained me and forgot to release me; I just want my memory to get released, and I don't care that other references would be dangling from now on". This is not going to compile with ARC (oddly enough, switching to ARC may just fix the error the author was trying to work around).
The meaning of the code is to force the object to deallocate right now, no matter what the future consequences may be. (And there will be consequences!)
The code is fatally flawed because it doesn't account for the fact that someone else actually "owns" that object. In other words, something "alloced" that object, and any number of other things may have "retained" that object (maybe a data structure like NSArray, maybe an autorelease pool, maybe some code in a stack frame that just did a "retain"); all those things share ownership in this object. If the object commits suicide (which is what releaseMySelf does), these "owners" suddenly point to bad memory, and this will lead to unexpected behavior.
Hopefully code written like this will just crash. Perhaps the original author avoided these crashes by leaking memory elsewhere.
Or, Why I Didn't Use retainCount On My Summer Vacation
This post is intended to solicit detailed write-ups about the whys and wherefores of that infamous method, retainCount, in order to consolidate the relevant information floating around SO.*
The basics: What are the official reasons to not use retainCount? Is there ever any situation at all when it might be useful? What should be done instead?** Feel free to editorialize.
Historical/explanatory: Why does Apple provide this method in the NSObject protocol if it's not intended to be used? Does Apple's code rely on retainCount for some purpose? If so, why isn't it hidden away somewhere?
For deeper understanding: What are the reasons that an object may have a different retain count than would be assumed from user code? Can you give any examples*** of standard procedures that framework code might use which cause such a difference? Are there any known cases where the retain count is always different than what a new user might expect?
Anything else you think is worth mentioning about retainCount?
*
Coders who are new to Objective-C and Cocoa often grapple with, or at least misunderstand, the reference-counting scheme. Tutorial explanations may mention retain counts, which (according to these explanations) go up by one when you call retain, alloc, copy, etc., and down by one when you call release (and at some point in the future when you call autorelease).
A budding Cocoa hacker, Kris, could thus quite easily get the idea that checking an object's retain count would be useful in resolving some memory issues, and, lo and behold, there's a method available on every object called retainCount! Kris calls retainCount on a couple of objects, and this one is too high, and that one's too low, and what the heck is going on?! So Kris makes a post on SO, "What's wrong with my memory management?" and then a swarm of <bold>, <large> letters descend saying "Don't do that! You can't rely on the results.", which is well and good, but our intrepid coder may want a deeper explanation.
I'm hoping that this will turn into an FAQ, a page of good informational essays/lectures from any of our experts who are inclined to write one, that new Cocoa-heads can be pointed to when they wonder about retainCount.
** I don't want to make this too broad, but specific tips from experience or the docs on verifying/debugging retain and release pairings may be appropriate here.
***In dummy code; obviously the general public don't have access to Apple's actual code.
The basics: What are the official reasons to not use retainCount?
Autorelease management is the most obvious -- you have no way to be sure how many of the references represented by the retainCount are in a local or external (on a secondary thread, or in another thread's local pool) autorelease pool.
Also, some people have trouble with leaks and, at a higher level, with how reference counting and autorelease pools work at a fundamental level. They will write a program without (much) regard for proper reference counting, or without learning ref counting properly. This makes their program very difficult to debug, test, and improve, and rectifying it is also very time consuming.
The reason for discouraging its use (at the client level) is twofold:
The value may vary for so many reasons. Threading alone is reason enough to never trust it.
You still have to implement correct reference counting. retainCount will never save you from imbalanced reference counting.
Is there ever any situation at all when it might be useful?
You could in fact use it in a meaningful way if you wrote your own allocators or reference counting scheme, or if your object lived on one thread and you had access to any and all autorelease pools it could exist in. This also implies you would not share it with any external APIs. The easy way to simulate this is to create a program with one thread, zero autorelease pools, and do your reference counting the 'normal' way. It's unlikely that you'll ever need to solve this problem/write this program for anything other than "academic" reasons.
As a debugging aid: you could use it to verify that the retain count is not unusually high. If you take this approach, be mindful of the implementation variances (some are cited in this post), and don't rely on it. Don't even commit the tests to your SCM repository.
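If you do write such a check, a hypothetical debug-only spot check is about as far as it should go (connection stands in for whatever object you are inspecting, and the threshold is arbitrary):

#ifdef DEBUG
if ([connection retainCount] > 20) {
    NSLog(@"Suspiciously high retainCount for %@: %lu",
          connection, (unsigned long)[connection retainCount]);
}
#endif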
This may be a useful diagnostic in extremely rare circumstances. It can be used to detect:
Over-retaining: An allocation with a positive imbalance in retain count would not show up as a leak if the allocation is reachable by your program.
An object which is referenced by many other objects: One illustration of this problem is a (mutable) shared resource or collection which operates in a multithreaded context - frequent access or changes to this resource/collection can introduce a significant bottleneck in your program's execution.
Autorelease levels: Autoreleasing, autorelease pools, and retain/autorelease cycles all come with a cost. If you need to minimize or reduce memory use and/or growth, you could use this approach to detect excessive cases.
From commentary with Bavarious (below): a high value may also indicate an invalidated allocation (a dealloc'd instance). This is completely an implementation detail, and again, not usable in production code. Messaging this allocation would result in an error when zombies are enabled.
What should be done instead?
If you're not responsible for returning the memory at self (that is, you did not write an allocator), leave it alone - it is useless.
You have to learn proper reference counting.
For a better understanding of release and autorelease usage, set up some breakpoints and understand how they are used, in what cases, etc. You'll still have to learn to use reference counting correctly, but this can aid your understanding of why it's useless.
Even simpler: use Instruments to track allocs and ref counts, then analyze the ref counting and callstacks of several objects in an active program.
Historical/explanatory: Why does Apple provide this method in the NSObject protocol if it's not intended to be used? Does Apple's code rely on retainCount for some purpose? If so, why isn't it hidden away somewhere?
We can assume that it is public for two primary reasons:
Reference counting proper in managed environments. It's fine for the allocators to use retainCount -- really. It's a very simple concept. When -[NSObject release] is called, the ref counter (unless overridden) may be called, and the object can be deallocated if retainCount is 0 (after calling dealloc). This is all fine at the allocator level. Allocators and zones are (largely) abstracted so... this makes the result meaningless for ordinary clients. See commentary with bbum (below) for details on why retainCount cannot be equal to 0 at the client level, object deallocation, deallocation sequences, and more.
To make it available to subclassers who want a custom behavior, and because the other reference counting methods are public. It may be handy in a few cases, but it's typically used for the wrong reasons (e.g. immortal singletons). If you need your own reference counting scheme, then this family may be worth overriding.
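For illustration, this is roughly what such an override looks like in the immortal-singleton case mentioned here (SharedManager is a hypothetical class, and this is precisely the sort of use being discouraged):

@implementation SharedManager
- (id)retain { return self; }       // no-op: ownership is meaningless here
- (oneway void)release { }          // no-op
- (id)autorelease { return self; }  // no-op
- (NSUInteger)retainCount {
    return NSUIntegerMax;           // denotes an object that is never released
}
@end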
For deeper understanding: What are the reasons that an object may have a different retain count than would be assumed from user code? Can you give any examples*** of standard procedures that framework code might use which cause such a difference? Are there any known cases where the retain count is always different than what a new user might expect?
Again: custom reference counting schemes and immortal objects. NSCFString literals fall into the latter category:
NSLog(@"%qu", [@"MyString" retainCount]);
// Logs: 1152921504606846975
Anything else you think is worth mentioning about retainCount?
It's useless as a debugging aid. Learn to use leak and zombie analyses, and use them often -- even after you have a handle on reference counting.
Update: bbum has posted an article entitled retainCount is useless. The article contains a thorough discussion of why -retainCount isn’t useful in the vast majority of cases.
The general rule of thumb is if you're using this method, you better be damn sure you know what you're doing. If you are using it for debugging a memory leak you're doing it wrong, if you're doing it to see what is going on with an object, you're doing it wrong.
There is one case where I have used it and found it useful: a shared object cache where I wanted to flush an object when nothing had a reference to it anymore. In this situation I waited until the retainCount was equal to 1, and then released it, knowing that nothing else was holding onto it. This will obviously not work properly in garbage-collected environments, and there are better ways to do it, but this is still the only "valid" use case I've seen for it, and it isn't something a lot of people will be doing.
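A sketch of that idea under manual reference counting (cache is a hypothetical NSMutableDictionary ivar; every caveat in this thread about autorelease pools and threading applies):

- (void)flushUnusedObjects {
    // -allKeys returns a snapshot, so removing entries mid-loop is safe
    for (id key in [cache allKeys]) {
        id object = [cache objectForKey:key];
        if ([object retainCount] == 1) { // only the cache still owns it
            [cache removeObjectForKey:key];
        }
    }
}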
I was experimenting with ways to get rid of some memory leaks within my application the other day when I realized that I knew virtually nothing about cleaning up my resources. I did some research, and hoped that just calling Dispose() would solve all of my problems. We have a table in our database that contains about 65,000 records. Obviously, when I fill my DataSet from the data adapter, the memory usage can get pretty high. When I called the Dispose method on the DataSet, I was surprised to find out that NONE of the memory got released. Why did this happen? Clearing the DataSet doesn't help either.
IDisposable, and thus Dispose, is not used to reduce memory pressure (although in some cases it might); it is instead used for deterministic cleanup.
Consider this, you construct an object that maintains an active and open connection to your database server. This connection uses resources, both on your machine, and the server.
You could of course just leave the object be when you're done with it, and eventually it'll get picked up by the garbage collector, but suppose you want to make sure that at least the resources get freed, and thus the connection closed, when you're done with it. This is where IDisposable.Dispose comes into play.
It is used to clean up resources managed by the object.
It will, however, not free the managed memory allocated to the object. This is still left to the garbage collector, that will kick in at some later time to do that.
Do you actually have a memory problem, or do you just look at the memory usage in Task Manager or similar and think "that's a bit high"?

If the latter, then you should just leave it be for now. .NET will run garbage collection more often if you have less memory available, so unless you're in, or suspect you will soon be in, an out-of-memory condition, you're probably not going to have any problems.
Let me explain what I mean by that.
If you have 8GB of memory in your machine, and only have Windows and Notepad running, most of that memory will be available. When you now run your program, even if it loads minor data blocks into memory, you can keep doing that for a long time, and memory usage will steadily grow. Exactly when the GC will kick in and try to reduce your memory footprint I don't know, but I can almost guarantee you that you will wonder why it gets so high.
Let's just for the sake of the argument say that your program will eventually use 2GB of memory.
Now, if you run your program on a machine that has less memory available, GC will occur more often, and will kick in on a lower limit, which might keep the memory usage below 500MB or possibly even less.
The important part to note here is that in order to get an accurate picture of how much memory your application actually requires, you can't rely on Task Manager or similar ways of measuring it; you need something more targeted.
Calling Dispose() will only release unmanaged resources, such as file handles, database connections, unmanaged memory, etc. It will not release garbage collected memory.
Garbage-collected memory will only get released at the next collection, usually when the application domain's memory is deemed full.
I'm going to point out something here that hasn't been explicitly mentioned: calling Dispose() will only clean up (free) unmanaged resources if the developer of the component has coded it.
What I mean is this: if you suspect you have a memory leak, calling Dispose() is not going to fix it if the original developer has done a lousy job and not correctly freed up unmanaged resources. For a bit more info, check this blog post. Take note of the statement "The behaviour of Dispose is defined by the developer."
Some objects will ask one or more other entities to do something on their behalf until further notice, to the detriment of other entities. If an object which did so were to disappear without informing the former entities that their services were no longer needed, those entities would continue to uselessly act on behalf of an object that no longer needed them, to the continuing detriment of other entities that would want to use them.
In many cases, for an object "George" to tell an outside entity "Joe" that its services were no longer needed, George would have to know that its services were no longer needed. There are two normal means via which that can happen in .NET: finalization and IDisposable.
If an object overrides a method called Finalize, then when the object is created the .NET garbage collector will add it to a list of objects with registered finalizers. If the GC discovers that there exists no rooted reference to the object other than that list, the GC will remove the object from that list and add it to a strongly-rooted queue of objects which should have their Finalize method called as soon as possible. Such an object can then use its Finalize method to inform other entities that their services are no longer required.
Although finalization-based cleanup can sometimes work, there's no guarantee of timeliness. At one point during the design of .NET, Microsoft may have intended that finalization would be the primary cleanup method, but for a variety of reasons it cannot safely be relied upon.
The other cleanup approach, which should be the focus of one's efforts, is IDisposable. Basically, the idea behind IDisposable is simple: for every object that implements IDisposable, there should be one entity (generally either an object or a nested execution scope) which is responsible for ensuring that that object's IDisposable.Dispose method will get called sometime within the lifetime of the universe (which would imply sometime while a reference to the object still exists), and preferably as soon as code can tell that the object's services will no longer be required.
Note that IDisposable.Dispose generally promises that any outside entities which had been asked to do something on an object's behalf will be told that they no longer need to do so, but such a promise does not imply that the number of such entities is non-zero. If an object hasn't asked any outside entities to do anything on its behalf, then delivering a message to "all" such entities doesn't require doing anything at all. On the other hand, the fact that a Dispose method may do nothing in some cases doesn't mean that it's guaranteed never to do anything in any case, nor that failure to call it in those cases where it would do something won't have detrimental effects.
As is common knowledge, calls to alloc/copy/retain in Objective-C imply ownership and need to be balanced by a call to autorelease/release. How do you succinctly describe where this should happen? The word "succinct" is key. I can usually use intuition to guide me, but would like an explicit principle in case intuition fails and that can be use in discussions.
Properties simplify the matter (the rule is auto-/release happens in -dealloc and setters), but sometimes properties aren't a viable option (e.g. not everyone uses ObjC 2.0).
Sometimes the release should be in the same block. Other times the alloc/copy/retain happens in one method, which has a corresponding method where the release should occur (e.g. -init and -dealloc). It's this pairing of methods (where a method may be paired with itself) that seems to be key, but how can that be put into words? Also, what cases does the method-pairing notion miss? It doesn't seem to cover where you release properties, as setters are self-paired and -dealloc releases objects that aren't alloc/copy/retained in -init.
It feels like the object model is involved with my difficulty. There doesn't seem to be an element of the model that I can attach retain/release pairing to. Methods transform objects from valid state to valid state and send messages to other objects. The only natural pairings I see are object creation/destruction and method enter/exit.
Background:
This question was inspired by: "NSMutableDictionary does not get added into NSMutableArray". The asker of that question was generally balancing alloc/copy/retain calls with releases, but in a way that could still cause memory leaks. The class was a delegate; some members were created in a delegate method (-parser:didStartElement:...) and released in -dealloc rather than in the corresponding (-parser:didEndElement:...) method, as sketched below. In this instance, properties seemed a good solution, but the question still remained of how to handle releasing when properties weren't involved.
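To make the pairing concrete, a hedged sketch of what the fix looks like (currentItem and items are hypothetical ivars): what is created in -parser:didStartElement:... is released in the matching -parser:didEndElement:..., not deferred to -dealloc.

- (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
  namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
    attributes:(NSDictionary *)attributeDict {
    if ([elementName isEqualToString:@"item"]) {
        currentItem = [[NSMutableDictionary alloc] init]; // +1, owned here
    }
}

- (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
  namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
    if ([elementName isEqualToString:@"item"]) {
        [items addObject:currentItem]; // the array retains it if it is still needed
        [currentItem release];         // balances the alloc in the paired method
        currentItem = nil;
    }
}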
Properties simplify the matter (the rule is auto-/release happens in -dealloc and setters), but sometimes properties aren't a viable option (e.g. not everyone uses ObjC 2.0).
This is a misunderstanding of the history of properties. While properties are new, accessors have always been a key part of ObjC; properties just made it easier to write accessors. If you always use accessors, and you should, then most of these questions go away.
Before we had properties, we used Xcode's built-in accessor-writer (in the Script>Code menu), or with useful tools like Accessorizer to simplify the job (Accessorizer still simplifies property code). Or we just typed a lot of getters and setters by hand.
The question isn't where it should happen, it's when.
Release or autorelease an object if you have created it with +alloc, +new or -copy, or if you have sent it a -retain message.
Send -release when you don't care if the object continues to exist. Send -autorelease if you want to return it from the method you're in, but you don't care what happens to it after that.
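A small illustration of those two cases (the names here are hypothetical):

- (void)logAnswer {
    // Created and finished with inside this method: send -release here.
    NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
    NSLog(@"%@", [formatter stringFromNumber:[NSNumber numberWithInt:42]]);
    [formatter release];
}

- (NSString *)fullName {
    // Returned to the caller, who may or may not keep it: send -autorelease.
    NSString *name = [[NSString alloc] initWithFormat:@"%@ %@", firstName, lastName];
    return [name autorelease];
}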
I wouldn't say that dealloc is where you would call autorelease. And unless your object, whatever it may be, is linked to the life of the class instance, it doesn't necessarily need to be retained and kept around for a release in dealloc.
Here are my rules of thumb. You may do things in other ways.
I use release if the life of the object I am using is limited to the routine I am in now; thus the object gets created and released in that routine. This is also the preferred way if I am creating a lot of objects in a routine, such as in a loop, and I might want to release each object before the next one is created in the loop.

If the object I created in a method needs to be passed back to the caller, but I assume that the use of the object will be transient and limited to this run of the run loop, I use autorelease. Here, I am trying to mimic many of Apple's convenience routines. (Want a quick string to use for a short period? Here you go, don't worry about owning it and it will get disposed of appropriately.)

If I believe the object is to be kept on a semi-permanent basis (longer than this run of the run loop), I use create/new/copy in my method name so the caller knows that they are the owner of the object and will have to release it.

Any objects that are created by a class and kept as a property with retain (whether through the property declaration or not), I release in dealloc (or in viewDidUnload as appropriate). A sketch of the last two rules follows.
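A hypothetical sketch of those last two rules (records stands in for a retained ivar): the create/new/copy name tells the caller it owns the result, and the retained ivar is balanced in -dealloc.

- (NSArray *)newSortedRecords {
    // The "new" prefix signals that the caller owns (and must release) this.
    return [[records sortedArrayUsingSelector:@selector(compare:)] retain];
}

- (void)dealloc {
    [records release]; // retained ivar/property, balanced here
    [super dealloc];
}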
Try not to let all this memory management overwhelm you. It is a lot easier than it sounds, and looking at a bunch of Apple's samples, and writing your own (and suffering bugs) will make you understand it better.