I have read Apple's memory management guide, and think I understand the practices that should be followed to ensure proper memory management in my application.
At present it looks like there are no memory leaks in my code. But as my code grows more complex, I wonder whether there is any particular pattern I should follow to keep track of allocations and deallocations of objects.
Does it make sense to create some kind of global object that is present throughout the execution of the application and contains a count of the number of active objects of each type? Each object could increment the count of its type in its init method, and decrement it in dealloc. The global object could verify at appropriate times whether the count of a particular type is zero or not.
EDIT: I am aware of how to use the Leaks tool, as well as how to analyze the project using Xcode. The reason for this post is to keep track of cases which may not be detected easily through Leaks or Analyze.
EDIT: Also, it seems to make sense to have something like this so that leaks can be detected early in builds by running unit tests that check the global object. I guess that as an inexperienced Objective-C programmer I would benefit from the views of others on this.
Each object could increment the count of its type in its init method, and decrement it in dealloc.
To do that right, you'll have to do one of the following: 1) override behavior at some common point, such as NSObject's -init and -dealloc, or 2) add the appropriate code to the designated initializer of every single class. Neither seems simple.
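If you did go the per-class route, a minimal sketch might look something like this (InstanceCounter and its method names are hypothetical, purely illustrative, not anything from the frameworks):

#import <Foundation/Foundation.h>

// Hypothetical global counter object; a sketch, not a recommendation.
@interface InstanceCounter : NSObject
+ (void)incrementForClass:(Class)cls;
+ (void)decrementForClass:(Class)cls;
+ (NSInteger)liveCountForClass:(Class)cls;
@end

@implementation InstanceCounter

static NSMutableDictionary *counts = nil;

+ (void)initialize
{
    if (self == [InstanceCounter class]) {
        counts = [[NSMutableDictionary alloc] init];
    }
}

+ (void)addDelta:(NSInteger)delta forClass:(Class)cls
{
    @synchronized(counts) {
        NSString *key = NSStringFromClass(cls);
        NSInteger n = [[counts objectForKey:key] integerValue];
        [counts setObject:[NSNumber numberWithInteger:n + delta] forKey:key];
    }
}

+ (void)incrementForClass:(Class)cls { [self addDelta:1 forClass:cls]; }
+ (void)decrementForClass:(Class)cls { [self addDelta:-1 forClass:cls]; }

+ (NSInteger)liveCountForClass:(Class)cls
{
    @synchronized(counts) {
        return [[counts objectForKey:NSStringFromClass(cls)] integerValue];
    }
}

@end

Every designated initializer would then call [InstanceCounter incrementForClass:[self class]] and every dealloc would call the decrement, which is exactly the per-class boilerplate that makes option 2 unattractive.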
The global object could verify at appropriate times whether the count of a particular type is zero or not.
Sounds good, but can you elaborate a bit on "appropriate times"? How would you know at any given point in the life of your program which classes should have zero instances? You'd have a pretty good idea that there should be no objects at the end of the program, but Instruments could tell you the same thing in that case.
Objective-C has taken several steps to make memory management much simpler. Use properties and synthesized accessors where you can, as they essentially manage your objects for you. A more recent improvement is ARC, which goes even further toward automating most memory management tasks. You basically let the compiler figure out where to put the memory management calls -- it's like garbage collection without the garbage collector. Learn to use those tools well before you try to invent new ones.
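For example, under manual retain/release a declared property generates accessors that handle the retains and releases for you (Person here is just an illustrative class):

// Person is a hypothetical example class.
@interface Person : NSObject
@property (nonatomic, retain) NSString *name; // synthesized setter retains the new value, releases the old
@end

@implementation Person
@synthesize name;

- (void)dealloc
{
    [name release];   // under MRC you still release ivars in dealloc
    [super dealloc];
}
@end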
Don't go that route... it's a pain given single inheritance. Most importantly, there are excellent tools at your disposal which you should master before deciding you must create some global counter. The global counter already exists in a few tools -- learn them!
The way you combat it is to learn how to balance and manage everything correctly when it's written. It's really very simple in hindsight.
ARC is another option -- really that just postpones your understanding.
The first "design pattern" I recommend it to use release instead of autorelease where possible (although generally more useful for over-releases).
Next, run the leaks instrument/util regularly and fix all leaks/zombies immediately.
Third, learn the existing tools as you go! These tools can do really crazy stuff, like record the backtrace of every allocation and every reference count. You can pause your program's execution and view what allocations exist, alloc counts, backtraces, and all sorts of other stats.
Related
I need to implement a bit of functionality that can be used from a few different places in an application. It's basically sending something over the network, but I don't need it to be attached to any particular view - I can communicate everything to the user by UIAlertViews.
What I would like to do is encapsulate the functionality in an object that can maintain its own state for a while and then disappear all by itself. I've read in several similar topics that it's generally not advised to have an object retain and then release itself, but on the other hand you have singletons which, apart from the fact that they never get released, are very similar in nature. You don't need to keep a reference to them just to use them properly. In my situation, however, I feel it would be somewhat wasteful to create a singleton and then keep it alive for something that takes a few seconds to execute.
What I came up with is a static dictionary local to the class that keeps unique references to the instances of the class. When an instance is done with its task, it performs removeObjectForKey: after a delay, which removes the only existing reference and effectively kills the object. This way I keep only a dictionary in memory, which is empty most of the time anyway. A sketch of the idea is below.
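A minimal sketch of that pattern under manual retain/release (NetworkTask and its method names are hypothetical, for illustration only):

// Hypothetical illustration of the static-dictionary pattern described above.
@interface NetworkTask : NSObject
+ (void)startTask;
- (void)performWork;
- (void)taskFinished;
@end

@implementation NetworkTask

static NSMutableDictionary *activeTasks = nil;

+ (void)initialize
{
    if (self == [NetworkTask class]) {
        activeTasks = [[NSMutableDictionary alloc] init];
    }
}

+ (void)startTask
{
    NetworkTask *task = [[[NetworkTask alloc] init] autorelease];
    // The dictionary holds the only lasting reference, keyed by the instance's address.
    [activeTasks setObject:task forKey:[NSString stringWithFormat:@"%p", task]];
    [task performWork];
}

- (void)performWork
{
    // ... send something over the network, report to the user via UIAlertViews ...
    [self taskFinished];
}

- (void)taskFinished
{
    // Defer the removal so the object isn't deallocated in the middle of its own method.
    [activeTasks performSelector:@selector(removeObjectForKey:)
                      withObject:[NSString stringWithFormat:@"%p", self]
                      afterDelay:0.0];
}

@end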
The question is: are there any unexpected side effects of such a solution that I should be aware of and are there any other good patterns for described situation?
So basically instead of a persistent object of your own class, you've got a persistent object of type NSDictionary? How does that help matters? Is your object unusually large? If you are making your codebase more complicated for the sake of a few bytes, that's not a good tradeoff.
Especially now that ARC is commonplace, this kind of trickery is usually not a good idea. Have you measured how much memory a singleton approach takes and found it to be a problem? Unless you have, use a singleton. It's simpler code, and all other things being equal, simpler code is far better.
I'm still trying to understand this piece of code that I found in a project I'm working on where the guy that created it left the company before I could ask.
This is the code:
- (void)releaseMySelf
{
    for (int i = myRetainCount; i > 1; i--) {
        [self release];
    }
    [self autorelease];
}
As far as I know, in the Objective-C memory management model, the first rule is that the object that allocates another object is also responsible for releasing it in the future. That's the reason I don't understand the meaning of this code. Is there any meaning?
The author is trying to work around not understanding memory management. He assumes that an object has a retain count that is increased by each retain, and so tries to decrease it by calling that number of releases. He probably never implemented the "is also responsible for releasing it in the future" part of your understanding.
However, see the many answers to similar questions elsewhere on this site.
Read Apple's memory management concepts.
Apple's documentation on retainCount includes this warning:
The retainCount method does not account for any pending autorelease messages sent to the receiver.

Important: This method is typically of no value in debugging memory management issues. Because any number of framework objects may have retained an object in order to hold references to it, while at the same time autorelease pools may be holding any number of deferred releases on an object, it is very unlikely that you can get useful information from this method. To understand the fundamental rules of memory management that you must abide by, read "Memory Management Rules". To diagnose memory management problems, use a suitable tool: The LLVM/Clang static analyzer can typically find memory management problems even before you run your program. The Object Alloc instrument in the Instruments application (see Instruments User Guide) can track object allocation and destruction. Shark (see Shark User Guide) also profiles memory allocations (amongst numerous other aspects of your program).
Since all answers seem to misread myRetainCount as [self retainCount], let me offer a reason why this code could have been written: It could be that this code is somehow spawning threads or otherwise having clients register with it, and that myRetainCount is effectively the number of those clients, kept separately from the actual OS retain count. However, each of the clients might get its own ObjC-style retain as well.
So this function might be called in a case where a request is aborted, and could just dispose of all the clients at once, and afterwards perform all the releases. It's not a good design, but if that's how the code works, (and you didn't leave out an int myRetainCount = [self retainCount], or overrides of retain/release) at least it's not necessarily buggy.
It is, however, very likely a bad distribution of responsibilities, or a kludgey and hackneyed attempt at avoiding retain cycles without really improving anything.
This is a dirty hack to force a memory release: if the rest of your program is written correctly, you never need to do anything like this. Normally, your retains and releases are in balance, so you never need to look at the retain count. What this piece of code says is "I don't know who retained me and forgot to release me, I just want my memory to get released; I don't care that the other references will be dangling from now on." This is not going to compile with ARC (oddly enough, switching to ARC may just fix the error the author was trying to work around).
The meaning of the code is to force the object to deallocate right now, no matter what the future consequences may be. (And there will be consequences!)
The code is fatally flawed because it doesn't account for the fact that someone else actually "owns" that object. In other words, something "alloced" that object, and any number of other things may have "retained" that object (maybe a data structure like NSArray, maybe an autorelease pool, maybe some code on the stackframe that just does a "retain"); all those things share ownership in this object. If the object commits suicide (which is what releaseMySelf does), these "owners" suddenly point to bad memory, and this will lead to unexpected behavior.
Hopefully code written like this will just crash. Perhaps the original author avoided these crashes by leaking memory elsewhere.
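For contrast, here is what balanced ownership normally looks like under manual retain/release (MyThing is a hypothetical class):

MyThing *thing = [[MyThing alloc] init];  // +1: we own it
NSMutableArray *list = [NSMutableArray array];
[list addObject:thing];                   // +1: the array owns it too
[thing release];                          // we balance our alloc; the array's reference keeps it alive
// When 'list' is deallocated (or the object removed from it), the array
// sends the balancing release -- no one ever needs to read the retain count.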
Can someone briefly explain to me how ARC works? I know it's different from Garbage Collection, but I was just wondering exactly how it worked.
Also, if ARC does what GC does without hindering performance, then why does Java use GC? Why doesn't it use ARC as well?
Every new developer who comes to Objective-C has to learn the rigid rules of when to retain, release, and autorelease objects. These rules even specify naming conventions that imply the retain count of objects returned from methods. Memory management in Objective-C becomes second nature once you take these rules to heart and apply them consistently, but even the most experienced Cocoa developers slip up from time to time.
With the Clang Static Analyzer, the LLVM developers realized that these rules were reliable enough that they could build a tool to point out memory leaks and overreleases within the paths that your code takes.
Automatic reference counting (ARC) is the next logical step. If the compiler can recognize where you should be retaining and releasing objects, why not have it insert that code for you? Rigid, repetitive tasks are what compilers and their brethren are great at. Humans forget things and make mistakes, but computers are much more consistent.
However, this doesn't completely free you from worrying about memory management on these platforms. The primary issue to watch out for is retain cycles, which may require a little thought on your part to mark weak pointers. Still, that's minor compared to what you're gaining with ARC.
When compared to manual memory management and garbage collection, ARC gives you the best of both worlds by cutting out the need to write retain / release code, yet without the halting and sawtooth memory profiles seen in a garbage-collected environment. About the only advantages garbage collection has over this are its ability to deal with retain cycles and the fact that atomic property assignments are inexpensive. I know I'm replacing all of my existing Mac GC code with ARC implementations.
As to whether this could be extended to other languages, it seems geared around the reference counting system in Objective-C. It might be difficult to apply this to Java or other languages, but I don't know enough about the low-level compiler details to make a definitive statement there. Given that Apple is the one pushing this effort in LLVM, Objective-C will come first unless another party commits significant resources of their own to this.
The unveiling of this at WWDC shocked developers; most weren't aware that something like this could be done. It may appear on other platforms over time, but for now it's exclusive to LLVM and Objective-C.
ARC is just plain old retain/release (MRC) with the compiler figuring out when to call retain/release. It will tend to have higher performance, lower peak memory use, and more predictable performance than a GC system.
On the other hand some types of data structure are not possible with ARC (or MRC), while GC can handle them.
As an example, suppose you have a class named Node, and Node has an NSArray of children and a single reference to its parent. That "just works" with GC. With ARC (and manual reference counting as well) you have a problem: any given node will be referenced by its children and also by its parent.
Like:
A -> [B1, B2, B3]
B1 -> A, B2 -> A, B3 -> A
All is fine while you are using A (say via a local variable).
When you are done with it (and B1/B2/B3), a GC system will eventually decide to look at everything it can find starting from the stack and CPU registers. It will never find A,B1,B2,B3 so it will finalize them and recycle the memory into other objects.
When you use ARC or MRC and finish with A, it has a refcount of 3 (B1, B2, and B3 all reference it), and B1/B2/B3 will each have a reference count of 1 (A's NSArray holds one reference to each). So all of those objects remain live even though nothing can ever use them.
The common solution is to decide one of those references needs to be weak (not contribute to the reference count). That will work for some usage patterns, for example if you reference B1/B2/B3 only via A. However in other patterns it fails. For example if you will sometimes hold onto B1, and expect to climb back up via the parent pointer and find A. With a weak reference if you only hold onto B1, A can (and normally will) evaporate, and take B2, and B3 with it.
Sometimes this isn't an issue, but some very useful and natural ways of working with complex structures of data are very difficult to use with ARC/MRC.
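Under ARC, the weak-parent convention described above is expressed by marking the back-pointer weak (Node is a hypothetical class for illustration):

// Hypothetical illustration of the weak-parent convention under ARC.
@interface Node : NSObject
@property (nonatomic, strong) NSMutableArray *children; // parent keeps children alive
@property (nonatomic, weak) Node *parent;               // back-pointer does not add to the refcount
@end

@implementation Node
@end

// As described above, this works if you always reach B1/B2/B3 through A;
// if you hold only B1 and drop A, the weak 'parent' becomes nil and A is gone.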
So ARC targets the same sort of problems GC targets. However, ARC works on a more limited set of usage patterns than GC, so if you took a GC language (like Java) and grafted something like ARC onto it, some programs wouldn't work any more (or at least would generate tons of abandoned memory, and might cause serious swapping issues or run out of memory or swap space).
You can also say ARC puts a bigger priority on performance (or maybe predictability) while GC puts a bigger priority on being a generic solution. As a result GC has less predictable CPU/memory demands, and a lower performance (normally) than ARC, but can handle any usage pattern. ARC will work much better for many many common usage patterns, but for a few (valid!) usage patterns it will fall over and die.
Magic
But more specifically, ARC works by doing exactly what you would do with your code (with certain minor differences). ARC is a compile-time technology, unlike GC, which is a runtime technology and impacts your performance negatively. ARC tracks the references to objects for you and synthesizes the retain/release/autorelease calls according to the normal rules. Because of this, ARC can also release things as soon as they are no longer needed, rather than throwing them into an autorelease pool purely for convention's sake.
Some other improvements include zeroing weak references, automatic copying of blocks to the heap, speedups across the board (6x for autorelease pools!).
More detailed discussion about how all this works is found in the LLVM Docs on ARC.
It differs greatly from garbage collection. Have you seen the warnings that tell you that you may be leaking objects on certain lines? Those statements even tell you on what line you allocated the object. ARC takes this a step further and can insert retain/release statements in the proper locations, better than most programmers, almost 100% of the time. Occasionally there are some weird instances of retained objects that you need to help it out with.
This is very well explained in Apple's developer documentation; read "How ARC Works":
To make sure that instances don’t disappear while they are still needed, ARC tracks how many properties, constants, and variables are currently referring to each class instance. ARC will not deallocate an instance as long as at least one active reference to that instance still exists.
For the difference between garbage collection and ARC, see Apple's documentation on transitioning to ARC.
ARC is a compiler feature that provides automatic memory management of objects.
Instead of you having to remember when to use retain, release, and autorelease, ARC evaluates the lifetime requirements of your objects and automatically inserts appropriate memory management calls for you at compile time. The compiler also generates appropriate dealloc methods for you.
The compiler inserts the necessary retain/release calls at compile time, but those calls are executed at runtime, just like any other code.
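Conceptually, something like this happens (a simplified sketch; in practice the compiler emits optimized runtime calls such as objc_release rather than literal message sends, and self.name here is just an assumed property):

// What you write under ARC:
- (void)logGreeting
{
    NSString *greeting = [[NSString alloc] initWithFormat:@"Hello, %@!", self.name];
    NSLog(@"%@", greeting);
    // The compiler inserts the equivalent of [greeting release] here,
    // because ARC can prove the object's lifetime ends at this point.
}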
If you're new to iOS development and don't yet have working experience with Objective-C, please refer to Apple's Advanced Memory Management Programming Guide for a better understanding of memory management.
Or, Why I Didn't Use retainCount On My Summer Vacation
This post is intended to solicit detailed write-ups about the whys and wherefores of that infamous method, retainCount, in order to consolidate the relevant information floating around SO.*
The basics: What are the official reasons to not use retainCount? Is there ever any situation at all when it might be useful? What should be done instead?** Feel free to editorialize.
Historical/explanatory: Why does Apple provide this method in the NSObject protocol if it's not intended to be used? Does Apple's code rely on retainCount for some purpose? If so, why isn't it hidden away somewhere?
For deeper understanding: What are the reasons that an object may have a different retain count than would be assumed from user code? Can you give any examples*** of standard procedures that framework code might use which cause such a difference? Are there any known cases where the retain count is always different than what a new user might expect?
Anything else you think is worth mentioning about retainCount?
*
Coders who are new to Objective-C and Cocoa often grapple with, or at least misunderstand, the reference-counting scheme. Tutorial explanations may mention retain counts, which (according to these explanations) go up by one when you call retain, alloc, copy, etc., and down by one when you call release (and at some point in the future when you call autorelease).
A budding Cocoa hacker, Kris, could thus quite easily get the idea that checking an object's retain count would be useful in resolving some memory issues, and, lo and behold, there's a method available on every object called retainCount! Kris calls retainCount on a couple of objects, and this one is too high, and that one's too low, and what the heck is going on?! So Kris makes a post on SO, "What's wrong with my memory management?", and then a swarm of bold, large letters descends saying "Don't do that! You can't rely on the results," which is well and good, but our intrepid coder may want a deeper explanation.
I'm hoping that this will turn into an FAQ, a page of good informational essays/lectures from any of our experts who are inclined to write one, that new Cocoa-heads can be pointed to when they wonder about retainCount.
** I don't want to make this too broad, but specific tips from experience or the docs on verifying/debugging retain and release pairings may be appropriate here.
***In dummy code; obviously the general public don't have access to Apple's actual code.
The basics: What are the official reasons to not use retainCount?
Autorelease management is the most obvious -- you have no way to be sure how many of the references represented by the retainCount are in a local or external (on a secondary thread, or in another thread's local pool) autorelease pool.
Also, some people have trouble with leaks and, at a higher level, with how reference counting and autorelease pools work at a fundamental level. They will write a program without (much) regard to proper reference counting, or without learning ref counting properly. This makes their program very difficult to debug, test, and improve -- and rectifying it is also very time consuming.
The reason for discouraging its use (at the client level) is twofold:
The value may vary for so many reasons. Threading alone is reason enough to never trust it.
You still have to implement correct reference counting. retainCount will never save you from imbalanced reference counting.
Is there ever any situation at all when it might be useful?
You could in fact use it in a meaningful way if you wrote your own allocators or reference counting scheme, or if your object lived on one thread and you had access to any and all autorelease pools it could exist in. This also implies you would not share it with any external APIs. The easy way to simulate this is to create a program with one thread, zero autorelease pools, and do your reference counting the 'normal' way. It's unlikely that you'll ever need to solve this problem/write this program for anything other than "academic" reasons.
As a debugging aid: you could use it to verify that the retain count is not unusually high. If you take this approach, be mindful of the implementation variances (some are cited in this post), and don't rely on it. Don't even commit the tests to your SCM repository.
This may be a useful diagnostic in extremely rare circumstances. It can be used to detect:
Over-retaining: An allocation with a positive imbalance in retain count would not show up as a leak if the allocation is reachable by your program.
An object which is referenced by many other objects: One illustration of this problem is a (mutable) shared resource or collection which operates in a multithreaded context - frequent access or changes to this resource/collection can introduce a significant bottleneck in your program's execution.
Autorelease levels: Autoreleasing, autorelease pools, and retain/autorelease cycles all come with a cost. If you need to minimize or reduce memory use and/or growth, you could use this approach to detect excessive cases.
From commentary with Bavarious (below): a high value may also indicate an invalidated allocation (a dealloc'd instance). This is completely an implementation detail, and again, not usable in production code. Messaging this allocation would result in an error when zombies are enabled.
What should be done instead?
If you're not responsible for returning the memory at self (that is, you did not write an allocator), leave it alone - it is useless.
You have to learn proper reference counting.
For a better understanding of release and autorelease usage, set up some breakpoints and understand how they are used, in what cases, etc. You'll still have to learn to use reference counting correctly, but this can aid your understanding of why it's useless.
Even simpler: use Instruments to track allocs and ref counts, then analyze the ref counting and callstacks of several objects in an active program.
Historical/explanatory: Why does Apple provide this method in the NSObject protocol if it's not intended to be used? Does Apple's code rely on retainCount for some purpose? If so, why isn't it hidden away somewhere?
We can assume that it is public for two primary reasons:
Reference counting proper in managed environments. It's fine for the allocators to use retainCount -- really. It's a very simple concept. When -[NSObject release] is called, the retain count (unless release is overridden) is decremented, and the object can be deallocated if retainCount reaches 0 (after calling dealloc). This is all fine at the allocator level. Allocators and zones are (largely) abstracted so... this makes the result meaningless for ordinary clients. See commentary with bbum (below) for details on why retainCount cannot be equal to 0 at the client level, object deallocation, deallocation sequences, and more.
To make it available to subclassers who want a custom behavior, and because the other reference counting methods are public. It may be handy in a few cases, but it's typically used for the wrong reasons (e.g. immortal singletons). If you need your own reference counting scheme, then this family may be worth overriding.
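For illustration, the classic (and now discouraged) "immortal singleton" overrides from the old Cocoa documentation looked roughly like this:

- (id)retain { return self; }                       // ignore retains
- (oneway void)release { /* do nothing */ }         // ignore releases
- (id)autorelease { return self; }                  // ignore autoreleases
- (NSUInteger)retainCount { return NSUIntegerMax; } // denotes an object that cannot be released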
For deeper understanding: What are the reasons that an object may have a different retain count than would be assumed from user code? Can you give any examples*** of standard procedures that framework code might use which cause such a difference? Are there any known cases where the retain count is always different than what a new user might expect?
Again, custom reference counting schemes and immortal objects. NSCFString literals fall into the latter category:
NSLog(#"%qu", [#"MyString" retainCount]);
// Logs: 1152921504606846975
Anything else you think is worth mentioning about retainCount?
It's useless as a debugging aid. Learn to use leak and zombie analyses, and use them often -- even after you have a handle on reference counting.
Update: bbum has posted an article entitled retainCount is useless. The article contains a thorough discussion of why -retainCount isn’t useful in the vast majority of cases.
The general rule of thumb is if you're using this method, you better be damn sure you know what you're doing. If you are using it for debugging a memory leak you're doing it wrong, if you're doing it to see what is going on with an object, you're doing it wrong.
There is one case where I have used it and found it useful: a shared object cache where I wanted to flush an object when nothing held a reference to it anymore. In this situation I waited until the retainCount was equal to 1, and then released it knowing that nothing else was holding onto it. This obviously won't work properly in garbage-collected environments, and there are better ways to do it, but this is still the only "valid" use case I've seen for it, and it isn't something a lot of people will be doing. A sketch of the idea is below.
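A sketch of that cache-flush idea (non-GC only, and fragile for all the reasons given above; 'cache' is a hypothetical NSMutableDictionary ivar):

- (void)flushUnreferencedObjects
{
    // -allKeys returns a snapshot, so mutating the cache inside the loop is safe.
    for (id key in [cache allKeys]) {
        id obj = [cache objectForKey:key];
        // If the cache's own reference is the only one left, nothing else
        // is using the object, so evict it (the removal sends the final release).
        if ([obj retainCount] == 1) {
            [cache removeObjectForKey:key];
        }
    }
}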
Sorry for the long description; the questions aren't so easy...
My project is written without GC. Recently I discovered a memory leak whose source I can't find. I used the new Xcode analyzer without result. I read my code line by line and verified every alloc/release/copy/autorelease/mutableCopy/retain and all the pools... -- still nothing.
Preamble: the standard Instruments and Omni Leak Checker don't work for me for some reason (the Omni tool rejects my app, and Instruments.app (Leaks) eats too much memory and CPU, so I have no chance to use it).
So I want to write and use my own code to hook and track "all" alloc/allocWithZone:/dealloc messages, to build a simple leak-checking library of my own (the main goal is only to flag the class names of objects that may be leaking).
The main hooking technique that I use:
Method originalAllocWithZone = class_getClassMethod([NSObject class], @selector(allocWithZone:));
if (originalAllocWithZone)
{
    imp_azo = (t_impAZOriginal)method_getImplementation(originalAllocWithZone);
    if (imp_azo)
    {
        Method hookedAllocWithZone = class_getClassMethod([NSObject class], @selector(hookedAllocWithZone:));
        if (hookedAllocWithZone)
        {
            method_setImplementation(originalAllocWithZone, method_getImplementation(hookedAllocWithZone));
            fprintf(stderr, "Leaks Hook: allocWithZone: ; Installed\n");
        }
    }
}
Code like this hooks the alloc method; dealloc is hooked as an NSObject category method.
I save the IMPs of the previous method implementations, then register and count all alloc/allocWithZone: calls as increments (+1) of NSInteger values in a stats array, and dealloc calls as decrements (-1).
Finally, I call the previous implementation and return its value, as sketched below.
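The replacement method itself would look roughly like this (a sketch; recordAlloc is a hypothetical stats helper, and imp_azo is the saved IMP from the snippet above):

// Declared in a category on NSObject; purely a sketch of the approach described.
+ (id)hookedAllocWithZone:(NSZone *)zone
{
    recordAlloc(self);  // hypothetical: +1 in the stats array for this class
    // Call through to the original implementation and return its result.
    return imp_azo(self, @selector(allocWithZone:), zone);
}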
In concept, it all works just fine.
If needed, I can even detect when a class is part of a class cluster (like NSString/NSPathStore2 or NSDate/__NSCFDate) via some normalizing function (but that doesn't matter for the issues described below).
However this technique has some issues:
Not all allocations can be caught; for example, [NSDate date] is never caught in alloc/allocWithZone: at all, even though I can see the alloc call in GDB.
Since I'm trying to use an automatic singleton-detection technique (based on reading retainCount) to exclude some objects from the final statistics, NSLocale creation freezes at the pre-init stage when starting a full Cocoa application (actually, even a simple Objective-C command-line utility that includes the Foundation framework does some additional initialization before main()) -- per GDB, there are allocWithZone: calls one after the other, ...
Full Concept-Project draft sources uploaded here: http://unclemif.com/external/DILeak.zip (3.5 Kb)
Run make from Terminal.app to compile it, and run ./concept to see it in action.
The 1st question: why can't I catch all object allocations by hooking the alloc & allocWithZone: methods?
The 2nd question: why does the hooked allocWithZone: freeze in CFGetRetainCount (or [inst retainCount]) for some classes?
Holy re-inventing the wheel, batman!
You are making this way harder than it needs to be. There is absolutely no need whatsoever to roll your own object tracking tools (though it is an interesting mental exercise).
Because you are using GC, the tools for tracking allocations and identifying leaks are all very mature.
Under GC, a leak will take one of two forms: either there will be a strong reference to the object that should long ago have been destroyed, or the object has been CFRetain'd without a balancing CFRelease.
The collector is quite adept at figuring out why any given object is remaining beyond its welcome.
Thus, you need to find some set of objects that are sticking around too long. Any object will do. Once you have the address of said object, you can use the Object Graph instrument in Instruments to figure out why it is sticking around; figure out what is still referring to it or where it was retained.
Or, from gdb, use info gc-roots 0xaddr to find all of the various things that are rooting the object. If you turn on malloc history (see the malloc man page), you can get the allocation histories of the objects that are holding the reference.
Oh, without GC, huh...
You are still left with a plethora of tools and no need to re-invent the wheel.
The leaks command line tool will often give you some good clues. Turn on MallocStackLoggingNoCompact to be able to use malloc_history (another command line tool).
Or use the ObjectAlloc instrument.
In any case, you need to identify an object or two that is being leaked. With that, you can figure out what is hanging on to it. In non-GC, that is entirely a case of figuring out why there is a retain not balanced by a release.
Even without the Leaks instrument, Instruments can still help you.
Start with the Leaks template, then delete the Leaks instrument from it (since you say it uses too much memory). ObjectAlloc alone will tell you all of your objects' allocations and deallocations, and (with an option turned on, which it is by default in the Leaks template) all of their retentions and releases as well.
You can set the ObjectAlloc instrument to only show you objects that still exist; if you bring the application to the point where no objects (or no objects of a certain class) should exist, and such objects do still exist, then you have a leak. You can then drill down to find the cause of the leak.
Start from the Xcode templates. Don't try to roll your own main() routine for a cocoa app until you know what you're doing.