Strategy for a self-retaining and self-releasing object - Objective-C

I need to implement a bit of functionality that can be used from a few different places in an application. It's basically sending something over the network, but I don't need it to be attached to any particular view - I can communicate everything to the user by UIAlertViews.
What I would like to do is encapsulate the functionality in an object (?) that can maintain its own state for a while and then disappear all by itself. I've read in several similar topics that it's generally not advised to have an object that retains and then releases itself, but on the other hand you have singletons, which, apart from the fact that they never get released, are very similar in nature: you don't need to keep a reference to them just to use them properly. In my situation, however, I feel it would be somewhat wasteful to create a singleton and then keep it alive for something that takes a few seconds to execute.
What I came up with is a static dictionary, local to the class, that keeps unique references to the instances of the class. When an instance is done with its task, it performs the selector removeObjectForKey: after a delay, which removes the only existing reference and effectively kills the object. This way I keep only a dictionary in memory, which most of the time is empty anyway.
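A minimal sketch of what I mean, assuming ARC; the NetworkTask class and its completion-based work method are hypothetical (the removal could equally be done with performSelector:withObject:afterDelay:):

@interface NetworkTask : NSObject
+ (void)start;
@end

@implementation NetworkTask

// Class-local storage holding the only strong reference to each in-flight task.
+ (NSMutableDictionary *)liveTasks {
    static NSMutableDictionary *tasks;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ tasks = [NSMutableDictionary new]; });
    return tasks;
}

+ (void)start {
    NetworkTask *task = [NetworkTask new];
    NSString *key = [[NSUUID UUID] UUIDString];
    [self liveTasks][key] = task;                 // keeps the task alive
    [task performWithCompletion:^{
        // Removing the entry drops the last strong reference,
        // so the instance deallocates once its work is done.
        [[NetworkTask liveTasks] removeObjectForKey:key];
    }];
}

- (void)performWithCompletion:(void (^)(void))completion {
    // Placeholder for the real network work; calls back after two seconds.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), completion);
}

@end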
The question is: are there any unexpected side effects of such a solution that I should be aware of, and are there any other good patterns for the described situation?

So basically instead of a persistent object of your own class, you've got a persistent object of type NSDictionary? How does that help matters? Is your object unusually large? If you are making your codebase more complicated for the sake of a few bytes, that's not a good tradeoff.
Especially now that ARC is commonplace, this kind of trickery is usually not a good idea. Have you measured how much memory a singleton approach takes and found it to be a problem? Unless you have done this, use a singleton. It's simpler code, and all other things being equal, simpler code is far better.
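For comparison, the singleton alternative is only a few lines; a minimal sketch with a hypothetical class name:

@interface NetworkSender : NSObject
+ (instancetype)sharedSender;
@end

@implementation NetworkSender

+ (instancetype)sharedSender {
    static NetworkSender *shared;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ shared = [[self alloc] init]; });
    return shared;
}

@end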

Related

When to use mutable objects?

There are tons of articles and blog posts on the internet saying that mutable objects are bad, that we shouldn't use them, and that we should therefore make all our objects immutable.
I have nothing against this, except that the topic has gone so far that some people might be "tricked" into thinking that mutable objects should never be used at all.
When do we have to resort to using mutable objects? What are the common kinds of problems that are unsolvable without using mutable state?
As to your fear, it's common. Every concept gets taken by some to mean that nothing else should ever be done, for any reason.
These are the people who try to make requirements fit their ideology, rather than the other way around (a.k.a. they're not pragmatic).
When to use mutables? Basically when you feel like it, when you think it makes sense.
A prime example is low-memory or high-performance situations, where creating a new instance that's identical to the old one except for one little thing is too expensive in memory and/or CPU cycles.
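An illustrative (and deliberately exaggerated) sketch of that cost difference:

// Mutating one array in place: no copying on each addition.
NSMutableArray *values = [NSMutableArray array];
for (NSUInteger i = 0; i < 10000; i++) {
    [values addObject:@(i)];
}

// Staying immutable: every iteration allocates a new array that copies the old contents.
NSArray *immutableValues = @[];
for (NSUInteger i = 0; i < 10000; i++) {
    immutableValues = [immutableValues arrayByAddingObject:@(i)];
}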

Weak "assign" references rather than singletons

I have many objects that reference the same class of stored data. In previous programs I've used singletons, but I am trying to abandon that practice and only use them as a last resort when necessary, mainly due to the bad reputation they have (and indeed I've abused them in the past).
But I'm wondering just how much of an advantage my new technique is. I'm simply creating weak references to the same set of data so a bunch of classes point to the same memory to pull data as needed. Such as:
@property (nonatomic, assign) MyDataClass *mydata;
In a custom init of the class, I pass a reference as a method parameter, then the property assigns to this reference.
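A minimal sketch of that injection pattern (class names are hypothetical; some other object owns the data and keeps the strong reference):

@class MyDataClass;

@interface ReportGenerator : NSObject
@property (nonatomic, assign) MyDataClass *mydata;   // non-owning reference
- (instancetype)initWithData:(MyDataClass *)data;
@end

@implementation ReportGenerator

- (instancetype)initWithData:(MyDataClass *)data {
    if ((self = [super init])) {
        _mydata = data;   // just stores the pointer; the owner keeps the object alive
    }
    return self;
}

@end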
Is this a valid, acceptable way to do things? I'm having trouble finding much of an organizational advantage to doing this over using singletons.
The pattern you use is just fine; essentially the same pattern is used in every standard C++ program that has no reference counting or other advanced memory management tools. The only thing you have to ensure is that your object hierarchy strictly respects the weakness of the reference, i.e. the dependency of the object holding the reference on the object behind that reference. In other words, you always have to ensure that the object holding the reference is deleted before the object it refers to, and you have to ensure this manually since you're not using reference counting.
This means more responsibility for you, the programmer, as you always have to have total control over the lifetime of your objects. It is quite easy to make a mistake with your pattern, since the code holding the weak reference has no way of knowing whether the original object still exists or has been deleted. You have to ensure this with your design.
For this reason I don't suggest "mixing" the two approaches, i.e. having weak references to an object that might be freed outside your control by a retain-type property (when the value of the property changes out from under your object), by an autorelease, or by ARC.
Reference counting was introduced to take away this responsibility from the programmer and make it easier to write safe code. Your pattern is fine, it is used by millions of C++ programs but you have to be conscious of your responsibility.
As a rule, you should only use a singleton class for objects where it wouldn't make sense for more than one of them to exist. Otherwise, it's best to avoid them because they introduce coupling: every class that uses the singleton ends up tightly coupled to the singleton.
Passing references to stuff is fine. They don't have to be weak references of course, except where necessary in order to avoid retain cycles.
If you do find that you're passing the same objects around between a large number of classes, you might want to consider thinking about division of responsibility and restructuring your app.
Retain/assign and singletons aren't mutually exclusive patterns. I wouldn't be so hesitant about the singleton; I was at first when I started iOS.
Coming from a Java web-apps background, singletons were bad due to the tight coupling, and especially because if you wanted to distribute your application across a load balancer (or the cloud nowadays), your singleton would become a bottleneck and wouldn't scale easily.
There were also problems with unit testing singletons: having to reset their state between tests, or even trying to mock them out.
However, in Objective-C and iOS development I'm not really seeing that many disadvantages to singletons. Your app isn't going to be scaled across servers, and the SDK is already littered with singletons that will hinder your unit tests anyway.

Design pattern for catching memory leaks in Objective-C?

I have read Apple's memory management guide, and think I understand the practices that should be followed to ensure proper memory management in my application.
At present it looks like there are no memory leaks in my code. But as my code grows more complex, I wonder whether there is any particular pattern I should follow to keep track of allocations and deallocations of objects.
Does it make sense to create some kind of global object that is present throughout the execution of the application and contains a count of the number of active objects of each type? Each object could increment the count for its type in its init method and decrement it in dealloc. The global object could verify at appropriate times whether the count for a particular type is zero or not.
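A rough sketch of that idea, using an NSCountedSet keyed by class name (all names here are hypothetical):

@interface InstanceCounter : NSObject
+ (void)noteInit:(Class)cls;
+ (void)noteDealloc:(Class)cls;
+ (NSUInteger)liveCountForClass:(Class)cls;
@end

@implementation InstanceCounter

// Not thread-safe as written; wrap the set accesses in a lock if objects are
// created or destroyed off the main thread.
+ (NSCountedSet *)liveInstances {
    static NSCountedSet *set;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ set = [NSCountedSet new]; });
    return set;
}

+ (void)noteInit:(Class)cls    { [[self liveInstances] addObject:NSStringFromClass(cls)]; }
+ (void)noteDealloc:(Class)cls { [[self liveInstances] removeObject:NSStringFromClass(cls)]; }

+ (NSUInteger)liveCountForClass:(Class)cls {
    return [[self liveInstances] countForObject:NSStringFromClass(cls)];
}

@end

Each participating class would call [InstanceCounter noteInit:[self class]] in init and [InstanceCounter noteDealloc:[self class]] in dealloc, and a unit test could assert that liveCountForClass: returns zero once the objects should all be gone.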
EDIT: I am aware of how to use the Leaks tool, as well as how to analyze the project using Xcode. The reason for this post is to keep track of cases which may not be detected easily through Leaks or Analyze.
EDIT: Also, it seems to make sense to have something like this so that leaks can be detected in builds early by running unit tests that check the global object. I guess that as an inexperienced objective-c programmer I would benefit from the views of others on this.
Each object could increment the count of their type in their init method, and decrement it in dealloc.
To do that right, you'll have to do one of the following: 1) override behavior at some common point, such as NSObject's -init or -dealloc, or 2) add the appropriate code to the designated initializer of every single class. Neither seems simple.
The global object could verify at appropriate times if the count of a particular type is zero or not.
Sounds good, but can you elaborate a bit on "appropriate times"? How would you know at any given point in the life of your program which classes should have zero instances? You'd have a pretty good idea that there should be no objects at the end of the program, but Instruments could tell you the same thing in that case.
Objective-C has taken several steps to make memory management much simpler. Use properties and synthesized accessors where you can, as they essentially manage your objects for you. A more recent improvement is ARC, which goes even further toward automating most memory management tasks. You basically let the compiler figure out where to put the memory management calls -- it's like garbage collection without the garbage collector. Learn to use those tools well before you try to invent new ones.
Don't go that route... it's a pain in single inheritance. Most importantly, there are excellent tools at your disposal which you should master before thinking you must create some global counter. The global counter exists in a few tools already -- Learn them!
The way you combat it is to learn how to balance and manage everything correctly when it's written. It's really very simple in hindsight.
ARC is another option -- really that just postpones your understanding.
The first "design pattern" I recommend it to use release instead of autorelease where possible (although generally more useful for over-releases).
Next, run the leaks instrument/util regularly and fix all leaks/zombies immediately.
Third, learn the existing tools as you go! These tools can do really crazy stuff, like record the backtrace of every allocation and every reference count. You can pause your program's execution and view what allocations exist, alloc counts, backtraces, and all sorts of other stats.

What is the difference between NSNotificationCenter and the Key Value Observing technique?

I just read a couple of tutorials regarding KVO, but I have not yet discovered the reason for its existence. Isn't NSNotificationCenter an easier way to observe objects?
I am new to Stackoverflow, so just tell me if there is something wrong in the way I am asking this question!
Notifications and KVO serve similar functions, but with different trade-offs.
Notifications are easy to understand. KVO is... challenging... to understand (at least to understand how to use it well).
Notifications require modification to the observed code. The observed must explicitly generate every notification it offers. KVO is transparent to the observed code as long as the observed code conforms to KVC (which it should anyway).
Notifications have overhead even if you don't use them. Every time the observed code posts a notification, it must be checked against every observation in the system, even if no one is observing that object (even if no one is observing anything). This can be very non-trivial if there are more than a few hundred observations in the system. It can be a serious problem if there are a few thousand. KVO has zero overhead for any object that is not actually observed.
In general, I discourage KVO because of some specific implementation problems that I believe make it hard to use correctly. It's difficult to observe an object that your superclass also observes without special knowledge of your superclass. Its heavy reliance on string literals makes small typos hard to catch at compile time. In general, I find that code which relies heavily on it becomes complex and hard to read, and begins to pick up spooky-action-at-a-distance bugs. NSNotification code tends to be more straightforward and you can see what's happening. Random code doesn't just run when you didn't expect it.
All that said, KVO is an important feature and developers need to understand it. More and more low-level objects rely on it because of its zero-overhead advantages. But for new developers, I typically recommend that they rely on notifications rather than KVO.
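To make the registration difference concrete, here is a sketch of the two styles side by side (the account object, key path, and notification name are hypothetical):

// NSNotificationCenter: the observed object must itself post the notification.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(accountDidChange:)
                                             name:@"AccountDidChangeNotification"
                                           object:account];

// KVO: no cooperation needed from the observed object beyond KVC compliance.
// Updates arrive in -observeValueForKeyPath:ofObject:change:context:.
[account addObserver:self
          forKeyPath:@"balance"
             options:NSKeyValueObservingOptionNew
             context:NULL];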
There is a third way. You can keep a list of listeners, and send them messages when things change, just like delegate methods. Some people call these "multicast delegates" but "listeners" is more correct here because they don't modify the object's behavior as a delegate does. Doing it this way can be dramatically faster than NSNotification if you need a lot of observation in a system, without adding the complexity of KVO.
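A sketch of that listener idea, assuming ARC and using an NSHashTable of weak references so listeners do not need to unregister (all names are hypothetical):

@class Account;

@protocol AccountListener <NSObject>
- (void)accountBalanceDidChange:(Account *)account;
@end

@interface Account : NSObject
@property (nonatomic, copy) NSDecimalNumber *balance;
- (void)addListener:(id<AccountListener>)listener;
@end

@implementation Account {
    NSHashTable *_listeners;   // holds listeners weakly
}
@synthesize balance = _balance;

- (instancetype)init {
    if ((self = [super init])) {
        _listeners = [NSHashTable weakObjectsHashTable];
    }
    return self;
}

- (void)addListener:(id<AccountListener>)listener {
    [_listeners addObject:listener];
}

- (void)setBalance:(NSDecimalNumber *)balance {
    _balance = [balance copy];
    // Tell every live listener directly; no notification center, no key paths.
    for (id<AccountListener> listener in [_listeners allObjects]) {
        [listener accountBalanceDidChange:self];
    }
}

@end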

Calling -retainCount Considered Harmful

Or, Why I Didn't Use retainCount On My Summer Vacation
This post is intended to solicit detailed write-ups about the whys and wherefores of that infamous method, retainCount, in order to consolidate the relevant information floating around SO.*
The basics: What are the official reasons to not use retainCount? Is there ever any situation at all when it might be useful? What should be done instead?** Feel free to editorialize.
Historical/explanatory: Why does Apple provide this method in the NSObject protocol if it's not intended to be used? Does Apple's code rely on retainCount for some purpose? If so, why isn't it hidden away somewhere?
For deeper understanding: What are the reasons that an object may have a different retain count than would be assumed from user code? Can you give any examples*** of standard procedures that framework code might use which cause such a difference? Are there any known cases where the retain count is always different than what a new user might expect?
Anything else you think is worth mentioning about retainCount?
*
Coders who are new to Objective-C and Cocoa often grapple with, or at least misunderstand, the reference-counting scheme. Tutorial explanations may mention retain counts, which (according to these explanations) go up by one when you call retain, alloc, copy, etc., and down by one when you call release (and at some point in the future when you call autorelease).
A budding Cocoa hacker, Kris, could thus quite easily get the idea that checking an object's retain count would be useful in resolving some memory issues, and, lo and behold, there's a method available on every object called retainCount! Kris calls retainCount on a couple of objects, and this one is too high, and that one's too low, and what the heck is going on?! So Kris makes a post on SO, "What's wrong with my memory management?" and then a swarm of <bold>, <large> letters descend saying "Don't do that! You can't rely on the results.", which is well and good, but our intrepid coder may want a deeper explanation.
I'm hoping that this will turn into an FAQ, a page of good informational essays/lectures from any of our experts who are inclined to write one, that new Cocoa-heads can be pointed to when they wonder about retainCount.
** I don't want to make this too broad, but specific tips from experience or the docs on verifying/debugging retain and release pairings may be appropriate here.
***In dummy code; obviously the general public don't have access to Apple's actual code.
The basics: What are the official reasons to not use retainCount?
Autorelease management is the most obvious -- you have no way to be sure how many of the references represented by the retainCount are in a local or external (on a secondary thread, or in another thread's local pool) autorelease pool.
Also, some people have trouble with leaks and, at a higher level, with how reference counting and autorelease pools work at a fundamental level. They will write a program without (much) regard for proper reference counting, or without learning ref counting properly. This makes their program very difficult to debug, test, and improve -- and rectifying it is also very time consuming.
The reason for discouraging its use (at the client level) is twofold:
The value may vary for so many reasons. Threading alone is reason enough to never trust it.
You still have to implement correct reference counting. retainCount will never save you from imbalanced reference counting.
Is there ever any situation at all when it might be useful?
You could in fact use it in a meaningful way if you wrote your own allocators or reference counting scheme, or if your object lived on one thread and you had access to any and all autorelease pools it could exist in. This also implies you would not share it with any external APIs. The easy way to simulate this is to create a program with one thread, zero autorelease pools, and do your reference counting the 'normal' way. It's unlikely that you'll ever need to solve this problem/write this program for anything other than "academic" reasons.
As a debugging aid: you could use it to verify that the retain count is not unusually high. If you take this approach, be mindful of the implementation variances (some are cited in this post), and don't rely on it. Don't even commit the tests to your SCM repository.
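For instance, nothing more elaborate than a throwaway sanity check (the object and threshold are placeholders):

// Illustrative only; the threshold is arbitrary and this must never ship.
NSAssert([suspectObject retainCount] < 10, @"unexpectedly high retain count");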
This may be a useful diagnostic in extremely rare circumstances. It can be used to detect:
Over-retaining: An allocation with a positive imbalance in retain count would not show up as a leak if the allocation is reachable by your program.
An object which is referenced by many other objects: One illustration of this problem is a (mutable) shared resource or collection which operates in a multithreaded context - frequent access or changes to this resource/collection can introduce a significant bottleneck in your program's execution.
Autorelease levels: Autoreleasing, autorelease pools, and retain/autorelease cycles all come with a cost. If you need to minimize or reduce memory use and/or growth, you could use this approach to detect excessive cases.
From commentary with Bavarious (below): a high value may also indicate an invalidated allocation (a dealloc'd instance). This is entirely an implementation detail, and again, not usable in production code. Messaging this allocation would result in an error when zombies are enabled.
What should be done instead?
If you're not responsible for returning the memory at self (that is, you did not write an allocator), leave it alone - it is useless.
You have to learn proper reference counting.
For a better understanding of release and autorelease usage, set up some breakpoints and understand how they are used, in what cases, etc. You'll still have to learn to use reference counting correctly, but this can aid your understanding of why it's useless.
Even simpler: use Instruments to track allocs and ref counts, then analyze the ref counting and callstacks of several objects in an active program.
Historical/explanatory: Why does Apple provide this method in the NSObject protocol if it's not intended to be used? Does Apple's code rely on retainCount for some purpose? If so, why isn't it hidden away somewhere?
We can assume that it is public for two primary reasons:
Reference counting proper in managed environments. It's fine for the allocators to use retainCount -- really. It's a very simple concept. When -[NSObject release] is called, the ref counter (unless overridden) may be called, and the object can be deallocated if retainCount is 0 (after calling dealloc). This is all fine at the allocator level. Allocators and zones are (largely) abstracted so... this makes the result meaningless for ordinary clients. See commentary with bbum (below) for details on why retainCount cannot be equal to 0 at the client level, object deallocation, deallocation sequences, and more.
To make it available to subclassers who want custom behavior, and because the other reference counting methods are public. It may be handy in a few cases, but it's typically used for the wrong reasons (e.g. immortal singletons; a sketch follows below). If you need your own reference counting scheme, then this family may be worth overriding.
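For reference, the classic immortal-object override looks roughly like this (manual reference counting only; these methods cannot be overridden under ARC):

- (id)retain              { return self; }
- (oneway void)release    { /* ignored: this object is immortal */ }
- (id)autorelease         { return self; }
- (NSUInteger)retainCount { return NSUIntegerMax; }   // denotes "never released"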
For deeper understanding: What are the reasons that an object may have a different retain count than would be assumed from user code? Can you give any examples*** of standard procedures that framework code might use which cause such a difference? Are there any known cases where the retain count is always different than what a new user might expect?
Again, custom reference counting schemes and immortal objects. NSCFString literals fall into the latter category:
NSLog(@"%qu", [@"MyString" retainCount]);
// Logs: 1152921504606846975
Anything else you think is worth mentioning about retainCount?
It's useless as a debugging aid. Learn to use leak and zombie analyses, and use them often -- even after you have a handle on reference counting.
Update: bbum has posted an article entitled retainCount is useless. The article contains a thorough discussion of why -retainCount isn’t useful in the vast majority of cases.
The general rule of thumb is if you're using this method, you better be damn sure you know what you're doing. If you are using it for debugging a memory leak you're doing it wrong, if you're doing it to see what is going on with an object, you're doing it wrong.
There is one case where I have used it and found it useful: a shared object cache where I wanted to flush an object when nothing had a reference to it anymore. In this situation I waited until the retainCount was equal to 1, and then I could release it knowing that nothing else was holding onto it. This will obviously not work properly in garbage-collected environments, and there are better ways to do it. But it is still the only 'valid' use case I've seen for it, and it isn't something a lot of people will be doing.
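A rough sketch of that cache-flush idea under manual reference counting (the cache property and method name are hypothetical):

- (void)flushUnreferencedObjects {
    // -allKeys returns a snapshot, so mutating the dictionary inside the loop is safe.
    for (id key in [self.cache allKeys]) {
        id object = [self.cache objectForKey:key];
        // retainCount == 1 means the cache dictionary holds the only remaining reference.
        if ([object retainCount] == 1) {
            [self.cache removeObjectForKey:key];   // drops the last reference
        }
    }
}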