Most iPhone code examples use the nonatomic attribute in their properties, even those that involve [NSThread detachNewThreadSelector:....]. However, is this really an issue if you are not accessing those properties on the separate thread?
If that is the case, how can you be sure those nonatomic properties won't be accessed on a different thread in the future, at which point you may have forgotten they are nonatomic? This can create difficult bugs.
Besides setting all properties to atomic, which can be impractical in a large app and may introduce new bugs, what is the best approach in this case?
Please note that these questions are specifically about iOS, not the Mac in general.
First, know that atomicity by itself does not ensure thread safety for your class; it simply generates accessors that will set and get your properties in a thread-safe way. This is a subtle distinction. To create thread-safe code, you will very likely need to do much more than simply use atomic accessors.
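For example, a read-modify-write like an increment is still a race even when every individual accessor call is atomic. A minimal sketch, assuming a hypothetical Counter class:

@interface Counter : NSObject
@property NSInteger count;   // atomic by default
@end

@implementation Counter
- (void)increment {
    // Each accessor call is individually atomic, but the read-modify-write
    // sequence as a whole is not: two threads can both read the same old
    // value and one increment is silently lost.
    self.count = self.count + 1;
}
@end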
Second, another key point to know is that your accessors can be called from background or foreground threads safely regardless of atomicity. The key here is that they must never be called from two threads simultaneously. Nor can you call the setter from one thread while simultaneously calling the getter from another, etc. How you prevent that simultaneous access depends on what tools you use.
That said, to answer your question: you can't know for sure that your accessors won't be accessed on another thread in the future. This is why thread safety is hard, and a lot of code isn't thread safe. In general, if you're making a framework or library, yes, you can try to make your code thread safe for the purposes of "defensive programming", or you can leave it non-thread-safe. The atomicity of your properties is only a small part of that. Whichever you choose, though, be sure to document it so users of your library don't have to wonder.
Related Question
I am working on a project where I am concerned about the thread safety of an object's properties. I know that when a property is an object such as an NSString, I can run into situations where multiple threads are reading and writing simultaneously. In this case you can get a corrupt read and the app will either crash or result in corrupted data.
My question is for primitive value type properties such as BOOLs or NSIntegers. I am wondering if I can get into a similar situation where I read a corrupt value when reading and writing from multiple threads (and the app will crash)? In either case, I am interested in why.
Clarification - 1/13/17
I am mostly interested in whether a primitive value type property is susceptible to crashing due to multiple threads accessing it at the same time in a different way than an object such as an NSMutableString, a custom created object, etc. In addition, whether there is a difference between accessing memory on the stack vs. the heap with respect to multithreading.
Clarification - 12/1/17
Thank you to @Rob for pointing me to the answer here: stackoverflow.com/a/34386935/1271826! This answer has a great example showing that, depending on the architecture you are on (32-bit vs 64-bit), you can get an undefined result when using a primitive property.
Although this is a great step towards answering my question, I still wonder two things:
Is there a multithreading difference when accessing a primitive value property on the stack vs. the heap (as noted in my previous clarification)?
If you restrict a program to running on one architecture, can you still find yourself in an undefined state when accessing a primitive value property, and why?
I should note that there has been a lot of conversation around atomic vs. nonatomic in response to this question. Although this is generally an important concept, this question has little to do with preventing undefined multithreading behavior by using the atomic property modifier or any other thread-safety approach such as using GCD.
If your primitive value type property is atomic, then you're assured it cannot be corrupted because you're reading it from one thread while setting it from another (as long as you only use the accessor methods and don't interact with the backing ivar directly). That's the entire purpose of atomic. And, as you suggest, this is only applicable to fundamental data types (or objects that are both immutable and stateless). But in these narrow cases, atomic can be useful.
Having said that, this is a far cry from concluding that the app is thread-safe. It only assures you that the access to that one property is thread-safe. But often thread-safety must be considered within a broader context. (I know you assure us that this is not the case here, but I qualify this for future readers who too quickly jump to the conclusion that atomic is sufficient to achieve thread-safety. It often is not.)
For example, if your NSInteger property is "how many items are in this cache object", then not only must that NSInteger have its access synchronized, but it must also be synchronized in conjunction with all interactions with the cache object (e.g. the "add item to cache" and "remove item from cache" tasks, too). And in these cases, since you'll synchronize all interaction with this broader object somehow (e.g. with a GCD queue, locks, the @synchronized directive, whatever), making the NSInteger property atomic then becomes redundant and therefore modestly less efficient.
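A hedged sketch of that cache example, using a serial GCD queue and illustrative class and method names: because every access goes through the queue, the count property itself no longer needs to be atomic.

#import <Foundation/Foundation.h>

@interface ItemCache : NSObject
@property (nonatomic, readonly) NSUInteger count;
- (void)addItem:(id)item;
@end

@implementation ItemCache {
    dispatch_queue_t _syncQueue;   // serializes all access to _items
    NSMutableArray *_items;
}

- (instancetype)init {
    if ((self = [super init])) {
        _syncQueue = dispatch_queue_create("com.example.itemcache", DISPATCH_QUEUE_SERIAL);
        _items = [NSMutableArray array];
    }
    return self;
}

- (void)addItem:(id)item {
    dispatch_async(_syncQueue, ^{ [self->_items addObject:item]; });
}

- (NSUInteger)count {
    __block NSUInteger result;
    dispatch_sync(_syncQueue, ^{ result = self->_items.count; });
    return result;
}
@end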
Bottom line, in limited situations, atomic can provide thread-safety for fundamental data types, but frequently it is insufficient when viewed in a broader context.
You later say that you don't care about race conditions. For what it's worth, Apple argues that there is no such thing as a benign race. See WWDC 2016 video Thread Sanitizer and Static Analysis (about 14:40 into it).
Anyway, you suggest you are merely concerned whether the value can be corrupted or whether the app will crash:
I am wondering if I can get into a similar situation where I read a corrupt value when reading and writing from multiple threads (and the app will crash)?
The bottom line is that if you're reading from one thread while mutating on another, the behavior is simply undefined. It could vary. You are simply well advised to avoid this scenario.
In practice, it's a function of the target architecture. For example, with a 64-bit type (e.g. long long) on a 32-bit x86 target, you can easily retrieve a corrupt value, where one half of the 64-bit value is set and the other is not. (See https://stackoverflow.com/a/34386935/1271826 for an example.) This results in merely nonsensical, invalid numeric values when dealing with primitive types. For pointers to objects, this obviously would have catastrophic implications.
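To make that torn-read scenario concrete, here is a minimal sketch (the class name is illustrative; the behavior is formally undefined, and corrupt values would only be observed on a target where a 64-bit store is split into two 32-bit writes):

#import <Foundation/Foundation.h>

@interface Flipper : NSObject
@property (nonatomic) long long value;   // 64-bit, nonatomic
@end

@implementation Flipper
@end

static void demo(void) {
    Flipper *flipper = [Flipper new];
    // Writer: flips between two bit patterns forever.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        while (YES) {
            flipper.value = (flipper.value == 0) ? -1 : 0;
        }
    });
    // Reader: on a 32-bit target it can observe a half-written value,
    // i.e. something that is neither 0 nor -1.
    for (NSInteger i = 0; i < 1000000; i++) {
        long long observed = flipper.value;
        if (observed != 0 && observed != -1) {
            NSLog(@"Torn read: %lld", observed);
        }
    }
}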
But even if you're in an environment where no problems manifest themselves, it's an incredibly fragile approach to eschew synchronization and still hope for thread safety. It could easily break when run on new, unanticipated hardware architectures or when compiled under a different configuration. I'd encourage you to watch that Thread Sanitizer and Static Analysis video for more information.
If you drag a new outlet from Interface Builder to an interface (header) file, Xcode 4.6 will automatically create a property for you...
On iOS (Cocoa Touch) it will look like this:
@property (weak, nonatomic) SomeClass *someProperty; // nonatomic accessors
Whereas on OS X (Cocoa) it will look like this:
@property (weak) SomeClass *someProperty; // atomic accessors (implicitly)
Why?
EDIT: I am not asking about what atomic does or doesn't do. I'm well aware of the @synchronized directive and the underlying mutex (or lock, or whatever) that guarantees atomicity of the setter and getter. I know that on iOS, accessors are nonatomic because UIKit is not thread safe, so there is nothing to be gained by making them atomic; it's just a waste of processor time and battery life. I am talking about the default case here; programmers who know what they are doing will know when they need to make their accessors atomic.
So I'm asking why they are atomic by default on OS X. I was under the impression that AppKit was not thread safe either. And having atomic accessors doesn't guarantee thread safety; I'd even go as far as to say it goes the opposite way, in that it can give the illusion of thread safety to novice programmers and make bug tracking harder in concurrent apps by deferring crashes to a later time, which makes them harder to trace. And just because desktop computers are relatively powerful doesn't mean resources should be wasted (note I am not talking about premature optimization here). Since it stands to reason that Apple engineers are reasonable programmers, there must be a good reason why they decided to make the properties synthesize atomic accessors by default.
In this context the atomic specifier tells the compiler that the setter and getter should be synthesised so as to be safe to call from multiple threads. This adds a small overhead by requiring the methods to take out a lock before a property's value can be written or read.
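As a rough mental model only (the real synthesized accessors use the runtime's own lightweight locking, not literally @synchronized), an atomic object property behaves approximately like this hand-written pair:

@interface Person : NSObject
@property (copy) NSString *name;   // atomic by default
@end

@implementation Person {
    NSString *_name;   // the ivar is not auto-synthesized once both accessors are written by hand
}

- (NSString *)name {
    @synchronized (self) {       // stand-in for the per-property lock
        return _name;
    }
}

- (void)setName:(NSString *)name {
    @synchronized (self) {
        _name = [name copy];
    }
}
@end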
Since the user interface elements of both UIKit and Cocoa are only ever intended to be accessed from the main thread, the extra lock is unnecessary. The overhead of making a property atomic is pretty minimal, but in the more constrained environment of iOS every little ounce of speed is valuable, hence why iOS defaults to using nonatomic properties for IBOutlets.
Edited in response to your expanded question: My feeling is that the cost of using atomic properties is worth the overhead on the Mac. There's an argument that using atomic properties masks a collection of bugs and is therefore a bad thing. I'd argue that from a user's perspective Apple should set the defaults so that even badly coded applications crash less. It puts the onus on advanced programmers to know when it's safe to use nonatomic properties in exchange for a performance advantage.
Without hearing from someone on the team at the time, we can only speculate about their thought process, but I'm sure it was a considered decision.
Simple as hell: atomic causes overhead (a negligible one on OSX) with its implicit mutex mechanisms.
iOS (as an embedded system on an ARM chip) can't afford this overhead, hence IBOutlets defaulting to nonatomic.
In one word: Performance.
As to why they default to atomic on OSX: thread safety on properties is a nice thing to have in a massively multi-threaded, multi-application environment like OSX, where apps are more likely to interact with each other than they are on iOS.
And as said before, the overhead is really negligible on OSX, thus they defaulted it like this.
There was a rather lengthy debate about atomic vs. non-atomic properties in this question: What's the difference between the atomic and nonatomic attributes? and I would wager that it has more to do with the relative complexity of the interfaces generally found in OSX apps vs. iOS apps. It is fairly common to have an OSX app running on multiple threads all the time. The interfaces thus lend themselves to operating more commonly in a multi-threaded environment. In iOS, while apps are certainly gaining complexity as the system matures, they are still running on a much more basic OS that currently lends itself favorably to a non-threaded environment.
There is also some talk about non-atomic properties generally having less overhead vs. atomic ones and with smaller CPUs and less memory generally found in iOS devices, it would make sense to default properties to nonatomic unless the extra overhead is warranted.
OSX supports multiple threads, and most applications use multiple threads, so the default is set to atomic.
On iOS, by contrast, you rarely go for multiple threads, so nonatomic serves you.
Question
We're developing a custom EventEmitter inspired message system in Objective-C. For listeners to provide callbacks, should we require blocks or selectors and why?
Which would you rather use, as a developer consuming a third party library? Which seems most in line with Apple's trajectory, guidelines and practices?
Background
We're developing a brand new iOS SDK in Objective-C which other third parties will use to embed functionality into their app. A big part of our SDK will require the communication of events to listeners.
There are five patterns I know of for doing callbacks in Objective-C, three of which don't fit:
NSNotificationCenter - can't use because it doesn't guarantee the order observers will be notified and because there's no way for observers to prevent other observers from receiving the event (like stopPropagation() would in JavaScript).
Key-Value Observing - doesn't seem like a good architectural fit since what we really have is message passing, not always "state" bound.
Delegates and Data Sources - in our case, there usually will be many listeners, not a single one which could rightly be called the delegate.
And two of which are contenders (both are sketched after this list):
Selectors - under this model, callers provide a selector and a target which are collectively invoked to handle an event.
Blocks - introduced in iOS 4, blocks allow functionality to be passed around without being bound to an object like the observer/selector pattern.
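For concreteness, here is a sketch of how the two contending registration styles might look on the emitter; the interface and method names are invented for illustration, not taken from any existing API:

@interface EventEmitter : NSObject
// Selector style: the caller provides a target and an action to invoke.
- (void)addListener:(id)target action:(SEL)action forEvent:(NSString *)event;
// Block style: the caller provides a block to invoke.
- (void)on:(NSString *)event callBlock:(void (^)(void))block;
@end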
This may seem like an esoteric opinion question, but I feel there is an objective "right" answer that I am simply too inexperienced in Objective-C to determine. If there's a better StackExchange site for this question, please help me by moving it there.
UPDATE #1 — April 2013
We chose blocks as the means of specifying callbacks for our event handlers. We're largely happy with this choice and don't plan to remove block-based listener support. It did have two notable drawbacks: memory management and design impedance.
Memory Management
Blocks are most easily used on the stack. Creating long-lived blocks by copying them onto the heap introduces interesting memory management issues.
Blocks which make calls to methods on the containing object implicitly boost self's reference count. Suppose you have a setter for the name property of your class: if you call self.name = @"foo" inside a block, the compiler treats this as [self setName:@"foo"] and retains self so that it won't be deallocated while the block is still around.
Implementing an EventEmitter means having long-lived blocks. To prevent the implicit retain, the user of the emitter needs to create a __block reference to self outside of the block, e.g.:
__block YourClass *this = self;
[emitter on:@"eventName" callBlock:^{
    [this setName:@"foo"];
    // ...
}];
The only problem with this approach is that the object referenced by this may be deallocated before the handler is invoked. So users must unregister their listeners when they are deallocated.
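Worth noting for readers using ARC: under ARC a __block object pointer is retained by a copied block, so the workaround above only avoids the retain under manual reference counting. The ARC equivalent uses a __weak reference; a sketch against the same hypothetical emitter API:

__weak YourClass *weakSelf = self;
[emitter on:@"eventName" callBlock:^{
    YourClass *strongSelf = weakSelf;   // nil if self has already been deallocated
    [strongSelf setName:@"foo"];
}];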
Design Impedance
Experienced Objective-C developers expect to interact with libraries using familiar patterns. Delegation is a tremendously familiar pattern, and so developers canonically expect to use it.
Fortunately, the delegate pattern and block-based listeners are not mutually exclusive. Although our emitter must be able to handle listeners from many places (having a single delegate won't work), we could still expose an interface which would allow developers to interact with the emitter as though their class was the delegate.
We haven't implemented this yet, but we probably will based on requests from users.
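If we did, the facade could simply wrap the block-based registration internally. A sketch, with every protocol and method name invented for illustration on top of the hypothetical interface sketched earlier:

@protocol EmitterListening <NSObject>
- (void)emitter:(EventEmitter *)emitter didEmitEvent:(NSString *)event;
@end

@interface EventEmitter (DelegateSupport)
- (void)addDelegate:(id<EmitterListening>)delegate forEvent:(NSString *)event;
@end

@implementation EventEmitter (DelegateSupport)
- (void)addDelegate:(id<EmitterListening>)delegate forEvent:(NSString *)event {
    __weak id<EmitterListening> weakDelegate = delegate;   // don't retain the delegate
    __weak EventEmitter *weakSelf = self;
    [self on:event callBlock:^{
        [weakDelegate emitter:weakSelf didEmitEvent:event];   // becomes a no-op once the delegate is gone
    }];
}
@end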
UPDATE #2 — October 2013
I'm no longer working on the project that spawned this question, having quite happily returned to my native land of JavaScript.
The smart developers who took over this project decided correctly to retire our custom block-based EventEmitter entirely.
The upcoming release has switched to ReactiveCocoa.
This gives them a higher level signaling pattern than our EventEmitter library previously afforded, and allows them to encapsulate state inside of signal handlers better than our block-based event handlers or class-level methods did.
Personally, I hate using delegates. Because of how Objective-C is structured, it really clutters up the code if I have to create a separate object or add a protocol just to be notified of one of your events, and then implement five or six methods. For this reason, I prefer blocks.
While they (blocks) do have their disadvantages (e.g. memory management can be tricky), they are easily extendable, simple to implement, and just make sense in most situations.
While Apple's design structures may use the sender-delegate method, this is only for backwards compatibility. More recent Apple APIs have been using blocks (e.g. Core Data), because they are the future of Objective-C. While they can clutter code when used overboard, they also allow for simpler 'anonymous delegates', which is not otherwise possible in Objective-C.
In the end though, it really boils down to this:
Are you willing to abandon some older, more dated platforms in exchange for using blocks vs. a delegate? One major advantage of a delegate is that it is guaranteed to work in any version of the objc-runtime, whereas blocks are a more recent addition to the language.
As far as NSNotificationCenter/KVO is concerned, they are both useful and have their purposes, but they are not intended to be used as a delegate. Neither can send a result back to the sender, and for some situations that is vital (-webView:shouldLoadRequest:, for example).
I think the right thing to do is to implement both, use it as a client, and see what feels most natural. There are advantages to both approaches, and it really depends on the context and how you expect the SDK to be used.
The primary advantage of selectors is simple memory management--as long as the client registers and unregisters correctly, it doesn't need to worry about memory leaks. With blocks, memory management can get complex, depending on what the client does inside the block. It's also easier to unit test the callback method. Blocks can certainly be written to be testable, but it's not common practice from what I've seen.
The primary advantage of blocks is flexibility--the client can easily reference local variables without making them ivars.
So I think it just depends on the use case--there is no "objective right answer" to such a general design question.
Great writeup!
Coming from writing lots of JavaScript, event-driven programming feels way cleaner than having delegates back and forth, in my personal opinion.
Regarding the memory-managing aspect of listeners, my attempt at solving this (drawing heavily from Mike Ash's MAKVONotificationCenter) swizzles both the caller's and the emitter's dealloc implementations (as seen here) in order to safely remove listeners in both directions.
I'm not entirely sure how safe this approach is, but the idea is to try it 'til it breaks.
A thing about a library is that you can only anticipate to some extent how it will be used, so you need to provide a solution that is as simple and open as possible, and familiar to the users.
For me, all this fits best with delegation. Although you are right that it can only have one listener (the delegate), this is not a limitation, as the user can write a class as the delegate that knows about all desired listeners and informs them. Of course you can provide a registering class that will call the delegate methods on all registered objects.
Blocks are as good.
What you call selectors is known as target/action, and it is simple yet powerful.
KVO seems like a suboptimal solution to me as well, as it could weaken encapsulation or lead to a wrong mental model of how to use your library's classes.
NSNotifications are nice for informing about certain events, but users should not be forced to use them, as they are quite informal, and your classes won't be able to know whether anyone is tuned in.
Some useful thoughts on API design: http://mattgemmell.com/2012/05/24/api-design/
I have many objects reference the same class of stored data. In previous programs, I've used singletons, but am trying to abandon that practice and only use them as a last resort when necessary, mainly due to the bad reputation they have (and indeed I've abused them in the past).
But I'm wondering just how much of an advantage my new technique is. I'm simply creating weak references to the same set of data so a bunch of classes point to the same memory to pull data as needed. Such as:
@property (nonatomic, assign) MyDataClass *mydata;
In a custom init of the class, I pass a reference as a method parameter, then the property assigns to this reference.
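Concretely, the pattern looks roughly like this (the Consumer class and method names are just illustrative):

@class MyDataClass;   // the shared data type mentioned above

@interface Consumer : NSObject
@property (nonatomic, assign) MyDataClass *mydata;   // non-owning reference
- (instancetype)initWithData:(MyDataClass *)data;
@end

@implementation Consumer
- (instancetype)initWithData:(MyDataClass *)data {
    if ((self = [super init])) {
        _mydata = data;   // no ownership taken; the owner must outlive this consumer
    }
    return self;
}
@end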
Is this a valid, acceptable way to do things? I'm having trouble finding much of an organizational advantage to doing this over using singletons.
The pattern you use is just fine; essentially it is what every standard C++ program does without reference counting or other advanced memory management tools. The only thing you have to ensure is that your object hierarchy strictly respects the weakness of the reference, i.e. the dependency of the object holding the reference on the object behind that reference. In other words, you always have to ensure that the object behind the weak reference outlives the object holding it, and you have to ensure this manually since you're not using reference counting.
This means more responsibility for you, the programmer, as you always have to have total control over the lifetime of your objects. It is quite easy to make a mistake with this pattern, since you can't know from the code holding the weak reference whether the original object still exists or has been deleted. You have to ensure this with your design.
For this reason I don't suggest "mixing" the two approaches, i.e. having weak references to an object that might be freed outside of your control by a retain-type property (when the value of the property changes from your object), an autorelease, or ARC.
Reference counting was introduced to take away this responsibility from the programmer and make it easier to write safe code. Your pattern is fine, it is used by millions of C++ programs but you have to be conscious of your responsibility.
As a rule, you should only use a singleton class for objects where it wouldn't make sense for more than one of them to exist. Otherwise, it's best to avoid them because they introduce coupling: every class that uses the singleton ends up tightly coupled to the singleton.
Passing references to stuff is fine. They don't have to be weak references of course, except where necessary in order to avoid retain cycles.
If you do find that you're passing the same objects around between a large number of classes, you might want to consider thinking about division of responsibility and restructuring your app.
Retain/assign and using singletons aren't mutually exclusive patterns. I wouldn't be so hesitant about the singleton; I was at first, when I started iOS.
Coming from being a Java web apps developer, singletons were bad due to the tight coupling, and especially because if you wanted to distribute your application across a load balancer (or the cloud, nowadays), then your singleton would become a bottleneck and wouldn't be easily scalable.
There were also problems with unit testing with singletons: having to reset their state during tests, or even trying to mock them out.
However, in Objective-C and iOS development, I'm not really seeing that many disadvantages to singletons. Your app isn't going to be scaled, and the SDK is already littered with singletons that will hinder your unit tests anyway.
I just read a couple of tutorials regarding KVO, but I have not yet discovered the reason of its existence. Isn't NSNotificationCenter an easier way to observe objects?
I am new to Stackoverflow, so just tell me if there is something wrong in the way I am asking this question!
Notifications and KVO serve similar functions, but with different trade-offs.
Notifications are easy to understand. KVO is... challenging... to understand (at least to understand how to use it well).
Notifications require modification to the observed code. The observed must explicitly generate every notification it offers. KVO is transparent to the observed code as long as the observed code conforms to KVC (which it should anyway).
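To make that contrast concrete (the object and key names here are illustrative): the observed class has to post its own notifications, whereas a KVO observer just registers for a KVC-compliant key path with no change to the observed class.

// Notifications: the observed class must cooperate by posting explicitly.
[[NSNotificationCenter defaultCenter] postNotificationName:@"AccountBalanceDidChange"
                                                    object:self];

// KVO: nothing is added to the observed class.
[account addObserver:self
          forKeyPath:@"balance"
             options:NSKeyValueObservingOptionNew
             context:NULL];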
Notifications have overhead even if you don't use them. Every time the observed code posts a notification, it must be checked against every observation in the system, even if no one is observing that object (even if no one is observing anything). This can be very non-trivial if there are more than a few hundred observations in the system. It can be a serious problem if there are a few thousand. KVO has zero overhead for any object that is not actually observed.
In general, I discourage KVO because of some specific implementation problems that I believe make it hard to use correctly. It's difficult to observe an object that your superclass also observes without special knowledge of your superclass. Its heavy reliance on string literals makes small typos hard to catch at compile time. In general, I find code that relies heavily on it becomes complex and hard to read, and begins to pick up spooky-action-at-a-distance bugs. NSNotification code tends to be more straightforward, and you can see what's happening. Random code doesn't just run when you didn't expect it.
All that said, KVO is an important feature, and developers need to understand it. More and more low-level objects rely on it because of its zero-overhead advantages. But for new developers, I typically recommend that they rely more on notifications rather than KVO.
There is a third way. You can keep a list of listeners, and send them messages when things change, just like delegate methods. Some people call these "multicast delegates" but "listeners" is more correct here because they don't modify the object's behavior as a delegate does. Doing it this way can be dramatically faster than NSNotification if you need a lot of observation in a system, without adding the complexity of KVO.
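A minimal sketch of that listener-list approach, with invented class and protocol names; using an NSHashTable with weak references means listeners are dropped automatically when they are deallocated:

@protocol DownloadListener <NSObject>
- (void)downloadDidFinish:(NSURL *)url;
@end

@interface Downloader : NSObject
- (void)addListener:(id<DownloadListener>)listener;
@end

@implementation Downloader {
    NSHashTable<id<DownloadListener>> *_listeners;   // weak references to listeners
}

- (instancetype)init {
    if ((self = [super init])) {
        _listeners = [NSHashTable weakObjectsHashTable];
    }
    return self;
}

- (void)addListener:(id<DownloadListener>)listener {
    [_listeners addObject:listener];
}

- (void)notifyFinished:(NSURL *)url {
    for (id<DownloadListener> listener in _listeners) {
        [listener downloadDidFinish:url];   // a plain message send, just like a delegate call
    }
}
@end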