Thread-safe lazy initialization in a getter - Objective-C

I would like to know whether both of the following solutions for lazy initialization are correct.
I have a class AppContext that is supposed to hold references to other classes that should only exist once (avoiding making every single one of these classes a singleton). Let's say one of these other classes is called ReferencedClass. That being said, I would like to lazily initialize the references with defaults, in a thread-safe way.
This has been discussed before, and I have read a lot about it, but I am still unsure. Personal preferences aside, what I would like to know is: are these two solutions a correct way to implement my desired behavior?
Solution 1: Originally I wanted to implement it like this:
// Getter with lazy initialized default value
- (ReferencedClass *)referencedClass {
    // Check if nil. If yes, wait for lock and check again after locking.
    if (_referencedClass == nil) {
        @synchronized(self) {
            if (_referencedClass == nil) {
                // Prevent _referencedClass pointing to partially initialized objects
                ReferencedClass *temp = [[ReferencedClass alloc] init];
                _referencedClass = temp;
            }
        }
    }
    return _referencedClass;
}

// Setter
- (void)setReferencedClass:(ReferencedClass *)referencedClass {
    @synchronized(self) {
        _referencedClass = referencedClass;
    }
}
Solution 2: Then I decided to go with GCD instead, so I wrote this:
// Getter with lazy initialized default value
- (ReferencedClass *)referencedClass {
    // Check if nil. If yes, wait for "lock" and check again after "locking".
    if (_referencedClass == nil) {
        dispatch_sync(syncDispatchQueue, ^{
            if (_referencedClass == nil) {
                // Prevent _referencedClass pointing to partially initialized objects
                ReferencedClass *temp = [[ReferencedClass alloc] init];
                _referencedClass = temp;
            }
        });
    }
    return _referencedClass;
}

// Setter
- (void)setReferencedClass:(ReferencedClass *)referencedClass {
    dispatch_sync(syncDispatchQueue, ^{
        _referencedClass = referencedClass;
    });
}
Of course, somewhere (for example in the init method) I have initialized syncDispatchQueue with something like:
syncDispatchQueue = dispatch_queue_create("com.stackoverflow.lazy", NULL);
Is this correct, thread-safe and deadlock-free code? Can I use double-checked locking together with the temp variable? If this double-checked locking is not safe, would my code in both cases be safe if I removed the outer checks? I guess so, right?
Thanks very much in advance!
[Side note: I am aware of dispatch_once and that some people say that (contrary to the Apple documentation) it can also be used with instance variables. For now I would like to use one of these two options though, if possible.]

As far as I understand it, your "double-checked locking" mechanism is not thread-safe, because the assignment _referencedClass = ... is not guaranteed to be atomic and may be reordered with the object's initialization. So another thread might pass the outer if (_referencedClass == nil) check and return a reference to a partially initialized object.
If you remove the outer checks, both versions look OK to me.
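To make that concrete, here is a minimal sketch of what the Solution 1 getter could look like with the outer check removed (every access takes the lock, which is slower but safe):

// Getter with lazy initialized default value, no outer check
- (ReferencedClass *)referencedClass {
    @synchronized(self) {
        if (_referencedClass == nil) {
            _referencedClass = [[ReferencedClass alloc] init];
        }
        return _referencedClass;
    }
}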
You may be interested in
What advantage(s) does dispatch_sync have over @synchronized?
which has a great answer explaining the differences in implementation and performance.

Related

Objective-C: Methods called by a swizzled method should call the original implementation

I'm fixing bugs in someone else's closed-source app.
In macOS, scrollbars can be set in System Preferences to display "always" (NSScrollerStyleLegacy), "when scrolling" (NSScrollerStyleOverlay), or "automatically based on mouse or trackpad" (NSScrollerStyleOverlay if a trackpad is connected, otherwise NSScrollerStyleLegacy). To check which style is in use, apps are supposed to do something like:
if ([NSScroller preferredScrollerStyle] == NSScrollerStyleLegacy)
    addPaddingForLegacyScrollbars();
Unfortunately, for some reason, this app is reading the value from NSUserDefaults instead (confirmed using a decompiler).
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
if ([[defaults objectForKey:@"AppleShowScrollBars"] isEqual: @"Always"])
    addPaddingForLegacyScrollbars();
This code incorrectly assumes any value of AppleShowScrollBars other than "Always" is equivalent to NSScrollerStyleOverlay. This will be wrong if the default is set to "Automatic" and no Trackpad is connected.
To fix this, I used the ZKSwizzle library to swizzle the NSUserDefaults objectForKey: method:
- (id)objectForKey:(NSString *)defaultName {
    if ([defaultName isEqual: @"AppleShowScrollBars"]) {
        if ([NSScroller preferredScrollerStyle] == NSScrollerStyleLegacy) {
            return @"Always";
        } else {
            return @"WhenScrolling";
        }
    }
    return ZKOrig(id, defaultName);
}
Unfortunately, this led to a stack overflow, because [NSScroller preferredScrollerStyle] will itself initially call [NSUserDefaults objectForKey:@"AppleShowScrollBars"] to check the user's preference. After some searching, I came across this answer on how to obtain the class name of a caller, and wrote:
- (id)objectForKey:(NSString *)defaultName {
    if ([defaultName isEqual: @"AppleShowScrollBars"]) {
        NSString *caller = [[[NSThread callStackSymbols] objectAtIndex:1] substringWithRange:NSMakeRange(4, 6)];
        if (![caller isEqualToString:@"AppKit"]) {
            if ([NSScroller preferredScrollerStyle] == NSScrollerStyleLegacy) {
                return @"Always";
            } else {
                return @"WhenScrolling";
            }
        }
    }
    return ZKOrig(id, defaultName);
}
This works perfectly! However, obtaining the caller uses the backtrace_symbols API intended for debugging, and comments on the aforementioned answer suggest this is a very bad idea. And, in general, returning different values depending on the caller feels yucky.
Obviously, if this was my own code, I would rewrite it to use preferredScrollerStyle instead of NSUserDefaults in the first place, but it's not, so I can only make changes at method boundaries.
What I fundamentally want is for this method to be swizzled only when it's called above me in the stack. Any calls further down the stack should use the original implementation.
Is there a way to do this, or is my current solution reasonable?
This approach is probably OK (within the context of "I've already decided to swizzle"), but it does feel a bit fragile, as you note; callStackSymbols can be very slow, and what information is available depends on whether debug symbols are present (which probably won't ever break this particular use case, but if it does, the bug will be very confusing).
I think you can make this more robust and much faster by short-circuiting recursion with a static variable.
- (id)objectForKey:(NSString *)defaultName {
    static BOOL isRunning = NO;
    if (!isRunning && [defaultName isEqual: @"AppleShowScrollBars"]) {
        isRunning = YES;
        NSScrollerStyle scrollerStyle = [NSScroller preferredScrollerStyle];
        isRunning = NO;
        if (scrollerStyle == NSScrollerStyleLegacy) {
            return @"Always";
        } else {
            return @"WhenScrolling";
        }
    }
    return ZKOrig(id, defaultName);
}
Static variables within a function retain their value between calls, so you can use this to detect that recursion is happening. (This is not thread-safe, but that shouldn't be a problem in this use case. Also note that all instances of this class share the same static variable. That shouldn't matter here since you're swizzling a specific object.)
If this function is reentered, then it'll just skip down to the original implementation.
Rob's answer is good, but if I've understood your requirements correctly, there may be an alternative solution that simplifies things a bit. You can avoid re-entrancy (and avoid having NSUserDefaults be swizzled in all contexts) by swizzling the app method, using that as an entry point to know when the app is about to read from NSUserDefaults, and temporarily swizzling NSUserDefaults for the duration of that call:
// This can be an ivar, static variable, etc.
// You can initialize this with `dispatch_once` if the method reads the scroller
// style only once (e.g. on initialization), or leave this mutable if you want to check
// every time the app method is called.
static NSScrollerStyle effectiveScrollerStyle = NSScrollerStyleLegacy;

// Replace this dummy method with whatever the actual interface is for the app method
// in question.
- (void)whateverTheInterfaceIsForTheAppMethod:(id)whatever {
    // Call this _prior_ to swizzling `NSUserDefaults`.
    effectiveScrollerStyle = [NSScroller preferredScrollerStyle];
    /* swizzle NSUserDefaults with the implementation below */
    ZKOrig(void, whatever);
    /* restore NSUserDefaults */
}

// --------------------------------------------------- //

- (id)objectForKey:(NSString *)defaultName {
    if ([defaultName isEqual:@"AppleShowScrollBars"]) {
        if (effectiveScrollerStyle == NSScrollerStyleLegacy) {
            return @"Always";
        } else {
            return @"WhenScrolling";
        }
    }
    return ZKOrig(id, defaultName);
}
I'm not familiar with ZKSwizzle so I'm not sure of the exact syntax you use to swizzle NSUserDefaults, but hopefully the concept is clear, and does what you want.
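Since the exact ZKSwizzle calls are unclear, here is a minimal sketch of how the temporary swap could be done with the Objective-C runtime directly; the category name ScrollBarFix, the selector swizzled_objectForKey:, and the toggle function are made up for illustration, and effectiveScrollerStyle is the static variable from the snippet above:

#import <Cocoa/Cocoa.h>
#import <objc/runtime.h>

@interface NSUserDefaults (ScrollBarFix)
- (id)swizzled_objectForKey:(NSString *)defaultName;
@end

@implementation NSUserDefaults (ScrollBarFix)
- (id)swizzled_objectForKey:(NSString *)defaultName {
    if ([defaultName isEqual:@"AppleShowScrollBars"]) {
        // effectiveScrollerStyle is the static NSScrollerStyle from the snippet above.
        return (effectiveScrollerStyle == NSScrollerStyleLegacy) ? @"Always" : @"WhenScrolling";
    }
    // Implementations are exchanged, so this call reaches the original objectForKey:.
    return [self swizzled_objectForKey:defaultName];
}
@end

// Exchanges the two implementations; calling it a second time restores the original.
static void ToggleObjectForKeySwizzle(void) {
    Method original    = class_getInstanceMethod([NSUserDefaults class], @selector(objectForKey:));
    Method replacement = class_getInstanceMethod([NSUserDefaults class], @selector(swizzled_objectForKey:));
    method_exchangeImplementations(original, replacement);
}

In the swizzled app method, you would then call ToggleObjectForKeySwizzle() in place of the "swizzle"/"restore" comments, around the ZKOrig call.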

Can a static variable be used as the @synchronized parameter?

We want to guarantee thread safety for a static variable.
We used another static variable as the object in the @synchronized directive, like this:
static NSString *_saveInProgressLock = @"SaveInProgressLock";
static BOOL _saveInProgress;

+ (BOOL)saveInProgress {
    @synchronized(_saveInProgressLock) {
        return _saveInProgress;
    }
}

+ (void)setSaveInProgress:(BOOL)save {
    @synchronized(_saveInProgressLock) {
        _saveInProgress = save;
    }
}
We are experiencing issues in the app currently on the store, which may be reproduced by preventing the _saveInProgress variable from being set to NO.
Do you see any problem with the above code?
How does it differ from this?
static BOOL _saveInProgress;

+ (BOOL)saveInProgress {
    @synchronized([MyClass class]) {
        return _saveInProgress;
    }
}

+ (void)setSaveInProgress:(BOOL)save {
    @synchronized([MyClass class]) {
        _saveInProgress = save;
    }
}
tl;dr: This is perfectly safe as long as the string literal is unique. If it is not unique there may be (benign) problems, but usually only in Release mode. There may be an easier way to implement this, though.
@synchronized blocks are implemented using the runtime functions objc_sync_enter and objc_sync_exit (source). These functions are implemented using a global (but objc-internal) side table of locks that is keyed by pointer values. On the C-API level, you could also lock on (void *)42, or in fact any pointer value. It doesn't even matter whether the object is alive, because the pointer is never dereferenced. However, the objc compiler refuses to compile a @synchronized(obj) expression if obj does not statically typecheck to an id type (of which NSString * is a subtype, so it's okay), and maybe it retains the object (I'm not sure about that), so you should only use it with objects.
There are two critical points to consider, though:
1. If the obj on which you synchronize is the NULL pointer (nil in Objective-C), then objc_sync_enter and objc_sync_exit are no-ops, and this leads to the undesirable situation where the block is performed with absolutely no locking.
2. If you use the same string value for different @synchronized blocks, the compiler may be smart enough to map them to the same pointer address. Maybe the compiler does not do this now, but it is a perfectly valid optimization that Apple may introduce in the future. If this happens, two different @synchronized blocks may accidentally use the same lock where the programmer wanted to use different locks, so you should make sure that you use unique strings. By the way, you can also use [NSObject new] as a lock object.
Synchronizing on a class object ([MyClass class]) is perfectly safe and okay too.
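For example, a minimal sketch of that dedicated-lock-object variant (reusing the names from the question) might look like this:

static NSObject *_saveInProgressLock;   // dedicated lock object, used for nothing else
static BOOL _saveInProgress;

+ (void)initialize {
    // +initialize is run by the runtime before the class receives its first message,
    // so the lock object is guaranteed to be non-nil before any @synchronized block uses it.
    if (self == [MyClass class]) {
        _saveInProgressLock = [NSObject new];
    }
}

+ (BOOL)saveInProgress {
    @synchronized(_saveInProgressLock) {
        return _saveInProgress;
    }
}

+ (void)setSaveInProgress:(BOOL)save {
    @synchronized(_saveInProgressLock) {
        _saveInProgress = save;
    }
}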
Now for the easier way. If you just have a single BOOL variable that you want to be atomic, you may use lock-free programming:
static BOOL _saveInProgress;

+ (BOOL)saveInProgress {
    __sync_synchronize();
    return _saveInProgress;
}

+ (void)setSaveInProgress:(BOOL)save {
    _saveInProgress = save;
    __sync_synchronize();
}
This has much better performance and is just as thread-safe. __sync_synchronize() is a memory barrier.
Note, however, that the safety of both solutions depends on how you use them. If you have a save method somewhere that looks like this:
+ (void)save {                         // line 21
    if (![self saveInProgress]) {      // line 22
        [self setSaveInProgress:YES];  // line 23
        // ... do stuff ...
        [self setSaveInProgress:NO];   // line 40
    }
}
that +save method is not thread-safe at all, because there is a race condition between lines 22 and 23. (I won't go into details here; just ask a new question if you need more information.)
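For illustration (this is not part of the original answer), one way to close that race is to combine the check and the set into a single operation under the same lock, e.g. a hypothetical tryStartSave method:

// Atomically checks the flag and claims it if it was clear.
// Returns YES if the caller now owns the save, NO if a save was already in progress.
+ (BOOL)tryStartSave {
    @synchronized([MyClass class]) {
        if (_saveInProgress) {
            return NO;
        }
        _saveInProgress = YES;
        return YES;
    }
}

+ (void)save {
    if ([self tryStartSave]) {
        // ... do stuff ...
        [self setSaveInProgress:NO];
    }
}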

Is the Singleton Class NetworkManager in Apple's Sample MVCNetworking correct?

Here is the link to the sample code http://developer.apple.com/library/ios/#samplecode/MVCNetworking/Introduction/Intro.html
Below is the code snippet from the file NetworkManager.m
+ (NetworkManager *)sharedManager
    // See comment in header.
{
    static NetworkManager * sNetworkManager;

    // This can be called on any thread, so we synchronise. We only do this in
    // the sNetworkManager case because, once sNetworkManager goes non-nil, it can
    // never go nil again.
    if (sNetworkManager == nil) {
        @synchronized (self) {
            sNetworkManager = [[NetworkManager alloc] init];
            assert(sNetworkManager != nil);
        }
    }
    return sNetworkManager;
}
Obviously there are thread-safety issues here. Two NetworkManager instances may be created when there is more than one thread. So Apple made a mistake, right?
Yes, you are right. It will have problems in a concurrent environment. A better way is to use a double check before the alloc:
+ (NetworkManager *)sharedManager
{
    static NetworkManager * sNetworkManager;
    if (sNetworkManager == nil) {
        @synchronized (self) {
            if (sNetworkManager == nil) {
                sNetworkManager = [[NetworkManager alloc] init];
                assert(sNetworkManager != nil);
            }
        }
    }
    return sNetworkManager;
}
And there are lots of ways to write a singleton in Objective-C; check this post: What should my Objective-C singleton look like?
Update
BobCromwell is right. The double-checked lock is not recommended by Apple; from Apple's Threading Programming Guide:
A double-checked lock is an attempt to reduce the overhead of taking a lock by testing the locking criteria prior to taking the lock. Because double-checked locks are potentially unsafe, the system does not provide explicit support for them and their use is discouraged.
Yes, it's wrong. Start with sNetworkManager as nil, and consider two threads T1 and T2.
One possible, if unlikely, scenario is:
T1: Determines (sNetworkManager == nil) is true
T2: Determines (sNetworkManager == nil) is true
T1: Takes the @synchronized lock
    Creates a NetworkManager
    Sets sNetworkManager
    Releases the lock
T2: Takes the @synchronized lock
    Creates a NetworkManager
    Sets sNetworkManager, LEAKING the first one
    Releases the lock
This question has some safer ways of doing it.
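One of the commonly recommended safer ways (a sketch, not taken from the linked question) is to create the shared instance with dispatch_once, which is documented as safe for this kind of one-time, class-level initialization:

+ (NetworkManager *)sharedManager
{
    static NetworkManager *sNetworkManager;
    static dispatch_once_t sOnceToken;
    dispatch_once(&sOnceToken, ^{
        sNetworkManager = [[NetworkManager alloc] init];
    });
    return sNetworkManager;
}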
There is no mistake in this code. Only one sNetworkManager is created, for the simple reason that the word "static" is used. The static keyword is used here to define the variable as global but only visible to that function. The variable is set on the first call of + (NetworkManager *)sharedManager; after that it is no longer nil and is not initialized again.

How to handle memory management for Singleton pattern in Objective-C?

My code is:
static Class1 *onlyInstance;

+ (Class1 *)getInstance {
    @synchronized([Class1 class]) {
        if (onlyInstance == nil)
            onlyInstance = [[Class1 alloc] init];
        return onlyInstance;
    }
    return nil;
}
How do I manage memory with the singleton pattern in Objective-C?
It's a singleton; you don't really need to release it at any given time, since it is supposed to be around whenever you need it.
If you do need to release it, you can do that from within the class itself.
Simply release onlyInstance and set it to nil, so that the method that created it in the first place will recreate it the next time it is called.
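For illustration, under manual reference counting that could look something like the following; the method name releaseInstance is made up, and the @synchronized block matches the one used in getInstance:

// Hypothetical teardown method: after calling this, the next call to
// +getInstance will recreate the instance.
+ (void)releaseInstance {
    @synchronized([Class1 class]) {
        [onlyInstance release];   // MRC only; under ARC just assign nil
        onlyInstance = nil;
    }
}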

Should I include the managed object context as a parameter to a method?

Problem
I'm including the managed object context as a parameter of a method when I work with Core Data.
Although this makes the code easier to test, it's messy.
Questions
Is this good or bad practice?
Is there a neater, easier way of doing this that keeps methods testable?
Background
The example below is a background operation that has its own context.
Any advice from more experienced coders would be much appreciated!
Code
@interface JGTrainingGroupCleanupOperation : JGCoreDataOperation {
    NSManagedObjectContext *imoc;
}
...

@implementation JGTrainingGroupCleanupOperation

- (void)main {
    [self startOperation]; // Setting up the local context etc.
    [self cleanupTrainingGroupsInMOC:imoc];
    [self finishOperation];
}

- (void)cleanupTrainingGroupsInMOC:(NSManagedObjectContext *)moc {
    NSSet *trainedGroups = [self fetchAllTrainedGroupsInMOC:moc];
    [self deleteDescendantsOfGroups:trainedGroups fromMOC:moc];
    [self removeStubAncestorsOfGroups:trainedGroups fromMOC:moc];
}

- (NSSet *)fetchAllTrainedGroupsInMOC:(NSManagedObjectContext *)moc_ {
    return [moc_ fetchObjectsForEntityName:kTrainingGroup withPredicate:[NSPredicate predicateWithFormat:@"projectEditedAtTopLevel == nil"]];
}

- (void)deleteDescendantsOfGroups:(NSSet *)trainedGroups fromMOC:(NSManagedObjectContext *)moc_ {
    // More code here
}

- (void)deleteDescendantsOfGroup:(JGTrainingGroup *)trainedGroup fromMOC:(NSManagedObjectContext *)moc_ {
    // More code here
}
In my (not so humble) opinion I'd say it's mostly a matter of style. You can do it this way or you can @synthesize the moc and call [self moc] or self.moc.
Me? I'd go the accessor route personally, mostly because class members shouldn't have to be told where to find an object dereferenced by an iVar anyway. If you're accessing something that's an iVar within the same class, I'd use the iVar directly or an accessor.
I believe the difference in performance would be negligible, so I wouldn't really bother much on that front (even though you didn't ask).
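For illustration, here is a minimal sketch of the accessor route, assuming a property named imoc (matching the existing ivar) that startOperation sets up; the parameterless method names are made up to show the shape:

@interface JGTrainingGroupCleanupOperation : JGCoreDataOperation {
    NSManagedObjectContext *imoc;
}
@property (nonatomic, retain) NSManagedObjectContext *imoc;
@end

@implementation JGTrainingGroupCleanupOperation

@synthesize imoc;

- (void)main {
    [self startOperation]; // Sets up self.imoc (the local context)
    [self cleanupTrainingGroups];
    [self finishOperation];
}

- (void)cleanupTrainingGroups {
    NSSet *trainedGroups = [self fetchAllTrainedGroups];
    // ... further helpers use self.imoc directly instead of taking a moc parameter ...
}

- (NSSet *)fetchAllTrainedGroups {
    return [self.imoc fetchObjectsForEntityName:kTrainingGroup
                                  withPredicate:[NSPredicate predicateWithFormat:@"projectEditedAtTopLevel == nil"]];
}

@end

For testing, you can still inject a context from outside by setting the property before running the operation.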