Terminating a secondary thread from the main thread (Cocoa) - Objective-C

I'm working on a small app written in Objective-C with the help of the Cocoa framework, and I'm having a multithreading issue.
I would really appreciate some guidance on how to terminate a secondary (worker) thread from the main thread.
- (IBAction)startWorking:(id)sender {
    [NSThread detachNewThreadSelector:@selector(threadMain:) toTarget:self withObject:nil];
}

- (void)threadMain:(id)argument
{
    // do a lot of boring, time-consuming I/O here...
}

- (IBAction)stop:(id)sender {
    // what now?
}
I've found something in Apple's docs, but what is missing from that example is the part where the run loop input source changes the exitNow value.
Also, I won't be using many threads in my app, so I would prefer a simple solution (with less overhead) rather than a more complex one that can manage many threads easily but generates more overhead (e.g. using locks maybe(?) instead of run loops).
Thanks in advance

I think the easiest way is to use NSThread's -cancel method. You'll need a reference to the thread you've created as well. Your example code would look something like this, if you can structure the worker thread as a loop:
- (IBAction)startWorking:(id)sender {
    myThread = [[NSThread alloc] initWithTarget:self selector:@selector(threadMain:) object:nil];
    [myThread start];
}

- (void)threadMain:(id)argument
{
    while (1)
    {
        // do I/O here
        if ([[NSThread currentThread] isCancelled])
            break;
    }
}

- (IBAction)stop:(id)sender {
    [myThread cancel];
    [myThread release];
    myThread = nil;
}
Of course, this will only cancel the thread between loop iterations. So, if you're doing some long blocking computation, you'll have to find a way to break it up into pieces so you can check isCancelled periodically.
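For example, a long read could be split into fixed-size chunks so cancellation is noticed between reads (a minimal sketch; the file path and chunk size are illustrative, not from the original question):

- (void)threadMain:(id)argument
{
    // illustrative: stream a large file in chunks, checking for cancellation between reads
    NSInputStream *stream = [NSInputStream inputStreamWithFileAtPath:@"/tmp/big-file.dat"];
    [stream open];
    uint8_t buffer[64 * 1024];
    while (![[NSThread currentThread] isCancelled]) {
        NSInteger bytesRead = [stream read:buffer maxLength:sizeof(buffer)];
        if (bytesRead <= 0)
            break; // finished, or a read error
        // process this chunk of data...
    }
    [stream close];
}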

Also take a look at the NSOperation and NSOperationQueue classes. It's another set of threading classes that make developing a worker thread model very easy to do.
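A rough equivalent using a queue might look like this (a sketch assuming ARC, with hypothetical workQueue and currentOperation ivars; not the poster's code):

- (IBAction)startWorking:(id)sender {
    currentOperation = [NSBlockOperation new];
    __weak NSBlockOperation *weakOp = currentOperation;
    [currentOperation addExecutionBlock:^{
        while (!weakOp.isCancelled) {
            // do a chunk of I/O here...
        }
    }];
    [workQueue addOperation:currentOperation];
}

- (IBAction)stop:(id)sender {
    [currentOperation cancel];
}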

Related

Find matching NSThread for NSRunLoop (needed to fix Socket Rocket)

I'm working on a fix for a race condition in Socket Rocket.
This bug was reported a long time ago and is still not fixed.
More than a year ago I wrote a fix which breaks the API (only the shared thread can be used), and this code has been running successfully in production (no crashes at all even with lots of users).
Now I want to tweak my fix in such a way that it will not break the API of SRWebSocket.
For that, I need to find the matching NSThread for a given NSRunLoop. This is a one-to-one relation, but I'm having trouble finding an API which could help me.
PS. The fix is quite simple. Every operation made on an NSRunLoop must be done from its respective thread. There is no NSRunLoop or CFRunLoop API that can be safely used from another thread. So I've added such an API to SRRunLoopThread:
- (void)scheduleBlock:(void (^)())block
{
    [self performSelector:@selector(_runBlock:)
                 onThread:self
               withObject:[block copy]
            waitUntilDone:NO];
}

- (void)_runBlock:(void (^)())block
{
    block();
}
and use it in every place where something is done on this NSRunLoop.
This fix shows why I need to find the matching NSThread.
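For example, every place in SRWebSocket that currently touches the run loop would funnel through that helper instead (a sketch; the thread and stream variable names are illustrative):

[_runLoopThread scheduleBlock:^{
    // now running on the run loop's own thread, so NSRunLoop API is safe to use
    [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}];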
Note that the documentation states that performSelector:onThread:withObject:waitUntilDone: is thread-safe:
You can use this method to deliver messages to other threads in your application.
I have to stress again that the documentation warns clearly that the NSRunLoop API is NOT thread-safe:
Warning
The NSRunLoop class is generally not considered to be thread-safe and
its methods should only be called within the context of the current
thread. You should never try to call the methods of a NSRunLoop
object running in a different thread, as doing so might cause
unexpected results.
Since CFRunLoop is just the same thing, toll-free bridged to/from NSRunLoop, it has exactly the same weaknesses.
So if the documentation doesn't say that an API is thread-safe, then it is not thread-safe and I can't use it in that context (so the proposed answer from @DisableR is obviously invalid).
You can perform blocks on a run loop without needing its thread.
The block will be performed asynchronously on the thread that is associated with the run loop.
CFRunLoopPerformBlock([myNSRunLoop getCFRunLoop], kCFRunLoopCommonModes, block);
CFRunLoopWakeUp([myNSRunLoop getCFRunLoop]);
Here is a discussion about implementing the -performSelectorOnMainThread: method on Mac OS X Tiger, where it was not available; the problem is quite similar to yours:
http://www.cocoabuilder.com/archive/cocoa/112261-cfrunlooptimer-firing-delay.html
Verifying the thread safety:
import Foundation

var runLoopFromThread: CFRunLoop!
let lock = NSLock()
lock.lock()
let thread = Thread {
    autoreleasepool {
        runLoopFromThread = CFRunLoopGetCurrent()
        lock.unlock()
        while true {
            autoreleasepool {
                CFRunLoopRun()
            }
        }
    }
}
thread.start()
lock.lock()
let runLoop = runLoopFromThread
lock.unlock()

let racingLock = NSLock()
for _ in 0..<10000 {
    DispatchQueue.global().async {
        let acquired = racingLock.try()
        CFRunLoopPerformBlock(runLoop, CFRunLoopMode.commonModes.rawValue) {
            var i = 0
            i += 1
        }
        CFRunLoopWakeUp(runLoop)
        if acquired {
            racingLock.unlock()
        } else {
            print("Potential situation in which issues with thread safety may arise. No crash / thread sanitizer warnings means thread safety is present.")
        }
    }
}
dispatchMain()

Barrier operations in NSOperationQueue

How can we implement dispatch_barrier_async's equivalent behavior using NSOperationQueue or any user-defined data-structure based on NSOperationQueue?
The requirement is, whenever a barrier operation is submitted it should wait until all non-barrier operations submitted earlier finish their execution and blocks other operations submitted after that.
Non-barrier operations should be able to perform concurrently.
Barrier operations should execute serially.
NB: Not using GCD, as it doesn't give much control over the operations (or at least makes it difficult), like cancelling a single operation, etc.
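For reference, the GCD behaviour being mirrored looks like this (a minimal sketch; the queue label is illustrative):

dispatch_queue_t q = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(q, ^{ /* non-barrier work, may run concurrently */ });
dispatch_barrier_async(q, ^{ /* runs alone, after all earlier blocks and before any later ones */ });
dispatch_async(q, ^{ /* does not start until the barrier block has finished */ });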
This is more or less what jeffamaphone was saying, but I put up a gist that should, in rough outline, do what you ask.
I create an NSMutableArray of NSOperationQueues, which serves as a "queue of queues". Every time you add a BarrierOperation object, you tack a fresh suspended op queue on the end. That becomes the addingQueue, to which you add subsequent operations.
- (void)addOperation:(NSOperation *)op {
    @synchronized (self) {
        if ([op isKindOfClass:[BarrierOperation class]]) {
            [self addBarrierOperation:(id)op];
        } else {
            [[self addingQueue] addOperation:op];
        }
    }
}

// call only from @synchronized block in -addOperation:
- (void)addBarrierOperation:(BarrierOperation *)barrierOp {
    [[self addingQueue] setSuspended:YES];

    for (NSOperation *op in [[self addingQueue] operations]) {
        [barrierOp addDependency:op];
    }
    [[self addingQueue] addOperation:barrierOp];

    // if you are free to set barrierOp.completionBlock, you could skip popCallback and do that
    __block typeof(self) weakSelf = self;
    NSOperation *popCallback = [NSBlockOperation blockOperationWithBlock:^{
        [weakSelf popQueue];
    }];
    [popCallback addDependency:barrierOp];
    [[self addingQueue] addOperation:popCallback];

    [[self addingQueue] setSuspended:NO];

    NSOperationQueue *opQueue = [[NSOperationQueue alloc] init];
    [opQueue setSuspended:YES];
    [_queueOfQueues addObject:opQueue]; // fresh empty queue to add to
}
When one NSOperationQueue finishes, it gets popped and the next one starts running.
- (void)popQueue
{
    @synchronized (self) {
        NSAssert([_queueOfQueues count], @"should always be one to pop");
        [_queueOfQueues removeObjectAtIndex:0];
        if ([_queueOfQueues count]) {
            // first queue is always running, all others suspended
            [(NSOperationQueue *)_queueOfQueues[0] setSuspended:NO];
        }
    }
}
I might have missed something crucial. The devil's in the details.
This smells a bit like a homework assignment to me. If so, tell me what grade I get. :)
Addendum: Via abhilash1912's comment, a different but similar approach. That code is tested, so it already wins. But it is a bit stale (2 years or so as of today; some deprecated method usage). Moreover, I question whether inheriting from NSOperationQueue is the best path, though it has the virtue of retaining familiarity. Regardless, if you've read this far, it's probably worth looking over.
If you create or find the world's greatest BarrierQueue class, please let us know in the comments or otherwise, so it can be linked up.
Create an NSOperation that is your barrier, then use:
- (void)addDependency:(NSOperation *)operation
to make that barrier operation dependent on all the ones you want to come before it.
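A minimal sketch of that idea (assuming queue is the shared NSOperationQueue):

NSOperation *barrier = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"barrier work");
}];
// make the barrier wait for everything already submitted
for (NSOperation *op in queue.operations) {
    [barrier addDependency:op];
}
[queue addOperation:barrier];
// anything added afterwards that must wait should add the barrier as its own dependency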
I don't think it's possible to create an NSOperation object that gives you the same sort of functionality; barriers have more to do with the way the queue operates.
The main difference between using a barrier and the dependency mechanism of NSOperations is that, in the case of a barrier, the thread queue waits until all running concurrent operations have completed and then runs your barrier block, while making sure that any new blocks submitted and any blocks waiting do not run until the critical block has passed.
With an NSOperationQueue, it's impossible to set up the queue in such a way that it'll enforce a proper barrier: all NSOperations added to the queue before your critical NSOperation must be explicitly registered as a dependency with the critical job, and once the critical job has started, you must explicitly guard the NSOperationQueue to make sure no other clients push jobs onto it before the critical job has finished; you guard the queue by adding the critical job as a dependency for the subsequent operations.
(In the case where you know there's only one critical job at a time, this sounds sorta easy, but there will probably be n critical jobs waiting at any one time, which means keeping track of the order jobs are submitted, managing the relative dependency of critical jobs relative to their dependent jobs -- some critical jobs can wait for others, some must be executed in a particular order relative to others... yikes.)
It might be possible to get this level of functionality by setting up an NSOperationQueue with a concurrent job max of one, but that sorta defeats the purpose of doing this, I think. You might also be able to make this work by wrapping an NSOperationQueue in a facade object that protects NSOperations that are submitted "critically."
Just another way... don't hurt me.
Todo: save the original completion block. Also, self.maxConcurrentOperationCount = 1 sets the queue to serial at the time the barrier is added, but it should really happen just before the barrier executes.
#import "NSOperationQueue+BarrierOperation.h"

@implementation NSOperationQueue (BarrierOperation)

- (void)addOperationAsBarrier:(NSOperation *)op
{
    //TODO: needs to save origin completion
    // if (op.completionBlock)
    // {
    //     originBlock = op.completionBlock;
    // }

    NSOperationQueue *qInternal = [NSOperationQueue new];

    NSInteger oldMaxConcurrentOperationCount = self.maxConcurrentOperationCount;

    op.completionBlock = ^{
        self.maxConcurrentOperationCount = oldMaxConcurrentOperationCount;
        NSLog(@"addOperationAsBarrier maxConcurrentOperationCount restored");
    };

    [self addOperationWithBlock:^{
        self.maxConcurrentOperationCount = 1;
        NSLog(@"addOperationAsBarrier maxConcurrentOperationCount = 1");
    }];

    [qInternal addOperationWithBlock:^{
        NSLog(@"waitUntilAllOperationsAreFinished...");
        [self waitUntilAllOperationsAreFinished];
    }];

    NSLog(@"added OperationAsBarrier");
    [self addOperation:op];
}

@end
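How it might be called (illustrative usage of the category above):

NSOperationQueue *queue = [NSOperationQueue new];
[queue addOperationWithBlock:^{ /* concurrent work */ }];
[queue addOperationAsBarrier:[NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"runs after everything added before it");
}]];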

Threaded Obj-C code with ARC enabled -- why does it work this way?

I need an extra thread in the background to listen for requests from a socket.
The code is put into a singleton class; it will be called in main.m before NSApplicationMain() like this:
[[SKSocketThread getSingleton] runThread];
And runThread is defined as follows:
- (void)runThread {
    [NSThread detachNewThreadSelector:@selector(socketThreadMainLoop:)
                             toTarget:self
                           withObject:[self quitLock]];
}

- (void)socketThreadMainLoop:(id)param {
    NSLock *lock = (NSLock *)param;
    while (![lock tryLock]) {
        NSLog(@"Yay! We are in socketThreadMainLoop now!");
        [NSThread sleepForTimeInterval:2];
    }
    NSLog(@"Terminating the socket thread...");
    [lock unlock]; // is it really necessary?
}
It compiled successfully with no warnings, but throws an error at runtime:
autoreleased with no pool in place.
I did some googling and tried to wrap the code in runThread and socketThreadMainLoop with @autoreleasepool, but the error was still there. Finally I wrapped the call to runThread with it in main.m, and that worked!
I don't know why it only works this way...
You should wrap your code in an @autoreleasepool block.
...
- (void)socketThreadMainLoop:(id)param {
    @autoreleasepool
    {
        NSLock *lock = (NSLock *)param;
        while (![lock tryLock]) {
            NSLog(@"Yay! We are in socketThreadMainLoop now!");
            [NSThread sleepForTimeInterval:2];
        }
        NSLog(@"Terminating the socket thread...");
        [lock unlock]; // is it really necessary?
    }
}
Read more:
NSAutoreleasePool Class Reference
Set a breakpoint on objc_autoreleaseNoPool and post the backtrace. You need an @autoreleasepool { ... } in all threads that don't use run loops, including the main thread (in your main.m, if you aren't calling into NSApplicationMain()).
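In the poster's setup, the wrapping that made things work in main.m would look roughly like this (a sketch of what was described, not verified code):

#import <Cocoa/Cocoa.h>

int main(int argc, char *argv[])
{
    @autoreleasepool {
        [[SKSocketThread getSingleton] runThread];
    }
    return NSApplicationMain(argc, (const char **)argv);
}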
Some additional feedback: that you named the method getSingleton indicates that you are new to iOS development (don't name methods get-anything). That you are using sleep in a while loop indicates that you are a bit new to the whole networking thing, too.
Also, spinning up a thread prior to the call into NSApplicationMain() is totally the wrong thing to do; you should be doing the networking goop as a normal part of application startup... see below.
You really, really, really don't want to do networking using a hand-rolled while() loop with sleep. Polling is an awful pattern on mobile devices; it is battery hungry, and that sleep is just going to make things unresponsive.
Use a proper run loop and/or dispatch sources and/or CFStream APIs and/or NSFileHandles.
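For instance, a GCD read source removes both the extra thread and the polling (a sketch; socketFD stands for an already-connected socket descriptor, which is an assumption here):

dispatch_queue_t workQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_source_t readSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, socketFD, 0, workQueue);
dispatch_source_set_event_handler(readSource, ^{
    char buffer[4096];
    ssize_t n = read(socketFD, buffer, sizeof(buffer));
    if (n > 0) {
        // handle the incoming request bytes here...
    }
});
dispatch_resume(readSource);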

Locked up waiting for @synchronized

I have this (rare) odd case where my Objective-C iOS program is locking up. When I break into the debugger, there are two threads and both of them are stuck at a @synchronized().
Unless I am completely misunderstanding @synchronized, I didn't think that was possible; avoiding exactly that is the whole point of the directive.
I have a main thread and a worker thread that both need access to a sqlite database, so I wrap the chunks of code that access the db in @synchronized(myDatabase) blocks. Not much else happens in these blocks except the db access.
I'm also using the FMDatabase framework to access sqlite; I don't know if that matters.
The myDatabase is a global variable that contains the FMDatabase object. It is created once at the start of the program.
I know I'm late to the party with this, but I've found a strange combination of circumstances that @synchronized handles poorly and is probably responsible for your problem. I don't have a solution, other than changing the code to eliminate the cause once you know what it is.
I will be using this code below to demonstrate.
- (int)getNumberEight {
    @synchronized(_lockObject) {
        // Point A
        return 8;
    }
}

- (void)printEight {
    @synchronized(_lockObject) {
        // Point B
        NSLog(@"%d", [self getNumberEight]);
    }
}

- (void)printSomethingElse {
    @synchronized(_lockObject) {
        // Point C
        NSLog(@"Something Else.");
    }
}
Generally, @synchronized is a recursively-safe lock. As such, calling [self printEight] is OK and will not cause deadlocks. What I've found is an exception to that rule. The following series of events will cause deadlock and is extremely difficult to track down.
Thread 1 enters -printEight and acquires the lock.
Thread 2 enters -printSomethingElse and attempts to acquire the lock. The lock is held by Thread 1, so it is enqueued to wait until the lock is available and blocks.
Thread 1 enters -getNumberEight and attempts to acquire the lock. The lock is held already and someone else is in the queue to get it next, so Thread 1 blocks. Deadlock.
It appears that this behavior is an unintended consequence of the desire to bound starvation when using @synchronized. The lock is only recursively safe when no other thread is waiting for it.
The next time you hit deadlock in your code, examine the call stacks on each thread to see if either of the deadlocked threads already holds the lock. In the sample code above, by adding long sleeps at Point A, B, and C, the deadlock can be recreated with almost 100% consistency.
EDIT:
I'm no longer able to demonstrate the previous issue, but there is a related situation that still causes issues. It has to do with the dynamic behavior of dispatch_sync.
In this code, there are two attempts to acquire the lock recursively. The first calls from the main queue into a background queue. The second calls from the background queue into the main queue.
What causes the difference in behavior is the distinction between dispatch queues and threads. The first example calls onto a different queue, but never changes threads, so the recursive mutex is acquired. The second changes threads when it changes queues, so the recursive mutex cannot be acquired.
To emphasize, this behavior is by design, but it may be unexpected to those who do not understand GCD as well as they could.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
NSObject *lock = [[NSObject alloc] init];
NSTimeInterval delay = 5;

NSLog(@"Example 1:");
dispatch_async(queue, ^{
    NSLog(@"Starting %d seconds of runloop for example 1.", (int)delay);
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:delay]];
    NSLog(@"Finished executing runloop for example 1.");
});
NSLog(@"Acquiring initial Lock.");
@synchronized(lock) {
    NSLog(@"Acquiring recursive Lock.");
    dispatch_sync(queue, ^{
        NSLog(@"Deadlock?");
        @synchronized(lock) {
            NSLog(@"No Deadlock!");
        }
    });
}

NSLog(@"\n\nSleeping to clean up.\n\n");
sleep(delay);

NSLog(@"Example 2:");
dispatch_async(queue, ^{
    NSLog(@"Acquiring initial Lock.");
    @synchronized(lock) {
        NSLog(@"Acquiring recursive Lock.");
        dispatch_sync(dispatch_get_main_queue(), ^{
            NSLog(@"Deadlock?");
            @synchronized(lock) {
                NSLog(@"Deadlock!");
            }
        });
    }
});
NSLog(@"Starting %d seconds of runloop for example 2.", (int)delay);
[[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:delay]];
NSLog(@"Finished executing runloop for example 2.");
I stumbled into this recently, assuming that @synchronized(_dataLock) does what it's supposed to do, since it is such a fundamental thing after all.
I went on investigating the _dataLock object. In my design I have several Database objects that do their locking independently, so I was simply creating _dataLock = [[NSNumber numberWithInt:1] retain] for each instance of Database.
However, [NSNumber numberWithInt:1] returns the same object every time, as in the same pointer!
Which means what I thought was a localized lock for only one instance of Database was actually a global lock shared by all instances of Database.
Of course this was never the intended design, and I am sure this was the cause of my issues.
I will change
_dataLock = [[NSNumber numberWithInt:1] retain]
to
_dataLock = [[[NSUUID UUID] UUIDString] retain]
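A quick way to see the sharing is to log the pointers; small NSNumber values are cached/tagged, so both calls yield the same object:

NSLog(@"%p %p", [NSNumber numberWithInt:1], [NSNumber numberWithInt:1]); // prints two identical addresses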

iOS threading - callback best practice

I wanted to clean up one of my projects and extracted parts of my source that I often reuse into a single class.
This class handles some requests to a web service; everything is fine so far ;). Until I extracted the code into its own class, I handled those requests with threads and callbacks in the calling class.
Now I have a "best practice" question:
In my code I do something like (simplified):
- (void)foo {
    Helper *h = [[Helper alloc] init];
    [h doRequest];
}
doRequest performs a network action (in its own class) and I have to wait until this request is finished. So I need a callback or something like that.
Should I simply run doRequest on a thread, incl. waitUntilDone:YES?
Do I have to thread the networking in the Helper class too? Or is it enough to call the method on a thread, something like this:
[NSThread detachNewThreadSelector:@selector(doRequest) toTarget:h withObject:nil];
What is the best practice to get a callback from doRequest to the caller class after it has completed its tasks, so that I can handle the values returned from the web service?
Thanks in advance.
Johannes
Given that doRequest does not return until the request is done, you could do:
- (void)fooCompletion:(void (^)(void))completion {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        Helper *h = [[Helper alloc] init];
        [h doRequest];
        if (completion) {
            dispatch_async(dispatch_get_main_queue(), ^{
                // doRequest is done
                completion();
            });
        }
    });
}
To call the method:
[self fooCompletion:^{
// do something after doRequest is done
}];
I personally prefer calling performSelectorOnMainThread:withObject:waitUntilDone: at the end of any helper threads that need to send information back.
[self performSelectorOnMainThread:@selector(infoFromService:) withObject:aDictionaryWithInfo waitUntilDone:NO];

- (void)infoFromService:(NSDictionary *)aDictionary {
    // Process all the information and update UI
}
Be sure to always use the main thread for any UI updates even if they happen in the middle of the worker thread, for example updating a count of how much information has been downloaded. Use the same technique to call the main thread with the relevant information.
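For intermediate progress updates from the worker, the same technique applies (a sketch; updateProgress: and bytesDownloaded are illustrative names, not from the answer above):

// inside the worker thread, after each chunk of data arrives
[self performSelectorOnMainThread:@selector(updateProgress:)
                       withObject:@(bytesDownloaded)
                    waitUntilDone:NO];

- (void)updateProgress:(NSNumber *)bytesDownloaded {
    // safe to update UI here, e.g. a progress bar or a count label
}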