What would happen if I dispatch_barrier_(a)sync-ed to a queue that targets a global concurrent queue in GCD? - objective-c

I have a question about dispatch_barrier and the target queue.
I have a custom serial queue and custom concurrent queue and I set the target queue of the serial queue to a concurrent queue which then targets a global concurrent queue:
(serial queue) -> (concurrent queue) -> (global concurrent queue)
What happens when I dispatch_barrier a block on the serial queue? Will it block the execution of blocks submitted to the concurrent queue too, or just the blocks in the serial queue? And if I dispatch_barrier a block to the non-global concurrent queue, will it block the execution of blocks submitted to the serial queue too, or just the blocks in the non-global concurrent queue?
Thank you for your interest. :)

Submitting a dispatch_barrier_async to a serial queue is no different than dispatch_async because the queue is serial, so there couldn't be any readers to keep out, since only one block can execute on a serial queue at a time. Put differently, every block is a "barrier block" on a serial queue.
If you dispatch_barrier_async to the non-global concurrent queue, then readers will be kept out of THAT queue, but not the global queue it targets. It acts as a barrier only to the queue it is submitted to.
If you want to further convince yourself, think of it this way: All queues ultimately target one of the global concurrent queues (background, low, default, and high priority). With that in mind, if dispatch_barrier* to any queue transitively caused a barrier on the global queue that the submitted-to queue ultimately targeted, then it would be trivial to use dispatch_barrier* to starve out all other clients of GCD (by submitting a barrier block to 4 private concurrent queues, each of which targeted a different priority global queue.) That would be totally bogus.
Coming at it from the other direction: dispatch_barrier* is useful specifically because you can create arbitrary units of mutual exclusion (i.e. non-global concurrent queues).
In short: the queue you submit to is the unit of "protection" (or "barrier-ness").
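To make that concrete, here is a minimal sketch (my own illustration, not from the question, using a hypothetical cache as the protected state) of the classic reader/writer pattern that dispatch_barrier_async exists for. Note that the barrier is submitted to the isolation queue itself, not to anything it targets:
// Hypothetical shared state, protected by a private concurrent queue.
static dispatch_queue_t isolationQueue;
static NSMutableDictionary *cache;

static void SetUpCache(void)
{
    isolationQueue = dispatch_queue_create("com.example.cache.isolation", DISPATCH_QUEUE_CONCURRENT);
    cache = [NSMutableDictionary dictionary];
}

// Readers may run concurrently with each other on the isolation queue.
static id ReadValue(NSString *key)
{
    __block id value;
    dispatch_sync(isolationQueue, ^{ value = cache[key]; });
    return value;
}

// A writer is submitted as a barrier to the queue it is meant to protect,
// so it waits for in-flight readers and keeps new ones out until it finishes.
static void WriteValue(NSString *key, id value)
{
    dispatch_barrier_async(isolationQueue, ^{ cache[key] = value; });
}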
EDIT: If you're willing to take the above at face value, you can stop reading, but in an effort to give some more clarity here, I coded up a quick example to prove my claims. As some background, this is from Apple's documentation:
If the queue you pass to this function [dispatch_barrier_async] is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_async function.
This means that a dispatch_barrier_async submitted to a serial queue will have no external effect, nor will a dispatch_barrier_async submitted to a global queue. Rather than merely appeal to authority, I'll prove these two claims.
Barrier block submitted to private serial queue
Here's the code:
static void FakeWork(NSString* name, NSTimeInterval duration, dispatch_group_t groupToExit);

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    dispatch_queue_t privateSerialQueue = dispatch_queue_create("", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t privateConcurQueue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t globalConcurQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_set_target_queue(privateSerialQueue, privateConcurQueue);
    dispatch_set_target_queue(privateConcurQueue, globalConcurQueue);

    // Barrier block submitted to serial queue. Per the docs, we expect this to have no effect
    // and behave like dispatch_async. So, we expect this to run to completion in 15s.
    {
        NSString* testDesc = @"Checking for effects of barrier block on serial queue";
        dispatch_suspend(globalConcurQueue);
        dispatch_group_t group = dispatch_group_create();
        NSDate* start = [NSDate date];
        NSLog(@"%@\nStarting test run at: %@", testDesc, start);

        // We expect these to take 15s total
        dispatch_group_enter(group); dispatch_group_enter(group); dispatch_group_enter(group);
        dispatch_async(privateSerialQueue, ^{ FakeWork(@"A1: 5s Job on privateSerialQueue", 5.0, group); });
        dispatch_barrier_async(privateSerialQueue, ^{ FakeWork(@"A2: 5s BARRIER Job on privateSerialQueue", 5.0, group); });
        dispatch_async(privateSerialQueue, ^{ FakeWork(@"A3: 5s Job on privateSerialQueue", 5.0, group); });

        // So we'll make 3 15s jobs each for the privateConcurQueue and globalConcurQueue
        dispatch_group_enter(group); dispatch_group_enter(group); dispatch_group_enter(group);
        dispatch_async(privateConcurQueue, ^{ FakeWork(@"B1: 15s Job on privateConcurQueue", 15.0, group); });
        dispatch_async(privateConcurQueue, ^{ FakeWork(@"B2: 15s Job on privateConcurQueue", 15.0, group); });
        dispatch_async(privateConcurQueue, ^{ FakeWork(@"B3: 15s Job on privateConcurQueue", 15.0, group); });

        dispatch_group_enter(group); dispatch_group_enter(group); dispatch_group_enter(group);
        dispatch_async(globalConcurQueue, ^{ FakeWork(@"C1: 15s Job on globalConcurQueue", 15.0, group); });
        dispatch_async(globalConcurQueue, ^{ FakeWork(@"C2: 15s Job on globalConcurQueue", 15.0, group); });
        dispatch_async(globalConcurQueue, ^{ FakeWork(@"C3: 15s Job on globalConcurQueue", 15.0, group); });

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        NSDate* end = [NSDate date];
        NSLog(@"Test run finished at: %@ duration: %@", end, @([end timeIntervalSinceDate: start]));
    }
}

static void FakeWork(NSString* name, NSTimeInterval duration, dispatch_group_t groupToExit)
{
    NSDate* start = [NSDate date];
    NSLog(@"Starting task: %@ withDuration: %@ at: %@", name, @(duration), start);
    while (1) @autoreleasepool
    {
        NSTimeInterval t = [[NSDate date] timeIntervalSinceDate: start];
        if (t >= duration)
        {
            break;
        }
        else if ((t + 0.0005) < duration)
        {
            usleep(50);
        }
    }
    NSDate* end = [NSDate date];
    duration = [end timeIntervalSinceDate: start];
    NSLog(@"Finished task: %@ withRealDuration: %@ at: %@", name, @(duration), end);
    if (groupToExit)
    {
        dispatch_group_leave(groupToExit);
    }
}
If a dispatch_barrier_async had any effect at all on the targeted queue, we would expect this to take more than 15 seconds, but here's the output:
Checking for effects of barrier block on serial queue
Starting test run at: 2013-09-19 12:16:25 +0000
Starting task: C1: 15s Job on globalConcurQueue withDuration: 15 at: 2013-09-19 12:16:25 +0000
Starting task: C2: 15s Job on globalConcurQueue withDuration: 15 at: 2013-09-19 12:16:25 +0000
Starting task: C3: 15s Job on globalConcurQueue withDuration: 15 at: 2013-09-19 12:16:25 +0000
Starting task: A1: 5s Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:16:25 +0000
Starting task: B1: 15s Job on privateConcurQueue withDuration: 15 at: 2013-09-19 12:16:25 +0000
Starting task: B2: 15s Job on privateConcurQueue withDuration: 15 at: 2013-09-19 12:16:25 +0000
Starting task: B3: 15s Job on privateConcurQueue withDuration: 15 at: 2013-09-19 12:16:25 +0000
Finished task: A1: 5s Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:16:30 +0000
Starting task: A2: 5s BARRIER Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:16:30 +0000
Finished task: A2: 5s BARRIER Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:16:35 +0000
Starting task: A3: 5s Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:16:35 +0000
Finished task: C1: 15s Job on globalConcurQueue withRealDuration: 15.00000900030136 at: 2013-09-19 12:16:40 +0000
Finished task: C2: 15s Job on globalConcurQueue withRealDuration: 15 at: 2013-09-19 12:16:40 +0000
Finished task: C3: 15s Job on globalConcurQueue withRealDuration: 15 at: 2013-09-19 12:16:40 +0000
Finished task: B1: 15s Job on privateConcurQueue withRealDuration: 15 at: 2013-09-19 12:16:40 +0000
Finished task: B2: 15s Job on privateConcurQueue withRealDuration: 15 at: 2013-09-19 12:16:40 +0000
Finished task: A3: 5s Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:16:40 +0000
Finished task: B3: 15s Job on privateConcurQueue withRealDuration: 15 at: 2013-09-19 12:16:40 +0000
Test run finished at: 2013-09-19 12:16:40 +0000 duration: 15.00732499361038
Barrier block submitted to global concurrent queue
Let's also verify the point from the documentation that barrier blocks submitted to the global concurrent queue have no barrier effect. Here's some code (just the differences from the first example):
{
    NSString* testDesc = @"Barrier block submitted to globalConcurQueue";
    dispatch_group_t group = dispatch_group_create();
    NSDate* start = [NSDate date];
    NSLog(@"%@\nStarting test run at: %@", testDesc, start);

    dispatch_group_enter(group); dispatch_group_enter(group); dispatch_group_enter(group);
    dispatch_async(privateSerialQueue, ^{ FakeWork(@"A1: 5s Job on privateSerialQueue", 5.0, group); });
    dispatch_async(privateSerialQueue, ^{ FakeWork(@"A2: 5s Job on privateSerialQueue", 5.0, group); });
    dispatch_async(privateSerialQueue, ^{ FakeWork(@"A3: 5s Job on privateSerialQueue", 5.0, group); });

    dispatch_group_enter(group); dispatch_group_enter(group); dispatch_group_enter(group);
    dispatch_async(privateConcurQueue, ^{ FakeWork(@"B1: 15s Job on privateConcurQueue", 15.0, group); });
    dispatch_async(privateConcurQueue, ^{ FakeWork(@"B2: 15s Job on privateConcurQueue", 15.0, group); });
    dispatch_async(privateConcurQueue, ^{ FakeWork(@"B3: 15s Job on privateConcurQueue", 15.0, group); });

    dispatch_group_enter(group); dispatch_group_enter(group); dispatch_group_enter(group);
    dispatch_async(globalConcurQueue, ^{ FakeWork(@"C1: 15s Job on globalConcurQueue", 15.0, group); });
    dispatch_barrier_async(globalConcurQueue, ^{ FakeWork(@"C2: 15s BARRIER Job on globalConcurQueue", 15.0, group); });
    dispatch_async(globalConcurQueue, ^{ FakeWork(@"C3: 15s Job on globalConcurQueue", 15.0, group); });

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    NSDate* end = [NSDate date];
    NSLog(@"Test run finished at: %@ duration: %@", end, @([end timeIntervalSinceDate: start]));
}
If the barrier block submitted to the global concurrent queue had any effect, we would expect this to take longer than 15s, but here's the output:
Barrier block submitted to globalConcurQueue
Starting test run at: 2013-09-19 12:33:28 +0000
Starting task: C1: 15s Job on globalConcurQueue withDuration: 15 at: 2013-09-19 12:33:28 +0000
Starting task: C2: 15s BARRIER Job on globalConcurQueue withDuration: 15 at: 2013-09-19 12:33:28 +0000
Starting task: C3: 15s Job on globalConcurQueue withDuration: 15 at: 2013-09-19 12:33:28 +0000
Starting task: B1: 15s Job on privateConcurQueue withDuration: 15 at: 2013-09-19 12:33:28 +0000
Starting task: A1: 5s Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:33:28 +0000
Starting task: B2: 15s Job on privateConcurQueue withDuration: 15 at: 2013-09-19 12:33:28 +0000
Starting task: B3: 15s Job on privateConcurQueue withDuration: 15 at: 2013-09-19 12:33:28 +0000
Finished task: A1: 5s Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:33:33 +0000
Starting task: A2: 5s Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:33:33 +0000
Finished task: A2: 5s Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:33:38 +0000
Starting task: A3: 5s Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:33:38 +0000
Finished task: C1: 15s Job on globalConcurQueue withRealDuration: 15 at: 2013-09-19 12:33:43 +0000
Finished task: C2: 15s BARRIER Job on globalConcurQueue withRealDuration: 15 at: 2013-09-19 12:33:43 +0000
Finished task: C3: 15s Job on globalConcurQueue withRealDuration: 15 at: 2013-09-19 12:33:43 +0000
Finished task: B2: 15s Job on privateConcurQueue withRealDuration: 15 at: 2013-09-19 12:33:43 +0000
Finished task: B3: 15s Job on privateConcurQueue withRealDuration: 15 at: 2013-09-19 12:33:43 +0000
Finished task: B1: 15s Job on privateConcurQueue withRealDuration: 15 at: 2013-09-19 12:33:43 +0000
Finished task: A3: 5s Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:33:43 +0000
Test run finished at: 2013-09-19 12:33:43 +0000 duration: 15.00729995965958
Barrier block submitted to private concurrent queue
The next thing to test is the effects of barrier blocks submitted to the private concurrent queue. Since the serial queue targets the private concurrent queue, I expect blocks submitted to the serial queue to be held up for barrier blocks submitted to the private concurrent queue. And indeed, that's the case. Here's the code:
// Barrier block submitted to private concurrent queue.
{
    NSString* testDesc = @"Checking for effects of barrier block on private concurrent queue";
    dispatch_suspend(globalConcurQueue);
    dispatch_group_t group = dispatch_group_create();
    NSDate* start = [NSDate date];
    NSLog(@"%@\nStarting test run at: %@", testDesc, start);

    // Make 3 5s jobs on the private concurrent queue and make the middle one a barrier, which should serialize them
    dispatch_group_enter(group); dispatch_group_enter(group); dispatch_group_enter(group);
    dispatch_group_enter(group); dispatch_group_enter(group); dispatch_group_enter(group);
    dispatch_async(privateSerialQueue, ^{ FakeWork(@"A1: 5s Job on privateSerialQueue", 5.0, group); });
    dispatch_async(privateConcurQueue, ^{ FakeWork(@"B1: 5s Job on privateConcurQueue", 5.0, group); });
    dispatch_async(privateSerialQueue, ^{ FakeWork(@"A2: 5s Job on privateSerialQueue", 5.0, group); });
    dispatch_barrier_async(privateConcurQueue, ^{ FakeWork(@"B2: 5s BARRIER Job on privateConcurQueue", 5.0, group); });
    dispatch_async(privateSerialQueue, ^{ FakeWork(@"A3: 5s Job on privateSerialQueue", 5.0, group); });
    dispatch_async(privateConcurQueue, ^{ FakeWork(@"B3: 5s Job on privateConcurQueue", 5.0, group); });

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    NSDate* end = [NSDate date];
    NSLog(@"Test run finished at: %@ duration: %@", end, @([end timeIntervalSinceDate: start]));
}
And here's the output:
Checking for effects of barrier block on private concurrent queue
Starting test run at: 2013-09-19 12:24:17 +0000
Starting task: B1: 5s Job on privateConcurQueue withDuration: 5 at: 2013-09-19 12:24:17 +0000
Starting task: A1: 5s Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:24:17 +0000
Finished task: A1: 5s Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:24:22 +0000
Finished task: B1: 5s Job on privateConcurQueue withRealDuration: 5 at: 2013-09-19 12:24:22 +0000
Starting task: A2: 5s Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:24:22 +0000
Finished task: A2: 5s Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:24:27 +0000
Starting task: A3: 5s Job on privateSerialQueue withDuration: 5 at: 2013-09-19 12:24:27 +0000
Finished task: A3: 5s Job on privateSerialQueue withRealDuration: 5 at: 2013-09-19 12:24:32 +0000
Starting task: B2: 5s BARRIER Job on privateConcurQueue withDuration: 5 at: 2013-09-19 12:24:32 +0000
Finished task: B2: 5s BARRIER Job on privateConcurQueue withRealDuration: 5 at: 2013-09-19 12:24:37 +0000
Starting task: B3: 5s Job on privateConcurQueue withDuration: 5 at: 2013-09-19 12:24:37 +0000
Finished task: B3: 5s Job on privateConcurQueue withRealDuration: 5 at: 2013-09-19 12:24:42 +0000
Test run finished at: 2013-09-19 12:24:42 +0000 duration: 25.00404000282288
Not surprisingly, when the barrier block is executing, it is the only block executing on either queue. That's because the "unit of protection" is the private concurrent queue, of which the private serial queue is a "sub-unit". The curious thing we see here is that task A3, which was submitted to the private serial queue AFTER task B2 was submitted to the private concurrent queue, executes before B2. I'm not sure why that is, but the fundamental unit of protection (i.e. the private concurrent queue) was not violated. Based on that, I conclude that you can't count on the ordering of tasks submitted to two different queues, even if you happen to know that one queue targets the other.
So there you have it. We've shown that dispatch_barrier_async is the same as dispatch_async on serial and global concurrent queues, just like the documentation said it would be, which left only one case to test (a dispatch_barrier_async to the private concurrent queue), and we've illustrated that the unit of protection is preserved in that case, including operations submitted to other private queues that target it.
If there's some case you're still not clear on, please comment.

Related

NSCondition with multiple threads

method1:
- (void)method1
{
    [_condition lock];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_MSEC)), dispatch_get_main_queue(), ^{
        // Fetch data from remote; when finished, call method2
        [self fetchData];
    });
    [_condition waitUntilDate:[NSDate dateWithTimeIntervalSinceNow:30.0]];
    // Do something.
    [_condition unlock];
}
method2:
- (void)method2
{
    [_condition lock];
    [_condition signal];
    [_condition unlock];
}
If Thread 1 is in method1, executing [_condition waitUntilDate:...] releases its lock. Thread 2 then enters this section and also waits on the condition by executing [_condition waitUntilDate:...].
Both Thread 1 and Thread 2 have enqueued a block (request 1 and request 2) to fetch the same data from the remote server. When request 1 finishes, it calls method2 to signal _condition.
My questions are:
Which will be signaled, Thread 1 or Thread 2?
Since request 1 and request 2 are doing the same thing, I could signal both threads (broadcast) and cancel request 2 when request 1 finishes. But a better approach would be to prevent Thread 2 from entering the critical section once request 1 has been sent out. I can't lock twice before entering the critical section, though. So what can I do?
Thanks.
If you're trying to prevent duplicate requests you can do this with a boolean flag that you set when starting the request and then clear when you're done. You'd protect your critical regions with a mutex/semaphore/NSLock/etc.
[_lock lock];
if (_fetching) {
    [_lock unlock];
    return;
}
_fetching = YES;
[self startRequestWithCompletion: ^{
    [_lock lock];
    _fetching = NO;
    [_lock unlock];
}];
[_lock unlock];
Or you can use an NSOperationQueue and NSOperation to handle this more elegantly. You'd test the queue to see if there are any pending or running operations before starting a new one.
if (_operationQueue.operationCount)
    return;
[_operationQueue addOperationWithBlock: ^{
    // get your data
}];
How you do this really depends on exactly what you're fetching and whether the response may change based on input data. For example, I use NSOperations extensively to manage background computations. However, user input could change the result of those computations and invalidate any previous work. I hang on to a reference to the NSOperation itself so I can cancel it and start a new one where appropriate, as sketched below.
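As a rough sketch of that idea (hypothetical names, not the answerer's actual code), keeping a reference to the in-flight operation lets you cancel stale work when new input arrives:
// _operationQueue and _currentOperation are assumed ivars of the owning class.
- (void)inputDidChange
{
    [_currentOperation cancel];   // invalidate any computation based on stale input

    NSBlockOperation *op = [[NSBlockOperation alloc] init];
    __weak NSBlockOperation *weakOp = op;
    [op addExecutionBlock:^{
        if (weakOp.isCancelled) return;   // bail out early if newer input arrived
        // ...recompute using the latest input...
    }];

    _currentOperation = op;
    [_operationQueue addOperation:op];
}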

dispatch_sync not working with main queue

Please consider the following code:
dispatch_queue_t myQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t myGroup = dispatch_group_create();
dispatch_group_async(myGroup, myQueue, ^{
    NSLog(@"First block of code");
});
dispatch_group_async(myGroup, myQueue, ^{
    NSLog(@"Second block of code");
});
dispatch_group_async(myGroup, myQueue, ^{
    dispatch_sync(dispatch_get_main_queue(), ^{ // problem here
        dispatch_time_t myTime = dispatch_time(DISPATCH_TIME_NOW, 10ull * NSEC_PER_SEC);
        dispatch_after(myTime, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSLog(@"Third block of code work");
        });
    });
});
dispatch_group_wait(myGroup, DISPATCH_TIME_FOREVER);
NSLog(@" All done ?");
The problem is with dispatch_sync(dispatch_get_main_queue(), ^{.
When I use dispatch_async(dispatch_get_main_queue(), ^{ instead, it works as it should. The point of making that call synchronous is to block the main thread for 10 seconds, simulating "hard work". But, unfortunately, it does not work, and I wonder why.
What I want: dispatch_sync blocks the main thread for only 10 seconds, then execution continues.
What actually happens: dispatch_sync blocks the whole application from any further execution.
If that code is running on the main event loop, then the dispatch_group_wait() is going to block the main queue. That prevents the synchronous execution of any block on the main queue and leads to deadlock.
You can verify this by pausing into the debugger. You'll likely see the main thread/queue blocked on the call to wait and a secondary thread/queue blocked on dispatch_sync.
Your code is presumably run on the main queue. It dispatches three blocks on the background, then waits until all three blocks have finished running. Since this is done on the main queue, other blocks will only start running on the main queue afterwards, that is after all the three async blocks that you queued up have finished.
Now the third of these three blocks calls the main thread with a sync call. What that means is it adds a block to the main queue, then it waits until the block starts executing, then it waits further until the block finishes executing, then the dispatch_sync call returns to its caller.
Well, that block dispatched to the main thread cannot start executing, because the main thread is busy waiting for the three async blocks to finish, and one of them won't finish because it is waiting for the block dispatched to the main thread to start executing, which it can't, because the main thread is still waiting, and so on forever. It doesn't actually matter one bit what you are trying to do in the synchronous block, because that block never gets as far as starting to execute.
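One way out, shown here as a minimal sketch (my own variation on the question's code, not the original), is to never block the main queue: replace dispatch_group_wait with dispatch_group_notify, so the block sent to the main queue with dispatch_sync can actually run:
dispatch_queue_t myQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t myGroup = dispatch_group_create();

dispatch_group_async(myGroup, myQueue, ^{
    // The main queue is free, so this synchronous hop no longer deadlocks.
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"Running on the main queue");
    });
});

// Get notified asynchronously instead of blocking the main queue with dispatch_group_wait.
dispatch_group_notify(myGroup, dispatch_get_main_queue(), ^{
    NSLog(@"All done");
});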

Run 3 methods one after each other

I need to run 3 methods one after another on a separate thread, each calling an API (NSURLSessionDataTask, async). I have looked into dispatch groups, but this seems to run method 1 and 2 at the same time and then run method 3 when they finish:
dispatch_group_t group = dispatch_group_create();

// METHOD 1
dispatch_group_enter(group);
[self method1WithCompletion:^(BOOL success){
    dispatch_group_leave(group);
}];

// METHOD 2
dispatch_group_enter(group);
[self method2WithCompletion:^(BOOL success){
    dispatch_group_leave(group);
}];

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // METHOD 3
});
I need it to run method 1 and when that completes run method 2, and when that completes finally run method 3 (queue the methods).
I know I could chain the methods on each completion to run the next but I thought there would be a better approach to this...any ideas?
You are close.
dispatch_group_enter and company only ensure that everything entered into the dispatch_group_t completes before the block passed to dispatch_group_notify is called.
So we can put them to good use when waiting for tasks. Rather than having just one group that waits for every task to finish, we can use a group for each asynchronous task that needs to complete:
// You can get a global queue or create your own, it doesn't really matter
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Create a group for each task you want to wait for
dispatch_group_t group1 = dispatch_group_create();
dispatch_group_t group2 = dispatch_group_create();

// Call your first task on the first group
dispatch_group_enter(group1);
[self method1WithCompletion:^(BOOL success){
    // Task is completed, so signal that it has finished
    dispatch_group_leave(group1);
}];

// Call your 2nd task on the 2nd group
dispatch_group_enter(group2);
// Make the second task wait until group1 has completed before running
dispatch_group_notify(group1, queue, ^{
    // METHOD 2
    [self method2WithCompletion:^(BOOL success){
        // Signal to the group that the task has completed
        dispatch_group_leave(group2);
    }];
});

// Nothing is waiting on the 3rd task, so no need to create a group for it
dispatch_group_notify(group2, dispatch_get_main_queue(), ^{
    // METHOD 3
    // Do whatever you need to do in here
});
Here is some more information about Dispatch Queues and how you can use them.
Edit: Sorry, I completely changed my answer. Once you left your comment, it hit me that the tasks you were calling were asynchronous, and using a serial dispatch_queue_t would not make a difference! (The blocks are run serially, but method2 would be run immediately after method1, not wait for completion)
EDITED: I don't think so anymore; cjwirth has refuted my assumption in the comments below.
I think there could still be an approach using a serial queue (but it's only a supposition, I didn't verify it). The main idea is to dispatch the group's tasks onto a serial queue, calling dispatch_group_enter(group) before dispatching and dispatch_group_leave(group) when your method1/method2 completions fire. Here's what it's supposed to look like:
dispatch_queue_t queue = dispatch_queue_create("com.example.MyQueue", NULL);
dispatch_group_t group = dispatch_group_create();

// METHOD 1
dispatch_group_enter(group);
dispatch_group_async(group, queue, ^{
    [self method1WithCompletion:^(BOOL success){
        dispatch_async(queue, ^{
            dispatch_group_leave(group);
        });
    }];
});

// METHOD 2
dispatch_group_enter(group);
dispatch_group_async(group, queue, ^{
    [self method2WithCompletion:^(BOOL success){
        dispatch_async(queue, ^{
            dispatch_group_leave(group);
        });
    }];
});

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // METHOD 3
});
NOTE: This kind of implementation feels like the wrong approach. Since we already have NSOperationQueue, I would recommend wrapping your NSURLSessionDataTask work in NSOperations and performing them sequentially, as sketched below.
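Here is one minimal sketch of that (my own, under the assumption that tying up one worker thread per operation while it waits is acceptable): a serial NSOperationQueue where each operation is held open with a semaphore until its asynchronous completion handler fires.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;   // operations run strictly one after another

[queue addOperationWithBlock:^{
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    [self method1WithCompletion:^(BOOL success) {
        dispatch_semaphore_signal(done);
    }];
    // Keep this operation "running" until the asynchronous work reports back.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}];

[queue addOperationWithBlock:^{
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    [self method2WithCompletion:^(BOOL success) {
        dispatch_semaphore_signal(done);
    }];
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}];

[queue addOperationWithBlock:^{
    // METHOD 3 only runs after method1 and method2 have fully completed.
}];
A fully non-blocking version would instead use a concurrent NSOperation subclass that manages isExecuting/isFinished itself and finishes from the completion handler.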

NSOperation crash on isCancelled

I have implemented a concurrent NSOperation and have ARC enabled. Now my customer is experiencing a crash which I cannot reproduce. He sent me the following crash log:
Date/Time: 2013-04-24 12:23:34.925 -0400
OS Version: Mac OS X 10.8.3 (12D78)
Report Version: 10
Interval Since Last Report: 30946 sec
Crashes Since Last Report: 1
Per-App Interval Since Last Report: 33196 sec
Per-App Crashes Since Last Report: 1
Anonymous UUID: FB8460EE-5199-C6FB-55DC-F927D7F81A80
Crashed Thread: 15 Dispatch queue: com.apple.root.default-priority
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: EXC_I386_GPFLT
Application Specific Information:
objc_msgSend() selector name: isCancelled
Thread 15 Crashed:: Dispatch queue: com.apple.root.default-priority
0 libobjc.A.dylib 0x00007fff877f1250 objc_msgSend + 16
1 Myapp 0x000000010a608807 0x10a601000 + 30727
2 Myapp 0x000000010a650575 0x10a601000 + 324981
3 com.apple.Foundation 0x00007fff8b66212f -[NSBlockOperation main] + 124
4 com.apple.Foundation 0x00007fff8b638036 -[__NSOperationInternal start] + 684
5 com.apple.Foundation 0x00007fff8b63f861 __block_global_6 + 129
6 libdispatch.dylib 0x00007fff832d0f01 _dispatch_call_block_and_release + 15
7 libdispatch.dylib 0x00007fff832cd0b6 _dispatch_client_callout + 8
8 libdispatch.dylib 0x00007fff832ce1fa _dispatch_worker_thread2 + 304
9 libsystem_c.dylib 0x00007fff87d19d0b _pthread_wqthread + 404
10 libsystem_c.dylib 0x00007fff87d041d1 start_wqthread + 13
My code looks like this:
- (void)start
{
    // Always check for cancellation before launching the task.
    if ([self isCancelled])
    {
        // Must move the operation to the finished state if it is canceled.
        [self onCancelSyncOperation];
        return;
    }
    // If the operation is not canceled, begin executing the task.
    [self willChangeValueForKey:@"isExecuting"];
    [NSThread detachNewThreadSelector:@selector(main) toTarget:self withObject:nil];
    executing = YES;
    [self didChangeValueForKey:@"isExecuting"];
}

- (void)onCancelSyncOperation
{
    [self willChangeValueForKey:@"isFinished"];
    [self willChangeValueForKey:@"isExecuting"];
    executing = NO;
    finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}
It seems like the NSOperation has already been released when it tries to check isCancelled?
Is this possible?
I don't think anyone can tell you why your app crashes by looking at this log. I haven't looked at your code, but you cut your crash-log, so it only shows system modules (lib.dispatch..., com.apple...). Typically the error is in the first occurrence of "com.myname...".
If this kind of crash, EXC_BAD_ACCESS (SIGSEGV), appears along with an objc_msgSend(), it probably means you're trying to send a message to an object (in other words, call a method on an object) that isn't there anymore. If you call that object directly, chances are very good that you'll find it; if you call it delayed, on another thread, or from a block, it will be a bit more complicated.
Your best chance to find the cause of this is to inspect your app with Instruments, using the Allocations or Leaks tool with NSZombies enabled (which is the default). You can launch Instruments from within Xcode. Then try to reproduce your crash. If you succeed, you might be able to find the class and location where the crash occurs.
If you don't know what to do with this answer, then check out the WWDC Developer Videos from Apple, there are some that show how to profile your app with Instruments (you'll need a [free] Dev Account to access the videos).
Good luck!

How do you test an asynchronous method?

I have an object that fetches XML or JSON over a network. Once this fetching is complete it calls a selector, passing in the returned data. So, for example I'd have something like:
- (void)testResponseWas200
{
    [MyObject get:@"foo.xml" withTarget:self selector:@selector(dataFinishedLoading:)];
}
I tried the route of implementing dataFinishedLoading in the Test class and attempting to test inside that method, but the test suite is just locking up. This seems like it's a case for mocking, but I'm wondering if others have encountered this and how they handled it.
FYI: I'm using gh-unit for testing and any method prefixed with test* is executed automatically.
Three ways that come to mind are: NSRunLoop, semaphores, and groups.
NSRunLoop
__block bool finished = false;

// For testing purposes we create this asynchronous task
// that starts after 3 seconds and takes 1 second to execute.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0UL);
dispatch_time_t threeSeconds = dispatch_time(DISPATCH_TIME_NOW, 3LL * NSEC_PER_SEC);
dispatch_after(threeSeconds, queue, ^{
    sleep(1); // replace this with your task
    finished = true;
});

// loop until the flag is set from inside the task
while (!finished) {
    // spend 1 second processing events on each loop
    NSDate *oneSecond = [NSDate dateWithTimeIntervalSinceNow:1];
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:oneSecond];
}
A NSRunLoop is a loop that processes events like network ports, keyboard, or any other input source you plug in, and returns after processing those events, or after a time limit. When there are no events to process, the run loop puts the thread to sleep. All Cocoa and Core Foundation applications have a run loop underneath. You can read more about run loops in Apple's Threading Programming Guide: Run Loops, or in Mike Ash Friday Q&A 2010-01-01: NSRunLoop Internals.
In this test, I'm just using the NSRunLoop to sleep the thread for a second. Without it, the constant looping in the while would consume 100% of a CPU core.
If the block and the boolean flag are created in the same lexical scope (eg: both inside a method), then the flag needs the __block storage qualifier to be mutable. Had the flag been a global variable, it wouldn't need it.
If the test crashes before setting the flag, the thread is stuck waiting forever. Add a time limit to avoid that:
NSDate *timeout = [NSDate dateWithTimeIntervalSinceNow:2];
while (!finished && [timeout timeIntervalSinceNow] > 0) {
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                             beforeDate:[NSDate dateWithTimeIntervalSinceNow:1]];
}
if (!finished) NSLog(@"test failed with timeout");
If you are using this code for unit testing, an alternative way to insert a timeout is to dispatch a block with an assert:
// taken from https://github.com/JaviSoto/JSBarrierOperationQueue/blob/master/JSBarrierOperationQueueTests/JSBarrierOperationQueueTests.m#L118
dispatch_time_t timeout = dispatch_time(DISPATCH_TIME_NOW, 2LL * NSEC_PER_SEC);
dispatch_after(timeout, dispatch_get_main_queue(), ^(void){
    STAssertTrue(done, @"Should have finished by now");
});
Semaphore
Similar idea but sleeping until a semaphore changes, or until a time limit:
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);

// signal the semaphore after 3 seconds using a global queue
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0UL);
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 3LL*NSEC_PER_SEC), queue, ^{
    sleep(1);
    dispatch_semaphore_signal(semaphore);
});

// wait with a time limit of 5 seconds
dispatch_time_t timeout = dispatch_time(DISPATCH_TIME_NOW, 5LL*NSEC_PER_SEC);
if (dispatch_semaphore_wait(semaphore, timeout) == 0) {
    NSLog(@"success, semaphore signaled in time");
} else {
    NSLog(@"failure, semaphore didn't signal in time");
}

dispatch_release(semaphore);
If instead we waited forever with dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER); we would be stuck until getting a signal from the task, which keeps running on the background queue.
Group
Now imagine you have to wait for several blocks. You can use an int as a flag, or create a semaphore that starts with a higher number, or you can group the blocks and wait until the group is finished. In this example I do the latter with just one block:
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0UL);

// dispatch work to the given group and queue
dispatch_group_async(group, queue, ^{
    sleep(1); // replace this with your task
});

// wait two seconds for the group to finish
dispatch_time_t timeout = dispatch_time(DISPATCH_TIME_NOW, 2LL*NSEC_PER_SEC);
if (dispatch_group_wait(group, timeout) == 0) {
    NSLog(@"success, dispatch group completed in time");
} else {
    NSLog(@"failure, dispatch group did not complete in time");
}

dispatch_release(group);
If for some reason (to clean up resources?) you want to run a block after the group is finished, use dispatch_group_notify(group,queue, ^{/*...*/});
Asynchronous callbacks often require a message loop to run. It is a frequent pattern in test code to stop the message loop after the callback has been called. Otherwise the loop just keeps waiting for the next tasks, and there won't be any. A sketch of that pattern follows.
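For the original question's target/selector callback, that might look like this minimal sketch (my own, assuming the callback is delivered on the same thread that runs the test):
- (void)testResponseWas200
{
    [MyObject get:@"foo.xml" withTarget:self selector:@selector(dataFinishedLoading:)];
    CFRunLoopRun();   // process events until the callback stops the loop
}

- (void)dataFinishedLoading:(id)data
{
    // ...assertions on the returned data go here...
    CFRunLoopStop(CFRunLoopGetCurrent());   // lets testResponseWas200 return
}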
@jano Thank you, I made this little util from your post.
In PYTestsUtils.m
+ (void)waitForBOOL:(BOOL*)finished forSeconds:(int)seconds {
    NSDate *timeout = [NSDate dateWithTimeIntervalSinceNow:seconds];
    while (!*finished && [timeout timeIntervalSinceNow] > 0) {
        [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                 beforeDate:[NSDate dateWithTimeIntervalSinceNow:1]];
    }
}
in my test file
- (void)testSynchronizeTime
{
    __block BOOL finished = NO;
    [self.connection synchronizeTimeWithSuccessHandler:^(NSTimeInterval serverTime) {
        NSLog(@"ServerTime %f", serverTime);
        finished = YES;
    } errorHandler:^(NSError *error) {
        STFail(@"Cannot get ServerTime %@", error);
        finished = YES;
    }];
    [PYTestsUtils waitForBOOL:&finished forSeconds:10];
    if (!finished)
        STFail(@"Cannot get ServerTime within 10 seconds");
}
variation
add in PYTestsUtils.m
+ (void)execute:(PYTestExecutionBlock)block ifNotTrue:(BOOL*)finished afterSeconds:(int)seconds {
    [self waitForBOOL:finished forSeconds:seconds];
    if (!*finished) block();
}
usage:
- (void)testSynchronizeTime
{
    __block BOOL finished = NO;
    [self.connection synchronizeTimeWithSuccessHandler:^(NSTimeInterval serverTime) {
        NSLog(@"ServerTime %f", serverTime);
        finished = YES;
    } errorHandler:^(NSError *error) {
        STFail(@"Cannot get ServerTime %@", error);
        finished = YES;
    }];
    [PYTestsUtils execute:^{
        STFail(@"Cannot get ServerTime within 10 seconds");
    } ifNotTrue:&finished afterSeconds:10];
}
One of the best ways to test asynchronous and multi-threaded code is with event logging. Your code should log events at interesting or useful times. Often an event alone is enough information to prove that logic is working correctly. Sometimes events will need payloads or other meta information so they can be paired or chained.
This is most useful when the runtime or the operating system supports an efficient and robust eventing mechanism. This lets your product ship with events in the 'retail' version. In this scenario, your events are only enabled when you need to debug a problem or run a unit test to prove things are working correctly.
Having the events in the retail (production) code lets you test and debug on any platform. This is a huge benefit over debug or 'checked' builds.
Note, as with asserts, be careful where you put events - they can be expensive if logged too often. But the good news is that modern OSes and some application frameworks support eventing mechanisms that handle tens of thousands of events easily. Some support taking a stack trace on selected events. This can be very powerful, but it usually requires that symbols are available at some point - either at logging time, or at trace post-processing time on the target system.
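As a minimal sketch of the idea (my own illustration, with hypothetical names), an in-process event log can be as simple as a thread-safe array of named, timestamped entries that production code appends to and a test inspects after waiting for the asynchronous work with one of the techniques above:
// EventLog (hypothetical): a thread-safe, in-process event log.
@interface EventLog : NSObject
+ (instancetype)sharedLog;
- (void)post:(NSString *)name payload:(id)payload;
- (NSArray *)eventNames;
@end

@implementation EventLog
{
    dispatch_queue_t _queue;      // serializes access to _events
    NSMutableArray *_events;
}

+ (instancetype)sharedLog
{
    static EventLog *log;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ log = [[EventLog alloc] init]; });
    return log;
}

- (instancetype)init
{
    if ((self = [super init])) {
        _queue = dispatch_queue_create("com.example.eventlog", DISPATCH_QUEUE_SERIAL);
        _events = [NSMutableArray array];
    }
    return self;
}

- (void)post:(NSString *)name payload:(id)payload
{
    NSDictionary *event = @{ @"name": name, @"time": [NSDate date], @"payload": payload ?: [NSNull null] };
    dispatch_async(_queue, ^{ [_events addObject:event]; });
}

- (NSArray *)eventNames
{
    __block NSArray *names;
    dispatch_sync(_queue, ^{ names = [_events valueForKey:@"name"]; });
    return names;
}
@end
The code under test would post events such as @"requestStarted" and @"requestFinished"; the test then asserts on the recorded sequence rather than on intermediate state.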