In my code, I am running a local server (CocoaHTTPServer). When the server receives a request, it creates a thread and passes control to a certain method ( - (NSObject<HTTPResponse> *)httpResponseForMethod:(NSString *)method URI:(NSString *)path, perhaps irrelevant here).
I need to read a list of local assets and return the result. The API call ( [assetsLibrary enumerateGroupsWithTypes:ALAssetsGroupAll usingBlock:^(ALAssetsGroup *group, BOOL *stop) ... ) is asynchronous.
Since the HTTPResponse needs to wait until the API call has finished, I have created a flag called _isProcessing , which I set before making the API call. After the call is finished, I unset the flag and return the HTTP response. The code that waits looks like:
// the API call is non-blocking. hence, wait in a loop until the command has finished
samCommand->isProcessing = YES;
while (samCommand->isProcessing) {
usleep(100*1000);
}
The API call calls a delegate method upon finishing its task as follows:
// to be called at the end of an asynch operation (eg: reading local asset list)
- (void) commandDidFinish {
// flag to open the lock
isProcessing = NO;
}
This works, but it will likely need performance improvements. How can I use something (a run loop, etc.) here to improve performance?
Edit: following weichsel's solution using dispatch_semaphore
Following weichsel's solution, I created a semaphore. The sequence of my code is:
CocoaHTTPServer receives a request and hence creates a new thread
It calls the static method of a Command class to execute the request
The Command class creates a new command object and calls another class (using reflection), which calls the ALAsset APIs and passes the command object to it
Upon returning, the ALAsset API call calls the delegate method of the command class
I have hence embedded semaphores in appropriate locations. However, the semaphore's wait loop sometimes just doesn't end. The normal output should be:
2014-02-07 11:27:23:214 MM2Beta[7306:1103] HTTPServer: Started HTTP server on port 1978
2014-02-07 11:27:23:887 MM2Beta[7306:6303] created semaphore 0x1f890670->0x1f8950a0
2014-02-07 11:27:23:887 MM2Beta[7306:6303] calling execute with 0x1f890670
2014-02-07 11:27:23:887 MM2Beta[7306:6303] starting wait loop 0x1f890670->0x1f8950a0
2014-02-07 11:27:23:887 MM2Beta[7306:907] calling getAssetsList with delegate 0x1f890670
2014-02-07 11:27:24:108 MM2Beta[7306:907] calling delegate [0x1f890670 commandDidFinish]
2014-02-07 11:27:24:108 MM2Beta[7306:907] releasing semaphore 0x1f890670->0x1f8950a0
2014-02-07 11:27:24:109 MM2Beta[7306:6303] ending wait loop 0x1f890670->0x0
In every few runs, the last step (ending wait loop 0x1f890670->0x0) doesn't occur, so the wait loop never ends. Sometimes the code crashes too, at exactly the same point. Any clue what is wrong here?
My code is as follows:
@implementation SAMCommand {
NSData* resultData;
dispatch_semaphore_t semaphore; // a lock to establish whether the command has been processed
}
// construct the object, ensuring that the "command" field is present in the jsonString
+(NSData*) createAndExecuteCommandWithJSONParamsAs:(NSString *)jsonString {
SAMCommand* samCommand = [[SAMCommand alloc] init];
samCommand.commandParams = [jsonString dictionaryFromJSON];
if(COMPONENT==nil || COMMAND==nil){
DDLogError(#"command not found in %#",jsonString);
return nil;
}
samCommand->semaphore = dispatch_semaphore_create(0);
DDLogInfo(#"created semaphore %p->%p",samCommand,samCommand->semaphore);
// to execute a command contained in the jsonString, we use reflection.
DDLogInfo(#"calling execute with %p",samCommand);
[NSClassFromString(COMPONENT) performSelectorOnMainThread:NSSelectorFromString([NSString stringWithFormat:@"%@_%@_%@:", COMMAND, MEDIA_SOURCE, MEDIA_TYPE]) withObject:samCommand waitUntilDone:NO];
// the above calls are non-blocking. hence, wait in a loop until the command has finished
DDLogInfo(#"starting wait loop %p->%p",samCommand,samCommand->semaphore);
dispatch_semaphore_wait(samCommand->semaphore, DISPATCH_TIME_FOREVER);
DDLogInfo(#"ending wait loop %p->%p",samCommand,samCommand->semaphore);
DDLogInfo(#"");
// return the data
return samCommand->resultData;
}
// to be called at the end of an asynch operation (eg: reading local asset list)
- (void) commandDidFinish {
// flag to release the lock
DDLogInfo(#"releasing semaphore %p->%p",self,semaphore);
dispatch_semaphore_signal(semaphore);
semaphore = nil;
}
@end
I got it to work :)
Finally, what seems to work stably is creating the semaphore, passing it to the ALAsset async API calls, and releasing it at the end of the call. Earlier, I was calling a delegate method of the class where I had created the semaphore, and the semaphore object was somehow getting released. Unsure of what was really happening there.
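For reference, a minimal sketch of the arrangement that ended up working (the AssetListReader wrapper and its method are hypothetical stand-ins for my ALAsset code; the point is that the waiting method keeps its own strong reference to the semaphore, assuming ARC, and nothing nils it while another thread may still be waiting on it):
__block NSData *resultData = nil;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
// Hypothetical wrapper around the asynchronous ALAsset calls; the semaphore is
// handed to it directly instead of going through a delegate that owns it.
[AssetListReader readAssetListWithCompletion:^(NSData *data) {
    resultData = data;
    dispatch_semaphore_signal(semaphore);   // signal only; never nil or release it here
}];
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);  // the local reference keeps the semaphore alive
return resultData;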
You can use a semaphore to block execution of the current queue until another one returns.
The basic pattern is:
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
[assetsLibrary enumerateGroupsWithTypes:ALAssetsGroupAll usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
...
dispatch_semaphore_signal(semaphore);
}];
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
dispatch_release(semaphore);
You can find a full example of this method in Apple's MTAudioProcessingTap Xcode Project:
https://developer.apple.com/library/ios/samplecode/AudioTapProcessor
The relevant lines start at MYViewController.m:86
NSRunLoop has a method called runUntilDate: which should work for you with dates in near future like 1s ahead or so. That way you can replace your sleep call in the while loop for example with:
[[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1]];
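Put back into the wait loop from the question, that would look roughly like this (a sketch; it assumes isProcessing is visible to this thread exactly as in the original code):
samCommand->isProcessing = YES;
while (samCommand->isProcessing) {
    // run this thread's run loop instead of sleeping, so timers and sources
    // scheduled on it can still fire while we wait
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
}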
I'm trying to figure out how to get a database fetch to run in the background. Below are the foreground and background versions of the same function. The foreground version works, but in the background version the local variable retval never gets assigned. Putting a breakpoint in the pageInfoForPageKey function tells me that function is never called.
Is self available inside the block?
//foreground version
- (PageInfo*)objectAtIndex:(NSInteger)idx
{
return [[self dataController] pageInfoForPageKey:[[[self pageIDs] objectAtIndex:idx] integerValue]];
}
//background version
- (PageInfo*)objectAtIndex:(NSInteger)idx
{
__block PageInfo* retval = nil;
__block NSInteger pageID = [[[self pageIDs] objectAtIndex:idx] integerValue];
dispatch_queue_t aQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(aQueue, ^{
retval = [[self dataController] pageInfoForPageKey:pageID];
});
return retval;
}
By using dispatch_async, you are telling the system that you want it to run your block some time soon, and that you don't want to wait for your block to finish (or even start) before dispatch_async returns. That is the definition of asynchronous. That is the definition of “in the background”.
The system is doing what you told it to: it is arranging for your block to run, and then it is returning immediately, before the block has run. So the block doesn't set retval before you return retval, because the block hasn't run yet.
If you want to run the database fetch in the background, you need to change your API to pass retval back (to whoever needs it) at a later time, after the block has run. One way is to pass a completion block as a message argument. This is a common pattern for performing fetches in the background. For example, look at +[NSURLConnection sendAsynchronousRequest:queue:completionHandler:].
You might do it like this:
- (void)fetchObjectAtIndex:(NSInteger)idx completion:(void (^)(PageInfo *))block {
block = [block copy]; // unnecessary but harmless if using ARC
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
NSInteger pageKey = [[[self pageIDs] objectAtIndex:idx] integerValue];
PageInfo* pageInfo = [[self dataController] pageInfoForPageKey:pageKey];
dispatch_async(dispatch_get_main_queue(), ^{
block(pageInfo);
});
});
}
Then you need to change the caller of objectAtIndex: to use this new API instead.
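For example, a hypothetical caller (showPageInfo: is only illustrative; use whatever actually consumes the result):
[self fetchObjectAtIndex:idx completion:^(PageInfo *pageInfo) {
    // Runs later, on the main queue, once the background fetch has finished.
    [self showPageInfo:pageInfo];   // hypothetical method that updates the UI
}];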
So I have a function that prepares a code block and schedules it to run synchronously on a serial dispatch queue. I'm aware that this serial queue will be run on a new thread. The problem, however, is that the code block modifies a variable that it communicates back to the function which in turn is expected to return it as its return value. The code below will help clarify the situation:
-(DCILocation*) getLocationsByIdentifier: (NSString*) identifier andQualifier: (NSString*) qualifier {
__block DCILocation* retval = nil;
NSString* queryStr = [self baseQueryWithFilterSet:nil];
queryStr = [queryStr stringByAppendingString:@" (identifier = ? OR icao = ?) AND qualifier = ?"];
[self.queue inDatabase:^(FMDatabase *db) {
FMResultSet* results = [db executeQuery:queryStr,
[identifier uppercaseString],
[identifier uppercaseString],
[qualifier uppercaseString]];
if ((nil != results) && [results next]) {
dispatch_async(dispatch_get_main_queue(), ^{
retval = [DCIAirportEnumerator newAirportForStatement:results];
});
[results close];
}
}];
return retval;
}
"self.queue" is the serial dispatch queue that the block will run on. Notice that the block modifies "retval" and updates it by nesting a dispatch_async call to the main thread. The concern however, is that "return retval" (the last line of the function) could be possibly called before the block of code running on the serial dispatch queue is able to modify it. This will result in "nil" being returned.
Any ideas as to how it can be made sure that the function doesn't return until retval has been modified by the block executing on the serial queue?
Any ideas as to how it can be made sure that the function doesn't return until retval has been modified by the block executing on the serial queue?
If you need to wait for the result, then your code is still synchronous and you might as well run it in your method instead of on a serial queue. So that's the first option: don't use blocks here.
The second option would be to restructure your code so that it really does run asynchronously. Find the code that depends on the return value retval and break that out into a separate method or block. Then have the block that sets retval call that passing in retval when it's finished.
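A rough sketch of that second option, built from the code in the question (the method name and the completion block are assumptions, not part of your existing API):
- (void)locationsByIdentifier:(NSString *)identifier
                 andQualifier:(NSString *)qualifier
                   completion:(void (^)(DCILocation *location))completion {
    NSString *queryStr = [self baseQueryWithFilterSet:nil];
    queryStr = [queryStr stringByAppendingString:@" (identifier = ? OR icao = ?) AND qualifier = ?"];
    [self.queue inDatabase:^(FMDatabase *db) {
        DCILocation *location = nil;
        FMResultSet *results = [db executeQuery:queryStr,
                                [identifier uppercaseString],
                                [identifier uppercaseString],
                                [qualifier uppercaseString]];
        if ((nil != results) && [results next]) {
            location = [DCIAirportEnumerator newAirportForStatement:results];
            [results close];
        }
        // Hand the result to whoever needs it, once it actually exists.
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(location);
        });
    }];
}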
After your block, you can add
while(YES) {
if(variable) {
break;
}
}
And then add, in your dispatch_async, after you define retval
variable = YES;
You'll just have to define __block BOOL variable = NO; before the block.
I'm calling four methods that I want to execute in synchronous order, the first two methods are synchronous, the last two methods are asynchronous (data fetching from URLs).
Pseudo-code:
- (void)syncData {
// Show activity indicator
[object sync]; // Synchronous method
[object2 sync]; // Synchronous method
BOOL object3Synced = [object3 sync]; // Async method
BOOL object4Synced = [object4 sync]; // Async method
// Wait for object3 and object4 has finished and then hide activity indicator
}
How can I achieve this?
Use a barrier:
void dispatch_barrier_async(dispatch_queue_t queue, dispatch_block_t block);
Submits a barrier block for asynchronous execution and returns immediately. When the barrier block reaches the front of a private concurrent queue, it is not executed immediately. Instead, the queue waits until its currently executing blocks finish executing. At that point, the queue executes the barrier block by itself. Any blocks submitted after the barrier block are not executed until the barrier block completes.
This example outputs 1 2 3 4 done. Since 3 and 4 are dispatched asynchronously onto a concurrent queue, the output could also be 1 2 4 3 done. Since I understand you want to handle an activity indicator, this shouldn't matter.
#import <Foundation/Foundation.h>
int main(int argc, char *argv[]) {
@autoreleasepool {
dispatch_queue_t queue = dispatch_queue_create("com.myqueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_sync(queue, ^(){ NSLog(@"1"); });
dispatch_sync(queue, ^(){ NSLog(@"2"); });
dispatch_async(queue, ^(){ NSLog(@"3"); });
dispatch_async(queue, ^(){ NSLog(@"4"); });
dispatch_barrier_sync(queue, ^(){ NSLog(@"done"); });
}
}
For other ways to test asynchronous code, see: https://stackoverflow.com/a/11179523/412916
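Applied to the question's scenario, a rough sketch (this assumes you have, or can wrap, synchronous versions of the two sync calls; a barrier only helps if the work actually finishes inside the submitted blocks):
dispatch_queue_t queue = dispatch_queue_create("com.example.sync", DISPATCH_QUEUE_CONCURRENT);
// The two long-running syncs may run in parallel with each other.
dispatch_async(queue, ^{ [object3 syncSynchronously]; });   // assumed synchronous variant
dispatch_async(queue, ^{ [object4 syncSynchronously]; });
// The barrier block runs only after every block submitted above has finished.
dispatch_barrier_async(queue, ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        // hide the activity indicator on the main thread
    });
});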
Assuming you actually have some sort of way of knowing when the asynchronous methods are done, what you probably want is something like:
- (void)syncData {
// Show activity indicator
[object sync]; // Synchronous method
[object2 sync]; // Synchronous method
_object3Synced = _object4Synced = NO;
[object3 syncWithCompletionHandler:
^{
_object3Synced = YES;
[self considerHidingActivityIndicator];
}]; // Async method
[object4 syncWithCompletionHandler:
^{
_object4Synced = YES;
[self considerHidingActivityIndicator];
}]; // Async method
}
- (void)considerHidingActivityIndicator
{
if(_object3Synced && _object4Synced)
{
// hide activity indicator, etc
}
}
You can make a subclass of UIActivityIndicatorView, add an activityCount property, and implement these two additional methods:
- (void)incrementActivityCount
{
if(_activityCount == 0)
{
[self startAnimating];
}
_activityCount++;
}
- (void)decrementActivityCount
{
_activityCount--;
if(_activityCount <= 0)
{
_activityCount = 0;
[self stopAnimating];
}
}
Now, whenever you start something that uses the activity counter, call incrementActivityCount, and in its completion block (or otherwise when it finishes) call decrementActivityCount. You can do other things in these methods if you want; the above is just a simple example that is probably sufficient in most cases (especially if you set hidesWhenStopped = YES).
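Hypothetical usage, combined with the completion-handler methods from the earlier example (self.activityIndicator is assumed to be an instance of the subclass above):
[self.activityIndicator incrementActivityCount];
[object3 syncWithCompletionHandler:^{
    [self.activityIndicator decrementActivityCount];
}];
[self.activityIndicator incrementActivityCount];
[object4 syncWithCompletionHandler:^{
    [self.activityIndicator decrementActivityCount];
}];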
You would need to launch the first async method and use a completion block. In the completion block of the first async method, you would launch your second async method. Though this somewhat defeats the purpose of using async methods.
Methods that take completion blocks can be very convenient.
However, when you have to string a few of them together, it gets messy really quickly.
For instance, say you have to call 4 URLs in succession:
[remoteAPIWithURL:url1 success:^(int status){
[remoteAPIWithURL:url2 success:^(int status){
[remoteAPIWithURL:url3 success:^(int status){
[remoteAPIWithURL:url2 success:^(int status){
//succes!!!
}];
}];
}];
}];
So for every iteration I go one level deeper, and I don't even handle errors in the nested blocks yet.
It gets worse when there is an actual loop. For instance, say I want to upload a file in 100 chunks:
- (void)continueUploadWithBlockNr:(int)blockNr
{
    if (blockNr >= 100)
    {
        //success!!!
        return;
    }
    [remoteAPIUploadFile:file withBlockNr:blockNr success:^(int status)
    {
        [self continueUploadWithBlockNr:blockNr + 1];
    }];
}
This feels very unintuitive, and gets very unreadable very quickly.
In .NET they solved all this using the async and await keywords, basically unrolling these continuations into a seemingly synchronous flow.
What is the best practice in Objective C?
Your question immediately made me think of recursion. It turns out Objective-C blocks can be used in recursion. So I came up with the following solution, which is easy to understand and scales to N tasks pretty nicely.
// __block declaration of the block makes it possible to call the block from within itself
__block void (^urlFetchBlock)();
// Neatly aggregate all the urls you wish to fetch
NSArray *urlArray = @[
    [NSURL URLWithString:@"http://www.google.com"],
    [NSURL URLWithString:@"http://www.stackoverflow.com"],
    [NSURL URLWithString:@"http://www.bing.com"],
    [NSURL URLWithString:@"http://www.apple.com"]
];
__block int urlIndex = 0;
// the 'recursive' block
urlFetchBlock = [^void () {
if (urlIndex < (int)[urlArray count]){
[self remoteAPIWithURL:[urlArray objectAtIndex:urlIndex]
success:^(int theStatus){
urlIndex++;
urlFetchBlock();
}
failure:^(){
// handle error.
}];
}
} copy];
// initiate the url requests
urlFetchBlock();
One way to reduce nesting is to define methods that return the individual blocks. To facilitate the data sharing that is otherwise done "auto-magically" by the Objective-C compiler through closures, you need to define a separate class to hold the shared state.
Here is a rough sketch of how this can be done:
typedef void (^WithStatus)(int);
@interface AsyncHandler : NSObject {
NSString *_sharedString;
NSURL *_innerUrl;
NSURL *_middleUrl;
WithStatus _innermostBlock;
}
+(void)handleRequest:(WithStatus)innermostBlock
outerUrl:(NSURL*)outerUrl
middleUrl:(NSURL*)middleUrl
innerUrl:(NSURL*)innerUrl;
-(WithStatus)outerBlock;
-(WithStatus)middleBlock;
@end
@implementation AsyncHandler
+(void)handleRequest:(WithStatus)innermostBlock
outerUrl:(NSURL*)outerUrl
middleUrl:(NSURL*)middleUrl
innerUrl:(NSURL*)innerUrl {
AsyncHandler *h = [[AsyncHandler alloc] init];
h->_innermostBlock = innermostBlock;
h->_innerUrl = innerUrl;
h->_middleUrl = middleUrl;
[remoteAPIWithURL:outerUrl success:[h outerBlock]];
}
-(WithStatus)outerBlock {
return ^(int success) {
_sharedString = [NSString stringWithFormat:@"Outer: %i", success];
[remoteAPIWithURL:_middleUrl success:[self middleBlock]];
};
}
-(WithStatus)middleBlock {
return ^(int success) {
NSLog("Shared string: %#", _sharedString);
[remoteAPIWithURL:_innerUrl success:_innermostBlock];
};
}
@end
Note: All of this assumes ARC; if you are compiling without it, you need to use Block_copy in the methods returning blocks. You would also need to do a copy in the calling code below.
Now your original function can be re-written without the "Russian doll" nesting, like this:
[AsyncHandler
handleRequest:^(int status){
//succes!!!
}
outerUrl:[NSURL URLWithString:@"http://my.first.url.com"]
middleUrl:[NSURL URLWithString:@"http://my.second.url.com"]
innerUrl:[NSURL URLWithString:@"http://my.third.url.com"]
];
Iterative algorithm:
Create a __block variable (int urlNum) to keep track of the current URL (inside an NSArray of them).
Have the onUrlComplete block fire off the next request until all URLs have been loaded.
Fire the first request.
When all URLs have been loaded, do the "//success!" dance.
Code written without the aid of XCode (meaning, there may be compiler errors -- will fix if necessary):
- (void)loadUrlsAsynchronouslyIterative:(NSArray *)urls {
__block int urlNum = 0;
void(^onUrlComplete)(int) = nil; //I don't remember if you can call a block from inside itself.
onUrlComplete = ^(int status) {
if (urlNum < urls.count) {
id nextUrl = urls[urlNum++];
[remoteAPIWithURL:nextUrl success:onUrlComplete];
} else {
//success!
}
};
onUrlComplete(0); //fire first request
}
Recursive algorithm:
Create a method to load all the remaining URLs.
When remaining URLs is empty, fire "onSuccess".
Otherwise, fire request for the next URL and provide a completion block that recursively calls the method with all but the first remaining URLs.
Complications: we declared the "onSuccess" block to accept an int status parameter, so we pass the last status variable down (including a "default" value).
Code written without the aid of XCode (bug disclaimer here):
- (void)loadUrlsAsynchronouslyRecursive:(NSArray *)remainingUrls onSuccess:(void(^)(int status))onSuccess lastStatus:(int)lastStatus {
if (remainingUrls.count == 0) {
onSuccess(lastStatus);
return;
}
id nextUrl = remainingUrls[0];
remainingUrls = [remainingUrls subarrayWithRange:NSMakeRange(1, remainingUrls.count-1)];
[remoteAPIWithUrl:nextUrl onSuccess:^(int status) {
[self loadUrlsAsynchronouslyRecursive:remainingUrls onSuccess:onSuccess lastStatus:status];
}];
}
//fire first request:
[self loadUrlsAsynchronouslyRecursive:urls onSuccess:^(int status) {
//success here!
} lastStatus:0];
Which is better?
The iterative algorithm is simple and concise -- if you're comfortable playing games with __block variables and scopes.
Alternatively, the recursive algorithm doesn't require __block variables and is fairly simple, as recursive algorithms go.
The recursive implementation is more re-usable than the iterative one (as implemented).
The recursive algorithm might leak (it requires a reference to self), but there are several ways to fix that: make it a function, use __weak id weakSelf = self;, etc.
How easy would it be to add error-handling?
The iterative implementation can easily be extended to check the value of status, at the cost of the onUrlComplete block becoming more complex.
The recursive implementation is perhaps not as straight-forward to extend -- primarily because it is re-usable. Do you want to cancel loading more URLs when the status is such-and-such? Then pass down a status-checking/error-handling block that accepts int status and returns BOOL (for example YES to continue, NO to cancel). Or perhaps modify onSuccess to accept both int status and NSArray *remainingUrls -- but you'll need to call loadUrlsAsynchronouslyRecursive... in your onSuccess block implementation.
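For instance, a sketch of that status-checking variant (it keeps the question's remoteAPIWithUrl:onSuccess: pseudo-API; the shouldContinue block is an assumption added for illustration):
- (void)loadUrlsAsynchronouslyRecursive:(NSArray *)remainingUrls
                         shouldContinue:(BOOL (^)(int status))shouldContinue
                              onSuccess:(void (^)(int status))onSuccess
                             lastStatus:(int)lastStatus {
    if (remainingUrls.count == 0) {
        onSuccess(lastStatus);
        return;
    }
    id nextUrl = remainingUrls[0];
    NSArray *rest = [remainingUrls subarrayWithRange:NSMakeRange(1, remainingUrls.count - 1)];
    [remoteAPIWithUrl:nextUrl onSuccess:^(int status) {
        if (!shouldContinue(status)) {
            return;   // cancel the remaining URLs; an error block could be invoked here instead
        }
        [self loadUrlsAsynchronouslyRecursive:rest
                               shouldContinue:shouldContinue
                                    onSuccess:onSuccess
                                   lastStatus:status];
    }];
}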
You said (in a comment), “asynchronous methods offer easy asynchronisity without using explicit threads.” But your complaint seems to be that you're trying to do something with asynchronous methods, and it's not easy. Do you see the contradiction here?
When you use a callback-based design, you sacrifice the ability to express your control flow directly using the language's built-in structures.
So I suggest you stop using a callback-based design. Grand Central Dispatch (GCD) makes it easy (that word again!) to perform work “in the background”, and then call back to the main thread to update the user interface. So if you have a synchronous version of your API, just use it in a background queue:
- (void)interactWithRemoteAPI:(id<RemoteAPI>)remoteAPI {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// This block runs on a background queue, so it doesn't block the main thread.
// But it can't touch the user interface.
for (NSURL *url in @[url1, url2, url3, url4]) {
int status = [remoteAPI syncRequestWithURL:url];
if (status != 0) {
dispatch_async(dispatch_get_main_queue(), ^{
// This block runs on the main thread, so it can update the
// user interface.
[self remoteRequestFailedWithURL:url status:status];
});
return;
}
}
});
}
Since we're just using normal control flow, it's straightforward to do more complicated things. Say we need to issue two requests, then upload a file in chunks of at most 100k, then issue one more request:
#define AsyncToMain(Block) dispatch_async(dispatch_get_main_queue(), Block)
- (void)uploadFile:(NSFileHandle *)fileHandle withRemoteAPI:(id<RemoteAPI>)remoteAPI {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
int status = [remoteAPI syncRequestWithURL:url1];
if (status != 0) {
AsyncToMain(^{ [self remoteRequestFailedWithURL:url1 status:status]; });
return;
}
status = [remoteAPI syncRequestWithURL:url2];
if (status != 0) {
AsyncToMain(^{ [self remoteRequestFailedWithURL:url2 status:status]; });
return;
}
while (1) {
// Manage an autorelease pool to avoid accumulating all of the
// 100k chunks in memory simultaneously.
@autoreleasepool {
NSData *chunk = [fileHandle readDataOfLength:100 * 1024];
if (chunk.length == 0)
break;
status = [remoteAPI syncUploadChunk:chunk];
if (status != 0) {
AsyncToMain(^{ [self sendChunkFailedWithStatus:status]; });
return;
}
}
}
status = [remoteAPI syncRequestWithURL:url4];
if (status != 0) {
AsyncToMain(^{ [self remoteRequestFailedWithURL:url4 status:status]; });
return;
}
AsyncToMain(^{ [self uploadFileSucceeded]; });
});
}
Now I'm sure you're saying “Oh yeah, that looks great.” ;^) But you might also be saying “What if RemoteAPI only has asynchronous methods, not synchronous methods?”
We can use GCD to create a synchronous wrapper for an asynchronous method. We need to make the wrapper call the async method, then block until the async method calls the callback. The tricky bit is that perhaps we don't know which queue the async method uses to invoke the callback, and we don't know if it uses dispatch_sync to call the callback. So let's be safe by calling the async method from a concurrent queue.
- (int)syncRequestWithRemoteAPI:(id<RemoteAPI>)remoteAPI url:(NSURL *)url {
__block int outerStatus;
dispatch_semaphore_t sem = dispatch_semaphore_create(0);
[remoteAPI asyncRequestWithURL:url completion:^(int status) {
outerStatus = status;
dispatch_semaphore_signal(sem);
}];
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
dispatch_release(sem);
return outerStatus;
}
UPDATE
I will respond to your third comment first, and your second comment second.
Third Comment
Your third comment:
Last but not least, your solution of dedicating a separate thread to wrap around the synchronous version of a call is more costly than using the async alternatives. a Thread is an expensive resource, and when it is blocking you basically have lost one thread. Async calls (the ones in the OS libraries at least) are typically handled in a much more efficient way. (For instance, if you would request 10 urls at the same time, chances are it will not spin up 10 threads (or put them in a threadpool))
Yes, using a thread is more expensive than just using the asynchronous call. So what? The question is whether it's too expensive. Objective-C messages are too expensive in some scenarios on current iOS hardware (the inner loops of a real-time face detection or speech recognition algorithm, for example), but I have no qualms about using them most of the time.
Whether a thread is “an expensive resource” really depends on the context. Let's consider your example: “For instance, if you would request 10 urls at the same time, chances are it will not spin up 10 threads (or put them in a threadpool)”. Let's find out.
NSURL *url = [NSURL URLWithString:@"http://1.1.1.1/"];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
for (int i = 0; i < 10; ++i) {
[NSURLConnection sendAsynchronousRequest:request queue:[NSOperationQueue mainQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
NSLog(#"response=%# error=%#", response, error);
}];
}
So here I am using Apple's own recommended +[NSURLConnection sendAsynchronousRequest:queue:completionHandler:] method to send 10 requests asynchronously. I've chosen the URL to be non-responsive, so I can see exactly what kind of thread/queue strategy Apple uses to implement this method. I ran the app on my iPhone 4S running iOS 6.0.1, paused in the debugger, and took a screen shot of the Thread Navigator:
You can see that there are 10 threads labeled com.apple.root.default-priority. I've opened three of them so you can see that they are just normal GCD queue threads. Each calls a block defined in +[NSURLConnection sendAsynchronousRequest:…], which just turns around and calls +[NSURLConnection sendSynchronousRequest:…]. I checked all 10, and they all have the same stack trace. So, in fact, the OS library does spin up 10 threads.
I bumped the loop count from 10 to 100 and found that GCD caps the number of com.apple.root.default-priority threads at 64. So my guess is the other 36 requests I issued are queued up in the global default-priority queue, and won't even start executing until some of the 64 “running” requests finish.
So, is it too expensive to use a thread to turn an asynchronous function into a synchronous function? I'd say it depends on how many of these you plan to do simultaneously. I would have no qualms if the number's under 10, or even 20.
Second Comment
Which brings me to your second comment:
However, when you have: do these 3 things at the same time, and when 'any' of them is finished then ignore the rest; and do these 3 calls at the same time and when 'all' of them finish then success.
These are cases where it's easy to use GCD, but we can certainly combine the GCD and async approaches to use fewer threads if you want, while still using the language's native tools for control flow.
First, we'll make a typedef for the remote API completion block, just to save typing later:
typedef void (^RemoteAPICompletionBlock)(int status);
I'll start the control flow the same way as before, by moving it off the main thread to a concurrent queue:
- (void)complexFlowWithRemoteAPI:(id<RemoteAPI>)remoteAPI {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
First we want to issue three requests simultaneously and wait for one of them to succeed (or, presumably, for all three to fail).
So let's say we have a function, statusOfFirstRequestToSucceed, that issues any number of asynchronous remote API requests and waits for the first to succeed. This function will provide the completion block for each async request. But the different requests might take different arguments… how can we pass the API requests to the function?
We can do it by passing a literal block for each API request. Each literal block takes the completion block and issues the asynchronous remote API request:
int status = statusOfFirstRequestToSucceed(@[
^(RemoteAPICompletionBlock completion) {
[remoteAPI requestWithCompletion:completion];
},
^(RemoteAPICompletionBlock completion) {
[remoteAPI anotherRequestWithCompletion:completion];
},
^(RemoteAPICompletionBlock completion) {
[remoteAPI thirdRequestWithCompletion:completion];
}
]);
if (status != 0) {
AsyncToMain(^{ [self complexFlowFailedOnFirstRoundWithStatus:status]; });
return;
}
OK, now we've issued the three first parallel requests and waited for one to succeed, or for all of them to fail. Now we want to issue three more parallel requests and wait for all to succeed, or for one of them to fail. So it's almost identical, except I'm going to assume a function statusOfFirstRequestToFail:
status = statusOfFirstRequestToFail(@[
^(RemoteAPICompletionBlock completion) {
[remoteAPI requestWithCompletion:completion];
},
^(RemoteAPICompletionBlock completion) {
[remoteAPI anotherRequestWithCompletion:completion];
},
^(RemoteAPICompletionBlock completion) {
[remoteAPI thirdRequestWithCompletion:completion];
}
]);
if (status != 0) {
AsyncToMain(^{ [self complexFlowFailedOnSecondRoundWithStatus:status]; });
return;
}
Now both rounds of parallel requests have finished, so we can notify the main thread of success:
[self complexFlowSucceeded];
});
}
Overall, that seems like a pretty straightforward flow of control to me, and we just need to implement statusOfFirstRequestToSucceed and statusOfFirstRequestToFail. We can implement them with no extra threads. Since they are so similar, we'll make them both call on a helper function that does the real work:
static int statusOfFirstRequestToSucceed(NSArray *requestBlocks) {
return statusOfFirstRequestWithStatusPassingTest(requestBlocks, ^BOOL (int status) {
return status == 0;
});
}
static int statusOfFirstRequestToFail(NSArray *requestBlocks) {
return statusOfFirstRequestWithStatusPassingTest(requestBlocks, ^BOOL (int status) {
return status != 0;
});
}
In the helper function, I'll need a queue in which to run the completion blocks, to prevent race conditions:
static int statusOfFirstRequestWithStatusPassingTest(NSArray *requestBlocks,
BOOL (^statusTest)(int status))
{
dispatch_queue_t completionQueue = dispatch_queue_create("remote API completion", 0);
Note that I will only put blocks on completionQueue using dispatch_sync, and dispatch_sync always runs the block on the current thread unless the queue is the main queue.
I'll also need a semaphore, to wake up the outer function when some request has completed with a passing status, or when all requests have finished:
dispatch_semaphore_t enoughJobsCompleteSemaphore = dispatch_semaphore_create(0);
I'll keep track of the number of jobs not yet finished and the status of the last job to finish:
__block int jobsLeft = requestBlocks.count;
__block int outerStatus = 0;
When jobsLeft becomes 0, it means that either I've set outerStatus to a status that passes the test, or that all jobs have completed. Here's the completion block where I'll do the work of tracking whether I'm done waiting. I do it all on completionQueue to serialize access to jobsLeft and outerStatus, in case the remote API dispatches multiple completion blocks in parallel (on separate threads or on a concurrent queue):
RemoteAPICompletionBlock completionBlock = ^(int status) {
dispatch_sync(completionQueue, ^{
I check to see if the outer function is still waiting for the current job to complete:
if (jobsLeft == 0) {
// The outer function has already returned.
return;
}
Next, I decrement the number of jobs remaining and make the completed job's status available to the outer function:
--jobsLeft;
outerStatus = status;
If the completed job's status passes the test, I set jobsLeft to zero to prevent other jobs from overwriting my status or signaling the outer function:
if (statusTest(status)) {
// We have a winner. Prevent other jobs from overwriting my status.
jobsLeft = 0;
}
If there are no jobs left to wait on (because they've all finished or because this job's status passed the test), I wake up the outer function:
if (jobsLeft == 0) {
dispatch_semaphore_signal(enoughJobsCompleteSemaphore);
}
Finally, I release the queue and the semaphore. (The matching retains come later, when I loop through the request blocks to execute them.)
dispatch_release(completionQueue);
dispatch_release(enoughJobsCompleteSemaphore);
});
};
That's the end of the completion block. The rest of the function is trivial. First I execute each request block, and I retain the queue and the semaphore to prevent dangling references:
for (void (^requestBlock)(RemoteAPICompletionBlock) in requestBlocks) {
dispatch_retain(completionQueue); // balanced in completionBlock
dispatch_retain(enoughJobsCompleteSemaphore); // balanced in completionBlock
requestBlock(completionBlock);
}
Note that the retains aren't necessary if you're using ARC and your deployment target is iOS 6.0 or later.
Then I just wait for one of the jobs to wake me up, release the queue and the semaphore, and return the status of the job that woke me:
dispatch_semaphore_wait(enoughJobsCompleteSemaphore, DISPATCH_TIME_FOREVER);
dispatch_release(completionQueue);
dispatch_release(enoughJobsCompleteSemaphore);
return outerStatus;
}
Note that the structure of statusOfFirstRequestWithStatusPassingTest is fairly generic: you can pass any request blocks you want, as long as each one calls the completion block and passes in an int status. You could modify the function to handle a more complex result from each request block, or to cancel outstanding requests (if you have a cancellation API).
While researching this myself I bumped into a port of Reactive Extensions to Objective-C. Reactive Extensions is like having the ability to query a set of events or asynchronous operations. I know it has had a big uptake in .NET and JavaScript, and now apparently there is a port for Objective-C as well:
https://github.com/blog/1107-reactivecocoa-for-a-better-world
The syntax looks tricky. I wonder if there is real-world experience with it for iPhone development and whether it actually solves this issue elegantly.
I tend to wrap big nested block cluster f**** like you describe in subclasses of NSOperation that describe what the overall behaviour that your big nest block cluster f*** is actually doing (rather than leaving them littered throughout other code).
For example if your following code:
[remoteAPIWithURL:url1 success:^(int status){
[remoteAPIWithURL:url2 success:^(int status){
[remoteAPIWithURL:url3 success:^(int status){
[remoteAPIWithURL:url2 success:^(int status){
//succes!!!
}];
}];
}];
}];
is intended to get an authorisation token and then sync something, perhaps it would be an NSAuthorizedSyncOperation… I'm sure you get the gist. The benefit is nice, tidy bundles of behaviour wrapped up in a class, with one place to edit them if things change down the line. My 2¢.
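A rough sketch of what such an operation might look like, using the standard concurrent-NSOperation boilerplate (the class name is illustrative and the remoteAPIWithURL: calls are just the question's pseudo-API; adapt them to your real networking code):
@interface AuthorizedSyncOperation : NSOperation
@end

@implementation AuthorizedSyncOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isConcurrent { return YES; }   // isAsynchronous on newer SDKs
- (BOOL)isExecuting  { return _executing; }
- (BOOL)isFinished   { return _finished; }

- (void)start {
    if (self.isCancelled) { [self finish]; return; }
    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    // The nested calls live here, in one named place, instead of being
    // scattered through controller code.
    [remoteAPIWithURL:url1 success:^(int status) {
        [remoteAPIWithURL:url2 success:^(int status) {
            // ... the rest of the chain ...
            [self finish];
        }];
    }];
}

- (void)finish {
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}
@end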
In NSDocument the following methods are available for serialization:
Serialization
– continueActivityUsingBlock:
– continueAsynchronousWorkOnMainThreadUsingBlock:
– performActivityWithSynchronousWaiting:usingBlock:
– performAsynchronousFileAccessUsingBlock:
– performSynchronousFileAccessUsingBlock:
I'm just digging into this, but it seems like this would be a good place to start.
Not sure if that is what you were looking for? Though all objects in the array need different amounts of time to complete, they all appear in the order they were submitted to the queue.
typedef int(^SumUpTill)(int);
SumUpTill sum = ^(int max){
int i = 0;
int result = 0;
while (i < max) {
result += i++;
}
return result;
};
dispatch_queue_t queue = dispatch_queue_create("com.dispatch.barrier.async", DISPATCH_QUEUE_CONCURRENT);
NSArray *urlArray = @[ [NSURL URLWithString:@"http://www.google.com"],
                       @"Test",
                       [sum copy],
                       [NSURL URLWithString:@"http://www.apple.com"]
                     ];
[urlArray enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
dispatch_barrier_async(queue, ^{
if ([obj isKindOfClass:[NSURL class]]) {
NSURLRequest *request = [NSURLRequest requestWithURL:obj];
NSURLResponse *response = nil;
NSError *error = nil;
[NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
NSLog(#"index = %d, response=%# error=%#", idx, response, error);
}
else if ([obj isKindOfClass:[NSString class]]) {
NSLog(#"index = %d, string %#", idx, obj);
}
else {
NSInteger result = ((SumUpTill)obj)(1000000);
NSLog(#"index = %d, result = %d", idx, result);
}
});
}];