Is there a framework or method to handle multiple concurrent UILocalNotifications?
I have multiple concurrent UILocalNotifications that get turned into UIAlertControllers when the application is active. Is there some way to buffer these alerts so they don't all fire at the same time? Perhaps a pre-existing framework or method that would buffer them.
There is no framework for this. You need to write the code yourself, and I would recommend presenting the alerts on a serial queue. Here is an example of a serial queue that you may be able to use as a kind of template:
dispatch_queue_t serialQueue = dispatch_queue_create("com.unique.name.queue", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{
[self ReadAllImagesFromPhotosLibrary];
});
dispatch_async(serialQueue, ^{
[self WriteFewImagestoDirectory];
});
dispatch_async(serialQueue, ^{
[self GettingBackAllImagesFromFolder];
});
dispatch_async(serialQueue, ^{
[self MoveToNextView];
});
Related
I have a series of dispatch_async calls that I am performing, and I would like to update the UI only when they are all done. The problem is that the method within dispatch_async calls something on a separate thread, so it returns before the data is fully loaded, and dispatch_group_notify is called before everything is loaded.
So I introduced an infinite loop to make it wait until a flag is set.
Is this the best way? See the code below.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_group_t group = dispatch_group_create();
for (...) {
dispatch_group_async(group, queue, ^{
__block BOOL dataLoaded = NO;
[self thirdPartyCodeCallWithCompletion:^{
dataLoaded = YES;
}];
// prevent infinite loop
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)),
queue, ^{
dataLoaded = YES;
});
// infinite loop to wait until data is loaded
while (1) {
if (dataLoaded) break;
}
}
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
//update UI
});
}
You're already aware of dispatch groups. Why not just use dispatch_group_wait(), which includes support for a timeout? You can use dispatch_group_enter() and dispatch_group_leave() rather than dispatch_group_async() so that the group isn't considered done until the completion block of the third-party call has run.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_group_t group = dispatch_group_create();
for (...) {
dispatch_group_enter(group);
dispatch_async(queue, ^{
[self thirdPartyCodeCallWithCompletion:^{
dispatch_group_leave(group);
}];
});
}
dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC));
dispatch_async(dispatch_get_main_queue(), ^{
//update UI
});
The use of dispatch_group_wait() does make this code synchronous, which is bad if it runs on the main thread. Depending on what exactly is supposed to happen on a timeout, you could use dispatch_group_notify() as you were and use dispatch_after() to just update the UI rather than trying to pretend the block completed.
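For example, a non-blocking variant along those lines might look like the following sketch, which assumes a one-second timeout is acceptable (in practice you would also need a flag so the UI isn't updated twice if both blocks run):

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // update UI: runs once every dispatch_group_leave() has been called
});

// Hypothetical timeout handling: after one second, update the UI anyway
// (or show a timeout state) without blocking any thread
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    // update UI / handle the timeout here
});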
Update: I tweaked my code to make sure that "update UI" happens on the main queue, just in case this code isn't already on the main thread.
By the way, I only used dispatch_async() for the block which calls thirdPartyCodeCallWithCompletion: because your original used dispatch_group_async() and I wasn't sure that the hypothetical method was asynchronous. Most APIs which take a completion block are asynchronous, though. If that one is, then you can just invoke it directly.
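If it is asynchronous, the loop body reduces to something like this sketch (still using the hypothetical thirdPartyCodeCallWithCompletion: from the question):

for (...) {
    dispatch_group_enter(group);
    // No surrounding dispatch_async needed: the call returns immediately
    // and its completion block leaves the group when the work is done
    [self thirdPartyCodeCallWithCompletion:^{
        dispatch_group_leave(group);
    }];
}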
Another approach is to use a semaphore and dispatch_semaphore_wait():
// Create your semaphore; 0 specifies the initial value (the resource count)
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
@autoreleasepool {
// Your code goes here
}
// Release the resource and signal the semaphore
dispatch_semaphore_signal(semaphore);
});
// Wait for the block above to finish, i.e. wait for (decrement) the semaphore
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
// After this line you can now safely assert anything you want regarding the async operation since it is done.
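Since the original question wanted a timeout, note that dispatch_semaphore_wait() accepts one as well; a sketch with a one-second timeout instead of DISPATCH_TIME_FOREVER might look like this:

// Returns non-zero if the timeout elapsed before dispatch_semaphore_signal() was called
long timedOut = dispatch_semaphore_wait(semaphore,
                                        dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)));
if (timedOut) {
    NSLog(@"async operation did not finish within one second");
}

As with dispatch_group_wait(), this blocks the calling thread, so avoid doing it on the main thread.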
I am new to asynchronous callbacks and have been given differing advice. I need to perform asynchronous callbacks and after many hours of research I still do not know if I should use blocks or GCD and queues. Any pointers would be welcome.
OK. So what I was really asking is:
"in order to use an 'asynchronous' callbacks, do I need to use GCD and queues?"
What I am gathering from the answers is that the answer is YES. Definitely GCD and queues inside of blocks.
My confusion stemmed from the fact that I had been given the direction that all I needed was a block, like the code below:
[UIView animateWithDuration:.4f
animations:^{
flashView.alpha = 0.f;
}
completion:^(BOOL finished){
[flashView removeFromSuperview];
}
];
But what I am seeing in the answers here is that the above block is not sufficient for making 'asynchronous' callbacks. Instead I DO in fact need to use GCD and queues inside a block, like the code below:
- (void)invokeAsync:(id (^)(void))asyncBlock resultBlock:(void (^)(id))resultBlock errorBlock:(void (^)(id))errorBlock {
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
id result = nil;
id error = nil;
@try {
result = asyncBlock();
} @catch (NSException *exception) {
NSLog(@"caught exception: %@", exception);
error = exception;
}
// tell the main thread
dispatch_async(dispatch_get_main_queue(), ^{
NSAutoreleasePool *secondaryPool = [[NSAutoreleasePool alloc] init];
if (error != nil) {
errorBlock(error);
} else {
resultBlock(result);
}
[secondaryPool release];
});
[pool release];
});
}
An asynchronous callback is one where your current thread keeps executing statements, and you detach the execution of code to a different thread to be run later.
There are several technologies to accomplish this. In this example I'm calling the method cacheImage: with the parameter image (just an example) in 4 different asynchronous ways.
// 1. NSThread
[NSThread detachNewThreadSelector:@selector(cacheImage:) toTarget:self withObject:image];
// 2. performSelector...
[self performSelectorInBackground:@selector(cacheImage:) withObject:image];
// 3. NSOperationQueue
NSInvocationOperation *invOperation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(cacheImage:) object:image];
NSOperationQueue *opQueue = [[NSOperationQueue alloc] init];
[opQueue addOperation:invOperation];
// 4. GCD
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self cacheImage:image];
});
By far the simplest way is to use GCD, because it already has a thread ready for you to use, instead of you having to create one yourself as with the other options.
However, because blocks are implemented as objects, you could indeed use blocks without GCD, for example:
// block definition
typedef void (^hello_t)();
// method that uses a block as parameter
-(void) runBlock:(hello_t)hello {
hello();
}
// asynchronous execution of a block
[NSThread detachNewThreadSelector:@selector(runBlock:) toTarget:self withObject:^(){
NSLog(@"hi");
}];
PS: you don't need to use NSAutoreleasePool manually unless you create many objects and want to free memory immediately. Also, @try/@catch are rarely used in Objective-C.
There is no real point of confusion here: GCD also executes blocks.
The GCD API supports the asynchronous execution of operations at the Unix level of the system.
Block objects are a C-level syntactic and runtime feature. They are similar to standard C functions, but in addition to executable code they may also contain variable bindings to automatic (stack) or managed (heap) memory. A block can therefore maintain a set of state (data) that it can use to impact behavior when executed.
Apple designed blocks with the explicit goal of making it easier to write programs for the Grand Central Dispatch threading architecture, although it is independent of that architecture and can be used in much the same way as closures in other languages. Apple has implemented blocks both in their own branch of the GNU Compiler Collection and in the Clang LLVM compiler front end. Language runtime library support for blocks is also available as part of the LLVM project.
Therefore you can use either of them; they give you the same functionality.
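To make that concrete, here is a small sketch showing that the same block object can be invoked directly or handed to GCD for asynchronous execution:

// A block is an object; the same block can be run directly or dispatched
void (^work)(void) = ^{
    NSLog(@"doing the work");
};

work(); // plain, synchronous invocation on the current thread

// asynchronous invocation via GCD on a background queue
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), work);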
Suppose I call dispatch_async() three times in order:
dispatch_async(dispatch_get_main_queue(),
^{
[self doOne];
});
// some code here
dispatch_async(dispatch_get_main_queue(),
^{
[self doTwo];
});
// more code here
dispatch_async(dispatch_get_main_queue(),
^{
[self doThree];
});
Will this always execute as [self doOne], [self doTwo], then [self doThree], or is the order not guaranteed?
In this case, the question really is whether the main queue is serial or concurrent.
From the documentation:
dispatch_get_main_queue
Returns the serial dispatch queue associated with the application’s main thread.
so the main queue is a serial queue, and [self doOne], [self doTwo], [self doThree] are executed sequentially in that order.
Concurrency Programming Guide, About Dispatch Queues:
The main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. [emphasis mine]
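By contrast, a sketch of the same calls against a concurrent queue (for example a global queue) shows where ordering would not be guaranteed: the blocks are dequeued in FIFO order, but they may run concurrently and finish in any order.

dispatch_queue_t concurrent = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
// Dequeued in order, but these may run at the same time and finish in any order
dispatch_async(concurrent, ^{ [self doOne]; });
dispatch_async(concurrent, ^{ [self doTwo]; });
dispatch_async(concurrent, ^{ [self doThree]; });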
I have a method that I add to a GCD queue that I have created (so it's a serial queue) and then run asynchronously. From within that block of code I dispatch to the main queue; when the block dispatched to the main queue is complete, I set a BOOL flag to YES, so that further down in my code I can check whether this flag is YES and then continue to the next method. Here is the code in short:
dispatch_queue_t queue = dispatch_queue_create("ProcessSerialQueue", 0);
dispatch_async(queue, ^{
Singleton *s = [Singleton sharedInstance];
dispatch_sync(dispatch_get_main_queue(), ^{
[s processWithCompletionBlock:^{
// Process is complete
processComplete = YES;
}];
});
});
while (!processComplete) {
NSLog(#"Waiting");
}
NSLog(#"Ready for next step");
However this does not work, because dispatch_sync is never able to run the code on the main queue. Is this because I'm running a while loop on the main queue (rendering it busy)?
However if I change the implementation of the while loop to this:
while (!processComplete) {
NSLog(#"Waiting")
NSDate *date = [NSDate distantFuture];
[[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:date];
}
It works without a glitch. Is this an acceptable solution for this scenario? Is there another, preferred way to do it? What kind of magic does NSRunLoop do? I need to understand this better.
Part of the main thread's NSRunLoop job is to run any blocks queued on the main thread. By spinning in a while-loop, you're preventing the runloop from progressing, so the queued blocks are never run unless you explicitly make the loop run yourself.
Runloops are a fundamental part of Cocoa, and the documentation is pretty good, so I'd recommend reading it.
As a rule, I'd avoid manually invoking the runloop as you're doing. You'll waste memory and make things complicated very quickly if you have multiple manual invocations running on top of one another.
However, there is a much better way of doing this. Split your method into a -process and a -didProcess method. Start the async operation with your -process method, and when it completes, call -didProcess from the completion block. If you need to pass variables from one method to the other, you can pass them as arguments to your -didProcess method.
Eg:
dispatch_queue_t queue = dispatch_queue_create("ProcessSerialQueue", 0);
dispatch_async(queue, ^{
Singleton *s = [Singleton sharedInstance];
dispatch_sync(dispatch_get_main_queue(), ^{
[s processWithCompletionBlock:^{
[self didProcess];
}];
});
});
You might also consider making your singleton own the dispatch queue and make it responsible for handling the dispatch_async stuff, as it'll save on all those nasty embedded blocks if you're always using it asynchronously.
Eg:
[[Singleton sharedInstance] processAsyncWithCompletionBlock:^{
NSLog(@"Ready for next step...");
[self didProcess];
}];
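For completeness, a sketch of what such a wrapper on the singleton might look like (a hypothetical implementation; the real Singleton class isn't shown in the question, and processQueue would be a serial queue the singleton owns):

- (void)processAsyncWithCompletionBlock:(void (^)(void))completion {
    dispatch_async(self.processQueue, ^{
        // ... do the actual processing work here ...
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(); // call back on the main queue when done
            }
        });
    });
}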
Doing something like what you posted will most likely freeze the UI. Rather than freezing up everything, call your "next step" code in a completion block.
Example:
dispatch_queue_t queue = dispatch_queue_create("ProcessSerialQueue", 0);
dispatch_queue_t main = dispatch_get_main_queue();
dispatch_async(queue, ^{
Singleton *s = [Singleton sharedInstance];
dispatch_async(main, ^{
[s processWithCompletionBlock:^{
// Next step code
}];
});
});
Don't go creating a loop like that waiting for a value set inside a block; captured variables in blocks are read-only unless marked __block. Instead, call your completion code from inside the block.
dispatch_async(queue, ^{
Singleton *s = [Singleton sharedInstance];
[s processWithCompletionBlock:^{
//process is complete
dispatch_sync(dispatch_get_main_queue(), ^{
//do something on main queue....
NSLog(#"Ready for next step");
});
}];
});
NSLog(#"waiting");
I'm trying to acquire data from the internet when my view loads. To avoid lagging the UI, I'm performing the HTML download and parsing by using
[self performSelectorInBackground:@selector(alertThreadMethod) withObject:nil];
which checks to see if there is an alert online. In order to display the information on the view, however, iOS requires that I use the main thread. So I call the display code right after:
[self performSelectorInBackground:@selector(alertThreadMethod) withObject:nil];
[self loadAlert];
In doing this, [self loadAlert]; actually runs before the background selector has finished (it is faster). Because of this, it does not have the information that the background selector is supposed to provide.
How can I ensure that [self loadAlert]; runs after? Or is there a better way to do this?
You can either move the loadAlert invocation into alertThreadMethod or use a Grand Central Dispatch serial queue, e.g.:
dispatch_queue_t queue = dispatch_queue_create("com.example.MyQueue", NULL);
dispatch_async(queue, ^{
[self alertThreadMethod];
[self loadAlert];
});
dispatch_release(queue);
Or, if loadAlert is updating the UI, since you do UI updates in the main queue, you'd do something like:
dispatch_queue_t queue = dispatch_queue_create("com.example.MyQueue", NULL);
dispatch_async(queue, ^{
[self alertThreadMethod];
dispatch_async(dispatch_get_main_queue(), ^{
[self loadAlert];
});
});
dispatch_release(queue);
By the way, if you're just doing this one task in the background, rather than creating your own serial queue, you might just use one of the existing background queues. You only need to create a queue if you need the serial nature (i.e. you're going to be making numerous dispatch_async calls and you can't have them running concurrently). But in this simple case, using a global queue might even be a little more efficient, bypassing the creation and release of the serial queue:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
[self alertThreadMethod];
dispatch_async(dispatch_get_main_queue(), ^{
[self loadAlert];
});
});
In your alertThreadMethod, after you have your information, call the method performSelectorOnMainThread:withObject:waitUntilDone: and pass it a selector to your loadAlert method.
-(void)alertThreadMethod
{
// get your information here
[self performSelectorOnMainThread:@selector(loadAlert) withObject:nil waitUntilDone:NO];
}