I have a question about this code:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:kLatestKivaLoansURL];
    [self performSelectorOnMainThread:@selector(fetchedData:)
                           withObject:data
                        waitUntilDone:YES];
});
The first parameter of this code is
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
Are we asking this code to perform serial tasks on the global queue, even though that function's own definition says it returns a global concurrent queue of a given priority level?
What is the advantage of using dispatch_get_global_queue over the main queue?
I am confused. Could you please help me understand this better?
The main reason you use the default queue over the main queue is to run tasks in the background.
For instance, if I am downloading a file from the internet and I want to update the user on the progress of the download, I will run the download in the default-priority global queue and update the UI on the main queue asynchronously.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
    // Background Thread
    dispatch_async(dispatch_get_main_queue(), ^(void){
        // Run UI Updates
    });
});
All of the DISPATCH_QUEUE_PRIORITY_X queues are concurrent queues (meaning they can execute multiple tasks at once), and are FIFO in the sense that tasks within a given queue will begin executing using "first in, first out" order. This is in comparison to the main queue (from dispatch_get_main_queue()), which is a serial queue (tasks will begin executing and finish executing in the order in which they are received).
So, if you send 1000 dispatch_async() blocks to DISPATCH_QUEUE_PRIORITY_DEFAULT, those tasks will start executing in the order you sent them into the queue. Likewise for the HIGH, LOW, and BACKGROUND queues. Anything you send into any of these queues is executed in the background on alternate threads, away from your main application thread. Therefore, these queues are suitable for executing tasks such as background downloading, compression, computation, etc.
Note that the order of execution is FIFO on a per-queue basis. So if you send 1000 dispatch_async() tasks to the four different concurrent queues, evenly splitting them and sending them to BACKGROUND, LOW, DEFAULT and HIGH in order (i.e. you schedule the last 250 tasks on the HIGH queue), it's very likely that the first tasks you see starting will be on that HIGH queue, because the system has taken your hint that those tasks need to get to the CPU as quickly as possible.
Note also that I say "will begin executing in order", but keep in mind that, as concurrent queues, tasks won't necessarily FINISH executing in order; that depends on the length of each task.
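As a minimal sketch of that FIFO-start, out-of-order-finish behavior (the loop count and the simulated sleep are just for illustration):
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for (int i = 0; i < 4; i++) {
    dispatch_async(queue, ^{
        // Blocks are dequeued FIFO, so the "start" logs appear in order 0, 1, 2, 3...
        NSLog(@"start %d", i);
        [NSThread sleepForTimeInterval:arc4random_uniform(500) / 1000.0]; // simulate work of varying length
        // ...but on a concurrent queue the "finish" logs can appear in any order.
        NSLog(@"finish %d", i);
    });
}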
As per Apple:
https://developer.apple.com/library/content/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
A concurrent dispatch queue is useful when you have multiple tasks that can run in parallel. A concurrent queue is still a queue in that it dequeues tasks in a first-in, first-out order; however, a concurrent queue may dequeue additional tasks before any previous tasks finish. The actual number of tasks executed by a concurrent queue at any given moment is variable and can change dynamically as conditions in your application change. Many factors affect the number of tasks executed by the concurrent queues, including the number of available cores, the amount of work being done by other processes, and the number and priority of tasks in other serial dispatch queues.
Basically, if you send those 1000 dispatch_async() blocks to a DEFAULT, HIGH, LOW, or BACKGROUND queue, they will all start executing in the order you send them. However, shorter tasks may finish before longer ones. This happens when CPU cores are available, or when the currently running tasks are doing computationally non-intensive work (which lets the system dispatch additional tasks in parallel regardless of core count).
The level of concurrency is handled entirely by the system and is based on system load and other internally determined factors. This is the beauty of Grand Central Dispatch (the dispatch_async() system) - you just make your work units as code blocks, set a priority for them (based on the queue you choose) and let the system handle the rest.
So to answer your above question: you are partially correct. You are "asking that code" to perform concurrent tasks on a global concurrent queue at the specified priority level. The code in the block will execute in the background and any additional (similar) code will execute potentially in parallel depending on the system's assessment of available resources.
The "main" queue on the other hand (from dispatch_get_main_queue()) is a serial queue (not concurrent). Tasks sent to the main queue will always execute in order and will always finish in order. These tasks will also be executed on the UI Thread so it's suitable for updating your UI with progress messages, completion notifications, etc.
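A tiny sketch of that ordering guarantee:
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"first"); // main-queue tasks run one at a time, in order...
});
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"second"); // ...and always on the main (UI) thread
});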
Swift version
This is the Swift version of David's Objective-C answer. You use the global queue to run things in the background and the main queue to update the UI.
DispatchQueue.global(qos: .background).async {
    // Background Thread
    DispatchQueue.main.async {
        // Run UI Updates
    }
}
Related
Suppose I have a semaphore to control access to a dispatch_queue_t.
I wait for the semaphore (dispatch_semaphore_wait) before scheduling a block on the dispatch queue.
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
dispatch_async(queue, ^{
    // do work ...
    dispatch_semaphore_signal(semaphore);
});
Suppose I have work waiting in several separate locations. Some "work" has a higher priority than other "work".
Is there a way to control which of the "work" will be scheduled next?
Additional information: using a serial queue without a semaphore is not an option for me, because each piece of "work" consists of its own queue with several blocks. All of a work queue has to run, or none of it. No two work queues can run simultaneously. I have all of this working fine, except for the priority control.
Edit: (in response to Jeremy, moved from comments)
Ok, suppose you have a device/file/whatever like a printer. A print job consists of multiple function calls/blocks (print header, then print figure, then print text,...) grouped together in a transaction. Put these blocks on a serial queue. One queue per transaction.
However, you can have multiple print jobs/transactions. Blocks from different print jobs/transactions cannot be mixed. So how do you ensure that a transaction queue runs all of its jobs, and that one transaction queue is not started before another has finished? (I am not printing, just using this as an example.)
Semaphores are used to regulate the use of finite resources.
https://www.mikeash.com/pyblog/friday-qa-2009-09-25-gcd-practicum.html
Concurrency Programming Guide
The next step I am trying to figure out is how to run one transaction before another.
You are misusing the API here. You should not be using semaphores to control what gets scheduled to dispatch queues.
If you want to serialize execution of blocks on the queue, then use a serial queue rather than a concurrent queue.
If different blocks that you are enqueuing have different priorities, then you should express that using the QoS mechanisms added in OS X 10.10 and iOS 8.0. If you need to run on older systems, then you can use the different-priority global concurrent queues for appropriate work. Beyond that, there isn't much control on older systems.
Furthermore, semaphores inherently work against priority inheritance since there is no way for the system to determine who will signal the semaphore and thus you can easily end up in a situation where a higher priority thread will be blocked for a long time waiting for a lower priority thread to signal the semaphore. This is called priority inversion.
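As a minimal sketch of the QoS approach mentioned above (the queue label and the work blocks are hypothetical):
dispatch_queue_t queue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL); // hypothetical label

// Tag individual blocks with different QoS classes instead of gating them with a semaphore.
dispatch_block_t important = dispatch_block_create_with_qos_class(DISPATCH_BLOCK_ENFORCE_QOS_CLASS, QOS_CLASS_USER_INITIATED, 0, ^{
    // higher-priority work
});
dispatch_block_t routine = dispatch_block_create_with_qos_class(DISPATCH_BLOCK_ENFORCE_QOS_CLASS, QOS_CLASS_UTILITY, 0, ^{
    // lower-priority work
});

dispatch_async(queue, routine);
dispatch_async(queue, important);
Note that a serial queue still dequeues FIFO; the QoS tags affect how urgently the work is scheduled (and let GCD boost earlier blocks to avoid priority inversion) rather than reordering the queue.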
If I need to create a large number of queues (say 10+ queues for image loading), is it faster to use the global concurrent queue or to create the same number of private dispatch queues? On a quad-core CPU, is the global concurrent queue limited to running four tasks concurrently, effectively serializing subsequently queued tasks?
I'd suggest creating your own concurrent queue which constrains how many concurrent operations are permitted. For example, you could create a single concurrent NSOperationQueue with maxConcurrentOperationCount set to four or five. Then add all of your synchronous image retrieval requests to that. For example:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 5;
Then just add all of your image requests with something like:
[queue addOperationWithBlock:^{
    // request image
}];
You can get fancier than that, but this is a basic alternative to your two suggestions, and it ensures that you never have more than five concurrent network requests.
Note that for this to work (as with your GCD suggestions), the operations themselves must be synchronous. If they are not, you have to do some extra work to make sure each operation doesn't complete until the task it performs does.
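For instance, one common way to hold an operation open around an asynchronous call is a semaphore, sketched here with a hypothetical fetchImageWithCompletion: method:
[queue addOperationWithBlock:^{
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    [self fetchImageWithCompletion:^(UIImage *image) { // hypothetical async method
        // handle the image here
        dispatch_semaphore_signal(done);
    }];
    // Keep the operation alive until the async task signals completion.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}];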
If you want to know when they're all done, you can use a completion operation:
NSOperation *completionOperation = [NSBlockOperation blockOperationWithBlock:^{
    // this is what will happen when they're done
}];
Then add your operations:
NSOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
    // do network request here
}];
[completionOperation addDependency:operation];
[queue addOperation:operation];
And when done queuing all of your individual operations, you can then queue that completion operation, which won't fire until the rest are done (because you've declared a dependency between them):
[queue addOperation:completionOperation];
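For comparison, a dispatch group is the rough GCD analog of this completion pattern (a sketch, not part of the answer above):
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_group_async(group, q, ^{
    // network request 1
});
dispatch_group_async(group, q, ^{
    // network request 2
});

// Runs once every block added to the group has finished.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // all done
});
Note, though, that the global queue won't cap you at five concurrent requests the way maxConcurrentOperationCount does.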
Faster depends on the work being done, of course.
The global concurrent queue attempts to match the number of concurrent activities to the available hardware. That's not documented, so it might or might not match the number of cores (or maybe double the number of cores if they're hyper-threaded and the work permits). If queued actions block (e.g. on network activity or disk I/O), then the global queue will start new jobs.
You can create your own queues to force the issue, but that probably won't help. If you have four cores, then queueing up 10 or 20 or however many simultaneous CPU-heavy actions isn't going to improve overall speed. Once you max out resources, you've maxed them out, and adding more private queues doesn't change that.
What happens if you dispatch_async a block of code on a queue that's currently blocked by its own dispatch_sync operation? Do they deadlock, or will the blocked queue continue after the dispatch_sync operation returns?
I have an object I created that manages access to a backing store (SQLite, in this case). It uses one concurrent GCD queue and any other objects that want to access the information from the store will pass a request to the manager along with a block that will be executed asynchronously. The essence of what happens is this (not actual code):
- (void)executeRequest:(StoreRequest *)request withCompletionBlock:(void (^)(NSInteger result))block {
    dispatch_queue_t currentContext = dispatch_get_current_queue();
    dispatch_async(_storeQueue, ^{
        NSInteger result = [_store executeRequest:request];
        if (block) {
            dispatch_async(currentContext, ^{
                block(result);
            });
        }
    });
}
The real code is a bit more complex (I actually queue up and store requests/blocks/contexts to execute at the end of a run loop). I also use dispatch_barrier_async for write requests to prevent concurrent read/writing. This all works fine, but in certain situations I also need to perform a synchronous request on the store. Now this request doesn't need to be performed before any queued up operations, but I do need the requesting queue blocked until the operation is performed. This can be easily done:
- (NSInteger)executeRequest:(StoreRequest *)request {
    __block NSInteger result = 0;
    dispatch_sync(_storeQueue, ^{
        result = [_store executeRequest:request];
    });
    return result;
}
My question is this: what happens if a pending asynchronous operation, placed before the synchronous operation, dispatches a block of code asynchronously onto the queue that is currently blocked by the synchronous dispatch? In other words, the above operation will dispatch its request at the end of the _store queue and wait. But it's quite possible (even likely) that the operations in front of it include asynchronous dispatches back to the waiting queue (for other operations). Will this lock the threads? Since the queued blocks are dispatched asynchronously, the _store queue will never be blocked and therefore will finish, theoretically allowing the queue it's blocking to continue... but I'm not sure what happens to the blocks that were asynchronously dispatched, or whether dispatching anything to a blocked thread locks it up. I would assume that the blocked queue will continue, finish its request and then process the pending blocks, but I want to make sure.
Actually, now that I've written this all up, I'm pretty sure it'll work just fine, but I'm going to post this question anyway to make sure I'm not missing anything.
dispatch_async never blocks. It's that simple.
The dispatch_async itself never blocks. It appends the block to the end of the queue and returns immediately.
Will the block get executed? It depends. On a serial queue, if one block is blocked, no other block will execute until that block gets unblocked and finishes. On a concurrent (background) queue, the queue can use multiple threads, so even if some blocks are blocked, it will simply start other blocks. I haven't tested whether there is a limit to the number of blocked blocks, but there's a good chance that all unblocked blocks will eventually execute and finish, leaving you with just the blocked ones.
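A minimal sketch of that non-blocking behavior (the queue label is hypothetical):
dispatch_queue_t q = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL); // hypothetical label

dispatch_async(q, ^{
    [NSThread sleepForTimeInterval:2.0]; // occupy the serial queue for a while
    NSLog(@"first block done");
});
dispatch_async(q, ^{
    NSLog(@"second block"); // queued behind the first block; runs only after it finishes
});

NSLog(@"dispatch_async returned"); // logs immediately, without waiting for either block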
I'm executing the following method:
MotionHandler.m
- (void)startAccelerationUpdates
{
    [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                       withHandler:^(CMDeviceMotion *motion, NSError *error) { ... }];
}
on a background thread, as follows:
[currentMotionHandler performSelectorInBackground:@selector(startAccelerationUpdates) withObject:nil];
But the above method uses the main queue (which is on the main thread) to perform the necessary updates, even though I'm calling it on a background thread. So are acceleration updates being performed on a background thread or on the main thread? I'm confused.
What's even more interesting is that when I call the above method on a background thread again, but this time using the current queue, I get no updates. Could someone please explain the difference between running something on:
1. a background thread but on the main queue
2. a background thread but on the current queue
3. the main thread but on the main queue
4. the main thread but on the current queue
in the current implementation? Thank you!
I'll give it a shot. First, without the NSOperationQueue class reference telling us, we could not infer anything about which thread mainQueue runs on. Reading it, we see that that queue runs its operations on the main thread, the one the UI uses, so you can update the UI in operations posted to that queue. Although it doesn't say so explicitly, these operations must be serial, since they are executed by the run loop (it's possible they can get preempted too; I'm not 100% sure of that).
The purpose of currentQueue is so that running operations can determine which queue they are on, and so they can potentially queue new operations on that queue.
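A small sketch of that second use (assuming queue is an NSOperationQueue you created yourself):
[queue addOperationWithBlock:^{
    // Inside a running operation, currentQueue returns the queue that started it.
    NSOperationQueue *thisQueue = [NSOperationQueue currentQueue];
    [thisQueue addOperationWithBlock:^{
        // a follow-up operation enqueued on the same queue
    }];
}];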
a background thread but on the main queue
Not possible - NSOperationQueue's mainQueue is always associated with the main thread.
a background thread but on the current queue
When you create an NSOperationQueue and add NSOperations to it, those get run on background threads managed by the queue. Any given operation can query which thread it's on, and that thread won't change while it runs. That said, a second operation on that queue may get run on a different thread.
the main thread but on the main queue
See 1)
the main thread but on the current queue
If you queue an operation to the mainQueue (which we know is always on the mainThread), and you ask for the currentQueue, it will return the mainQueue:
[NSOperationQueue currentQueue] == [NSOperationQueue mainQueue];
You are confusing queues with threads. Especially since NSOperationQueue has been rewritten to use GCD, there is little connection between queues and specific threads (except for the special case of the main thread).
Operations/blocks/tasks - whatever you want to call them - are inserted into a queue, and "worker thread(s)" pull these off and perform them. You have little control over which exact thread is going to do the work -- except for the main queue. Note, this is not exactly right, because it's a simplification, but it's true enough unless you are doing something quite advanced and specific.
So, none of your 4 scenarios even make sense, because you can't, for example, run something on "a background thread but on the main queue."
Now, your method startAccelerationUpdates specifically tells the CMMotionManager to put your handler on the main queue. Thus, when startAccelerationUpdates is called, it runs on whatever thread invoked it, but it schedules the handler to be executed on the main thread.
To somewhat complicate things, you are calling the startAccelerationUpdates method by calling performSelectorInBackground. Again, you don't know which thread is going to actually invoke startAccelerationUpdates, but it will not be the main thread.
However, in your case, all that thread is doing is calling startAccelerationUpdates which is starting motion updates, and telling them to be handled on the main thread (via the main queue).
Now, here's something to dissuade you from using the main queue to handle motion events, directly from the documentation...
Because the processed events might arrive at a high rate, using the main operation queue is not recommended.
Unfortunately, your statement
What's even more interesting is that when I call the above method on background thread again, but this time using the current Queue, I get no updates.
does not provide enough information to determine what you tried, how you tried it, or why you think it did not work. So, I'll make a guess... which may be wrong.
I'll key on your use of the current Queue.
I assume you mean that you substitute [NSOperationQueue mainQueue] with [NSOperationQueue currentQueue].
Well, let's see what that does. Instead of using the main queue, you will be using "some other" queue. Which one? Well, let's look at the documentation:
currentQueue
Returns the operation queue that launched the current operation.
+ (id)currentQueue
Return Value
The operation queue that started the operation or nil if the queue could not be determined.
Discussion
You can use this method from within a running operation object to get a reference to the operation queue that started it. Calling this method from outside the context of a running operation typically results in nil being returned.
Please note the Discussion section. If you call this when you are not running an operation that was invoked from an NSOperationQueue, you will get nil, which means there will be no queue on which to place your handler. So, you will get nothing.
You must specify which queue is to be used, if you want to use an NSOperationQueue other than the main queue. So, if that's the route you want to go, just create your own operation queue to handle motion events, and be off!
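A minimal sketch of that approach (motionManager is the CMMotionManager from the question; the queue name is just for illustration):
#import <CoreMotion/CoreMotion.h>

NSOperationQueue *motionQueue = [[NSOperationQueue alloc] init]; // a dedicated queue for motion events

[motionManager startDeviceMotionUpdatesToQueue:motionQueue
                                   withHandler:^(CMDeviceMotion *motion, NSError *error) {
    // Process the high-rate motion data here, off the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        // Hop to the main queue only for UI updates.
    });
}];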
Good Luck!
This is related to the Grand Central Dispatch API used in Objective-C, with the following code:
dispatch_queue_t downloadQueue = dispatch_queue_create("other queue", NULL);
dispatch_async(downloadQueue, ^{
    // ... some function that retrieves data from the server ...
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"got it");
    });
});
dispatch_release(downloadQueue);
My current understanding of how queues work is that the blocks in a queue will go on a thread for that queue. So two queues will become two threads. With multi-threading, those two queues will run simultaneously.
However, the "got it" appears right when the program receives the data. How did that happen?
Please point out anything you want to correct or add to my understanding of threads and queues.
So two queues will become two threads.
Not necessarily. One of the advantages of GCD is that the system dynamically decides how many threads it creates, depending on the number of available CPU cores and other factors. It might well be that two custom queues are executed on the same background thread, especially if there are rarely tasks for both queues waiting to be executed.
The only thing you can be certain about is that a serial queue never uses more than one thread at the same time, so the tasks you add to the same (serial) queue will always be executed in order. This is not the case for the concurrent global queues you get with dispatch_get_global_queue().
Additionally, the main queue (the one you access with dispatch_get_main_queue()) is always bound to the main thread. It is the only queue whose tasks are executed on the program's main thread.
In your example, the task for the downloadQueue gets executed on a background thread. As soon as the code reaches dispatch_async(dispatch_get_main_queue(), ^{, GCD pushes this new task to the main thread, where it gets executed practically immediately, provided the main thread is not busy with other things.
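A small sketch that makes the thread hopping visible (the logging is added purely for illustration):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSLog(@"main thread? %d", [NSThread isMainThread]); // 0: a background thread
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"main thread? %d", [NSThread isMainThread]); // 1: always the main thread
    });
});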