If I'm using dispatch_semaphore_wait inside a dispatch queue, could this starve my dispatch queue of threads if many threads are blocked on dispatch_semaphore_wait?
parallelDownloadsSemaphore = dispatch_semaphore_create(4);
[...]
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_semaphore_wait([self parallelDownloadsSemaphore], DISPATCH_TIME_FOREVER);
    // perform lengthy download
    dispatch_semaphore_signal([self parallelDownloadsSemaphore]);
});
Your assumption is, as far as I know, right. The call to dispatch_semaphore_wait blocks the executing thread. I ran into this problem in a similar situation and figured out that a concurrent queue creates two threads for each core (and for each priority). I'm not a hundred percent sure whether it was 2 * CPU cores or 1 * CPU cores, but the number of threads a concurrent queue will use is limited.
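One way to sidestep the blocked-worker problem (my own sketch, reusing the same parallelDownloadsSemaphore as above) is to wait on the semaphore before dispatching, so the calling thread blocks instead of a GCD worker thread:

dispatch_semaphore_wait([self parallelDownloadsSemaphore], DISPATCH_TIME_FOREVER);
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // perform lengthy download
    dispatch_semaphore_signal([self parallelDownloadsSemaphore]);
});

The trade-off is that the enqueuing thread now stalls when all four slots are taken, so you would only do this from a thread you can afford to block (never the main thread).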
Related
I need to download images on background threads, but limit the number of threads. The maximum number of threads must be 5, and each thread must have just one serial queue. This is for a client-server app using the SocketRocket library. The main wrinkle is that I don't need NSOperation extras like cancelling operations. I'm looking for a simple solution, but all I can find is something like this:
self.limitingSema = dispatch_semaphore_create(kOperationLimit);
dispatch_queue_t concurrentQueue = dispatch_queue_create("limiting queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(concurrentQueue, ^{
    dispatch_semaphore_wait(self.limitingSema, DISPATCH_TIME_FOREVER);
    /* upload image here */
    dispatch_semaphore_signal(self.limitingSema);
});
But then how do I limit the number of threads, and how do I hold off starting new operations until earlier ones in the queue have finished?
Is it a good idea to control the number of queues instead?
NSArray *queues = @[dispatch_queue_create("com.YOU.binaryQueue_1", DISPATCH_QUEUE_SERIAL),
                    dispatch_queue_create("com.YOU.binaryQueue_2", DISPATCH_QUEUE_SERIAL),
                    dispatch_queue_create("com.YOU.binaryQueue_3", DISPATCH_QUEUE_SERIAL)
                    ];
NSUInteger randQueue = arc4random() % [queues count];
dispatch_async([queues objectAtIndex:randQueue], ^{
    NSLog(@"Do something");
});
randQueue = arc4random() % [queues count];
dispatch_async([queues objectAtIndex:randQueue], ^{
    NSLog(@"Do something else");
});
GCD has no option to limit the number of concurrent blocks running.
This will potentially create one thread that just sits and waits for each operation you enqueue. GCD dynamically adjusts the number of threads it uses. If you enqueue another block and GCD has no more threads available, it will spin up another thread if it notices there are free CPU cores. Since the worker thread is sleeping inside your block, the CPU is considered free. This can lead to many threads being created, using up a lot of memory - each thread gets 512 KB of stack.
Your best option would be to use NSOperationQueue for this as you can control directly how many operations will be run in parallel using the maxConcurrentOperationCount property. This will be easier (less code for you to write, test and debug) and much more efficient.
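A minimal sketch of that approach (imagesToUpload and -uploadImage: are placeholder names, not from the question):

NSOperationQueue *uploadQueue = [[NSOperationQueue alloc] init];
uploadQueue.maxConcurrentOperationCount = 5; // at most 5 uploads in flight

for (UIImage *image in imagesToUpload) {
    [uploadQueue addOperationWithBlock:^{
        [self uploadImage:image]; // must run synchronously for the limit to hold
    }];
}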
If I need to create a large number of queues (say 10+ for image loading), is it faster to use the global concurrent queue or to create the same number of private dispatch queues? For a quad-core CPU, is the global concurrent queue limited to four concurrent tasks before it effectively becomes serial for subsequently queued tasks?
I'd suggest creating your own concurrent queue which constrains how many concurrent operations are permitted. For example, you could create a single concurrent NSOperationQueue with maxConcurrentOperationCount set to four or five. Then add all of your synchronous image retrieval requests to that. For example:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 5;
Then just add all of your image requests with something like:
[queue addOperationWithBlock:^{
    // request image
}];
You can get fancier than that, but this is a basic alternative to your two suggestions, and it will ensure that you do not have more than five concurrent network requests.
Note, for this to work (as well as your GCD suggestions), your operations, themselves, must be synchronous. If they are not synchronous, then you have to do some extra work to make sure that the operations don't complete until the task they perform does.
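That "extra work" usually means a concurrent NSOperation subclass that stays in the executing state until the asynchronous task calls back. A rough sketch of the standard pattern (AsyncImageOperation and -startRequestWithCompletion: are illustrative names, not an existing API):

@interface AsyncImageOperation : NSOperation
- (void)startRequestWithCompletion:(void (^)(void))completion;
@end

@implementation AsyncImageOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isAsynchronous { return YES; }
- (BOOL)isExecuting    { return _executing; }
- (BOOL)isFinished     { return _finished; }

- (void)start {
    if (self.isCancelled) {
        [self willChangeValueForKey:@"isFinished"];
        _finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }
    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    // Kick off the asynchronous work; the operation only finishes
    // when the completion handler runs.
    [self startRequestWithCompletion:^{
        [self finish];
    }];
}

- (void)startRequestWithCompletion:(void (^)(void))completion {
    // Placeholder for your real asynchronous request.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... perform the request ...
        completion();
    });
}

- (void)finish {
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isFinished"];
    [self didChangeValueForKey:@"isExecuting"];
}
@end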
If you want to know when they're all done, you can use a completion operation:
NSOperation *completionOperation = [NSBlockOperation blockOperationWithBlock:^{
    // this is what will happen when they're done
}];
Then add your operations:
NSOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
    // do network request here
}];
[completionOperation addDependency:operation];
[queue addOperation:operation];
And when done queuing all of your individual operations, you can then queue that completion operation, which won't fire until the rest are done (because you've declared a dependency between them):
[queue addOperation:completionOperation];
Which is faster depends on the work being done, of course.
The global concurrent queue attempts to match the number of concurrent activities to the available hardware. That's not documented, so it might or might not match the number of cores (or maybe double the number of cores if they're hyper-threaded and the work permits). If queued actions block (e.g. on network activity or disk I/O), the global queue will start new jobs.
You can create your own queues to force the issue, but that probably won't help. If you have four cores, queueing up 10 or 20 or whatever number of simultaneous CPU-heavy actions isn't going to improve the overall speed. Once you max out resources, you've maxed them out, and adding more private queues doesn't change that.
I have a question about this code:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:kLatestKivaLoansURL];
    [self performSelectorOnMainThread:@selector(fetchedData:)
                           withObject:data waitUntilDone:YES];
});
The first parameter of this code is
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
Are we asking this code to perform serial tasks on a global queue, when the function's own definition says it returns a global concurrent queue of a given priority level?
What is the advantage of using dispatch_get_global_queue over the main queue?
I am confused. Could you please help me understand this better?
The main reason you use the default queue over the main queue is to run tasks in the background.
For instance, if I am downloading a file from the internet and I want to update the user on the progress of the download, I will run the download in the priority default queue and update the UI in the main queue asynchronously.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
    // Background Thread
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        // Run UI Updates
    });
});
All of the DISPATCH_QUEUE_PRIORITY_X queues are concurrent queues (meaning they can execute multiple tasks at once), and are FIFO in the sense that tasks within a given queue will begin executing using "first in, first out" order. This is in comparison to the main queue (from dispatch_get_main_queue()), which is a serial queue (tasks will begin executing and finish executing in the order in which they are received).
So, if you send 1000 dispatch_async() blocks to DISPATCH_QUEUE_PRIORITY_DEFAULT, those tasks will start executing in the order you sent them into the queue. Likewise for the HIGH, LOW, and BACKGROUND queues. Anything you send into any of these queues is executed in the background on alternate threads, away from your main application thread. Therefore, these queues are suitable for executing tasks such as background downloading, compression, computation, etc.
Note that the order of execution is FIFO on a per-queue basis. So if you send 1000 dispatch_async() tasks to the four different concurrent queues, evenly splitting them and sending them to BACKGROUND, LOW, DEFAULT and HIGH in order (i.e. you schedule the last 250 tasks on the HIGH queue), it's very likely that the first tasks you see starting will be on that HIGH queue, as the system has taken your hint that those tasks need to get to the CPU as quickly as possible.
Note also that I say "will begin executing in order", but keep in mind that as concurrent queues things won't necessarily FINISH executing in order depending on length of time for each task.
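You can see both behaviors with a toy sketch like this (sleep durations are arbitrary):

dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for (int i = 0; i < 5; i++) {
    dispatch_async(q, ^{
        // Even-numbered tasks sleep longer: all five START in FIFO order,
        // but they usually FINISH out of order.
        [NSThread sleepForTimeInterval:((i % 2 == 0) ? 0.2 : 0.05)];
        NSLog(@"task %d finished", i);
    });
}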
As per Apple:
https://developer.apple.com/library/content/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
A concurrent dispatch queue is useful when you have multiple tasks that can run in parallel. A concurrent queue is still a queue in that it dequeues tasks in a first-in, first-out order; however, a concurrent queue may dequeue additional tasks before any previous tasks finish. The actual number of tasks executed by a concurrent queue at any given moment is variable and can change dynamically as conditions in your application change. Many factors affect the number of tasks executed by the concurrent queues, including the number of available cores, the amount of work being done by other processes, and the number and priority of tasks in other serial dispatch queues.
Basically, if you send those 1000 dispatch_async() blocks to a DEFAULT, HIGH, LOW, or BACKGROUND queue, they will all start executing in the order you send them. However, shorter tasks may finish before longer ones. This happens when CPU cores are available, or when the currently running tasks are doing computationally non-intensive work (making the system think it can dispatch additional tasks in parallel regardless of core count).
The level of concurrency is handled entirely by the system and is based on system load and other internally determined factors. This is the beauty of Grand Central Dispatch (the dispatch_async() system) - you just make your work units as code blocks, set a priority for them (based on the queue you choose) and let the system handle the rest.
So to answer your above question: you are partially correct. You are "asking that code" to perform concurrent tasks on a global concurrent queue at the specified priority level. The code in the block will execute in the background and any additional (similar) code will execute potentially in parallel depending on the system's assessment of available resources.
The "main" queue on the other hand (from dispatch_get_main_queue()) is a serial queue (not concurrent). Tasks sent to the main queue will always execute in order and will always finish in order. These tasks will also be executed on the UI Thread so it's suitable for updating your UI with progress messages, completion notifications, etc.
Swift version
This is the Swift version of David's Objective-C answer. You use the global queue to run things in the background and the main queue to update the UI.
DispatchQueue.global(qos: .background).async {
    // Background Thread
    DispatchQueue.main.async {
        // Run UI Updates
    }
}
What happens if you dispatch_async a block of code onto a queue that's currently blocked by its own dispatch_sync operation? Do they deadlock, or will the blocked queue continue after the dispatch_sync operation returns?
I have an object I created that manages access to a backing store (SQLite, in this case). It uses one concurrent GCD queue and any other objects that want to access the information from the store will pass a request to the manager along with a block that will be executed asynchronously. The essence of what happens is this (not actual code):
- (void)executeRequest:(StoreRequest *)request withCompletionBlock:(void (^)(NSInteger result))block {
    dispatch_queue_t currentContext = dispatch_get_current_queue();
    dispatch_async(_storeQueue, ^{
        NSInteger result = [_store executeRequest:request];
        if (block) {
            dispatch_async(currentContext, ^{
                block(result);
            });
        }
    });
}
The real code is a bit more complex (I actually queue up and store requests/blocks/contexts to execute at the end of a run loop). I also use dispatch_barrier_async for write requests to prevent concurrent read/writing. This all works fine, but in certain situations I also need to perform a synchronous request on the store. Now this request doesn't need to be performed before any queued up operations, but I do need the requesting queue blocked until the operation is performed. This can be easily done:
- (NSInteger)executeRequest:(StoreRequest *)request {
    __block NSInteger result = 0;
    dispatch_sync(_storeQueue, ^{
        result = [_store executeRequest:request];
    });
    return result;
}
My question is this: what happens if a pending asynchronous operation, placed before the synchronous operation, dispatches a block of code asynchronously onto the queue that is currently blocked by the synchronous dispatch? In other words, the above operation will dispatch its request at the end of the _store queue and wait. But it's quite possible (even likely) that the operations in front of it include asynchronous dispatches back to the waiting queue (for other operations). Will this deadlock the threads? Since the queued blocks are dispatched asynchronously, the _store queue will never be blocked and therefore will finish, theoretically allowing the queue it's blocking to continue... but I'm not sure what happens with the blocks that were asynchronously dispatched, or whether dispatching anything to a blocked queue locks it up. I would assume that the blocked queue will continue, finish its request, and then process the pending blocks, but I want to make sure.
Actually, now that I've written this all up, I'm pretty sure it'll work just fine, but I'm going to post this question anyway to make sure I'm not missing anything.
dispatch_async never blocks. It's that simple.
The dispatch_async itself never blocks. It appends the block to the end of the queue and returns immediately.
Will the block get executed? It depends. In a serial queue, if one block is blocked, no other block will execute until that block gets unblocked and finishes. On a concurrent background queue, the queue can use multiple threads, so even if some blocks are blocked, it will just start other blocks. I haven't tested whether there is a limit to the number of blocked blocks, but there's a good chance that all unblocked blocks will eventually execute and finish, and you are left with the blocked ones.
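A toy illustration of the serial-queue case (queue name and timings are arbitrary):

dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serial, ^{
    NSLog(@"first block: working");
    [NSThread sleepForTimeInterval:2.0]; // keeps the serial queue busy
});
dispatch_async(serial, ^{
    NSLog(@"second block: runs only after the first finishes");
});
NSLog(@"dispatch_async returned immediately"); // typically logs first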
This is related to the Grand Central Dispatch API used in Objective-C, with the following code:
dispatch_queue_t downloadQueue = dispatch_queue_create("other queue", NULL);
dispatch_async(downloadQueue, ^{
    // ...some function that retrieves data from the server...
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"got it");
    });
});
dispatch_release(downloadQueue);
My current understanding of how queues work is that the blocks in a queue will run on a thread for that queue. So two queues will become two threads. With multi-threading, those two queues will run simultaneously.
However, "got it" appears right when the program receives the data. How does that happen?
Please point out anything you'd correct or add to my understanding of threading and queues.
So two queues will become two threads.
Not necessarily. One of the advantages of GCD is that the system dynamically decides how many threads it creates, depending on the number of available CPU cores and other factors. It might well be that two custom queues are executed on the same background thread, especially if there are rarely tasks for both queues waiting to be executed.
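You can observe this yourself with a sketch like the following (which thread each block lands on varies from run to run):

dispatch_queue_t q1 = dispatch_queue_create("com.example.q1", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t q2 = dispatch_queue_create("com.example.q2", DISPATCH_QUEUE_SERIAL);
// Both blocks may well log the same NSThread, especially when the
// two queues are rarely busy at the same time.
dispatch_async(q1, ^{ NSLog(@"q1 runs on %@", [NSThread currentThread]); });
dispatch_async(q2, ^{ NSLog(@"q2 runs on %@", [NSThread currentThread]); });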
The only thing you can be certain about is that a serial queue never uses more than one thread at the same time. So the tasks you add to the same (serial) queue will always be executed in order. This is not the case for the four concurrent global queues you get with dispatch_get_global_queue().
Additionally, the main queue (the one you access with dispatch_get_main_queue()) is always bound to the main thread. It is the only queue whose tasks are executed on the program's main thread.
In your example, the task for the downloadQueue gets executed on a background thread. As soon as the code reaches dispatch_async(dispatch_get_main_queue(), ^{, GCD pushes this new task to the main thread where it gets executed practically immediately provided that the main thread is not busy with other things.