I have three threads (in addition to the main thread): one reads, one processes, and one writes. Each works through a set of buffers, which are cycled through and reused. The setup lets the program overlap the stages: while it is writing one buffer to disk, for example, it can simultaneously be reading more data.
The problem is I need to synchronize all this so the processing thread doesn't try to process buffers that haven't been filled with new data. Otherwise, there is a chance that the processing step could process leftover data in one of the buffers.
The read thread reads data into a buffer, then marks the buffer as "new data" in an array. So, it works like this:
//set up in main thread
NSConditionLock *readlock = [[NSConditionLock alloc] initWithCondition:0];

//in the read thread: block until the current buffer may be (re)filled
[readlock lockWhenCondition:buffer_new[current_buf]];
//copy data to buffer
memcpy(buffer[current_buf], source_data, data_length);
//mark buffer as new (this is reset to 0 once the data is processed)
buffer_new[current_buf] = 1;
//unlock
[readlock unlockWithCondition:0];
I use buffer_new[current_buf] as the condition value for the NSConditionLock. If the buffer isn't marked as new, the thread in question blocks, waiting for the previous thread to write new data. That part seems to work okay.
The main problem is that I need to synchronize this in both directions. If the read thread happens to take too long for some reason and the processing thread has already finished with all the buffers, the processing thread needs to wait, and vice versa.
I'm not sure NSConditionLock is the appropriate way to do this.
I'd turn this on its head. As you say, threading is hard, and multi-way synchronization of threads is even harder. Queue-based concurrency is often much more natural.
Define three queues: a read queue, a processing queue, and a write queue. Then employ a rule stating that no buffer shall be enqueued in more than one queue at a time.
That is, a buffer may be enqueued onto the read queue and, once done reading, enqueued into the processing queue, and once done processing, enqueued into the write queue.
You could use a stack of buffers if you want, but allocation is typically cheap compared to processing; thus, enqueue-for-read could also do the allocation, while dequeue-once-written could do the free.
This would be pretty straightforward to code with GCD. Note that if you want real parallelism, your various queues would just be throttles, using semaphores (potentially shared) to feed the work onto the global concurrent queues.
Note also that this design has a distinct advantage over what you are currently using in that it uses no locks. The only locks are hidden below the GCD APIs as a part of queue management, but that is effectively invisible to your code.
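For illustration, here is a minimal sketch of that design in C with GCD. Everything here is assumed rather than taken from your code: buffer_t, the queue names, and the elided read/process/write steps are placeholders.

// Hedged sketch: three serial queues; each buffer lives on exactly one queue at a time.
#include <dispatch/dispatch.h>
#include <stdlib.h>

typedef struct { char *bytes; size_t length; } buffer_t;   // placeholder buffer type

static dispatch_queue_t read_q, process_q, write_q;

static void enqueue_for_read(void) {
    dispatch_async(read_q, ^{
        buffer_t *buf = calloc(1, sizeof *buf);   // enqueue-for-read does the allocation
        // ... read source data into buf ...
        dispatch_async(process_q, ^{
            // ... process buf in place ...
            dispatch_async(write_q, ^{
                // ... write buf to disk ...
                free(buf->bytes);                 // dequeue-once-written does the free
                free(buf);
                enqueue_for_read();               // keep the pipeline primed
            });
        });
    });
}

// Setup, e.g. in main():
//   read_q    = dispatch_queue_create("read",    DISPATCH_QUEUE_SERIAL);
//   process_q = dispatch_queue_create("process", DISPATCH_QUEUE_SERIAL);
//   write_q   = dispatch_queue_create("write",   DISPATCH_QUEUE_SERIAL);
//   for (int i = 0; i < 4; i++) enqueue_for_read();   // four buffers in flight

Because each buffer is handed to the next queue only by the block that has just finished with it, the one-queue-at-a-time rule is enforced by construction rather than by locks.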
Have you seen the Apple Concurrency Programming Guide?
It recommends several preferable methods for moving away from a threads-and-locks concurrency model. Using operation queues, for example, can not only reduce and simplify your code but also speed up your development and give you better performance.
Sometimes you do need to use threads, and you already have the right idea. You will need to keep adding locks, and with each one it will get exponentially more complicated until you can't understand your own code. Then you'll start adding locks at random places. Then you're screwed.
Read the concurrency guide, then follow bbum's advice.
If you have a semaphore that is being used to restrict access to a shared resource or to limit the number of concurrent actions, what is the locking algorithm for changing the maximum value of that semaphore once it's in use?
Example 1:
In NSOperationQueue, there is a property named maxConcurrentOperationCount. This value can be changed after the queue has been created. The documentation notes that changing this value doesn't affect any operations already running, but it does affect pending jobs, which presumably are waiting on a lock or semaphore to execute.
Since that semaphore is potentially being held by pending operations, you can't simply replace it with a new one created with a different count. So another lock must be needed somewhere in the change, but where?
Example 2:
In most of Apple's Metal sample code, a semaphore with an initial count of 3 is used to manage in-flight buffers. I'd like to experiment with changing that number while my application is running, just to see how big a difference it makes. I could tear down the entire class that uses that semaphore and then rebuild the Metal pipeline, but that's a bit heavy-handed. As above, I'm curious how to structure a sequence of locks or semaphores that lets me swap out that semaphore for a different one while everything is running.
My experience is with Grand Central Dispatch, but I'm equally interested in a C++ implementation using comparable locking or atomic constructs.
I should add that I'm aware I can technically just make unbalanced calls to signal and wait, but that doesn't seem right to me. Notably, whatever code makes these changes needs to be able to block itself if a wait takes a while to reduce the count...
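For reference, that unbalanced approach can at least be written down directly. Here is a hedged sketch in C with GCD; resize_limit is a made-up helper, and the caller is assumed to track old_count (e.g., under its own lock) so concurrent resizes don't race.

#include <dispatch/dispatch.h>

static dispatch_semaphore_t limit;   // created elsewhere with the initial count, e.g. 3

// Hypothetical helper: adjust the effective maximum with unbalanced
// signals (to raise it) or waits (to lower it).
static void resize_limit(long old_count, long new_count) {
    if (new_count > old_count) {
        for (long i = 0; i < new_count - old_count; i++) {
            dispatch_semaphore_signal(limit);   // release extra slots immediately
        }
    } else {
        for (long i = 0; i < old_count - new_count; i++) {
            // Each wait may block until an in-flight unit of work signals --
            // exactly the "block itself" behavior described above.
            dispatch_semaphore_wait(limit, DISPATCH_TIME_FOREVER);
        }
    }
}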
Until now, all of my 3D objects were created at startup. Now I need to add them dynamically. What could be simpler, I thought...
The main issue is how to upload the new object's data as quickly as possible, and how to find out when the upload has finished.
Here's my setup:
I'm using the Vulkan Memory Allocator library, so I'm free from the memory-management burden.
I'm planning to use a separate VkBuffer for every object; this way I don't need to manage offsets and alignments, and it will be easier to add/remove objects.
And here are my thoughts/questions:
How do I upload the data? I want the buffer to be device-local (GPU-visible only), which means I need a staging buffer.
If I use a staging buffer, I need to know when the data is ready to use on the GPU. I don't want to flush the pipeline and wait. The only way I see is to use a fence per object and only issue the draw command once that fence is signaled.
If I use a staging buffer and want to upload multiple objects during a short frame, I somehow need to be sure that parts of the staging buffer aren't overwritten by different objects. For that, I need to keep it big and handle alignment for the offsets. But how big?
I'm pretty sure I'm overcomplicating. I believe there should be a much simpler pattern. How would you do this?
I believe there should be a much simpler pattern.
It's Vulkan; it's an explicit, low-level API. "Simple" is not its goal.
Overall, your Vulkan code needs to be written to adapt to the capabilities of the hardware. That's the best way to get performance out of it.
The first decision that needs to be made is whether you need staging at all. Staging (for buffer copies) is only necessary if your device's DEVICE_LOCAL memory is not mappable. And yes, there are (integrated) GPUs that allow you to map DEVICE_LOCAL memory. If that is the case, then you can just write directly to where you need the data to go.
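As a hedged sketch of that first check (physical_device is assumed to be your VkPhysicalDevice; the helper name is made up):

// Does this device expose mappable DEVICE_LOCAL memory? If so, you can
// skip staging for buffer uploads and write directly into the mapping.
#include <vulkan/vulkan.h>

int find_mappable_device_local(VkPhysicalDevice physical_device) {
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(physical_device, &props);
    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        VkMemoryPropertyFlags f = props.memoryTypes[i].propertyFlags;
        if ((f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) &&
            (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT))
            return (int)i;   // memory type index usable without staging
    }
    return -1;               // no such type: staging is required
}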
If staging is needed, then you need to decide if the hardware supports an independent transfer-only queue. If so, then you will likely get performance benefits by employing it. Not all hardware supports transfer-only queues, so your application needs to adapt. Also, transfer-only queues can have restrictions on the granularity of memory transfers taking place on those queues, so you need to check to see if your streaming strategy fits within the limits of that particular hardware.
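A hedged sketch of that queue-family check (again, the helper name is made up):

// Look for a transfer-capable family that is neither graphics nor compute.
#include <vulkan/vulkan.h>
#include <stdlib.h>

int find_transfer_only_family(VkPhysicalDevice physical_device) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physical_device, &count, NULL);
    VkQueueFamilyProperties *families = malloc(count * sizeof *families);
    vkGetPhysicalDeviceQueueFamilyProperties(physical_device, &count, families);

    int result = -1;
    for (uint32_t i = 0; i < count; i++) {
        VkQueueFlags f = families[i].queueFlags;
        if ((f & VK_QUEUE_TRANSFER_BIT) &&
            !(f & (VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT))) {
            // Also inspect families[i].minImageTransferGranularity here to
            // see whether your copy sizes fit this family's limits.
            result = (int)i;
            break;
        }
    }
    free(families);
    return result;
}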
Also, if there is no appropriate transfer queue, you can create the effect of a transfer queue by using a second compute or graphics queue... if the hardware supports multiple queues at all. Being able to submit transfer commands and rendering commands on different queues is a good thing, assuming you are taking advantage of threading (i.e., issuing the submits of the batches to the different queues on different threads).
If you are able to use a separate queue for transfers (whether a true transfer queue or just a separate compute/graphics queue), then you get to play around with semaphores. The batch that transfers data must signal a semaphore when it completes; this is part of the batch in the vkQueueSubmit call. The batch on the main queue that uses the transferred data for some process needs to wait on that semaphore. So both threads need to be using the same VkSemaphore object. And the wait on the semaphore should just have a global memory barrier, to make the memory visible.
The tricky part is this: you cannot submit the batch that waits on the semaphore until the submit call for the batch that signals it has been submitted. You don't have to wait until completion, but you do have to wait until the vkQueueSubmit call on the transfer queue has returned. So you need a way to transfer the semaphore between different threads, or you could just issue both submit commands on the same thread.
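A hedged sketch of the two submits, in the issue-both-from-one-thread variant (transfer_cb, render_cb, transfer_done, and the queues are assumed handles; the wait stage is just an example):

#include <vulkan/vulkan.h>

// Submit the transfer batch (signaling transfer_done), then the render batch
// (waiting on it). The second submit must not be issued until the first
// vkQueueSubmit call has returned.
void submit_transfer_then_render(VkQueue transfer_queue, VkQueue render_queue,
                                 VkCommandBuffer transfer_cb, VkCommandBuffer render_cb,
                                 VkSemaphore transfer_done)
{
    VkSubmitInfo transfer_submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &transfer_cb,
        .signalSemaphoreCount = 1,
        .pSignalSemaphores = &transfer_done,
    };
    vkQueueSubmit(transfer_queue, 1, &transfer_submit, VK_NULL_HANDLE);

    VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_VERTEX_INPUT_BIT; // example stage
    VkSubmitInfo render_submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .waitSemaphoreCount = 1,
        .pWaitSemaphores = &transfer_done,
        .pWaitDstStageMask = &wait_stage,
        .commandBufferCount = 1,
        .pCommandBuffers = &render_cb,
    };
    vkQueueSubmit(render_queue, 1, &render_submit, VK_NULL_HANDLE);
}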
If you aren't using a second queue, then things are slightly simpler.
You still want to build the transfer command buffer itself on a different thread (to take advantage of threading CB construction). But that CB now needs to be communicated to the thread responsible for submitting the rendering stuff. And this channel of communication needs to know that this CB contains transfer commands, which some of the rendering CB processes ought to wait on.
The simplest and most flexible way to do this is to build the transfer CB so that the last command is a vkCmdSetEvent command (and the first command is a vkCmdResetEvent to reset it from previous frames of usage). The submission thread then only needs to create a small CB that only contains a vkCmdWaitEvents command which waits on the transfer event that will be set. That command should issue a full memory barrier, and that CB should execute between the transfer CB and any rendering CBs that read from the transferred data.
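A hedged sketch of that CB structure (transfer_cb, wait_cb, and transfer_event are assumed handles):

#include <vulkan/vulkan.h>

// The transfer CB: reset the event from previous frames, record the copies,
// and set the event as the last command.
void record_transfer_cb(VkCommandBuffer transfer_cb, VkEvent transfer_event)
{
    vkCmdResetEvent(transfer_cb, transfer_event, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT);
    // ... record vkCmdCopyBuffer calls here ...
    vkCmdSetEvent(transfer_cb, transfer_event, VK_PIPELINE_STAGE_TRANSFER_BIT);
}

// The small CB that executes between the transfer CB and any rendering CBs
// that read the transferred data, waiting with a full memory barrier.
void record_wait_cb(VkCommandBuffer wait_cb, VkEvent transfer_event)
{
    VkMemoryBarrier full_barrier = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_MEMORY_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_MEMORY_READ_BIT,
    };
    vkCmdWaitEvents(wait_cb, 1, &transfer_event,
                    VK_PIPELINE_STAGE_TRANSFER_BIT,       // where the event is set
                    VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,   // what must wait on it
                    1, &full_barrier, 0, NULL, 0, NULL);
}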
The flexibility of this is in the structure of the process. It is structured similarly to how the multi-queue version works. In both cases, a separate thread needs to communicate something to the render submission thread (in one case, a semaphore; in the other, a CB and an event). And the render submission thread needs to do things to wait on that "something", but without disrupting the process of building the rendering commands itself (in one case, you just change the batch to wait on the semaphore; in the other, you insert a CB that waits for the event).
If you want to get a bit smarter about execution dependencies, you can even have the transfer operation forward information about which pipeline stages need to wait on the operation. But that's mostly an optimization.
Here's the thing, though: none of these staging cases is performance-friendly. They're problematic because you can't do anything while the transfer operation is going on. And that is the case because... you're trying to read from the memory in the same frame you're writing to it. That's bad.
You should endeavor instead to delay rendering any objects for which loading is not complete. Or put another way, you want to load the data for new objects before you need them, not on the same frame you need them. This is what streaming systems do: they pre-emptively load data that will be needed soon, but not right now.
But how big?
Only you and your use cases can answer that question. If you are streaming in fixed-size blocks (which you should do where possible), then it's fairly easy: your staging buffer should be one or maybe two streaming blocks in size. If your rendering system is more flexible, imposing few limitations on the higher-level code, then your staging buffer and your streaming system need to be more flexible too. And there's no right answer for that; it depends entirely on how it gets used.
Welcome to using explicit, low-level APIs.
Hey guys, I wonder: if we submit a VkSubmitInfo containing one empty VkCommandBuffer to the queue, will it be executed or ignored? That is, will the semaphores in VkSubmitInfo::pWaitSemaphores and VkSubmitInfo::pSignalSemaphores still be honored when submitting an empty VkCommandBuffer?
It looks like a stupid question, but what I want is to "multiply" the single semaphore that comes out of vkAcquireNextImageKHR.
I mean, I want to submit an empty command buffer with VkSubmitInfo::pWaitSemaphores pointing to the acquire semaphore, and VkSubmitInfo::pSignalSemaphores containing as many semaphores as I need.
will it be executed or ignored?
What would be the difference? If there are no commands in the command buffer, then executing it will do nothing.
I mean, I want to submit an empty command buffer with VkSubmitInfo::pWaitSemaphores pointing to the acquire semaphore, and VkSubmitInfo::pSignalSemaphores containing as many semaphores as I need.
This has nothing to do with the execution of the CB itself. The behavior of a batch doesn't change just because the CB doesn't do anything.
However, unless you have multiple queues waiting on the completion of this queue's operations, there's really no reason to have multiple signal semaphores. The batch containing the real work could just wait on pWaitSemaphores directly.
Also, there's no reason to have empty batches that only wait on a single semaphore. Say you have batch Q, which signals the semaphore that this empty batch waits on. Well, there's no reason batch Q's pSignalSemaphores couldn't include the semaphores that you want the empty batch to signal. After all, vkQueueSubmit semaphore wait operations have, as their destination command scope, all subsequent commands for that queue, whether from the current vkQueueSubmit call or subsequent ones.
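In other words, something like this hedged sketch (all handles are assumed): batch Q signals every semaphore directly, so no empty "fan-out" batch is needed.

#include <vulkan/vulkan.h>

void submit_batch_q(VkQueue queue, VkCommandBuffer real_work_cb,
                    VkSemaphore sem_a, VkSemaphore sem_b, VkSemaphore sem_c)
{
    VkSemaphore outs[3] = { sem_a, sem_b, sem_c };   // placeholder semaphores
    VkSubmitInfo q_submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &real_work_cb,
        .signalSemaphoreCount = 3,     // signal them all from the real batch
        .pSignalSemaphores = outs,
    };
    vkQueueSubmit(queue, 1, &q_submit, VK_NULL_HANDLE);
}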
So you would only need an empty batch if you have to wait on multiple semaphores that are signaled by different batches on different queues. And such a complex dependency layout strongly suggests an over-complicated dependency design that will lead to reduced performance.
Even waiting on acquire makes no sense for this. You only need to wait on acquire if that queue is going to manipulate the acquired image. Well, you can't manipulate an image from multiple queues simultaneously. So there's no point in signaling a bunch of semaphores when acquire completes; that's why acquire only takes one.
So I want to simulate a fence using only semaphores and see which is faster.
This suggests strongly that you're thinking about things incorrectly.
You use a fence when you want the CPU to detect the completion of a GPU operation. For vkAcquireNextImageKHR, you would use a fence if you need the CPU to know when the image has been acquired.
Semaphores are about the GPU detecting when a GPU operation has completed, regardless of whether the operation comes from a queue or not. So if the GPU needs to wait until an image is acquired, you use a semaphore.
It doesn't matter which is faster because they do different things.
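A hedged sketch of the distinction (all handles are assumed; the fence must be unsignaled, e.g. freshly created or reset with vkResetFences, before reuse):

#include <vulkan/vulkan.h>

// The semaphore serves GPU-side waits; the fence serves CPU-side waits.
// vkAcquireNextImageKHR accepts both.
uint32_t acquire_image(VkDevice device, VkSwapchainKHR swapchain,
                       VkSemaphore acquire_semaphore, VkFence acquire_fence)
{
    uint32_t image_index = 0;
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                          acquire_semaphore,   // a queue batch can wait on this
                          acquire_fence,       // the CPU can wait on this
                          &image_index);

    // CPU-side completion detection:
    vkWaitForFences(device, 1, &acquire_fence, VK_TRUE, UINT64_MAX);
    return image_index;
}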
Is it safe? For instance, if I create a bunch of different GCD queues that each compress (tar cvzf) some files, am I doing something wrong? Will the hard drive be destroyed?
Or does the system properly take care of such things?
Dietrich's answer is correct save for one detail (that is completely non-obvious).
If you were to spin off, say, 100 asynchronous tar executions via GCD, you'd quickly find that you have 100 threads running in your application (which would also be dead slow due to gross abuse of the I/O subsystem).
In a fully asynchronous concurrent system with queues, there is no way to know if a particular unit of work is blocked because it is waiting for a system resource or waiting for some other enqueued unit of work. Therefore, anytime anything blocks, you pretty much have to spin up another thread and consume another unit of work or risk locking up the application.
In such a case, the "obvious" solution is to wait a bit when a unit of work blocks before spinning up another thread to de-queue and process another unit of work with the hope that the first unit of work "unblocks" and continues processing.
Doing so, though, would mean that any asynchronous concurrent system with interaction between units of work -- a common case -- would be so slow as to be useless.
Far more effective is to limit the number of units of work enqueued in the global asynchronous queues at any one time. A GCD semaphore makes this quite easy: you have a single serial queue into which all units of work are enqueued. Each time you dequeue a unit of work, you wait on (decrement) the semaphore; each time a unit of work completes, you signal (increment) it. Create the semaphore with some maximum count (say, 4), and the wait will block whenever that many units of work are already in flight.
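Here is a hedged sketch of that throttle in C with GCD (submit_work and the queue label are made-up names; the maximum of 4 is just an example):

#include <dispatch/dispatch.h>

// A serial "feeder" queue plus a counting semaphore capping in-flight work.
static void submit_work(void (^work)(void)) {
    static dispatch_semaphore_t slots;
    static dispatch_queue_t feeder;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        slots  = dispatch_semaphore_create(4);   // max units in flight
        feeder = dispatch_queue_create("feeder", DISPATCH_QUEUE_SERIAL);
    });

    dispatch_async(feeder, ^{
        // Blocks the feeder (not the caller) once 4 units are in flight.
        dispatch_semaphore_wait(slots, DISPATCH_TIME_FOREVER);
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            work();
            dispatch_semaphore_signal(slots);    // free the slot on completion
        });
    });
}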
If you take something that is normally IO-limited, such as tar, and run a bunch of copies in GCD:
It will run more slowly, because you are throwing more CPU at an IO-bound task, meaning the IO will be more scattered and there will be more of it in flight at the same time.
No more than N tasks will run at a time, which is the point of GCD, so "a billion queue entries" and "ten queue entries" give you the same thing if you have fewer than 10 threads.
Your hard drive will be fine.
Even though this question was asked back in May, it's still worth noting that GCD has now provided I/O primitives with the release of 10.7 (OS X Lion). See the man pages for dispatch_read and dispatch_io_create for examples on how to do efficient I/O with the new APIs. They are smart enough to properly schedule I/O against a single disk (or multiple disks) with knowledge of how much concurrency is, or is not, possible in the actual I/O requests.
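For example, a hedged sketch of the convenience read API (the path is a placeholder):

#include <dispatch/dispatch.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

// Read a whole file asynchronously; the handler runs once all data is read
// or an error occurs.
void read_file_async(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return;
    dispatch_read(fd, SIZE_MAX,
                  dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                  ^(dispatch_data_t data, int error) {
        if (error == 0)
            printf("read %zu bytes from %s\n", dispatch_data_get_size(data), path);
        close(fd);
    });
}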
My software will simulate a few hundred hardware devices, each of which will send several thousand reports to a database server.
Trying it without threading did not give very good results, so now it's time to thread.
Since I am load-testing the database server, some of those transactions will succeed and a few may fail. The GUI of the main program needs to reflect this. How should the threads communicate their results back to the main program? Update global variables? Send a message? Or something else?
Now, if I update only at the end of each thread, the GUI is going to look rather boring (and I can't tell whether the program hung). It might be nice to update the GUI periodically. But that might cause contention, with threads waiting for other threads to update (for instance, if I am writing to global variables, I need a mutex, which will block each thread that is waiting to write).
I'm new to threading. How is this normally done? Perhaps the main program could poll the threads, instead of the threads informing the main program?
One way to organize this is for your threads to add messages to a thread-safe queue (e.g., a ConcurrentQueue) as they get data. To keep things simple, you can have a timer in your UI that periodically dequeues the pending messages to a private list and then renders them. This design lets your threads queue-and-forget messages with minimal contention, and lets your UI update itself periodically without blocking your writers for long (i.e., only for the time it takes to move the pending messages to a private list).
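A hedged sketch of that design in plain C with pthreads (the names are made up, and a real implementation would also need shutdown handling):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef struct node { char msg[64]; struct node *next; } node_t;

static node_t *head = NULL, *tail = NULL;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;

// Called by worker threads: queue-and-forget a result message.
void post_result(const char *msg) {
    node_t *n = calloc(1, sizeof *n);
    strncpy(n->msg, msg, sizeof n->msg - 1);
    pthread_mutex_lock(&q_lock);
    if (tail) tail->next = n; else head = n;
    tail = n;
    pthread_mutex_unlock(&q_lock);   // contention lasts only this long
}

// Called by the UI timer: detach the whole pending list in one short
// critical section, then render (and free) it off-lock.
node_t *drain_results(void) {
    pthread_mutex_lock(&q_lock);
    node_t *batch = head;
    head = tail = NULL;
    pthread_mutex_unlock(&q_lock);
    return batch;
}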
Although you are attempting to simulate the load of hundreds of devices, using a thread per device is not the way to model this, since you can only run so many threads concurrently anyway.