Vulkan Queue Families

If I understand it correctly:
a queueFamily is a set of queues
a queue can have more than one queue flag
there are 4 types of queue flags (graphics, compute, transfer, and sparse binding)
I'm trying to enumerate all the information about a single queue family. First I check how many queue families are available, then how many queues each queue family has, and which queue flags each family supports.
Is it enough to know that I have a queue family that supports e.g. the graphics queue flag, or will I at some point have to go deeper and check a particular queue from a specific queue family?

All queues from a single family have the same properties (the same set of flags), so you don't have to go deeper and check each queue.
But there are three things you need to remember. First, the spec guarantees that there must be at least one universal queue family which supports both graphics and compute operations. Second, different queue families may have the same properties (the same set of flags). And last, swapchain presentation (the ability to present a swapchain image to a given surface) is also a queue family property, but it must be checked through a separate set of queries (functions).
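
A minimal sketch of that enumeration in C++ (assuming a physical device and surface that already exist; the helper name is illustrative):

```cpp
#include <vector>
#include <vulkan/vulkan.h>

// Hypothetical helper: find a family with graphics support, then check
// presentation separately, since it is not part of the queue flags.
uint32_t findGraphicsPresentFamily(VkPhysicalDevice gpu, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        // All queues of families[i] share the same queueFlags and queueCount.
        if (families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) {
            VkBool32 canPresent = VK_FALSE;
            vkGetPhysicalDeviceSurfaceSupportKHR(gpu, i, surface, &canPresent);
            if (canPresent)
                return i; // this family supports graphics and presentation
        }
    }
    return UINT32_MAX; // no suitable family found
}
```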

"or in the future"
That is basically a question about versioning and extensions.
Major versions are allowed to make any changes (i.e. to be incompatible). So you will potentially have to do things differently in the app. But it is conceivable that old major versions will still be available alongside new versions.
Minor versions and extensions are supposed to be backwards-compatible (with notable exceptions), but only at the ABI level, so there is no absolute guarantee your program will compile with a new header.
That means a driver update should not break your already compiled app.
The notable exceptions are:
* Flags returned from Vulkan may have unspecified bits (i.e. bits that are not specified in the spec of the version you are using with the extensions you have enabled).
* Enums returned from Vulkan may have unspecified values.
* If you actively try to break your app, such as if( vulkanVersion != 1.0.0 ) crash();
* The compatibility (obviously?) does not apply to things that are not purely functional (i.e. it does not apply to performance, watts, noise, or whatever).
* If you use any of the new stuff, Vulkan expects you to know all of it. E.g. if your app is mostly Vulkan 1.0, and Vulkan returns a flag from Vulkan 1.42 alongside graphics that you have not yet bothered to learn about, and you then use another flag bit defined by 1.42 in another command, it may interact with that queue flag somehow.
* Any version is allowed to incompatibly fix a spec bug (or whatever the authors decide to consider one).
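
As a small defensive illustration of the first two points, a sketch that masks out queue-flag bits the code was not written against (the helper name is made up; the mask is just the set of flags defined in the headers you compiled with):

```cpp
#include <vulkan/vulkan.h>

// Keep only the queue-flag bits this code understands; a newer driver
// may legally set additional, unspecified bits in queueFlags.
VkQueueFlags knownQueueFlags(VkQueueFlags flags)
{
    const VkQueueFlags known = VK_QUEUE_GRAPHICS_BIT
                             | VK_QUEUE_COMPUTE_BIT
                             | VK_QUEUE_TRANSFER_BIT
                             | VK_QUEUE_SPARSE_BINDING_BIT;
    return flags & known;
}
```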

Related

Does Vulkan parallel rendering rely on multiple queues?

I'm a Vulkan newbie and not very clear on how parallel rendering works. Here are some questions (the "queue" mentioned below refers specifically to the graphics queue):
Does parallel rendering rely on a device which supports more than one queue?
If question 1 is a yes: what if the physical device only has one queue, but Vulkan abstracts it into 4 queues (which is the real case of my MacBook's GPU)? Will the rendering in this case really be parallel?
If question 1 is a yes: what if there is only one queue in Vulkan's abstraction? Does that mean the device definitely cannot render objects in parallel?
P.S. Regarding question 2: when I use the Metal API, the number of queues is only one, but when using the Vulkan API, the number is 4, so I'm not sure it is right to say "the physical device only has one queue".
I have the sneaking suspicion you are abusing the word "parallel". Make sure you know what it means.
Rendering on a GPU is by nature embarrassingly parallel. Typically one queue can feed the entire GPU, and apps typically assume that is true.
In all likelihood they made the number of queues equal to the CPU core count. In Vulkan, submissions to a single queue always need to be externally synchronized. Having more queues allows you to submit from multiple threads without synchronization.
If there is only one Vulkan queue, you can only submit to one queue. And any submission has to be synchronized with a mutex, or has to come from only one thread in the first place.
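
A sketch of what that means in practice (names are illustrative; the rule is that vkQueueSubmit calls on the same VkQueue must be externally synchronized):

```cpp
#include <mutex>
#include <vulkan/vulkan.h>

std::mutex gQueueMutex; // guards the single shared VkQueue

// One shared queue: every thread must serialize its submissions.
void submitShared(VkQueue sharedQueue, const VkSubmitInfo* submit, VkFence fence)
{
    std::lock_guard<std::mutex> lock(gQueueMutex);
    vkQueueSubmit(sharedQueue, 1, submit, fence);
}

// One queue per thread (distinct queue indices from vkGetDeviceQueue):
// no lock needed, since each VkQueue has a single owning thread.
void submitOwned(VkQueue perThreadQueue, const VkSubmitInfo* submit, VkFence fence)
{
    vkQueueSubmit(perThreadQueue, 1, submit, fence);
}
```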

Vulkan Queue Families Clarification

Is it specified somewhere in the Vulkan spec that presentation capability (vkGetPhysicalDeviceSurfaceSupportKHR returning true) is tied only to families with the VK_QUEUE_GRAPHICS_BIT flag, or can a transfer-exclusive family possibly return true too?
I was probably a little bit confused by the naming in the Vulkan tutorial (https://vulkan-tutorial.com/Drawing_a_triangle/Presentation/Window_surface#page_Creating-the-presentation-queue), but I assume that what is named presentQueue and presentFamily there is in fact just (another or the same) graphics queue and graphics family, and has no relation to queue families of the VK_QUEUE_TRANSFER_BIT group (if the queue family does not contain both flags).
Are my assumptions right, or am I misunderstanding something?
Strictly speaking, there is no such thing as a "present family", nor is there a "graphics family". There are just different queue families which support different types of operations, like presentation or graphics.
How many different queue families are supported and which capabilities they have depends on your specific GPU model. To get a good overview of all this information, I can recommend the tool Hardware Capability Viewer by Sascha Willems. (It is also used to populate the database behind the awesome gpuinfo.org if you choose to upload your data.)
On an NVIDIA RTX 2080, for example, I get the following different queue families with the following capabilities:
Queue family #0 supports transfer, graphics, compute, and presentation
Queue family #1 supports only transfer, and nothing else
Queue family #2 supports transfer, compute, and presentation
As you can see, I could use queues from queue families #0 or #2 to send images to the presentation engine, but not queues from queue family #1.
I find the capabilities quite interesting. It means that for certain use cases, using one of the specialized queue families (i.e. #1 or #2) can lead to better performance than using queues from family #0, which are able to perform any operation. Using different queues can enable your application to parallelize work better, but it will generally also require some sort of work synchronization between the different queues.
Queues from family #2 are often referred to as "async compute queues" (I think this terminology came mostly from the DirectX world) and have been exposed in games' graphics settings for quite a while now (where supported). What I have spotted more recently is the option to enable "present from compute" (Doom Eternal offers this setting), and again, this refers to queues from family #2. I would guess that this does not automatically lead to increased performance (which is why it can be enabled/disabled), but on some GPUs it definitely will.
Answering your specific question: a queue family does not have to support graphics capabilities in order to support presentation. There are queue families (e.g. on an RTX 2080) which support compute and presentation, but not graphics. All of this depends on the specific GPU model. I don't know if there are any GPUs that offer transfer-only queue families with presentation support; maybe that doesn't make too much sense, so I would guess rather not.
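
For illustration, a hedged sketch that searches for a family like #2 above (compute and presentation but no graphics); the physical device and surface are assumed to exist already:

```cpp
#include <vector>
#include <vulkan/vulkan.h>

// Look for an "async compute" style family: compute-capable and
// present-capable, but without graphics support.
int findComputePresentFamily(VkPhysicalDevice gpu, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> props(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, props.data());

    for (uint32_t i = 0; i < count; ++i) {
        bool compute  = (props[i].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
        bool graphics = (props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
        VkBool32 present = VK_FALSE;
        vkGetPhysicalDeviceSurfaceSupportKHR(gpu, i, surface, &present);
        if (compute && present && !graphics)
            return static_cast<int>(i);
    }
    return -1; // no such family on this GPU
}
```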

Rendering Terrain Dynamically with Argument Buffers : Understanding why the particle buffer is not overwritten by the GPU inflight

I am looking through an Apple demo project that is associated with the 2017 WWDC video entitled "Introducing Metal 2", where the developers demonstrate the use of argument buffers. The project is linked here on the page titled "Rendering Terrain Dynamically with Argument Buffers" on the Apple developer website. Here, they synchronize resource writes by the CPU to prevent race conditions with a dispatch_semaphore_t, signaling it when the command buffer finishes executing on the GPU and waiting on it while the CPU writes data several frames ahead of the GPU. This is consistent with what was shown in the earlier 2014 WWDC session "Working With Metal: Fundamentals".
I noticed that the APPLParticleRenderer seems to be sending data to be written by the GPU in a compute pass before it finishes reading from that same buffer in the fragment shader of the previous render pass. The resource storage mode of the buffer is MTLResourceStorageModePrivate. My question: does Metal automatically synchronize access to private id<MTLBuffer>s accessible only by the GPU? Do render, compute, and blit passes encoded with new id<MTLCommandEncoder>s gain access to the buffer only after other passes have written to and read from it (exclusive access)? I have seen that there are guaranteed barriers within tile shaders, where tile memory is accessed exclusively by the kernel before subsequent fragment shaders access the memory.
Lastly, in the 2016 WWDC session "What's New in Metal, Part 2", the first presenter, Charles Brissart, mentions at 16:44 that fragment and vertex functions reading from and writing to the same buffer must be placed into two render command encoders, but that for compute kernels a single compute command encoder suffices. This is consistent with what is seen in the particle renderer.
See my comment on the original question for a brief version of this answer.
It turns out that Metal tracks dependencies between commands scheduled to the GPU by default for MTLResource types. The hazardTrackingMode property of a MTLResource is defaulted to MTLHazardTrackingModeTracked (MTLHazardTrackingMode.tracked in Swift) according to the Metal documentation. This means Metal tracks dependencies across commands that modify the resource, as is the case with the particle kernel, and delays execution until prior commands accessing the resource are complete.
Therefore, since the _particleDataPool buffer has a storage mode of MTLResourceStorageModePrivate (storageModePrivate in Swift), it can only be written to by the GPU; hence, no CPU/GPU synchronization is necessary with a semaphore for this buffer and thus no multi-buffer system is necessary for the resource.
Only when a resource can be written to by the CPU while the GPU is still reading from it do we want multiple buffers so the CPU is not idle.
Note that the default hazard tracking mode for a MTLHeap is MTLHazardTrackingModeUntracked (MTLHazardTrackingMode.untracked in Swift), in which case you are responsible for synchronizing resource writes by the GPU yourself.
EDIT
After reading into resource synchronization in Metal, there are some additional points I would like to make that I think further clarify what's going on. Note that the remaining portion is in Swift. To learn more in detail, I recommend reading the "Synchronization" section in the Metal documentation here.
MTLFence
Firstly, a MTLFence is used to synchronize accesses to untracked resources within the execution of a single command buffer. A fence gives you explicit control over when the GPU accesses resources and is necessary when you are working with an untracked resource. Otherwise, Metal will handle this synchronization for you.
It is important to note that the automatic management I mention in the answer only occurs within a single command buffer, between encoding passes. But this does not mean we need to synchronize across command buffers scheduled in the same command queue, since a command buffer is not immediately scheduled for execution. In fact, according to the documentation on the addScheduledHandler(_:) method of the MTLCommandBuffer protocol found here:
The device object schedules the command buffer after it identifies any dependencies with work tasks submitted by other command buffers or other APIs in the system.
at which point it would be safe to access these same buffers. Note that within a single render encoding pass, if a vertex shader writes into a buffer that the fragment shader in the same pass reads from, the behavior is undefined. I mentioned this in the original question; the solution is to use two render pass encoders. I have yet to determine why this is not necessary for a compute encoder, but I imagine it has to do with how kernels are executed in comparison to vertex and fragment shaders.
MTLEvent
In some cases, however, command buffers in different queues created by the same MTLDevice need access to the same resource or depend on one another in some way. In this case, synchronization is necessary because the separate queues schedule their own command buffers without knowledge of the other, meaning there is potential for the two command buffers to be executing concurrently.
To fix this problem, you use an MTLEvent instance created by the device using makeEvent() and encode event signals at specific points in each buffer.
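A rough sketch of that pattern using the metal-cpp C++ bindings (the method names here are assumptions mirroring the Objective-C selectors encodeSignalEvent:value: and encodeWaitForEvent:value:; verify against the headers before relying on them):

```cpp
// Rough sketch with metal-cpp; exact method names are assumptions.
#include <Metal/Metal.hpp>

void syncTwoQueues(MTL::Device* device,
                   MTL::CommandQueue* queueA, MTL::CommandQueue* queueB)
{
    MTL::Event* event = device->newEvent();

    // Producer: writes the shared resource, then signals the event.
    MTL::CommandBuffer* producer = queueA->commandBuffer();
    // ... encode work that writes the resource ...
    producer->encodeSignalEvent(event, 1);
    producer->commit();

    // Consumer: waits for the signal value before its commands may run,
    // even though the two queues otherwise schedule independently.
    MTL::CommandBuffer* consumer = queueB->commandBuffer();
    consumer->encodeWait(event, 1);
    // ... encode work that reads the resource ...
    consumer->commit();

    event->release();
}
```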
MTLSharedEvent
In the event (no pun intended) that you have multiple processors (different CPU cores, CPU and GPU, or multi-GPU), resource synchronization is needed. Here, you create a MTLSharedEvent in place of a MTLEvent that can be used to synchronize across devices and processes. It is essentially the same API as that of the MTLEvent, but involves command queues on different devices.

Why do queues in a queue family in Vulkan need priority if we can't distinguish between them?

As asked in the title. My main point is "why", as in: what is the benefit of such a logical structure for queues and queue families?
Do chip/card makers actually etch multiple independent queues onto their chips, which are at the same time separately distinguishable?
Does implementing separate processing units/streams provide any benefit to implementations? And by extension, does it retroactively benefit older APIs such as OpenCL?
I've observed an interesting fact: on my "Intel(R) Core(TM) i3-8100B CPU @ 3.60GHz" Mac Mini, there were 2 GPUs listed in "vulkaninfo.app" (from the LunarG SDK). My bad: the app linked against 2 copies of libMoltenVK.dylib (1 in "Contents/Frameworks", 1 in "/usr/local/lib").
"Why" is not a great question for SO format. It leads to speculation.
The queues are distinguishable in Vulkan. They each have their index with which they can be distinguished. Keep in mind they are rather a driver thing. Even when the driver has more queues, even single one typically can use all the GPU's computing resources.
Furthermore Vulkan specification does not really say what should happen when you supply a specific priority value. It is perfectly valid for driver\GPU to ignore it.
Chip makers do have compute units that are independent. They can theoretically execute different code from each other. But it is not usually advantageous. In the usual work rendering some regular W × H image, it saturates all the compute units with the same work.
Why: because you can submit different types of work that are of different importance, and you can give the Vulkan implementation a hint about what you want done first (see the sketch after this answer).
Everything else in the question is beside the point:
Do chip/card makers actually etch multiple independent queues onto their chips, which are at the same time separately distinguishable?
Not necessarily; those may be logical queues that are time-sliced.
Does implementing separate processing units/streams provide any benefit to implementations? And by extension, does it retroactively benefit older APIs such as OpenCL?
No. A contemporary API, Metal (from Apple), doesn't expose a queue count or the concept of queue families at all.
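
For what it's worth, a minimal sketch of where priorities enter the API, at device creation (the family index and the 1.0/0.5 values are arbitrary examples; implementations may ignore the hint):

```cpp
#include <vulkan/vulkan.h>

// Request two queues from family 0: one high-priority, one low-priority.
// The hint is per-queue; the driver is free to ignore it entirely.
VkDevice createDeviceWithPriorities(VkPhysicalDevice gpu)
{
    const float priorities[2] = { 1.0f, 0.5f };

    VkDeviceQueueCreateInfo queueInfo = {};
    queueInfo.sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = 0;          // assumed universal family
    queueInfo.queueCount       = 2;
    queueInfo.pQueuePriorities = priorities; // one value per queue

    VkDeviceCreateInfo deviceInfo = {};
    deviceInfo.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos    = &queueInfo;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(gpu, &deviceInfo, nullptr, &device);
    return device;
}
```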

Should I try to use as many queues as possible?

On my machine I have two queue families, one that supports everything and one that only supports transfer.
The queue family that supports everything has a queueCount of 16.
Now the spec states
Command buffers submitted to different queues may execute in parallel or even out of order with respect to one another
Does that mean I should try to use all available queues for maximal performance?
Yes. If you have workloads that are highly independent, use separate queues.
If the queues need a lot of synchronization between themselves, it may kill any potential benefit you could get.
Basically, what you are doing is supplying the GPU with some alternative work it can do (filling stalls, bubbles, and idle time, and giving the GPU the choice) in the case of queues from the same family. And there is some potential to make better use of the CPU (e.g. single-threaded vs. one queue per thread).
Using separate transfer queues (or queues from other specialized families) even seems to be the recommended approach.
That is generally speaking. A more realistic, empirical, sceptical and practical view was already presented by the SW and NB answers. In reality one does have to be a bit more cautious: those queues target the same resources, have the same limits, and share other common restrictions, which limits the potential benefit. Notably, if the driver does the wrong thing with multiple queues, it may be very bad for the cache.
AMD's Leveraging asynchronous queues for concurrent execution (2016) discusses a bit how this maps to their HW/driver. It shows the potential benefits of using separate queue families. It says that although they offer two queues of the compute family, they did not observe benefits in apps at that time. They also explain why they have only one graphics queue.
NVIDIA seems to have a similar notion of "async compute", shown in Moving to Vulkan: Asynchronous compute.
To be safe, it seems we should still stick with only one graphics queue and one async compute queue on current HW. 16 queues seem like a trap and a way to hurt yourself.
With transfer queues it is not as simple as it seems either. You should use the dedicated ones for host-to-device transfers, and the non-dedicated ones for device-to-device transfer ops.
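
A sketch of that conservative setup, one graphics queue plus one async compute queue (assumes the two family indices were discovered beforehand, as in the earlier snippets, and that they differ):

```cpp
#include <vulkan/vulkan.h>

// Conservative setup: one universal/graphics queue plus one async compute
// queue. Assumes graphicsFamily != computeFamily (otherwise a single
// VkDeviceQueueCreateInfo with queueCount = 2 would be required).
void createGraphicsAndComputeQueues(VkPhysicalDevice gpu,
                                    uint32_t graphicsFamily, uint32_t computeFamily,
                                    VkDevice* device, VkQueue* graphics, VkQueue* compute)
{
    const float priority = 1.0f;

    VkDeviceQueueCreateInfo queues[2] = {};
    for (int i = 0; i < 2; ++i) {
        queues[i].sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
        queues[i].queueCount       = 1;
        queues[i].pQueuePriorities = &priority;
    }
    queues[0].queueFamilyIndex = graphicsFamily;
    queues[1].queueFamilyIndex = computeFamily;

    VkDeviceCreateInfo info = {};
    info.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.queueCreateInfoCount = 2;
    info.pQueueCreateInfos    = queues;
    vkCreateDevice(gpu, &info, nullptr, device);

    vkGetDeviceQueue(*device, graphicsFamily, 0, graphics);
    vkGetDeviceQueue(*device, computeFamily, 0, compute);
}
```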
To what end?
Take the typical structure of a deferred renderer. You build your g-buffers, do your lighting passes, do some post-processing and tone mapping, maybe throw in some transparent stuff, and then present the final image. Each process depends on the previous process having completed before it can begin. You can't do your lighting passes until you've finished your g-buffer. And so forth.
How could you parallelize that across multiple queues of execution? You can't parallelize the g-buffer building or the lighting passes, since all of those commands are writing to the same attached images (and you can't do that from multiple queues). And if they're not writing to the same images, then you're going to have to pick a queue in which to combine the resulting images into the final one. Also, I have no idea how depth buffering would work without using the same depth buffer.
And that combination step would require synchronization.
Now, there are many tasks which can be parallelized. Doing frustum culling. Particle system updates. Memory transfers. Things like that; data which is intended for the next frame. But how many queues could you realistically keep busy at once? 3? Maybe 4?
Not to mention, you're going to need to build a rendering system which can scale. Vulkan does not require that implementations provide more than 1 queue. So your code needs to be able to run reasonably on a system that only offers one queue as well as a system that offers 16. And to take advantage of a 16 queue system, you might need to render very differently.
Oh, and be advised that if you ask for a bunch of queues, but don't use them, performance could be impacted. If you ask for 8 queues, the implementation has no choice but to assume that you intend to be able to issue 8 concurrent sets of commands. Which means that the hardware cannot dedicate all of its resources to a single queue. So if you only ever use 3 of them... you may be losing over 50% of your potential performance to resources that the implementation is waiting for you to use.
Granted, the implementation could scale such things dynamically. But unless you profile this particular case, you'll never know. Oh, and if it does scale dynamically... then you won't be gaining a whole lot from using multiple queues like this either.
Lastly, there has been some research into how effective multiple queue submissions can be at keeping the GPU fed, on several platforms (read all of the parts). The general long and short of it seems to be that:
Having multiple queues executing genuine rendering operations isn't helpful.
Having a single rendering queue with one or more compute queues (either as actual compute queues or graphics queues you submit compute work to) is useful at keeping execution units well saturated during rendering operations.
That strongly depends on your actual scenario and setup. It's hard to tell without any details.
If you submit command buffers to multiple queues you also need to do proper synchronization, and if that's not done right you may actually get worse performance than just using one queue.
Note that even if you submit to only one queue, an implementation may execute command buffers in parallel and even out of order (aka "in-flight"); see the details on this in chapter 2.2 of the spec or this AMD presentation.
If you do compute and graphics, using separate queues with simultaneous submissions (and a synchronization) will improve performance on hardware that supports async compute.
So there is no definitive yes or no on this without knowing about your actual use case.
Since you can submit multiple independent workloads to the same queue, and there doesn't seem to be any implicit ordering guarantee among them, you don't really need more than one queue to saturate the queue family. So I guess the sole purpose of multiple queues is to allow for different priorities among the queues, as specified during device creation.
I know this answer directly contradicts the accepted answer, but that answer fails to address the fact that you don't need more queues to send more parallel work to the device.