I need some clarity about something in the Vulkan Spec. Section 8.1 says:
Render passes must include subpass dependencies (either directly or
via a subpass dependency chain) between any two subpasses that operate
on the same attachment or aliasing attachments and those subpass
dependencies must include execution and memory dependencies separating
uses of the aliases, if at least one of those subpasses writes to one
of the aliases. These dependencies must not include the
VK_DEPENDENCY_BY_REGION_BIT if the aliases are views of distinct image
subresources which overlap in memory.
Does this mean that if subpass S0 writes to attachment X (either as Color, Depth, or Resolve) and a subsequent subpass S1 uses that attachment X (either as Color, Input, Depth, or Resolve), then there must be a subpass dependency from S0->S1 (directly or via a chain)?
EDIT 1:
Upon further thought, the scenario is not just S0 writing and S1 reading. If S0 reads and S1 writes, that also needs synchronization from S0->S1.
EDIT 2:
I should say that what I was specifically unsure about was the case of a color attachment written by 2 different subpasses. Assuming the subpasses have no logical dependency other than using the same color attachment, they could run in parallel if they used different attachments. Before reading this paragraph, I was under the impression that dependencies were only needed between 2 subpasses if subpass B needed some actual data from subpass A, and so had to wait until this data was available. That paragraph talks about general memory hazards.
If there is no logical need for 2 subpasses to be ordered in a certain way, the GPU could decide which is the better one to run first. But if the developer always has to declare dependencies whenever 2 subpasses touch the same attachment, then that's potential speed lost that only the GPU could recover. It shouldn't be hard for the GPU to figure out that, although 2 subpasses don't have a developer-stated dependency between them, they do read/write the same attachment, so it shouldn't just mindlessly write to it from both subpasses at the same time. Yes, I mean that the GPU would do some simple synchronization for basic cases, so as to not decapitate itself.
If there is a render pass that has two subpasses A and B, and both use the same attachment, and A writes to the shared attachment, then there is logically an ordering requirement between A and B. There has to be.
If there is no ordering requirement between two operations, then it is logically OK for those two operations to be interleaved. That is, partially running one, then partially running another, then completing the first. And they can be interleaved to any degree.
You can't interleave A and B, because the result you get is incoherent. For any shared attachment between A and B, if B writes to a pixel in that attachment, should A read that data? What if B writes twice to it? Should A read the pre-written value, the value after the first write, or the value after the second write? If A also writes to that pixel, should B's write happen before it or after? Or should A's write be between two of B's writes? And if so, which ones?
Interleaving A and B just doesn't make sense. There must be a logical order between them. So the scenario you hypothesize, where there "is no logical need for 2 subpasses to be ordered in a certain way" does not make sense.
Either you want any reads/writes done by B to happen before the writes done by A, or you want them to happen after. Neither choice is better or more correct than the other; they are both equally valid usage patterns.
Vulkan is an explicit, low-level rendering API. It is not Vulkan's job to figure out what you're trying to do. It's your job to tell Vulkan what you want it to do. And since either possibility is perfectly valid, you must tell Vulkan what you want done.
Both A & B need 5 color attachments each, but other than the memory, they don't care about each other. Why can't the GPU share the 5 color attachments intelligently between the subpasses, interleaving as it sees fit?
How would that possibly work?
If the first half of A writes some data to attachments that the second half of A's operations will read, B can't get in the middle of that and overwrite the data. The data would be overwritten with incorrect values, and the second half of A wouldn't have access to the data written by the first half.
If A and B both start by clearing buffers (FYI: calling vkCmdClearAttachments at all should be considered at least potentially dubious), then no interleaving is possible, since the first thing they will do is overwrite the attachments in their entirety. The rendering commands within those subpasses expect the attachments to have known data, and having someone come along and mess with it will break those assumptions and yield incorrect results.
Therefore, whatever these subpasses are doing, they must execute in their entirety before the other. You may not care what order they execute in, but they must entirely execute in some order, with no overlap between them.
Vulkan just makes you spell out what that order is.
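Spelling out that order looks something like the following. This is a minimal sketch, not code from the question: it assumes subpass 0 writes a color attachment that subpass 1 then reads as an input attachment, and the stage/access masks reflect that assumption.

```c
#include <vulkan/vulkan.h>

/* Declare the ordering between two subpasses that touch the same color
 * attachment: subpass 0 writes it, subpass 1 reads it.  This goes into
 * VkRenderPassCreateInfo::pDependencies. */
VkSubpassDependency make_write_to_read_dependency(void)
{
    VkSubpassDependency dep = {0};
    dep.srcSubpass    = 0;  /* the subpass that writes the attachment */
    dep.dstSubpass    = 1;  /* the subpass that reads it */
    dep.srcStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    dep.dstStageMask  = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    dep.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    dep.dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
    /* BY_REGION is fine here because subpass 1 reads the same (x,y)
     * locations; the spec quoted above forbids it only for aliased
     * views of distinct subresources that overlap in memory. */
    dep.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
    return dep;
}
```

If you wanted B's reads/writes to happen before A's writes instead, you would declare the dependency in the other direction; both are equally valid, which is exactly why Vulkan makes you choose.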
Related
I'm in the processing of learning Vulkan, and I have just integrated ImGui into my code using the Vulkan-GLFW example in the original ImGui repo, and it works fine.
Now I want to render both the GUI and my 3D model on the screen at the same time, and since the GUI and the model definitely need different shaders, I need to use multiple pipelines and submit multiple commands. The GUI is partly transparent, so I would like it to be rendered after the model. The Vulkan spec states that the execution order of commands is not necessarily the order in which I record them, so I need synchronization of some kind. In this Reddit post several methods of achieving exactly my goal were proposed, and I once believed that I must use multiple subpasses (together with subpass dependencies), barriers, or other synchronization methods like that to solve this problem.
Then I had a look at SaschaWillems' Vulkan examples; in the ImGui example, I see no synchronization between the two draw calls. It just records the command to draw the model first, and then the command to draw the GUI.
I am confused. Is synchronization really needed in this case, or did I misunderstand something about command re-ordering or blending? Thanks.
Think about what you're doing for a second. Why do you think there needs to be synchronization between the two sets of commands? Because the second set of commands needs to blend with the data in the first set, right? And therefore, it needs to do a read/modify/write (RMW), which must be able to read data written by the previous set of commands. The data being read has to have been written, and that typically requires synchronization.
But think a bit more about what that means. Blending has to read from the framebuffer to do its job. But... so does the depth test, right? It has to read the existing sample's depth value, compare it with the incoming fragment, and then discard the fragment or not based on the depth test. So basically every draw call that uses a depth test contains a framebuffer read/modify/write.
And yet... your depth tests work. Not only do they work between draw calls without explicit synchronization, they also work within a draw call. If two triangles in a draw call overlap, you don't have any problem with seeing the bottom one through the top one, right? You don't have to do inter-triangle synchronization to make sure that the previous triangles' writes are finished before the reads.
So somehow, the depth test's RMW works without any explicit synchronization. So... why do you think that this is untrue of the blend stage's RMW?
The Vulkan specification states that commands, and stages within commands, will execute in a largely unordered way, with several exceptions. The most obvious being the presence of explicit execution barriers/dependencies. But it also says that the fixed-function per-sample testing and blending stages will always execute (as if) in submission order (within a subpass). Not only that, it requires that the triangles generated within a command also execute these stages (as if) in a specific, well-defined order.
That's why your depth test doesn't need synchronization; Vulkan requires that this is handled. This is also why your blending will not need synchronization (within a subpass).
So you have plenty of options (in order from fastest to slowest):
Render your UI in the same subpass as the non-UI. Just change pipelines as appropriate.
Render your UI in a subpass with an explicit dependency on the framebuffer images of the non-UI subpass. While this is technically slower, it probably won't be slower by much if at all. Also, this is useful for deferred rendering, since your UI needs to happen after your lighting pass, which will undoubtedly be its own subpass.
Render your UI in a different render pass. This would only really be needed for cases where you need to do some full-screen work (SSAO) that would force your non-UI render pass to terminate anyway.
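Option 1 amounts to nothing more than recording the draws back-to-back. Here is a sketch; all the handles (`cmd`, `modelPipeline`, `uiPipeline`) and the vertex counts are placeholders, not anything from the question's code.

```c
#include <vulkan/vulkan.h>

/* Model and UI in the same subpass, switching pipelines between draws. */
void record_scene_and_ui(VkCommandBuffer cmd,
                         VkPipeline modelPipeline,
                         VkPipeline uiPipeline,
                         uint32_t modelVertexCount,
                         uint32_t uiVertexCount)
{
    /* Draw the 3D model first... */
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, modelPipeline);
    vkCmdDraw(cmd, modelVertexCount, 1, 0, 0);

    /* ...then the transparent UI.  No explicit synchronization is
     * recorded between the two draws: per-sample blending and the
     * depth test execute (as if) in submission order within a subpass,
     * so the UI blends correctly against the model's output. */
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, uiPipeline);
    vkCmdDraw(cmd, uiVertexCount, 1, 0, 0);
}
```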
Say the first RenderPass generates an output of a rendered texture; the second RenderPass then samples the texture in the shader and renders it to the swapchain.
I don't know if I can do this kind of rendering inside a single RenderPass using subpasses. Can subpass attachments have different sizes? In other words, can subpasses behave like render textures?
It seems like there's a few separate questions wrapped up here, so I'll have a stab at answering them in order!
First, there's no need to explicitly synchronize two renderpasses where the second relies on the output from the first, provided they are either recorded on the same command buffer in the correct order, or (if recorded on separate command buffers) submitted in the correct order. The GPU will consume commands in the order submitted, so there's an implicit synchronization there.
If you are consuming the output from a renderpass (or subpass) by sampling inside a shader (which you will need to do if the sizes are different, see below), rather than setting up a subpass output as an input attachment in a later subpass, then you will need to do so in a separate render pass.
If you are consuming the output from a previous subpass as an input attachment, that implies you are using pixel local load operations inside your shader (where framebuffer attachments written in a previous subpass are read from at the exact same location in a subsequent subpass). This requires attachments be the same size, since all operations occur at the same pixel location.
From the Vulkan specification:
The subpasses in a render pass all render to the same dimensions, and fragments for pixel (x,y,layer) in one subpass can only read attachment contents written by previous subpasses at that same (x,y,layer) location. For multi-pixel fragments, the pixel read from an input attachment is selected from the pixels covered by that fragment in an implementation-dependent manner. However, this selection must be made consistently for any fragment with the same shading rate for the lifetime of the VkDevice.
So if your attachments vary in size, this would imply you need to consume your initial output in a separate renderpass.
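As a sketch of the separate-renderpass route, this is one common way to make the first pass's color output samplable in the second: the attachment's finalLayout performs the layout transition when the first render pass ends, and an external subpass dependency orders the color writes against the later fragment-shader reads. Formats, load/store ops, and the rest of the render pass setup are assumed to be filled in elsewhere.

```c
#include <vulkan/vulkan.h>

/* The color attachment of render pass 1, which render pass 2 samples. */
VkAttachmentDescription color = {0};
/* format, samples, loadOp, storeOp: assumed filled in elsewhere */
color.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
color.finalLayout   = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

/* External dependency in render pass 1: color writes in its last
 * subpass must complete before any later fragment shader reads. */
VkSubpassDependency toSampling = {0};
toSampling.srcSubpass    = 0;                   /* last subpass of pass 1 */
toSampling.dstSubpass    = VK_SUBPASS_EXTERNAL; /* whatever comes after  */
toSampling.srcStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
toSampling.dstStageMask  = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
toSampling.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
toSampling.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
```

The same ordering could be expressed with a vkCmdPipelineBarrier between the two render passes instead; the external dependency version simply folds it into the render pass definition.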
I have three inputs into Merge Signals at different times; the output of Merge Signals appears to wait for all signals and then output them together. What I want is an output for every signal (on Current Output) as soon as it is input.
For example: if I write 1 in the initial value and 5, 5, 5 in all three numerics, with a 3-second time delay, I will have 6, 7, and 16 in Target 1, Target 2, and Target 3, and overall 16 on Current Output. I don't want that to appear all at once on Current Output; I want it to appear with the same timing as it appears in the targets.
Please see the attached photo.
Can anyone help me with that?
Thanks.
All nodes in LabVIEW fire when all their inputs arrive. This language uses synchronous data flow, not asynchronous (which is the behavior you were describing).
The output of Merge Signals is a single data structure that contains all the input signals — merged, like the name says. :-)
To get the behavior you want, you need some sort of asynchronous communication. In older versions of LabVIEW, I would tell you to create a queue refnum and go look at examples of a producer/consumer pattern.
But in LabVIEW 2016 and later, right-click on each of the tunnels coming out of your flat sequence and choose “Create>>Channel Writer...”. In the dialog that appears, choose the Messenger channel. Wire all the outputs of the new nodes together. This creates an asynchronous wire, which draws very differently from your regular wires. On the wire, right-click and choose “Create>>Channel Reader...”. Put the reader node inside a For Loop and wire a 3 to the N terminal. Now you have the behavior that as each block finishes, it sends its data to the loop.
Move the Write nodes inside the Flat Sequence if you want to guarantee the enqueue order. If you instead do the Writes outside, you’ll sometimes get out-of-order data (i.e. when the data generation nodes happen to run quickly).
Side note: I (and most LabVIEW architects) would strongly recommend you avoid using sequence structures as much as possible. They’re a bad habit to get into — lots of writings online about their disadvantages.
I don't know whether it is an API feature (I'm almost sure it's not) or a GPU specific, but why, for example, can vkCmdWaitEvents be recorded both inside and outside of a render pass, while vkCmdResetEvent can be recorded only outside? The same applies to other commands.
When it comes to event setting in particular, it plays havoc with how the render pass model interacts with tile-based renderers.
Recall that the whole point of the complexity of the render pass model is to service the needs of tile-based renderers (TBRs). When a TBR encounters a complex series of subpasses, the way it wants to execute them is as follows.
It does all of the vertex processing stages for all of the rendering commands for all of the subpasses, all at once, storing the resulting vertex data in a buffer for later consumption. Then for each tile, it executes the rasterization stages for each subpass on the primitives that are involved in the building of that tile.
Note that this is the ideal case; specific things can make it fail to various degrees, but even then it tends to fail in batches, where you can still execute several subpasses of a render pass like this.
So let's say you want to set an event in the middle of a subpass. OK... when does that actually happen? Remember that the set-event command actually sets the event after all of the preceding commands have completed. In a TBR, if everything is proceeding as above, when does it get set? Well, ideally all vertex processing for the entire renderpass is supposed to happen before any rasterization, so setting the event has to happen after the vertex processing is done. And all rasterization processing happens on a tile-by-tile basis, processing whichever primitives overlap that tile. Because of this fragmented rendering process, it's difficult to know when an individual rendering command has completed.
So the only place the set-event call could happen is... after the entire renderpass has completed. That is obviously not very useful.
The alternative is to have the act of issuing a vkCmdSetEvent call fundamentally reshape how the implementation builds the entire render pass: breaking up the subpass into the stuff that happens before the event and the stuff that happens after it.
But the reason why VkRenderPass is so big and complex, the reason why VkPipelines have to reference a specific subpass of a render pass, and the reason why vkCmdPipelineBarrier within a render pass requires you to specify a subpass self-dependency, is so that a TBR implementation can know up front when and where it will have to break the ideal TBR rendering scheme. Having a function introduce that breakup without forewarning works against this idea.
Furthermore, Vulkan is designed so that, if something would have to be implemented highly inefficiently, then it is either impossible to do directly or the API makes it look really inefficient. vkCmd(Re)SetEvent cannot be efficiently implemented within a render pass on TBR hardware, so you can't do it, period.
Note that vkCmdWaitEvents doesn't have this problem, because the system knows that the wait is waiting on something outside of a render pass. So it's just some particular stage that has to wait on the event to complete. If it's a vertex stage doing the waiting, it's easy enough to set that wait at the beginning of that command's processing. If it's a fragment stage, it can just insert the wait at the beginning of all rasterization processing; it's not the most efficient way to handle it, but since all vertex processing has executed, odds are good that the event has been set by then.
For other kinds of commands, recall that the dependency graph of everything that happens within a render pass is defined within VkRenderPass itself. The subpass dependency graph is there. You can't even issue a normal vkCmdPipelineBarrier within a render pass, not unless that subpass has an explicit self-dependency in the subpass dependency graph.
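The self-dependency requirement looks like this in practice. A sketch, with placeholder stage/access masks assuming a fragment shader reading what earlier fragments in the same subpass wrote:

```c
#include <vulkan/vulkan.h>

/* A self-dependency: srcSubpass == dstSubpass.  Declaring it in
 * VkRenderPassCreateInfo::pDependencies is what makes a
 * vkCmdPipelineBarrier inside subpass 0 legal at all. */
VkSubpassDependency selfDep = {0};
selfDep.srcSubpass      = 0;
selfDep.dstSubpass      = 0;   /* same subpass */
selfDep.srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
selfDep.dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
selfDep.srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
selfDep.dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
selfDep.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
/* Any vkCmdPipelineBarrier recorded inside subpass 0 must use stage
 * and access masks covered by this declared self-dependency. */
```

The point is exactly the "forewarning" described above: the implementation sees the self-dependency at render pass creation time, long before any command buffer is recorded.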
So what good would it be to issue a compute shader dispatch or a memory transfer operation in the middle of a subpass if you cannot wait for the operation to finish in that subpass or a later one? If you can't wait on the operation to end, then you cannot use its results. And if you can't use its results... you may as well have issued it before the render pass.
And the reason you can't have other dependencies goes back to TBRs. The dependency graph is an inseparable part of the render pass to allow TBRs to know up-front what the relationship between subpasses is. That allows them to know whether they can build their ideal renderer, and when/where that might break down.
Since the TBR model of render passes makes such waiting impractical, there's no point in allowing you to issue such commands.
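To make the asymmetry concrete, here is a sketch of the pattern that is allowed: setting the event outside the render pass and waiting on it inside. The handles and the transfer-to-fragment-shader scenario are placeholder assumptions, and no memory barriers are attached to the wait (an execution-only dependency).

```c
#include <vulkan/vulkan.h>
#include <stddef.h>

void record(VkCommandBuffer cmd, VkEvent ev,
            const VkRenderPassBeginInfo *rpBegin)
{
    /* ... transfer commands producing data the fragment shaders read ... */

    /* Setting the event is only legal OUTSIDE a render pass instance. */
    vkCmdSetEvent(cmd, ev, VK_PIPELINE_STAGE_TRANSFER_BIT);

    vkCmdBeginRenderPass(cmd, rpBegin, VK_SUBPASS_CONTENTS_INLINE);

    /* Waiting inside the render pass is legal: fragment work simply
     * stalls until the transfer outside the render pass has finished. */
    vkCmdWaitEvents(cmd, 1, &ev,
                    VK_PIPELINE_STAGE_TRANSFER_BIT,        /* srcStageMask */
                    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, /* dstStageMask */
                    0, NULL, 0, NULL, 0, NULL);            /* no barriers */

    /* ... draws whose fragment shaders consume the transferred data ... */
    vkCmdEndRenderPass(cmd);
}
```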
Because a renderpass is a special construct that implies focusing work solely on the framebuffer.
In addition, each of the subpasses is allowed to run in parallel unless there is an explicit dependency between them.
This has an effect on how they would need to be synchronized with instructions in the other subpasses.
Doing copies dominates use of the memory bus and would stall render work that depends on it. Doing that inside the renderpass creates a big GPU bubble that can easily be avoided by putting the copy outside and making sure it's finished by the time you start the renderpass.
Some hardware also has dedicated copy units that are separate from the graphics hardware, so the less synchronizing you need to do between them, the better.
In VkSubmitInfo, when pWaitDstStageMask[0] is VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, the Vulkan implementation executes pipeline stages without waiting for pWaitSemaphores[0] until it reaches the color attachment output stage.
However, if the command buffer has multiple subpasses and multiple draw commands, does pWaitDstStageMask apply to the stages of all draw commands?
If I want the Vulkan implementation to wait on the semaphore when it reaches the color attachment output stage of the last subpass, what should I do?
You probably don't actually want to do this. On hardware that benefits from multi-subpass renderpasses, the fragment work for the entire renderpass will be scheduled and executed as essentially a single monolithic chunk of work. E.g. all subpasses will execute for some pixel (x,y) regions before any subpasses are executed for some other pixel (x,y) regions. So it doesn't really make sense to, say, insert a synchronization barrier on an external event between two subpasses. So you need to think about what your renderpass is doing and whether it is really open to the kinds of optimizations subpasses were designed for.
If not, then treating the subpasses (or at least the final one) as independent renderpasses isn't going to be a loss anyway, so you might as well just put it in a separate renderpass in a separate submit batch, and put the semaphore wait before it.
If so, then you just want to do the semaphore wait before the COLOR_ATTACHMENT stage for the whole renderpass anyway.
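That whole-renderpass wait is just the submit-time setup from the question; a minimal sketch, with placeholder handles (`queue`, `cmdBuf`, `acquireSemaphore`):

```c
#include <vulkan/vulkan.h>

/* Wait on the semaphore before any color attachment output in the
 * whole submission; earlier stages are free to run beforehand. */
void submit_waiting_at_color_output(VkQueue queue,
                                    VkCommandBuffer cmdBuf,
                                    VkSemaphore acquireSemaphore)
{
    VkPipelineStageFlags waitStage =
        VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

    VkSubmitInfo submit = {0};
    submit.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.waitSemaphoreCount = 1;
    submit.pWaitSemaphores    = &acquireSemaphore;
    /* Applies to the whole batch: no command in cmdBuf reaches the
     * color attachment output stage before the semaphore is signaled. */
    submit.pWaitDstStageMask  = &waitStage;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers    = &cmdBuf;

    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```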
In such a situation You have (I think) two options:
You can split the render pass: exclude the last subpass and submit its commands as a separate render pass recorded in a separate command buffer, so You can specify a semaphore it should wait on (but this doesn't sound too reasonable), or...
You can use events. You should signal an event after the commands that generate the results later commands require, and then, in the last sub-pass, You should wait on that event just before the commands that indeed need to wait.
The second approach is probably preferable (even though You are not using the submission's pWaitSemaphores and pWaitDstStageMask fields), but it also has its restrictions:
vkCmdWaitEvents must not be used to wait on event signal operations occurring on other queues.
And I'm not sure, but maybe subpass dependencies may also help You here. Clever definitions of the submission's pWaitSemaphores and the render pass's subpass dependencies may be enough to do the job. But I'm not too confident in explaining subpass dependencies (I'm not sure I fully understand them), so don't rely on this. Maybe someone can confirm it. But the above two options definitely will do the trick.