I've recently been learning the Vulkan API, but I just cannot understand what VK_SUBPASS_EXTERNAL (assigned to VkSubpassDependency::srcSubpass or VkSubpassDependency::dstSubpass) means.
The official documentation states: "If srcSubpass is equal to VK_SUBPASS_EXTERNAL, the first synchronization scope includes commands that occur earlier in submission order than the vkCmdBeginRenderPass used to begin the render pass instance."
Does it imply that a subpass can depend on another subpass residing in a different render pass? Or does it mean something else?
VK_SUBPASS_EXTERNAL means anything outside of a given render pass scope. When used for srcSubpass it specifies anything that happened before the render pass. And when used for dstSubpass it specifies anything that happens after the render pass.
Does it imply that a subpass can depend on another subpass residing in other render passes?
It means that synchronization mechanisms need to include operations that happen before or after the render pass. It may be another render pass, but it also may be some other operations, not necessarily render pass-related.
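For example, a common external dependency looks like this (a minimal sketch of my own, assuming a single-subpass render pass writing to a swapchain image; not code from the documentation). It makes the color-attachment writes of subpass 0 wait for whatever happened to the image before vkCmdBeginRenderPass:

VkSubpassDependency dependency = {
    .srcSubpass    = VK_SUBPASS_EXTERNAL,  /* anything submitted before vkCmdBeginRenderPass */
    .dstSubpass    = 0,                    /* the first (and only) subpass */
    .srcStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .srcAccessMask = 0,
    .dstStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
};

The srcSubpass half of this dependency behaves like the "before" half of a pipeline barrier recorded just before the render pass begins.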
Let's say I have different versions of render passes depending on how many views/layers I want to render to: 1 (for normal single-plane rendering), 2 (for VR), and 6 (for cubemaps). So for each render pass type I might have these three sub-types. Each sub-type is given this information to create the render pass:
typedef struct VkRenderPassMultiviewCreateInfo {
    VkStructureType    sType;
    const void*        pNext;
    uint32_t           subpassCount;
    const uint32_t*    pViewMasks;
    uint32_t           dependencyCount;
    const int32_t*     pViewOffsets;
    uint32_t           correlationMaskCount;
    const uint32_t*    pCorrelationMasks;
} VkRenderPassMultiviewCreateInfo;
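For concreteness, for the two-view (VR) sub-type I might fill it roughly like this (a sketch; the chaining into VkRenderPassCreateInfo::pNext is omitted):

uint32_t viewMask = 0x3;  /* render to views 0 and 1, i.e. stereo VR */
VkRenderPassMultiviewCreateInfo multiviewInfo = {
    .sType        = VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO,
    .subpassCount = 1,
    .pViewMasks   = &viewMask,  /* one mask per subpass */
};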
My question is: when creating a graphics pipeline, you have to give a subpass. Does filling the above struct with different information result in render passes whose subpasses are not compatible with each other, and so necessitate the creation of separate graphics pipeline objects for each of the multiview types?
With certain very minor exceptions (view count not being one of them), if the render passes are different, then you must use a different pipeline.
It should also be noted that multiview and layered rendering use the same underlying resources; that is, multiview is a form of layered rendering. As such, if a render pass/pipeline expects multiview, it cannot use layered rendering for non-multiview purposes. In multiview, each layer you render to is considered a "view", and primitives are automatically routed to specific layers based on the view mask (all rendering commands in a multiview subpass are automatically repeated for each view, and you're not allowed to write to the Layer built-in). In layered rendering, by contrast, it is up to your code to decide which layer(s) a rendering command renders to.
Lastly, let us look at dynamic rendering in Vulkan 1.3 (aka: rendering without a render pass). With this feature, you don't have to build a render pass, and pipelines meant to be used for such rendering are not built against a render pass. However, this won't exactly fix your problem.
Even though you aren't building your pipelines against an entire VkRenderPass structure, you still need to give the pipeline some idea of what you're rendering to. This is provided by the VkPipelineRenderingCreateInfo at pipeline creation time. Among other things, this struct specifies the viewMask for that subpass.
And this value must exactly match the value you used with vkCmdBeginRendering. So you will need pipelines to be built against a specific multiview setup, and therefore, you will need separate pipelines for each multiview setup.
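As a rough sketch (assuming Vulkan 1.3 dynamic rendering; colorFormat and commandBuffer are hypothetical variables, and attachments/render area are omitted for brevity), the pairing looks like this:

/* Baked into the pipeline at creation time, chained into
   VkGraphicsPipelineCreateInfo::pNext: */
VkPipelineRenderingCreateInfo pipelineRendering = {
    .sType                   = VK_STRUCTURE_TYPE_PIPELINE_RENDERING_CREATE_INFO,
    .viewMask                = 0x3,  /* two views, e.g. stereo */
    .colorAttachmentCount    = 1,
    .pColorAttachmentFormats = &colorFormat,  /* hypothetical VkFormat */
};

/* At recording time, the viewMask must match exactly: */
VkRenderingInfo renderingInfo = {
    .sType    = VK_STRUCTURE_TYPE_RENDERING_INFO,
    .viewMask = 0x3,  /* must equal the pipeline's viewMask */
    /* ...renderArea, layerCount, attachments... */
};
vkCmdBeginRendering(commandBuffer, &renderingInfo);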
Single-view rendering should not be considered a special case of multi-view rendering. They are two different ways of rendering.
Does a subpass dependency only synchronize subpasses within a single render pass, or do its before/after scopes also include previous and upcoming render passes? If so, when should I use a pipeline barrier?
You need to carefully read the specification for these things. Vulkan synchronization is not something you figure out as you go.
The scopes are formally specified:
If srcSubpass is equal to VK_SUBPASS_EXTERNAL, the first synchronization scope includes commands that occur earlier in submission order than the vkCmdBeginRenderPass used to begin the render pass instance. Otherwise, the first set of commands includes all commands submitted as part of the subpass instance identified by srcSubpass and any load, store or multisample resolve operations on attachments used in srcSubpass. In either case, the first synchronization scope is limited to operations on the pipeline stages determined by the source stage mask specified by srcStageMask.
and there's similar wording for the second synchronization scope.
To understand this, you must already know and fully grasp all the nomenclature involved here, which is also specified.
In short though, VK_SUBPASS_EXTERNAL covers everything outside the render pass; that half of the dependency works virtually the same as vkCmdPipelineBarrier. With a specific subpass index instead, that half of the dependency is limited to that particular subpass.
If so, when should I use a pipeline barrier?
Whenever using a subpass dependency is not possible or inconvenient, e.g. for dependencies between two compute dispatches, queue ownership transfers, one-off layout transitions, etc.
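For instance, a one-off layout transition after a transfer write, which a subpass dependency cannot conveniently express, might look like this (a sketch; image and commandBuffer are hypothetical handles):

VkImageMemoryBarrier barrier = {
    .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask       = VK_ACCESS_SHADER_READ_BIT,
    .oldLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image               = image,  /* hypothetical image handle */
    .subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
};
vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,         /* wait for the copy... */
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,  /* ...before sampling */
                     0, 0, NULL, 0, NULL, 1, &barrier);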
So a VkSampler is created with a VkSamplerCreateInfo that just has a bunch of configuration settings, which, as far as I can see, would just define a pure function of some input image.
They are described as:
"VkSampler objects represent the state of an image sampler which is used by the implementation to read image data and apply filtering and other transformations for the shader."
One use (possibly the only use) of VkSampler is to write them to descriptors (such as VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER) for use in descriptor sets that are bound to pipelines/shaders.
My question is: can you write the same VkSampler to multiple different descriptors? From the same or from multiple different descriptor pools? Even if one of those descriptors is in use in some currently executing render pass?
Can you use the same VkSampler concurrently from multiple different render passes / subpasses / pipelines?
Put another way, are VkSamplers stateless? Or do they represent some stateful memory on the device, meaning you shouldn't use the same one concurrently?
VkSampler objects definitely have data associated with them, so it would be wrong to call them "stateless". What they are is immutable. Like VkRenderPass, VkPipeline, and similar objects, once they are created, their contents cannot be changed.
Synchronization between accesses is (generally) needed only when one of the accesses is a modification operation. Since VkSamplers are immutable, there are no modification operations, so synchronization is not needed when you're accessing a VkSampler from different threads, commands, or what have you.
The only exception is the obvious one: vkDestroySampler, which requires that submitted commands that use the sampler have completed before calling the function.
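To illustrate (a sketch; device, sampler, imageView, and sets are hypothetical handles): the same VkSampler can be written into descriptors in two different sets, and no synchronization is needed between the commands that later use them, because nothing about the sampler is ever modified:

VkDescriptorImageInfo imageInfo = {
    .sampler     = sampler,  /* one VkSampler, reused in both writes */
    .imageView   = imageView,
    .imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
};
VkWriteDescriptorSet writes[2];
for (int i = 0; i < 2; ++i) {
    writes[i] = (VkWriteDescriptorSet){
        .sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
        .dstSet          = sets[i],  /* two different descriptor sets */
        .dstBinding      = 0,
        .descriptorCount = 1,
        .descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .pImageInfo      = &imageInfo,
    };
}
vkUpdateDescriptorSets(device, 2, writes, 0, NULL);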
I'm learning Metal, and there's a conceptual question that I'm trying to wrap my head around: at what level, exactly, should my code handle successive drawing operations that require different pipeline states? As I understand it (from answers like this: https://stackoverflow.com/a/43827775/2752221), I can use a single MTLRenderCommandEncoder and change its pipeline state, the vertex buffer it's using, etc., between calls to drawPrimitives:, and the encoder state that was current at the time of each call to drawPrimitives: will be preserved. So that's great.

But it also seems like the design of Metal is such that one can make multiple MTLRenderCommandEncoder instances, and use them to sequentially throw batches of commands into a MTLCommandBuffer. Given that the former works – using one MTLRenderCommandEncoder and changing its state – why would one do the latter? Under what circumstances is it correct to do the former, and under what circumstances is it necessary to do the latter? What is an example of a situation where the latter would be necessary/appropriate?
If it matters, I'm working on a macOS app, using Objective-C. Thanks.
Ignoring multithreaded encoding cases, which are somewhat advanced, the main reason you'd want to create multiple render command encoders during a frame is because you need to change which textures you're rendering to.
You'll notice that you need to provide a render pass descriptor when creating a render command encoder. For this reason, we often say that the sequence of commands belonging to a particular encoder constitutes a render pass. The attachments of that descriptor refer to the textures that will be written to by the commands encoded by the encoder.
Many different techniques, including shadow mapping and postprocessing effects like bloom, require multiple passes to produce. Since you can't change attachments in the midst of a pass, creating a new encoder is the only way to encode multiple passes in a frame.
Relatedly, you should ordinarily use one command buffer per frame. You can, however, sometimes reduce frame time by splitting your passes across multiple command buffers, but this is highly dependent on the shape of your workload and should only be done in tandem with profiling, as it's not always an optimization.
In addition to Warren's answer, another way to look at the question is by examining the API. A number of Metal objects are created from descriptors. The properties of the descriptor at the time an object is created from it govern that object for its lifetime. Those are aspects of the object that can't be changed after creation.
By contrast, the object will have various setter methods to modify other properties over its lifetime.
For a render command encoder, the properties that are fixed for its lifetime are those specified by the MTLRenderPassDescriptor used to create it. If you want to render with different values for any of those properties, the only way to do so is to create a new encoder from a different descriptor. On the other hand, if you can do everything you need/want to do by using the encoder's setter methods, then you don't need a new encoder.
VkGraphicsPipelineCreateInfo has integer member subpass.
My use case is creating a single pipeline object and using it with multiple subpasses. Each subpass has a different color attachment.
No. A pipeline is always built relative to a specific subpass of a specific render pass. It cannot be used in any other subpass:
The subpass index of the current render pass must be equal to the subpass member of the VkGraphicsPipelineCreateInfo structure specified when creating the VkPipeline currently bound to VK_PIPELINE_BIND_POINT_GRAPHICS.
You will need to create multiple pipelines, one for each subpass you intend to use it with. Using a pipeline cache should make this efficient on implementations that don't actually care about the subpass index.
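As a sketch (assuming a pipelineInfo of type VkGraphicsPipelineCreateInfo already filled out against the render pass, plus hypothetical device and pipelineCache handles), that amounts to:

VkPipeline pipelines[2];
for (uint32_t i = 0; i < 2; ++i) {
    pipelineInfo.subpass = i;  /* each pipeline targets one specific subpass */
    vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineInfo,
                              NULL, &pipelines[i]);
}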