In the chapter 'Descriptor Set Binding' in the Vulkan spec the following statement is made:
A compatible descriptor set must be bound for all set numbers that any shaders in a pipeline access, at the time that a draw or dispatch command is recorded to execute using that pipeline.
Where is 'compatible descriptor set' defined? I haven't found the definition in the spec. I wonder whether a descriptor set must match the set layout in the shader exactly, or whether the descriptor set is allowed to have a resource bound to a binding point which does not exist in the shader.
The reason for this question is the following: assume I have two shaders which are almost identical (consider them 'variations' of a template shader). They have the same layouts, except that one of them doesn't use one particular binding point (e.g. it could be a 'fast path' derived from the generic path by an #ifdef, resulting in one binding point being optimized away). Assume I have two draw calls, the first using one shader and the second using the other, and assume the required resources are identical, except for an additional resource used only by the shader that has the extra binding point. Also assume that I use a single descriptor set layout which maps exactly to the shader with the additional binding point. In this situation I would prefer to use the same descriptor set for both shaders to reduce the number of descriptor set updates/binds. The set would match one shader exactly, and it would contain a resource bound to a binding point which does not exist in the other shader.
Shaders do not have a layout; pipelines have a layout. When you build a pipeline, the VkPipelineLayout has to agree with what is defined in the shader... to some extent.
That is, the resources a shader declares have to match the resources specified by the VkPipelineLayout. But the pipeline layout can also define other resources that aren't used by the shaders in that pipeline.
The descriptor sets bound when rendering with a pipeline have to exactly match the descriptor set layouts defined for that pipeline (you can bind descriptors for sets higher than the highest set used by the pipeline, but everything up to the highest set used by the pipeline must match). So if you want to do what you're trying to do, just give both pipelines the same layout.
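A minimal sketch of that arrangement, assuming hypothetical handles (device, cmd, setLayout containing the extra binding, descriptorSet, the two pipelines, and vertexCount): both pipeline variants are created against the same VkPipelineLayout, so one descriptor set matching the "full" layout can stay bound across both draws.

```cpp
// Sketch: give both pipeline variants the same VkPipelineLayout so one
// descriptor set (matching the layout with the extra binding) is valid for
// both draws. 'device', 'cmd', 'setLayout', 'descriptorSet', the pipelines,
// and 'vertexCount' are assumed to exist elsewhere.
VkPipelineLayoutCreateInfo layoutInfo{};
layoutInfo.sType          = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
layoutInfo.setLayoutCount = 1;
layoutInfo.pSetLayouts    = &setLayout;   // layout that includes the extra binding

VkPipelineLayout sharedLayout;
vkCreatePipelineLayout(device, &layoutInfo, nullptr, &sharedLayout);

// Both graphics pipelines are created with 'sharedLayout'; the fast-path
// shader simply never touches the extra binding.

// At record time, bind the set once and reuse it for both pipelines:
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, sharedLayout,
                        0, 1, &descriptorSet, 0, nullptr);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, genericPipeline);
vkCmdDraw(cmd, vertexCount, 1, 0, 0);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, fastPathPipeline);
vkCmdDraw(cmd, vertexCount, 1, 0, 0);
```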
In my Vulkan code, I use SPIR-V reflection (spirv-reflect) to construct a compatible VkPipeline object for a given combination of shader modules (VkShaderModule).
Once a VkPipeline object is created for a specific combination, it is cached so it won't have to be recreated when the same combination is requested again.
Given a list of shader blobs, I can deduce the descriptor sets used and their bindings to construct an appropriate VkDescriptorSetLayout.
However, given that I have already created many pipelines with similar shaders, I have probably already created a compatible VkDescriptorSetLayout which I could potentially reuse.
Generally speaking, is it beneficial to reuse VkDescriptorSetLayout objects across multiple VkPipeline objects?
If it is beneficial, is there a mechanism that would allow me to identify compatible sets (for instance, a set 'tag' attribute in HLSL/GLSL) which could be used for caching?
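One common approach, independent of any shading-language feature, is to key a cache on the reflected binding description itself. A rough sketch under that assumption (all names such as LayoutKey, LayoutKeyHash, and getOrCreateLayout are hypothetical, not part of any library; immutable samplers are ignored for brevity):

```cpp
#include <unordered_map>
#include <vector>
#include <vulkan/vulkan.h>

// Key: the reflected bindings of one set, sorted by .binding.
struct LayoutKey {
    std::vector<VkDescriptorSetLayoutBinding> bindings;

    bool operator==(const LayoutKey& o) const {
        if (bindings.size() != o.bindings.size()) return false;
        for (size_t i = 0; i < bindings.size(); ++i) {
            const VkDescriptorSetLayoutBinding& a = bindings[i];
            const VkDescriptorSetLayoutBinding& b = o.bindings[i];
            if (a.binding != b.binding || a.descriptorType != b.descriptorType ||
                a.descriptorCount != b.descriptorCount || a.stageFlags != b.stageFlags)
                return false;
        }
        return true;
    }
};

struct LayoutKeyHash {
    size_t operator()(const LayoutKey& k) const {
        size_t h = k.bindings.size();
        for (const VkDescriptorSetLayoutBinding& b : k.bindings)
            h = h * 31 + (b.binding ^ (uint32_t(b.descriptorType) << 8) ^
                          (b.descriptorCount << 16) ^ uint32_t(b.stageFlags));
        return h;
    }
};

// Layouts already created for an identical list of bindings.
std::unordered_map<LayoutKey, VkDescriptorSetLayout, LayoutKeyHash> gLayoutCache;

VkDescriptorSetLayout getOrCreateLayout(VkDevice device, const LayoutKey& key) {
    auto it = gLayoutCache.find(key);
    if (it != gLayoutCache.end())
        return it->second;                        // reuse the compatible layout

    VkDescriptorSetLayoutCreateInfo info{};
    info.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    info.bindingCount = static_cast<uint32_t>(key.bindings.size());
    info.pBindings    = key.bindings.data();

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &info, nullptr, &layout);
    gLayoutCache.emplace(key, layout);
    return layout;
}
```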
I want to have one big array of descriptors for sampled images and index into it. The descriptor set layout binding looks like:
binding = 0;
descriptorCount = 8192;
descriptorType = VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE;
This means much less binding of descriptors; everything is indexed with information passed to the shader or read from a storage buffer. The problem is updating the entries of this image table/descriptor array. Let's say I issue a draw call that reads from this descriptor array in the range [0, 16]. While the draw call is still being processed, I want to update descriptor array element [28]. No previous commands are using element [28]. Is my understanding correct that this is illegal/undefined, because you are NOT ALLOWED to update a descriptor set (the entire set) while it is still in use? Does this mean that every time I want to update an element in the descriptor array I need to allocate a new descriptor set? If I have to wait for the command buffer to be finished with the descriptor set before updating it, then it's likely I'll have to wait on a fence, because descriptor updates are immediate.
This system of having one large descriptor array, where draw calls simply index into it to get the right texture, seems ideal because you don't bind descriptors as often, but I want to know whether updating it like this is illegal/undefined.
As with many things, this is covered by the descriptor indexing feature. If this feature is not available, then you're out of luck.
With descriptor indexing, the VK_DESCRIPTOR_BINDING_UPDATE_UNUSED_WHILE_PENDING_BIT flag does exactly what it says. It allows you to update unused descriptors while a command buffer the descriptor is bound to is pending.
What "unused" means depends on a separate flag. If VK_DESCRIPTOR_BINDING_PARTIALLY_BOUND_BIT is set, then "unused" refers to any descriptor which is not "dynamically used". That is, the rendering command will not access it based on any of the data used by that rendering command. If this bit is set, then "unused" refers to any descriptor which is "statically used". And "statically used" is rather more broad: if your SPIR-V has any instruction that accesses the OpVariable containing the descriptor at all, then it is considered "static use".
And yes, since an arrayed descriptor is a single OpVariable, unless you use the partially bound bit, you cannot use the array at all if you want to modify even part of it. But then again, the partially bound bit also is what makes it possible to have descriptors in the array that do not have valid values. So you probably already have that anyway if you're trying to do this sort of thing at all.
These flags are part of the descriptor set layout's creation info (using an extension structure).
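For concreteness, here is a sketch of the binding from the question with both flags applied through VkDescriptorSetLayoutBindingFlagsCreateInfo, assuming the corresponding descriptor-indexing features were enabled at device creation; device and the fragment-only stage flags are assumptions.

```cpp
// Sketch: one large sampled-image array whose unused entries may be updated
// while a command buffer that uses the set is pending.
VkDescriptorBindingFlags bindingFlags =
    VK_DESCRIPTOR_BINDING_UPDATE_UNUSED_WHILE_PENDING_BIT |
    VK_DESCRIPTOR_BINDING_PARTIALLY_BOUND_BIT;

VkDescriptorSetLayoutBindingFlagsCreateInfo flagsInfo{};
flagsInfo.sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO;
flagsInfo.bindingCount  = 1;
flagsInfo.pBindingFlags = &bindingFlags;

VkDescriptorSetLayoutBinding binding{};
binding.binding         = 0;
binding.descriptorCount = 8192;
binding.descriptorType  = VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE;
binding.stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;  // assumption: fragment-only access

VkDescriptorSetLayoutCreateInfo layoutInfo{};
layoutInfo.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layoutInfo.pNext        = &flagsInfo;            // the extension/promoted structure
layoutInfo.bindingCount = 1;
layoutInfo.pBindings    = &binding;

VkDescriptorSetLayout layout;
vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &layout);
```

With such a layout, updating element [28] via vkUpdateDescriptorSets while a command buffer that only dynamically uses elements [0, 16] is pending would be permitted under the rules described above.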
So a VkSampler is created with a VkSamplerCreateInfo that just has a bunch of configuration settings which, as far as I can see, would just define a pure function of some input image.
They are described as:
VkSampler objects represent the state of an image sampler which is used by the implementation to read image data and apply filtering and other transformations for the shader.
One use (possibly the only use) of a VkSampler is to write it to descriptors (such as VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER) for use in descriptor sets that are bound to pipelines/shaders.
My question is: can you write the same VkSampler to multiple different descriptors? From the same or from multiple different descriptor pools? Even if one of those descriptors is in use in some currently executing render pass?
Can you use the same VkSampler concurrently from multiple different render passes / subpasses / pipelines?
Put another way, are VkSamplers stateless? Or do they represent some stateful memory on the device, such that you shouldn't use the same one concurrently?
VkSampler objects definitely have data associated with them, so it would be wrong to call them "stateless". What they are is immutable. Like VkRenderPass, VkPipeline, and similar objects, once they are created, their contents cannot be changed.
Synchronization between accesses is (generally) needed only when one of the accesses is a modification operation. Since VkSamplers are immutable, there are no modification operations. So synchronization is not needed when you're accessing a VkSampler from different threads, commands, or what have you.
The only exception is the obvious one: vkDestroySampler, which requires that submitted commands that use the sampler have completed before calling the function.
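As an illustration, here is a sketch that writes one VkSampler into two descriptor sets, possibly allocated from different pools; sampler, viewA/viewB, setFromPoolA/setFromPoolB, and device are hypothetical handles. Reusing the sampler object itself needs no synchronization; the usual rules about not updating a descriptor set that a pending command buffer uses still apply to the sets themselves.

```cpp
// Sketch: the same immutable VkSampler written into two descriptor sets.
VkDescriptorImageInfo imageInfos[2] = {
    { sampler, viewA, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL },
    { sampler, viewB, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL },
};

VkWriteDescriptorSet writes[2]{};
for (int i = 0; i < 2; ++i) {
    writes[i].sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writes[i].dstSet          = (i == 0) ? setFromPoolA : setFromPoolB;
    writes[i].dstBinding      = 0;
    writes[i].descriptorCount = 1;
    writes[i].descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    writes[i].pImageInfo      = &imageInfos[i];
}
vkUpdateDescriptorSets(device, 2, writes, 0, nullptr);
```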
I'm learning Metal, and there's a conceptual question that I'm trying to wrap my head around: at what level, exactly, should my code handle successive drawing operations that require different pipeline states? As I understand it (from answers like this: https://stackoverflow.com/a/43827775/2752221), I can use a single MTLRenderCommandEncoder and change its pipeline state, the vertex buffer it's using, etc., between calls to drawPrimitives:, and the encoder state that was current at the time of each call to drawPrimitives: will be preserved. So that's great. But it also seems like the design of Metal is such that one can make multiple MTLRenderCommandEncoder instances, and use them to sequentially throw batches of commands into a MTLCommandBuffer. Given that the former works – using one MTLRenderCommandEncoder and changing its state – why would one do the latter? Under what circumstances is it correct to do the former, and under what circumstances is it necessary to do the latter? What is an example of a situation where the latter would be necessary/appropriate?
If it matters, I'm working on a macOS app, using Objective-C. Thanks.
Ignoring multithreaded encoding cases, which are somewhat advanced, the main reason you'd want to create multiple render command encoders during a frame is because you need to change which textures you're rendering to.
You'll notice that you need to provide a render pass descriptor when creating a render command encoder. For this reason, we often say that the sequence of commands belonging to a particular encoder constitute a render pass. The attachments of that descriptor refer to the textures that will be written to by the commands encoded by the encoder.
Many different techniques, including shadow mapping and postprocessing effects like bloom, require multiple passes to produce. Since you can't change attachments in the midst of a pass, creating a new encoder is the only way to encode multiple passes in a frame.
Relatedly, you should ordinarily use one command buffer per frame. You can, however, sometimes reduce frame time by splitting your passes across multiple command buffers, but this is highly dependent on the shape of your workload and should only be done in tandem with profiling, as it's not always an optimization.
In addition to Warren's answer, another way to look at the question is by examining the API. A number of Metal objects are created from descriptors. The properties of the descriptor at the time an object is created from it govern that object for its lifetime. Those are aspects of the object that can't be changed after creation.
By contrast, the object will have various setter methods to modify other properties over its lifetime.
For a render command encoder, the properties that are fixed for its lifetime are those specified by the MTLRenderPassDescriptor used to create it. If you want to render with different values for any of those properties, the only way to do so is to create a new encoder from a different descriptor. On the other hand, if you can do everything you need/want to do by using the encoder's setter methods, then you don't need a new encoder.
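To make the one-encoder-per-pass point concrete, here is a sketch using the metal-cpp C++ wrapper (the Objective-C calls mirror it one-to-one); commandQueue, shadowMap, drawableTexture, shadowPSO, scenePSO, and the vertex counts are hypothetical.

```cpp
// Two render passes in one command buffer: each needs its own encoder because
// the attachments are fixed by the render pass descriptor at encoder creation.
MTL::CommandBuffer* cmdBuf = commandQueue->commandBuffer();

// Pass 1: render the shadow map (depth-only attachment).
MTL::RenderPassDescriptor* shadowPass = MTL::RenderPassDescriptor::alloc()->init();
shadowPass->depthAttachment()->setTexture(shadowMap);
shadowPass->depthAttachment()->setLoadAction(MTL::LoadActionClear);
shadowPass->depthAttachment()->setStoreAction(MTL::StoreActionStore);

MTL::RenderCommandEncoder* shadowEnc = cmdBuf->renderCommandEncoder(shadowPass);
shadowEnc->setRenderPipelineState(shadowPSO);
shadowEnc->drawPrimitives(MTL::PrimitiveTypeTriangle,
                          NS::UInteger(0), NS::UInteger(shadowVertexCount));
shadowEnc->endEncoding();
shadowPass->release();

// Pass 2: render the scene to the drawable, sampling the shadow map.
MTL::RenderPassDescriptor* mainPass = MTL::RenderPassDescriptor::alloc()->init();
mainPass->colorAttachments()->object(0)->setTexture(drawableTexture);
mainPass->colorAttachments()->object(0)->setLoadAction(MTL::LoadActionClear);
mainPass->colorAttachments()->object(0)->setStoreAction(MTL::StoreActionStore);

MTL::RenderCommandEncoder* mainEnc = cmdBuf->renderCommandEncoder(mainPass);
mainEnc->setRenderPipelineState(scenePSO);
mainEnc->setFragmentTexture(shadowMap, NS::UInteger(0));
mainEnc->drawPrimitives(MTL::PrimitiveTypeTriangle,
                        NS::UInteger(0), NS::UInteger(sceneVertexCount));
mainEnc->endEncoding();
mainPass->release();

cmdBuf->commit();
```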
I am wondering why it's possible to specify multiple descriptor set layouts in VkPipelineLayoutCreateInfo, since a single one already includes all of the bindings.
A descriptor set layout describes the layout for a single descriptor set. But a pipeline can have multiple descriptor sets. This is what the layout(set = #, binding = #) qualifier in GLSL means: it specifies which set this particular resource gets its descriptor from. The set is an index into the VkPipelineLayoutCreateInfo::pSetLayouts array. The binding is the index into that set's list of descriptors. The two of them combined identify the specific descriptor within the pipeline layout.
So your assumption that a single descriptor set "already includes all of the bindings" is incorrect.
As stated in the specification, the point of having multiple descriptor sets is to allow users to change one set of descriptors without changing another, and to allow pipelines to be partially layout compatible with one another.
For example, you might have per-scene information like the location of lights and the camera/projection matrices. But you might also have per-object information like the object's transformation matrices. If all of that information is in the same descriptor set, then giving different objects different per-object descriptor sets means each of those sets would also have to carry the same per-scene info.
You can instead split them up into separate descriptor sets, with the less frequently changing information in set 0 (per-scene) and the more frequently changing data in set 1 (per-object). That way, you don't have to change every descriptor just to change your per-object data.
Also, you can change pipelines without having to rebind the per-scene set. For example, let's say you're switching from your non-skinned pipeline to your skinned pipeline. Obviously they have fundamentally different kinds of per-object data, but their per-scene data is the same. If these data live in different descriptor sets, then you don't need another descriptor set for the per-scene data. You don't even need to bind a new set 0 when you change the pipeline binding. Because set 0 is compatible with both pipelines, set 0's binding remains valid in both.
The specification even has a note specifically about this scenario:
Place the least frequently changing descriptor sets near the start of the pipeline layout, and place the descriptor sets representing the most frequently changing resources near the end. When pipelines are switched, only the descriptor set bindings that have been invalidated will need to be updated and the remainder of the descriptor set bindings will remain in place.
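A short sketch of that split, with hypothetical handles and types (perSceneLayout, perObjectLayout, perSceneSet, an Object struct carrying its own set, cmd, device): set 0 is bound once per scene, and only set 1 is rebound per object.

```cpp
// Sketch: per-scene data in set 0, per-object data in set 1. In GLSL these
// correspond to e.g.
//   layout(set = 0, binding = 0) uniform SceneData  { ... };
//   layout(set = 1, binding = 0) uniform ObjectData { ... };
VkDescriptorSetLayout setLayouts[2] = { perSceneLayout, perObjectLayout };

VkPipelineLayoutCreateInfo layoutInfo{};
layoutInfo.sType          = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
layoutInfo.setLayoutCount = 2;
layoutInfo.pSetLayouts    = setLayouts;

VkPipelineLayout pipelineLayout;
vkCreatePipelineLayout(device, &layoutInfo, nullptr, &pipelineLayout);

// At record time: bind set 0 once, then rebind only set 1 for each object.
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                        0 /*firstSet*/, 1, &perSceneSet, 0, nullptr);
for (const Object& obj : objects) {   // Object/objects are hypothetical
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            1 /*firstSet*/, 1, &obj.perObjectSet, 0, nullptr);
    vkCmdDraw(cmd, obj.vertexCount, 1, 0, 0);
}
```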