Should I reuse VkDescriptorSetLayout objects? - vulkan

In my Vulkan code, I use SPIR-V reflection (spirv-reflect) to construct a compatible VkPipeline object for a combination of shader modules (VkShaderModule).
Once a VkPipeline object is created for a specific combination, it is cached so it won't have to be recreated when the same combination is requested again.
Given a list of shader blobs, I can deduce the descriptor sets used and their bindings to construct an appropriate VkDescriptorSetLayout.
However, given that I have already created many pipelines with similar shaders before, I have probably already created a compatible VkDescriptorSetLayout, which I could potentially reuse.
Generally speaking, is it beneficial to reuse VkDescriptorSetLayout objects across multiple VkPipeline objects?
If it is beneficial, is there a mechanism that would allow me to identify compatible sets (for instance, a set 'tag' attribute in HLSL/GLSL) which could be used for caching?
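To make the idea concrete, here is roughly the kind of cache I have in mind: a minimal C++ sketch, assuming reflection yields each set's bindings sorted by binding index. LayoutKey and DescriptorSetLayoutCache are illustrative names, not part of any library.

```cpp
#include <vulkan/vulkan.h>

#include <algorithm>
#include <map>
#include <tuple>
#include <vector>

// Illustrative key: the reflected bindings of one set, sorted by binding
// index. pImmutableSamplers is assumed null; fold it into the key if used.
struct LayoutKey {
    std::vector<VkDescriptorSetLayoutBinding> bindings;

    bool operator<(const LayoutKey& rhs) const {
        auto asTuple = [](const VkDescriptorSetLayoutBinding& b) {
            return std::make_tuple(b.binding, b.descriptorType,
                                   b.descriptorCount, b.stageFlags);
        };
        return std::lexicographical_compare(
            bindings.begin(), bindings.end(),
            rhs.bindings.begin(), rhs.bindings.end(),
            [&](const VkDescriptorSetLayoutBinding& a,
                const VkDescriptorSetLayoutBinding& b) {
                return asTuple(a) < asTuple(b);
            });
    }
};

// Illustrative cache: two shader combinations that reflect to identical
// bindings share one VkDescriptorSetLayout.
class DescriptorSetLayoutCache {
public:
    explicit DescriptorSetLayoutCache(VkDevice device) : device_(device) {}

    VkDescriptorSetLayout getOrCreate(const LayoutKey& key) {
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;  // reuse the compatible layout

        VkDescriptorSetLayoutCreateInfo info{};
        info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
        info.bindingCount = static_cast<uint32_t>(key.bindings.size());
        info.pBindings = key.bindings.data();

        VkDescriptorSetLayout layout = VK_NULL_HANDLE;
        vkCreateDescriptorSetLayout(device_, &info, nullptr, &layout);
        cache_.emplace(key, layout);
        return layout;
    }

private:
    VkDevice device_;
    std::map<LayoutKey, VkDescriptorSetLayout> cache_;
};
```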

Related

Concurrent use of VkSamplers?

So a VkSampler is created with a VkSamplerCreateInfo that is just a bunch of configuration settings, which as far as I can see define a pure function over some input image.
They are described as:
VkSampler objects represent the state of an image sampler which is used by the implementation to read image data and apply filtering and other transformations for the shader.
One use (possibly the only use) of VkSampler is to write them into descriptors (such as VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER) for use in descriptor sets that are bound to pipelines/shaders.
My question is: can you write the same VkSampler to multiple different descriptors? From the same or from multiple different descriptor pools? Even if one of those descriptors is currently in use in an executing render pass?
Can you use the same VkSampler concurrently from multiple different render passes / subpasses / pipelines?
Put another way, are VkSamplers stateless? Or do they represent some stateful memory on the device, such that you shouldn't use the same one concurrently?
VkSampler objects definitely have data associated with them, so it would be wrong to call them "stateless". What they are is immutable. Like VkRenderPass, VkPipeline, and similar objects, once they are created, their contents cannot be changed.
Synchronization between accesses is (generally) needed only for cases when one of the accesses is a modification operation. Since VkSamplers are immutable, there are no modification operations. So synchronization is not needed for cases where you're accessing a VkSampler from different threads, commands, or what have you.
The only exception is the obvious one: vkDestroySampler, which requires that submitted commands that use the sampler have completed before calling the function.
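To make that concrete, here is a minimal sketch that writes the same VkSampler into two different descriptor sets, which could come from different pools. Reusing the sampler handle itself needs no synchronization; note, though, that updating a descriptor set that is itself in use by submitted work is a separate concern governed by the usual descriptor-set rules. The binding index and image views below are placeholder assumptions.

```cpp
#include <vulkan/vulkan.h>

// Sketch: one immutable VkSampler shared by two descriptor sets.
// setA/setB may come from the same or from different descriptor pools.
void writeSharedSampler(VkDevice device, VkSampler sharedSampler,
                        VkDescriptorSet setA, VkDescriptorSet setB,
                        VkImageView viewA, VkImageView viewB)
{
    VkDescriptorImageInfo infos[2];
    infos[0] = {sharedSampler, viewA, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL};
    infos[1] = {sharedSampler, viewB, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL};

    VkWriteDescriptorSet writes[2] = {};
    for (int i = 0; i < 2; ++i) {
        writes[i].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
        writes[i].dstSet = (i == 0) ? setA : setB;
        writes[i].dstBinding = 0;  // placeholder binding index
        writes[i].descriptorCount = 1;
        writes[i].descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        writes[i].pImageInfo = &infos[i];
    }
    vkUpdateDescriptorSets(device, 2, writes, 0, nullptr);
}
```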

When to use multiple MTLRenderCommandEncoders to perform my Metal rendering?

I'm learning Metal, and there's a conceptual question that I'm trying to wrap my head around: at what level, exactly, should my code handle successive drawing operations that require different pipeline states?

As I understand it (from answers like this: https://stackoverflow.com/a/43827775/2752221), I can use a single MTLRenderCommandEncoder and change its pipeline state, the vertex buffer it's using, etc., between calls to drawPrimitives:, and the encoder state that was current at the time of each call to drawPrimitives: will be preserved. So that's great. But it also seems like the design of Metal is such that one can make multiple MTLRenderCommandEncoder instances and use them to sequentially throw batches of commands into a MTLCommandBuffer.

Given that the former works – using one MTLRenderCommandEncoder and changing its state – why would one do the latter? Under what circumstances is it correct to do the former, and under what circumstances is it necessary to do the latter? What is an example of a situation where the latter would be necessary/appropriate?
If it matters, I'm working on a macOS app, using Objective-C. Thanks.
Ignoring multithreaded encoding cases, which are somewhat advanced, the main reason you'd want to create multiple render command encoders during a frame is because you need to change which textures you're rendering to.
You'll notice that you need to provide a render pass descriptor when creating a render command encoder. For this reason, we often say that the sequence of commands belonging to a particular encoder constitute a render pass. The attachments of that descriptor refer to the textures that will be written to by the commands encoded by the encoder.
Many different techniques, including shadow mapping and postprocessing effects like bloom, require multiple passes to produce. Since you can't change attachments in the midst of a pass, creating a new encoder is the only way to encode multiple passes in a frame.
Relatedly, you should ordinarily use one command buffer per frame. You can, however, sometimes reduce frame time by splitting your passes across multiple command buffers, but this is highly dependent on the shape of your workload and should only be done in tandem with profiling, as it's not always an optimization.
In addition to Warren's answer, another way to look at the question is by examining the API. A number of Metal objects are created from descriptors. The properties of the descriptor at the time an object is created from it govern that object for its lifetime. Those are aspects of the object that can't be changed after creation.
By contrast, the object will have various setter methods to modify other properties over its lifetime.
For a render command encoder, the properties that are fixed for its lifetime are those specified by the MTLRenderPassDescriptor used to create it. If you want to render with different values for any of those properties, the only way to do so is to create a new encoder from a different descriptor. On the other hand, if you can do everything you need/want to do by using the encoder's setter methods, then you don't need a new encoder.
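To make the render-pass point concrete, here is a sketch of a two-pass frame using the metal-cpp bindings (the Objective-C calls use the same selectors). All pass descriptors and pipeline states are assumed to be created elsewhere.

```cpp
#include <Metal/Metal.hpp>

// Sketch: two render passes in one frame, one encoder per pass, because
// the second pass renders to a different texture (the offscreen texture
// produced by pass 1 feeds the final pass).
void encodeFrame(MTL::CommandQueue* queue,
                 MTL::RenderPassDescriptor* offscreenPass,  // writes an offscreen texture
                 MTL::RenderPassDescriptor* finalPass,      // writes the drawable
                 MTL::RenderPipelineState* scenePipeline,
                 MTL::RenderPipelineState* postPipeline)
{
    MTL::CommandBuffer* cmd = queue->commandBuffer();

    // Pass 1: render the scene into the offscreen texture.
    MTL::RenderCommandEncoder* enc = cmd->renderCommandEncoder(offscreenPass);
    enc->setRenderPipelineState(scenePipeline);
    // ... setVertexBuffer / drawPrimitives calls ...
    enc->endEncoding();  // attachments can't change mid-pass, so end here

    // Pass 2: a new encoder, because the attachments differ.
    enc = cmd->renderCommandEncoder(finalPass);
    enc->setRenderPipelineState(postPipeline);
    // ... draw a fullscreen quad sampling the offscreen texture ...
    enc->endEncoding();

    cmd->commit();
}
```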

Vulkan: Descriptor set compatibility

In the chapter 'Descriptor Set Binding' in the Vulkan spec the following statement is made:
A compatible descriptor set must be bound for all set numbers that any shaders in a pipeline access, at the time that a draw or dispatch command is recorded to execute using that pipeline.
Where is 'compatible descriptor set' defined? I haven't found a definition in the spec. I wonder whether a descriptor set must match the set layout in the shader exactly, or whether the descriptor set is allowed to have a resource bound to a binding point which does not exist in the shader.
The reason for this question is the following: assume I have two shaders which are almost identical (consider them 'variations' of a template shader). They have the same layouts, except that one of them doesn't use one particular binding point (this could be a 'fast path' derived from the generic path by an #ifdef, resulting in one binding point being optimized away). Assume I have two draw calls, the first using one shader and the second using the other, and assume the required resources are identical, except for the additional resource needed by the shader with the extra binding point. Also assume that I use the same descriptor set layout, which maps exactly to the shader with the additional binding point. In this situation I would prefer to use the same descriptor set for both shaders to reduce the number of descriptor set updates/binds. The set would match one shader exactly, and it would contain a resource binding which does not exist in the other shader.
Shaders do not have a layout; pipelines have a layout. When you build a pipeline, the VkPipelineLayout has to agree with what is defined in the shader... to some extent.
That is, the resources a shader declares have to match the resources specified by the VkPipelineLayout. But the pipeline layout can also define other resources that aren't used by the shaders in that pipeline.
The descriptor sets bound when rendering with a pipeline have to exactly match the descriptor set layouts defined for that pipeline (you can bind descriptors for sets higher than the highest set used by the pipeline, but everything up to the highest set used by the pipeline must match). So if you want to do what you're trying to do, just give both pipelines the same layout.
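A minimal sketch of that arrangement, assuming setLayout already describes the superset of bindings, including the one that only the 'full' shader uses:

```cpp
#include <vulkan/vulkan.h>

// Sketch: one VkPipelineLayout shared by a "full" and a "fast-path"
// pipeline. setLayout is assumed to already describe the superset of
// bindings; the fast-path shader simply never touches the extra binding.
VkPipelineLayout makeSharedLayout(VkDevice device,
                                  VkDescriptorSetLayout setLayout)
{
    VkPipelineLayoutCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
    info.setLayoutCount = 1;
    info.pSetLayouts = &setLayout;

    VkPipelineLayout layout = VK_NULL_HANDLE;
    vkCreatePipelineLayout(device, &info, nullptr, &layout);

    // Use this handle as the .layout of both VkGraphicsPipelineCreateInfo
    // structures; one vkCmdBindDescriptorSets then serves both draws.
    return layout;
}
```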

Behavior of components when structures are in an array

I am currently working on a simulation of a physical system in Fortran 90 with something like 50 million particles. Each has a position x (to simplify).
For now, I am using a 1D vector that contains the position of each particle, and when I have to iterate over every particle, I just go through that vector (taking care to sort the particles to limit cache misses).
I am now considering creating a particle class. But how fast will access to its position be as I iterate? Will it be as fast as in the previous case?
So, what does the compiler do to store the attributes of an object? And, a fortiori, what about the case with more than one attribute?
Thank you for your time.
On "how are derived types stored":
The Fortran Standard requires components of a sequence type to be stored (in memory) as a sequence of contiguous storage, in the components' declaration order. Sequence types are those declared with a SEQUENCE statement, which implies that the type shall have at least one component, each component shall be of an intrinsic or sequence type, the type shall not be parameterized or extensible, and it can't have type-bound procedures. If you want this behavior and your type is suitable, make it a sequence type (you may want to take data alignment into consideration).
On the other hand, the Fortran Standard does not state how compilers have to organize storage for non-sequence derived types. That's not bad at all, as compilers are free to optimize storage. Most of the time, you may expect almost the same as sequence types: things stored contiguously whenever possible (padding may apply). Arrays and strings are always contiguous. Pointer and allocatable components are only a reference, for obvious reasons, and their targets lie somewhere else.
From the Standard:
A structure resolves into a sequence of components. Unless the structure includes a SEQUENCE statement, the use of this terminology in no way implies that these components are stored in this, or any other, order. Nor is there any requirement that contiguous storage be used. The sequence merely refers to the fact that in writing the definitions there will necessarily be an order in which the components appear, and this will define a sequence of components. This order is of limited significance because a component of an object of derived type will always be accessed by a component name except in the following contexts: the sequence of expressions in a derived-type value constructor, intrinsic assignment, the data values in namelist input data, and the inclusion of the structure in an input/output list of a formatted data transfer, where it is expanded to this sequence of components. Provided the processor adheres to the defined order in these cases, it is otherwise free to organize the storage of the components for any nonsequence structure in memory as best suited to the particular architecture.
On "Is it faster to have a derived type than independent arrays":
As @VladimirF said in a comment, it's a broad topic that depends highly on how you are accessing and operating on your data, and it has been asked and answered before (check the links in that comment). You may find a lot about it around (link1, link2), and I'll add this one on "cache blocking" that may interest you.
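Since the underlying question is really array-of-structures versus structure-of-arrays, here is the trade-off sketched in C++ for brevity; the same layout reasoning applies to an array of Fortran derived-type objects versus separate attribute arrays.

```cpp
#include <vector>

// Structure-of-arrays: positions are contiguous, so a loop touching only
// x streams through memory, like the original 1D Fortran vector.
struct ParticlesSoA {
    std::vector<double> x;   // one array per attribute
    std::vector<double> vx;
};

// Array-of-structures: each element interleaves x with the other
// attributes, so a loop over x alone strides past unused data
// (this is what an array of derived-type/class objects gives you).
struct ParticleAoS { double x, vx; };

double sumX(const ParticlesSoA& p) {
    double s = 0.0;
    for (double xi : p.x) s += xi;        // contiguous, cache-friendly
    return s;
}

double sumX(const std::vector<ParticleAoS>& p) {
    double s = 0.0;
    for (const auto& pi : p) s += pi.x;   // strides over vx too
    return s;
}
```

If your hot loops touch only one attribute at a time, the separate-arrays layout (your current approach) generally stays faster as more attributes are added.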

Serialization vs. Archiving?

The iOS docs differentiate between "serializing" and "archiving." Is this a general distinction (i.e., one that holds in other languages), or is it specific to Objective-C? Also, what is the difference between the two?
This is a case of one being the other some (but not all) of the time.
Wikipedia has this to say about serialization:
"Serialization is the process of converting a data structure or object into a sequence of bits so that it can be stored in a file or memory buffer, or transmitted across a network connection link to be "resurrected" later in the same or another computer environment"
So, archiving may be only serialization, but it could also be the combination of serialization and compression, for example. Or perhaps it adds some kind of header info. So serialization is a form of archiving, but an archive is not necessarily just a serialization.
This isn't really specific to iOS - these terms are thrown around all over. Their specific meaning in the context of iOS could be quite specific, though.
I was actually looking for the difference from an iOS perspective, and I'm adding the following for those interested:
Purpose:
Archiving is used to store object graphs; a complete data model can be archived and restored easily. The way nib files work can be considered an example of archiving.
Serialization is used for storing an arbitrary hierarchy of objects. The way plist files work can be considered an example of serialization.
Differences (excerpts from the Archives and Serializations Programming Guide):
"The archive preserves the identity of every object in the graph and all the relationships it has with all the other objects in the graph."
Every object encoded within the context of a root object invocation is tracked. If the coder is asked to encode an object more than once, the coder encodes a reference to the first encoding instead of encoding the object again.
"The serialization only preserves the values of the objects and their position in the hierarchy. Multiple references to the same value object might result in multiple objects when deserialized. The mutability of the objects is not maintained."
Implementation differences:
Any object that implements the NSCoding protocol can be archived, whereas only instances of NSArray, NSDictionary, NSString, NSDate, NSNumber, and NSData (and some of their subclasses) can be serialized. The contents of array and dictionary objects must likewise contain only objects of these few classes.
When to Use:
Property lists (serialization) should be used for data that consists primarily of strings and numbers; they are very inefficient when used with large blocks of binary data.
It is worth archiving objects other than plist types, or when storing large blocks of data.
Generally speaking, serialization is concerned with converting your program's data types into architecture-independent byte streams. Archiving is specialized serialization in that it stores type and other relationship-based information that allows you to deserialize/unmarshal easily. So archiving can be thought of as a specialization and subset of serialization. For Objective-C:
Serialization converts Objective-C types to and from an architecture-independent byte stream. In contrast to archiving, basic serialization does not record the data type of the values nor the relationships between them; only the values themselves are recorded. It is your responsibility to deserialize the data in the proper order. Several convenience classes, however, do provide the ability to serialize property lists, recording their structure along with their values.
With C++ Boost.Serialization (http://www.boost.org/doc/libs/1_45_0/libs/serialization/doc/index.html):
Here, we use the term "serialization" to mean the reversible deconstruction of an arbitrary set of C++ data structures to a sequence of bytes. Such a system can be used to reconstitute an equivalent structure in another program context. Depending on the context, this might be used to implement object persistence, remote parameter passing or other facility. In this system we use the term "archive" to refer to a specific rendering of this stream of bytes. This could be a file of binary data, text data, XML, or some other created by the user of this library.
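To tie the two terms together in Boost's vocabulary, here is a minimal sketch: the serialize() member defines the reversible deconstruction, while the archive class chooses the byte rendering (a text archive here; binary and XML archives are drop-in alternatives). The Particle type and file name are illustrative, not from any of the sources above.

```cpp
#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>

#include <fstream>

// Hypothetical value type used only for illustration.
struct Particle {
    double x, y;
    Particle(double x_ = 0.0, double y_ = 0.0) : x(x_), y(y_) {}

    // One member template drives both saving and loading.
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & x;
        ar & y;
    }
};

int main() {
    const Particle p(1.0, 2.0);
    {
        std::ofstream ofs("particle.txt");
        boost::archive::text_oarchive oa(ofs);  // the "archive": a text rendering
        oa << p;                                // the "serialization"
    }
    Particle q;
    {
        std::ifstream ifs("particle.txt");
        boost::archive::text_iarchive ia(ifs);
        ia >> q;                                // reconstitute an equivalent object
    }
    return 0;
}
```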