Is there a way to get the buffer size of a uniform or buffer storage block in a shader/pipeline from my host-side code? [duplicate] - vulkan

Looking for a Vulkan alternative to this: In OpenGL, is there a way to get a list of all uniforms & attribs used by a shader program?

Vulkan, as a general rule, does not have querying APIs for any information you have provided to the API. If you give something to the API, and you need to know something about that data, then you're expected to remember what it was.
SPIR-V contains all of the definitions of the various resources and interfaces used by a shader. And SPIR-V is a pretty well-specified format. Since you gave the SPIR-V to Vulkan, you therefore have ample opportunity to know what all of the "uniforms & attribs" in that shader are. So Vulkan has no shader querying API.
There are several tools for introspecting into SPIR-V binaries to extract this kind of information. But Vulkan itself isn't one of them.
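For example, SPIRV-Reflect is one such library. A rough sketch of pulling uniform block sizes out of the same SPIR-V words you already pass to vkCreateShaderModule (the variable names spirvSize and spirvCode are assumptions for illustration):

#include <cstdio>
#include <vector>
#include <spirv_reflect.h>

SpvReflectShaderModule module;
if (spvReflectCreateShaderModule(spirvSize, spirvCode, &module) == SPV_REFLECT_RESULT_SUCCESS) {
    // Enumerate every descriptor binding the shader declares.
    uint32_t count = 0;
    spvReflectEnumerateDescriptorBindings(&module, &count, nullptr);
    std::vector<SpvReflectDescriptorBinding*> bindings(count);
    spvReflectEnumerateDescriptorBindings(&module, &count, bindings.data());

    for (const SpvReflectDescriptorBinding* b : bindings) {
        if (b->descriptor_type == SPV_REFLECT_DESCRIPTOR_TYPE_UNIFORM_BUFFER) {
            // block.size is the declared size of the uniform block in bytes.
            std::printf("set %u binding %u (%s): %u bytes\n",
                        b->set, b->binding, b->name, b->block.size);
        }
    }
    spvReflectDestroyShaderModule(&module);
}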

Related

What is the descriptor type used in case of constant buffers in Vulkan?

I am trying to create a descriptor set layout which has constant buffers (These constant buffers are used in PS and VS). I don't know what to use as descriptor type for the structure VkDescriptorSetLayoutBinding. This is a basic question but I am new to Vulkan.
Thanks in advance.
Assuming You are talking about HLSL constant buffers, in the HLSL documentation we can read:
Constant buffers reduce the bandwidth required to update shader constants by allowing shader constants to be grouped together and committed at the same time rather than making individual calls to commit each constant separately.
The closest equivalent in GLSL (and in Vulkan) for a constant buffer is a uniform buffer. Thus You should specify VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER during descriptor set layout creation.
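For example, a minimal sketch of a descriptor set layout with one uniform buffer visible to the vertex and fragment stages (binding number 0 and the device handle are assumptions; match the binding to your shader's layout(binding = N)):

#include <vulkan/vulkan.h>

VkDescriptorSetLayoutBinding uboBinding = {};
uboBinding.binding            = 0;
uboBinding.descriptorType     = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
uboBinding.descriptorCount    = 1;
uboBinding.stageFlags         = VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT;
uboBinding.pImmutableSamplers = nullptr;

VkDescriptorSetLayoutCreateInfo layoutInfo = {};
layoutInfo.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layoutInfo.bindingCount = 1;
layoutInfo.pBindings    = &uboBinding;

VkDescriptorSetLayout setLayout;
vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &setLayout);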
If You need additional information about descriptor sets, You can read, for example, the 6th part of the API without Secrets: Introduction to Vulkan tutorial.

What is a "Push-Constant" in vulkan?

I've looked through a bunch of tutorials that talk about push constants, allude to possible benefits, but never have I actually seen, even in Vulkan docs, what the heck a "push constant" actually is... I don't understand what they are supposed to be, and what the purpose of push constants are. The closest thing I can find is this post which unfortunately doesn't ask for what they are, but what are the differences between them and another concept and didn't help me much.
What is a push constant, why does it exist and what is it used for? where did its name come from?
Push constants are a way to quickly provide a small amount of uniform data to shaders. They should be much quicker than UBOs, but a huge limitation is the size of the data: the spec only requires 128 bytes to be available for a push constant range. Hardware vendors may support more (for example 256 bytes), but compared to other means it is still very little.
Because push constants are much quicker than other descriptors (resources through which we provide data to shaders), they are convenient to use for data that changes between draw calls, like for example transformation matrices.
From the shader perspective, they are declared through the layout( push_constant ) qualifier and a block of uniform data. For example:
layout( push_constant ) uniform ColorBlock {
    vec4 Color;
} PushConstant;
From the application perspective, if shaders want to use push constants, they must be specified during pipeline layout creation. Then the vkCmdPushConstants() command must be recorded into a command buffer. This function takes, among others, a pointer to the memory from which the data for the push constant range should be copied.
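A rough host-side sketch tying both parts together for the ColorBlock above (the C++ struct, the stage flags, and the device/cmdBuffer handles are assumptions for illustration):

#include <vulkan/vulkan.h>

// Mirror of the shader's push constant block.
struct ColorBlock { float Color[4]; };

// One push constant range, specified at pipeline layout creation.
VkPushConstantRange range = {};
range.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
range.offset     = 0;
range.size       = sizeof(ColorBlock);

VkPipelineLayoutCreateInfo layoutInfo = {};
layoutInfo.sType                  = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
layoutInfo.pushConstantRangeCount = 1;
layoutInfo.pPushConstantRanges    = &range;
// (descriptor set layouts omitted for brevity)

VkPipelineLayout pipelineLayout;
vkCreatePipelineLayout(device, &layoutInfo, nullptr, &pipelineLayout);

// Later, while recording the command buffer:
ColorBlock data = { { 1.0f, 0.0f, 0.0f, 1.0f } };
vkCmdPushConstants(cmdBuffer, pipelineLayout,
                   VK_SHADER_STAGE_FRAGMENT_BIT, 0, sizeof(data), &data);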
Different shader stages of a given pipeline can use the same push constant block (similarly to UBOs) or smaller parts of the whole range. But, importantly, each shader stage can use only one push constant block (it can contain multiple members, though). Another important thing is that the total data size, across all shader stages which use push constants, must fit into the size constraint. So the constraint is not per stage but per whole range.
There is an example in the Vulkan Cookbook's repository showing a simple push constant usage scenario. Sascha Willems's Vulkan examples also contain a sample showing how to use push constants.

D3D12 Use backbuffer surface as unordered access view (UAV)

I'm making a simple raytracer for a school project where a compute shader is supposed to be used to shade a triangle or some other primitive.
For this I'd like to write to a backbuffer surface directly in the compute shader, to then present the results immediately. I know for certain that this is possible in DX11, though I can't seem to get it to work in DX12.
I couldn't gather that much information about this, but I found this gamedev thread discussing the exact same problem I'm trying to figure out, and they seem to come to the conclusion which was my go-to workaround: writing to an intermediate texture and then sampling it in a pipeline.
I can't fully accept that this would be impossible to achieve in DX12. Why would that feature be removed? Could it be that the queuing system removes some overhead that makes it unnecessary to have this feature?
Is there any way to achieve a raytracer without writing to a separate texture and then sampling it in a pipeline or copying it onto the back buffer? What are my best alternatives for achieving performance?
You will have to accept that answer. They removed the capability to create a UAV on the swapchain buffers the same way they removed the capability to use a multisample surface in the swapchain.
The problem with authorizing UAVs on the swapchain surface is that they would have to forfeit tracking of what is happening to it. DX12 relies on descriptor heaps that are 100% volatile at runtime for UAVs (render targets are CPU-side only and can be tracked).
Microsoft needs to track the swapchain surface status strongly in order to guarantee behavior with the desktop presentation, and for that reason they chose to deny the UAV binding.
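For completeness, a rough sketch of the workaround the question already mentions: dispatch into an intermediate texture created with D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS and copy it into the back buffer. All handle names below (cmdList, intermediateTex, backBuffer, etc.) are assumptions, and the intermediate texture must match the back buffer's format and dimensions for CopyResource:

// Dispatch the compute shader into the intermediate UAV texture.
cmdList->SetComputeRootSignature(rootSig.Get());
cmdList->SetPipelineState(raytracePSO.Get());
cmdList->SetComputeRootDescriptorTable(0, uavGpuHandle);
cmdList->Dispatch(width / 8, height / 8, 1);

// Transition both resources for the copy.
D3D12_RESOURCE_BARRIER barriers[2] = {};
barriers[0].Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barriers[0].Transition.pResource   = intermediateTex.Get();
barriers[0].Transition.StateBefore = D3D12_RESOURCE_STATE_UNORDERED_ACCESS;
barriers[0].Transition.StateAfter  = D3D12_RESOURCE_STATE_COPY_SOURCE;
barriers[1].Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barriers[1].Transition.pResource   = backBuffer.Get();
barriers[1].Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
barriers[1].Transition.StateAfter  = D3D12_RESOURCE_STATE_COPY_DEST;
cmdList->ResourceBarrier(2, barriers);

cmdList->CopyResource(backBuffer.Get(), intermediateTex.Get());
// Transition the back buffer back to PRESENT before calling Present().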

Does Vulkan have a TransformFeedback equivalent

Does Vulkan have support for saving the vertices output from a pipeline stage? I've been looking and I can't find any examples or references, maybe someone else knows otherwise?
Transform Feedback didn't make the cut for the initial Vulkan release, and there is no direct equivalent to it.
So you actually have to do it yourself, e.g. by writing to an SSBO from a geometry shader using PrimitiveIDs, or by going with compute shaders.
Note that the geometry shader version might not work on all devices, as it requires support for the vertexPipelineStoresAndAtomics feature.
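A minimal sketch of checking for and enabling that feature at device creation (the physicalDevice/device handles are assumptions, and queue setup is omitted):

#include <vulkan/vulkan.h>

VkPhysicalDeviceFeatures supported = {};
vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

VkPhysicalDeviceFeatures enabled = {};
if (supported.vertexPipelineStoresAndAtomics) {
    // Required for SSBO writes from the vertex/geometry/tessellation stages.
    enabled.vertexPipelineStoresAndAtomics = VK_TRUE;
}

VkDeviceCreateInfo deviceInfo = {};
deviceInfo.sType            = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceInfo.pEnabledFeatures = &enabled;
// (queue create infos etc. omitted)

VkDevice device;
vkCreateDevice(physicalDevice, &deviceInfo, nullptr, &device);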
Update
Support for transform feedback has since been made available as the VK_EXT_transform_feedback extension, starting with the 1.1.88 headers.

OpenGL lights, textures, etc. correct way?

Until this moment I've only implemented all the effects in GLSL shaders using inputs, outputs and uniforms, except for a couple of really essential built-ins like gl_Position, etc. I've read several tutorials and had a lecture on computer graphics, and every time they implement things by looking at the physical model and calculating all the stuff using input values and uniforms. That is the way I thought it all works.
Now I've faced the fact that there are many more GLSL things, like the glLight* API functions and the gl_LightSource and gl_Texture constants in GLSL, with a big set of predefined light types and lighting models. That seems to be a kind of different way of programming shaders.
I wonder if there are any advantages/disadvantages using one or other way? Did I miss something very important? It looks I'm doing a lot of redundant work.
All the glLight* calls you might find in both GLSL and the OpenGL API are from the old and deprecated fixed-function pipeline!
Now you must do all the calculations yourself through Shaders, as I can guess you're already doing.
Why did they "remove" all the awesome stuff?
They "removed" (deprecated) the Matrix Stack, Light calls, Immediate Mode Rendering, etc. etc. etc. and the list goes one for various reason. But the overall reason is that it's better to implement and control those things yourself.
It requires more work from our side implementing and controlling all those things, though you're in total control of everything and when you actually want to use something.
Using the fixed-function pipeline OpenGL would allocate and load various things you might never even wanted to use.
Also, taking the Matrix Stack as an example, you would usually (the lazy way) make OpenGL re-calculate the Matrix Stack each render call, using the old glPushMatrix(), glPopMatrix(), glTranslate*(), etc. functions. Now, because YOU HAVE TO, you are forced to do all those calculations and handle the matrices yourself. So you would realize that most of the matrices, and much more, could simply be allocated and calculated once, or at least not every render call.
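For instance, a rough sketch using GLM (an assumption; any math library works) of building the matrices once on the CPU and uploading them as a plain uniform; the uniform name "uMVP" and the program handle are hypothetical:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the model-view-projection once (or whenever it actually changes),
// instead of rebuilding a matrix stack every render call.
glm::mat4 proj  = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),
                              glm::vec3(0.0f),
                              glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 mvp   = proj * view * model;

glUniformMatrix4fv(glGetUniformLocation(program, "uMVP"), 1, GL_FALSE, glm::value_ptr(mvp));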
Of course they didn't deprecate Immediate Mode Rendering just so that we have to implement it ourselves; now we simply need to use Buffers, because they are so much better in every way.
Extra
If you want a great spreadsheet that shows which functions are deprecated and which are core functions, extension functions, etc., then take a look here, though be aware that this spreadsheet is made by people who use OpenGL, not by the Khronos Group (the current developers of OpenGL) nor Silicon Graphics (the creators of OpenGL).
Ignore the glLightXXX functions, the related gl_LightXXX variables and all the documentation associated with them. It's all deprecated, and if you look closely at the docs, you'll probably find that they're several years old or specifically written for versions of OpenGL <= 2.x. Instead, continue to work with your own vertex attributes and set up your lighting configuration in your own uniforms however you please, based on the model of lighting you want to implement. It's more work, but it's more flexible in the long run.
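For example, a minimal sketch of replacing glLight* state with your own uniforms (the uniform names uLightPos and uLightColor and the program handle are hypothetical and must match whatever your shader declares):

GLint lightPosLoc   = glGetUniformLocation(program, "uLightPos");
GLint lightColorLoc = glGetUniformLocation(program, "uLightColor");

glUseProgram(program);
glUniform3f(lightPosLoc,   2.0f, 4.0f, 1.0f);   // world-space light position
glUniform3f(lightColorLoc, 1.0f, 1.0f, 1.0f);   // white light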
The OpenGL lighting model that uses glLight pre-dates the programmable shader pipeline, and represents a particular way of doing lighting in the fixed-function pipeline.
Once GLSL entered the scene, it was possible to use the OpenGL lighting model in conjunction with shaders. You could use the same glLight function and its related functions to set up your lighting parameters, but then write shaders that used the same information in different ways, allowing per-pixel lighting calculations.
Textures are a little more murky, because OpenGL still has a texture model and many of the GL functions relating to textures are still valid, though some are deprecated. However, any documentation that refers to GLSL variables like gl_Texture is similarly out of date. Current OpenGL uses sampler objects for texture access.
If you want to make sure you're doing it the 'modern' way, make sure you create a forward-compatible OpenGL profile of 3.3 or higher, and make sure your shaders declare the appropriate version number as their first line, like so:
#version 330
This will cause the use of any deprecated OpenGL function or deprecated shader variable to generate an error so that you know to avoid them.
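For reference, a rough sketch of requesting such a context with GLFW (an assumption; any windowing toolkit with context-creation hints works the same way):

#include <GLFW/glfw3.h>

glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);   // forward-compatible context
GLFWwindow* window = glfwCreateWindow(1280, 720, "Modern OpenGL", nullptr, nullptr);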
Current graphics hardware offers an interface to customize any rendering step, e.g. vertex shading, tessellation, geometry shading, fragment shading and so on. GLSL is the language to program or influence the rendering steps of the graphics hardware leveraging this interface.
The predefined functions glLight, glTexture and so on belong to the deprecated fixed-function graphics pipeline of OpenGL. Modern OpenGL still supports the functions of this fixed pipeline, but it is strongly recommended to use GLSL for the different rendering steps.
The glLight function is a fixed function which only influences vertex processing, so you can only achieve per-vertex shading, which does not look very realistic.
When you program the lighting on your own within the fragment shader using GLSL, you can directly influence every pixel.
So, to summarize, the main advantage is that a programmer is more flexible and is able to influence every kind of rendering step, which enables you to achieve sophisticated and realistic 3D graphics. The main disadvantage is that you need much more knowledge (GLSL, the graphics pipeline) and much more programming effort to achieve the same result as with the fixed functions.
Best regards