How to reset an OpenGL program's uniform value to its default?

Let's say I have an OpenGL program with a uniform "diffuseColor", which I have set as follows:
GLint location = glGetUniformLocation(handle, "diffuseColor");
glUniform3f(location, 1, 0, 0);
Now I would like to return it to the default value, which is encoded in the shader code. I do not have access to the source code, but I can call OpenGL API functions on the compiled program. Is there a way to read the default value and set it with glUniform3f? Or, even better, is there something like glResetUniform3f(GLint loc)?

Uniform initializers are applied when the program is linked. The value can then be read using glGetUniformfv/glGetUniformiv, but there is no way to recover the initial value of a uniform once you have overwritten it.
There is likewise no way to reset a single uniform to its initial value; relinking the program will reset all of its uniforms, but linking is a costly operation and should not be done every frame.
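Since the post-link value is readable, one workaround is to cache the defaults yourself right after linking. A minimal sketch, assuming a vec3 uniform and the handle/location variables from the question:
GLfloat defaultDiffuse[3];

/* Immediately after glLinkProgram(handle), before any glUniform* call,
 * the uniform still holds its in-shader initializer: */
GLint location = glGetUniformLocation(handle, "diffuseColor");
glGetUniformfv(handle, location, defaultDiffuse);

/* ...later, after overwriting it with glUniform3f(location, 1, 0, 0),
 * restore the cached default: */
glUseProgram(handle);
glUniform3f(location, defaultDiffuse[0], defaultDiffuse[1], defaultDiffuse[2]);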

Related

How to map an SSBO to the CPU in Vulkan, similar to glMapBuffer() in OpenGL

I am making a project in Vulkan, and I want to read an SSBO that was modified on the GPU from the CPU; but Vulkan has no function that maps a buffer directly, only functions that map memory. I tried everything related to memory mapping, but nothing worked.
With Vulkan, after creating the SSBO's buffer and allocating its memory with the VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT property flag (which backs the buffer with memory the system/CPU can access), call vkMapMemory() to obtain the void* pointer used to access the shader block.
memcpy() can then be used to read and write data to and from the block (be sure to use fences, and avoid reading/writing while the GPU is still using the SSBO).
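A minimal sketch of that flow; device, ssboMemory, ssboSize, and cpuSideCopy are hypothetical names standing in for the handles created during buffer setup:
void *vTheSSBOMappedPointer = NULL;
if (vkMapMemory(device, ssboMemory, 0, ssboSize, 0, &vTheSSBOMappedPointer) == VK_SUCCESS) {
    /* If the memory type is not HOST_COHERENT, call
     * vkInvalidateMappedMemoryRanges() here so GPU writes become visible. */
    memcpy(cpuSideCopy, vTheSSBOMappedPointer, ssboSize); /* read back GPU results */
    vkUnmapMemory(device, ssboMemory);
}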
A quick note on casting and offsetting: while the void pointer is fine for writing data to the SSBO with a single memcpy() call, it can't be used to read in the same manner; the pointer has to be cast to the data type in use.
Offset arithmetic also cannot be performed on a void pointer to reach an individual struct.
The data type or struct the pointer is cast to defines how increment/decrement works: the pointer moves by the size of that type, not by single bytes in the address (which may seem the more intuitive behavior).
For example:
/* Copy the int at index 5 from a block of ints: */
int theInt;
int *ssboIntPointer = (int *)vTheSSBOMappedPointer;
memcpy(&theInt, ssboIntPointer + 5, sizeof(int));

/* Or copy the struct at index 5 from a block of structs - the +5
 * advances by five whole structs, not five bytes: */
theStruct oneStruct;
theStruct *ssboStructPointer = (theStruct *)vTheSSBOMappedPointer;
memcpy(&oneStruct, ssboStructPointer + 5, sizeof(theStruct));

What is the use of the .range field when updating a buffer descriptor?

When I went to update a UNIFORM_BUFFER descriptor, I set up the buffer info:
VkDescriptorBufferInfo buffer_info;
buffer_info.buffer = /* SOME BUFFER */;
buffer_info.offset = 0;
buffer_info.range = 0; // I assume this doesn't do anything for this use
And then I call vkUpdateDescriptorSets(), and get this validation layer error:
VkDescriptorBufferInfo range is not VK_WHOLE_SIZE and is zero, which is not allowed. The Vulkan spec states: If range is not equal to VK_WHOLE_SIZE, range must be greater than 0
My question is: isn't the job of the buffer info to tell the shaders which buffer, and at what offset, to read from for a particular descriptor set and binding? I didn't think the size of the buffer mattered, because that's generally how these things work: you specify a buffer and an offset, and then you read outside that buffer in the shader at your peril.
Let's say I just write the wrong range; what would that do? If I write 32 and in the shader I access 64 bytes in, what happens? Is this argument just for validation warnings?
Edit: I just want to clarify that the range argument can't mean how much of the buffer I want to copy; what I'm writing to is essentially a pointer. The actual writing of the buffer data is done in a buffer-to-buffer copy transfer.
A descriptor describes a (usually memory) resource being used by a shader in some capacity. Buffers do have a size, but a shader can use a subset of a buffer's memory range. The descriptor describes which portion of the buffer is being used.
If a descriptor should use the whole size of the buffer assigned to it (starting at offset), that's what VK_WHOLE_SIZE is for.
This allows you to have multiple uniform blocks provided by the same VkBuffer resource. You can even use dynamic uniform buffer descriptors to change the offset/range without changing the buffer binding itself, which is faster than switching descriptor sets and so makes it easier to provide per-object data.
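A sketch of that dynamic-offset pattern; the names and the aligned block size are illustrative, and the descriptor is assumed to be of type VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC:
/* Select this object's slice of one big VkBuffer at bind time: */
uint32_t dynamicOffset = objectIndex * alignedBlockSize;
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
                        pipelineLayout, 0, 1, &descriptorSet,
                        1, &dynamicOffset);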
Let's say I just write the wrong range; what would that do?
If the range is smaller than the size of the uniform block specified in the shader, then you'll get a validation failure/undefined behavior.
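So the setup from the question could be corrected either way; a sketch, assuming the shader's uniform block is 64 bytes:
VkDescriptorBufferInfo buffer_info;
buffer_info.buffer = /* SOME BUFFER */;
buffer_info.offset = 0;
buffer_info.range = 64; /* the block's size - or VK_WHOLE_SIZE to cover
                           everything from offset to the end of the buffer */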

Specifying push constant block offset in HLSL

I am trying to write a Vulkan renderer; I use glslangValidator with HLSL for shaders, and I am trying to implement push constants.
[[vk::push_constant]]
cbuffer cbFragment {
    float4 imageColor;
    float4 aaaa;
};

[[vk::push_constant]]
cbuffer cbMatrices {
    float4 bbbb;
};
The [[vk::push_constant]] annotation works: I use spirv_reflect for reflection, and both push constant blocks show up and work as intended.
The problem I'm having is that they seemingly overlap: if I assign "bbbb" a value, "imageColor" is affected in exactly the same way, and vice versa. In the reflection data both push constant blocks have offset 0, which explains the issue. However, I seem to be completely unable to change the offset of either block.
[[vk::offset(x)]] does not work at all; it affects neither the individual member offsets nor the offsets of the push constant blocks. The only offset that works at all is HLSL's built-in packoffset, which only applies to buffer members. And although offsetting the members of one block past the range of the other might technically work, I can hardly believe that is a sensible solution: it merely pads the push constant block to an unnecessary size, the overlap itself remains, and the validation layer still fails.
I would greatly appreciate any help on this matter and am willing to provide any necessary clarification. Thank you very much!
Push constants live in a single chunk of contiguous memory. The compiler doesn't try to append multiple blocks into that memory; as with the GLSL syntax, the intent is that a single block contains all the push constant data.
This is consistent with other places where the compiler has to pack variables in a block: it only packs within a block, never across blocks. Two separate non-push-constant cbuffers refer to two distinct buffers in memory, each with contents that begin at offset zero within its own buffer. There is only one "push constant buffer", hence you should only decorate one cbuffer with vk::push_constant.
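So the usual fix is to merge everything into one annotated block; a sketch reusing the member names from the question (the block name is illustrative):
[[vk::push_constant]]
cbuffer cbPushConstants {
    float4 imageColor;
    float4 aaaa;
    float4 bbbb;    // formerly in cbMatrices
};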

Vulkan ignoring GLSL image format qualifier

I have a compute shader that reads a signed normalized integer image using imageLoad.
The image itself (which contains both positive and negative values) is created as R16G16_SNORM and is written by a fragment shader in a previous geometry pass.
The image view bound to the descriptor set layout binding in the compute shader is also created with the same R16G16_SNORM format.
Everything works as expected.
Yesterday I realized that in the compute shader I had used the wrong image format qualifier, rg16.
A bit puzzled (I could not understand how it could work properly while reading an unsigned normalized value), I corrected it to rg16_snorm, and... nothing changed.
I performed several tests (I even specified rg16f) and always got the same (correct, signed [-1, 1]) result.
It seems like Vulkan (at least my implementation) silently ignores the image format qualifier and falls back (I guess) to the image view format bound to the descriptor set.
This seems to be in line with what the spec says about format at image view creation:
format is a VkFormat describing the format and type used to interpret texel blocks in the image
but then Appendix A (Vulkan Environment for SPIR-V, "Compatibility Between SPIR-V Image Formats And Vulkan Formats") draws a clear distinction between Rg16 and Rg16Snorm, so:
is it a bug or a feature?
I am working with an Nvidia 2070 Super under Ubuntu 20.04.
UPDATE
The initial image write happens as a fragment shader color attachment output, so no descriptor set layout binding declaration is involved; the fragment shader outputs a vec2 to the R16G16_SNORM color attachment specified by the active framebuffer and render pass.
The resulting image (after the relevant barriers) is then read (correctly, despite the wrong format qualifier) by a compute shader via imageLoad.
Note that validation layers are enabled and silent.
Note also that the resulting values are far from random and exactly match the expected values (both positive and negative), whether I use rg16, rg16f, or rg16_snorm.
What you're getting is undefined behavior.
There is a rule for Image Write Operations covering the case where the OpTypeImage's format (the equivalent of GLSL's layout format qualifier) does not match the backing VkImageView's format:
If the image format of the OpTypeImage is not compatible with the VkImageView’s format, the write causes the contents of the image’s memory to become undefined.
Note that when it says "compatible", it doesn't mean image view compatibility; it means "exactly match". Your OpTypeImage format did not exactly match that of the image view, so your writes were undefined. And "undefined" can include "works as if you had specified the correct format".
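For reference, a GLSL declaration whose qualifier exactly matches a VK_FORMAT_R16G16_SNORM image view would look like this (the set/binding numbers and name are illustrative):
layout(set = 0, binding = 0, rg16_snorm) uniform readonly image2D inputImage;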

Variable Name Efficiency in Shader (OpenGL ES 2)

Out of curiosity, will it be more efficient to write shader variables like this:
lowp vec4 tC = texture2D(uTexture, vTexCoord); // texture color
or
lowp vec4 textureColor = texture2D(uTexture, vTexCoord); // texture color
Note that I named the variable tC because it has fewer characters than textureColor.
I understand that in a programming language like C/Obj-C it doesn't matter, but what about shaders, where you can query the attribute/uniform names?
It shouldn't make a measurable difference. After linking your program during initialization, query the locations of attributes/uniforms, and keep the result around with the program handle. From then on, neither your app nor the driver will be touching the name strings, just the integer locations.
Even if you re-query locations every time you need to change an attrib binding or uniform value, the difference between a short and a "moderate" name length likely won't matter much compared to the other costs of doing the lookup and the binding/value change.
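A sketch of that query-once pattern, with illustrative names:
GLint uTextureLoc; /* cached once at init, reused every frame */

void initLocations(GLuint program) {
    uTextureLoc = glGetUniformLocation(program, "uTexture");
}

void drawFrame(GLuint program) {
    glUseProgram(program);
    glUniform1i(uTextureLoc, 0); /* no string lookup per draw */
    /* ... issue draw calls ... */
}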