Can I overlap two framebuffer attachment outputs in a fragment shader? - vulkan

Right now I am writing out to a colour buffer in the fragment shader, which has a float format.
layout (location = 0) out vec4 outColour;
I need to have a way to write the object's id to a framebuffer for picking. There are a number of ways I've thought about doing this. I could compile two versions of each shader: a normal one, and a picking one that basically only needs to do the vertex position transformations and skips everything else (lighting calculations, texturing, etc.). This probably isn't ideal, because it essentially doubles the number of shaders I have to write.
An easier method I've thought of is to do a conditional branch (preferably on a specialisation constant) and, for picking purposes, compile a picking version of the graphics pipeline with the picking boolean set to true. This sounds better. For the ordinary passes I can write to multiple attachments. Would it be best to compile that picking pipeline with a new render pass that writes to only one framebuffer attachment, an integer one? If I swap the render pass for one that writes an integer at attachment 0 instead of the vec4 of floats, can I alias it in the fragment shader like this?
layout (location = 0) out vec4 outColour;
layout (location = 0) out ivec4 out_id;   // aliases location 0 - this is the part in question

void main()
{
    vec4 colour;     // result of the normal shading path
    int object_id;   // id written out for picking

    if (bPicking)
        out_id = ivec4(object_id, 0, 0, 0); // y, z, w not used
    else
        outColour = colour;
}
I'm guessing I really need a different render pass, because instead of writing to an R32G32B32A32_SFLOAT image I'm writing to an R8_UINT image for the IDs. This is really confusing; what's the best way to do this?
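For reference, here is a rough sketch of how that bPicking switch could be fed in as a specialisation constant when the two pipeline variants are created. The constant_id of 0, the fragShaderModule handle and the GLSL-side declaration layout(constant_id = 0) const bool bPicking = false; are assumptions for illustration, not taken from the question:

#include <vulkan/vulkan.h>

// Sketch: baking the picking flag into the fragment stage as a specialisation
// constant, so the dead branch can be eliminated in each pipeline variant.
VkBool32 pickingEnabled = VK_TRUE;              // VK_FALSE for the normal pipeline

VkSpecializationMapEntry mapEntry{};
mapEntry.constantID = 0;                        // matches layout(constant_id = 0) in the shader
mapEntry.offset     = 0;
mapEntry.size       = sizeof(VkBool32);

VkSpecializationInfo specInfo{};
specInfo.mapEntryCount = 1;
specInfo.pMapEntries   = &mapEntry;
specInfo.dataSize      = sizeof(VkBool32);
specInfo.pData         = &pickingEnabled;

VkPipelineShaderStageCreateInfo fragStage{};
fragStage.sType               = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
fragStage.stage               = VK_SHADER_STAGE_FRAGMENT_BIT;
fragStage.module              = fragShaderModule;   // assumed to already exist
fragStage.pName               = "main";
fragStage.pSpecializationInfo = &specInfo;

The picking variant of the pipeline would then be created against its own render pass whose single colour attachment uses the integer format, as described in the question.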

Related

Vulkan, what variables does an object need? As in a separate mesh that can be updated individually

So I have been experimenting, and I can add a new "object" by adding every model in the scene to the same vertex buffer, but this isn't good for a voxel game because I don't want to have to reorganize the entire world's vertices every time a player destroys a block.
And it appears I can also add a new "object" by creating a new vertex and index buffer for it, and simply binding both it and all other vertex buffers to the command buffers array at the same time like this:
vkCmdBeginRenderPass(commandBuffers[i], &renderPassInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(commandBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, graphicsPipeline);
vkCmdBindDescriptorSets(commandBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &descriptorSets[i], 0, nullptr);
// mesh 1
VkBuffer vertexBuffers[] = { vertexBuffer };
VkDeviceSize offsets[] = { 0 };
vkCmdBindVertexBuffers(commandBuffers[i], 0, 1, vertexBuffers, offsets);
vkCmdBindIndexBuffer(commandBuffers[i], indexBuffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(commandBuffers[i], static_cast<uint32_t>(indices.size()), 1, 0, 0, 0);
// mesh 2
VkBuffer vertexBuffers2[] = { vertexBuffer2 };
vkCmdBindVertexBuffers(commandBuffers[i], 0, 1, vertexBuffers2, offsets);
vkCmdBindIndexBuffer(commandBuffers[i], indexBuffer2, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(commandBuffers[i], static_cast<uint32_t>(indices2.size()), 1, 0, 0, 0);
vkCmdEndRenderPass(commandBuffers[i]);
But then this requires me to bind ALL vertex buffers to the command buffers array every time, even when only a single one of those meshes is updated or created/destroyed. So how would I "add" a new "game object" whose vertices and indices can be updated without having to loop through everything else in the scene too? Or is it relatively quick to bind an already calculated vertex and index buffer, and is this standard?
And I have tried this with a command buffer per object:
VkSubmitInfo submits[] = { submitInfo, submitInfo2 };
if (vkQueueSubmit(graphicsQueue, 2, submits, inFlightFences[currentFrame]) != VK_SUCCESS) {
throw std::runtime_error("failed to submit draw command buffer!");
}
But it only renders the last object in the queue (it will render the first object if I say the submit size is 1).
I have tried adding a separate descriptor set, descriptor pool, and pipeline as well, and it still only renders the last command buffer in the queue. I tried adding a new command pool for each object, but commandPool is used by dozens of other functions and it really seems like there is supposed to be only one of those.
You split your world into chunks, and draw one chunk at a time. All chunks have some space reserved for them in (a single) vertex buffer, and when something has changed, you only update that one chunk. If a chunk grows too large... Well, you will probably need some sort of a memory allocation system.
Do NOT create separate buffers for every little thing. Buffers just hold data. Any data. You can even store different vertex formats for different pipelines in the same buffer - just at different places within it, binding it with an offset. Do not rebind just to draw a different mesh if all your vertices are packed neatly into one array (they most likely are). If you only want to draw part of a buffer, just use the parameters the draw commands give you.
Command buffers are just blocks of instructions for the GPU. You don't need one per object. However, one cannot be executed and written to at the same time, so you will need at least one per frame in flight, plus one to write to. Pipelines (descriptor sets, and pretty much whatever else you bind) are just a bunch of state that your GPU starts using once you bind it. At the start of a command buffer that state is undefined - it is NOT inherited between command buffers in any way.
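As a rough illustration of the "one big buffer, draw sub-ranges" idea, here is a sketch in the style of the question's code. The ChunkDrawInfo bookkeeping (firstIndex, indexCount, vertexOffset) is hypothetical and would come from whatever allocation scheme reserves space per chunk:

#include <cstdint>
#include <vector>
#include <vulkan/vulkan.h>

// Hypothetical per-chunk record produced by your own allocator.
struct ChunkDrawInfo {
    uint32_t firstIndex;    // where this chunk's indices start in the shared index buffer
    uint32_t indexCount;    // how many indices the chunk currently uses
    int32_t  vertexOffset;  // added to every index, points at the chunk's vertices
};

void recordChunks(VkCommandBuffer cmd, VkBuffer sharedVertexBuffer,
                  VkBuffer sharedIndexBuffer,
                  const std::vector<ChunkDrawInfo>& chunks)
{
    VkDeviceSize offset = 0;
    // Bind the shared buffers once...
    vkCmdBindVertexBuffers(cmd, 0, 1, &sharedVertexBuffer, &offset);
    vkCmdBindIndexBuffer(cmd, sharedIndexBuffer, 0, VK_INDEX_TYPE_UINT32);

    // ...then select each chunk purely through the draw parameters.
    for (const ChunkDrawInfo& c : chunks)
        vkCmdDrawIndexed(cmd, c.indexCount, 1, c.firstIndex, c.vertexOffset, 0);
}

When a block changes, only that chunk's region of the buffers and its ChunkDrawInfo entry need updating; every other chunk's data stays where it is.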

Partially share some push constants between different stages

I'm trying to figure out how to share some push constants between different shader stages by setting up multiple push constant ranges. It works when I use one single range with the VkFlags value VK_SHADER_STAGE_ALL, but I'm not sure if this is the correct way.
Here's an example of what I'm trying to achieve:
Fragment shader:
layout(push_constant) uniform fragmentPushConstants {
    layout(offset = 0) float time;
    layout(offset = 4) vec4 color;
} u_pushConstants;
Vertex shader:
layout(push_constant) uniform vertexPushConstants {
    layout(offset = 0) float time;
} u_pushConstants;
For this example, how many ranges should I provide vkCreatePipelineLayout with and how should I structure them?
It works if one single range (0 - 20) with VK_SHADER_STAGE_ALL is provided to the pipeline layout info structure. I cannot find any examples whatsoever of multi-range usage, except where two ranges that don't overlap are used. What's the purpose of ranges at all if I could just use one single range (0 - max) with VK_SHADER_STAGE_ALL anyway?
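One layout that matches the two declarations above is sketched below: one range per stage, overlapping in offset but not in stageFlags (what Vulkan forbids is two ranges listing the same stage, not ranges that overlap in bytes). The sizes are simply derived from the question's offsets (float = 4 bytes, vec4 = 16 bytes), so treat this as an illustration rather than a verified answer:

#include <vulkan/vulkan.h>

// Sketch: one push-constant range per stage.
VkPushConstantRange ranges[2]{};
ranges[0].stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
ranges[0].offset     = 0;
ranges[0].size       = 4;    // float time
ranges[1].stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
ranges[1].offset     = 0;
ranges[1].size       = 20;   // float time + vec4 color

VkPipelineLayoutCreateInfo layoutInfo{};
layoutInfo.sType                  = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
layoutInfo.pushConstantRangeCount = 2;
layoutInfo.pPushConstantRanges    = ranges;
// ...descriptor set layouts as usual, then
// vkCreatePipelineLayout(device, &layoutInfo, nullptr, &pipelineLayout);

Note that vkCmdPushConstants must then use stage flags covering every range that touches the bytes being updated: bytes 0-3 appear in both ranges, so updating time needs VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT, while bytes 4-19 can be pushed with the fragment bit alone. That per-stage visibility is presumably part of why multiple ranges exist instead of one all-stages range.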

How do I make multiple copies of a set of polygons in a Vertex Buffer Array?

In OpenGL 1, in Visual Basic with OpenTK, if I want a hundred cubes all arranged in a circle I'd write
glRef = GL.GenLists(1)
GL.NewList(glRef, ListMode.Compile)
GL.Begin(PrimitiveType.Triangles)
GL.Vertex3....for the vertices of a cube
GL.End()
GL.EndList()
which would give me glRef as a handle with which I could do
For i = 0 To 100
    GL.PushMatrix()
    GL.Rotate(3.6 * i, 0, 0, 1)
    GL.Translate(5.0, 0.0, 0.0)
    GL.CallList(glRef)
    GL.PopMatrix()
Next
and get a hundred cubes all arranged in a circle.
How do I do the same sort of thing in OpenGL 2.0 or higher with Vertex Buffer Objects?
I start off with
GL.GenBuffer(VBOid)
Dim VertexArray() As Single = {....for the vertices of a cube }
then do some binding of it to a vertex buffer
GL.BindBuffer(BufferTarget.ArrayBuffer, VBOid(0))
GL.BufferData(BufferTarget.ArrayBuffer, SizeOf(GetType(Single)) * VertexArray.Count, VertexArray, BufferUsageHint.StaticDraw)
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, False, 0, VertexArray)
and then in my draw routine I do something along the lines of
GL.EnableClientState(ArrayCap.VertexArray)
GL.BindBuffer(BufferTarget.ArrayBuffer, PrimitiveID(0))
GL.DrawElements(PrimitiveType.Triangles)
but at this point adding a second DrawBuffer command together with transforms doesn't seem to create a second cube. I've been bashing my head against a wall, looking all over the internet, and I can't find a straightforward reference which tells me how to do it, or even confirmation that it's possible.
Is this not the way it's supposed to work? Am I just supposed to send a hundred sets of cube vertices, or is there a way to copy a vertex buffer object and apply transforms to it? (Or am I probably doing it wrong somewhere and need to go on a bug hunt - any tips for that would be helpful.)
I don't think GL.DrawBuffer is the correct command here. In the context of FBOs, it is used to specify which attachment points are written to.
Since you are trying to draw a VBO here, I would expect a call to GL.DrawArrays or GL.DrawElements instead.
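To get the hundred-cubes effect with a VBO, the usual pattern is the direct analogue of the display-list loop: bind the cube buffer once, then issue one draw call per cube with a different transform. A rough C-style sketch under a compatibility context (vboId, cubeVertexCount and the presence of a GL 1.5+ function loader are assumptions, not taken from the question):

#include <GL/gl.h>

void drawCubeRing(GLuint vboId, GLsizei cubeVertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId);            // cube vertices uploaded once elsewhere
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void*)0); // read from the bound VBO at offset 0

    for (int i = 0; i < 100; ++i) {
        glPushMatrix();
        glRotatef(3.6f * i, 0.0f, 0.0f, 1.0f);
        glTranslatef(5.0f, 0.0f, 0.0f);
        glDrawArrays(GL_TRIANGLES, 0, cubeVertexCount); // one cube per draw call
        glPopMatrix();
    }

    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

In a core profile without the matrix stack you would instead set a model matrix uniform per iteration (or use instanced drawing), but the idea is the same: one copy of the vertex data, many draws with different transforms.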

Variable Name Efficiency in Shader (OpenGL ES 2)

Out of curiosity, will it be more efficient to write shader variables like this:
lowp vec4 tC = texture2D(uTexture, vTexCoord); // texture color
or
lowp vec4 textureColor = texture2D(uTexture, vTexCoord); // texture color
Note that I wrote the variable as tC because it has fewer characters than textureColor.
I understand that in programming languages like C/Obj-C it doesn't matter, but what about shaders, since you can query the attribute/uniform names?
It shouldn't make a measurable difference. After linking your program during initialization, query the locations of attributes/uniforms, and keep the result around with the program handle. From then on, neither your app nor the driver will be touching the name strings, just the integer locations.
Even if you re-query locations every time you need to change an attrib binding or uniform value, the difference between a short and "moderate" name length likely won't make much difference compared to the other costs of doing the lookup and binding/value change.
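A small sketch of the "query once, reuse the integer" pattern described above, for OpenGL ES 2. The program handle and the uTexture/aTexCoord names are illustrative, not taken from the question:

#include <GLES2/gl2.h>

// Cache the string lookups once, right after glLinkProgram succeeds.
struct ProgramHandles {
    GLuint program;
    GLint  uTexture;
    GLint  aTexCoord;
};

ProgramHandles cacheLocations(GLuint program)
{
    ProgramHandles h;
    h.program   = program;
    h.uTexture  = glGetUniformLocation(program, "uTexture");   // name string touched here only
    h.aTexCoord = glGetAttribLocation(program, "aTexCoord");
    return h;
}

// Per draw/frame: only integer locations are used, never the names.
void bindTextureUnitZero(const ProgramHandles& h)
{
    glUseProgram(h.program);
    glUniform1i(h.uTexture, 0);
}

Whether the local inside the shader is called tC or textureColor makes no difference to this path: locals are not queryable at all, and attribute/uniform names are only consulted during the lookups above.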

How to reset OpenGL program's uniform attribute value to default?

Let's say I have an OpenGL program that has a uniform attribute "diffuseColor". I have set it as follows:
GLint location = glGetUniformLocation(handle, "diffuseColor");
glUniform3f(location, 1, 0, 0);
Now I would like to return it to the default value, which is encoded in the shader code. I do not have access to the source code, but I can call OpenGL API functions on the compiled program. Is there a way to read the default value and set it with glUniform3f? Or even better, is there something like glResetUniform3f(GLint loc)?
Uniform initializers are applied upon linking the program. The value can then be read using glGetUniformfv/glGetUniformiv. There is no way to read the initial value of the uniform after you have changed the uniform value.
There is no way to reset a single uniform to its initial value, but relinking the program will reset all uniforms in it. Linking a program is a costly operation and should be avoided in between frames.
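A sketch of that approach, continuing from the question's snippet: read the value immediately after linking, before any glUniform call overwrites it, and keep it around so it can be restored later (assumes a GL 2.0+ context with the usual function loader already set up):

// Capture the post-link ("default") value of diffuseColor before changing it.
GLint location = glGetUniformLocation(handle, "diffuseColor");

GLfloat defaultDiffuse[3];
glGetUniformfv(handle, location, defaultDiffuse);   // still holds the initializer's value here

// ... later, after glUniform3f(location, 1, 0, 0) has been called ...
glUseProgram(handle);
glUniform3fv(location, 1, defaultDiffuse);          // restore the captured default

If the value has already been overwritten before it was captured, the only remaining option is the relink mentioned above.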