What are the normal methods for achieving texture mapping with raytracing? (Vulkan)

When you create a BLAS (bottom-level acceleration structure) you specify any number of vertex/index buffers to be part of the structure. How does that end up interacting with the shaders, and how is it specified in the descriptor set? How should I link these structures with materials?
How is texture mapping usually done with raytracing? I saw some sort of "materials table" in Q2RTX but the documentation is non-existent and the code is sparsely commented.

A common approach is to use a material buffer in combination with a texture array that is addressed in the shaders where you require the texture data. You then pass the material id, e.g. per-vertex or per-primitive, and use that to dynamically fetch the material, and with it the texture index. Since Vulkan ray tracing already requires the VK_EXT_descriptor_indexing extension, you can rely on it to create a large descriptor set containing all textures required to render your scene.
The relevant shader parts:
// Enable required extension
...
#extension GL_EXT_nonuniform_qualifier : enable
// Material definition
struct Material {
int albedoTextureIndex;
int normalTextureIndex;
...
};
// Bindings
layout(binding = 6, set = 0) readonly buffer Materials { Material materials[]; };
layout(binding = 7, set = 0) uniform sampler2D[] textures;
...
// Usage
void main()
{
Primitive primitive = unpackTriangle(gl_PrimitiveID, ...);
Material material = materials[primitive.materialId];
vec4 color = texture(textures[nonuniformEXT(material.albedoTextureIndex)], uv);
...
}
In your application you then create a buffer that stores the materials generated on the host, and bind it to the binding point of the shader.
For the textures, you pass them as an array of textures. An array texture would be an option too, but isn't as flexible due to its same-size-per-slice limitation. Note that the textures array in the example above is unsized: this is made possible by VK_EXT_descriptor_indexing and is only allowed for the final binding in a descriptor set. This adds some flexibility to your setup.
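For illustration, the host-side layout for such an unsized texture array could be created like this (a sketch using the Vulkan 1.2 core names for the descriptor indexing features; device, descriptorSetLayout and maxTextures are placeholders for your own objects and limits):
// Sketch: set layout with the materials buffer (binding 6) and a
// runtime-sized sampler array (binding 7), matching the shader above.
VkDescriptorSetLayoutBinding bindings[2]{};
bindings[0].binding         = 6;
bindings[0].descriptorType  = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
bindings[0].descriptorCount = 1;
bindings[0].stageFlags      = VK_SHADER_STAGE_CLOSEST_HIT_BIT_KHR;
bindings[1].binding         = 7;
bindings[1].descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
bindings[1].descriptorCount = maxTextures; // upper bound; actual count given at allocation
bindings[1].stageFlags      = VK_SHADER_STAGE_CLOSEST_HIT_BIT_KHR;
// The variable-count flag may only go on the last binding of the set.
VkDescriptorBindingFlags bindingFlags[2] = {
    0,
    VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT |
        VK_DESCRIPTOR_BINDING_PARTIALLY_BOUND_BIT
};
VkDescriptorSetLayoutBindingFlagsCreateInfo flagsInfo{};
flagsInfo.sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO;
flagsInfo.bindingCount  = 2;
flagsInfo.pBindingFlags = bindingFlags;
VkDescriptorSetLayoutCreateInfo layoutInfo{};
layoutInfo.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layoutInfo.pNext        = &flagsInfo;
layoutInfo.bindingCount = 2;
layoutInfo.pBindings    = bindings;
vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &descriptorSetLayout);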
As for passing the material index that you fetch the data with: the easiest approach is to pass that information along with your vertex data, which you'll have to access/unpack in your shaders anyway:
struct Vertex {
vec4 pos;
vec4 normal;
vec2 uv;
vec4 color;
int32_t materialIndex;
};
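A matching host-side definition could look like this (a sketch, assuming GLM types and std430-style alignment; the explicit padding has to agree with whatever your unpackTriangle code expects):
#include <glm/glm.hpp>
#include <cstdint>
// Hypothetical host-side mirror of the shader's Vertex struct. Under std430
// rules a vec4 aligns to 16 bytes and a vec2 to 8, hence the explicit padding.
struct Vertex {
    glm::vec4 pos;           // offset  0
    glm::vec4 normal;        // offset 16
    glm::vec2 uv;            // offset 32
    glm::vec2 _pad0;         // offset 40, keeps color 16-byte aligned
    glm::vec4 color;         // offset 48
    int32_t   materialIndex; // offset 64
    int32_t   _pad1[3];      // offset 68, rounds the struct size up to 80
};
static_assert(sizeof(Vertex) == 80, "must match the shader-side layout");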


Does order within push constant structs matter, even when using alignas()?

I have a "Sprite" struct that I hand over as a push constant.
struct Sprite
{
glm::vec2 position;
alignas(16) Rect uvRect; //a "Rect" is just 2 glm::vec2s
alignas(4) glm::uint32 textureIndex;
alignas(4) glm::float32 rotation;
};
In my .vert file, I describe its layout as:
layout(push_constant) uniform Push
{
vec2 offset; //the 'position' part of the Sprite
vec2 origin; //The first part of the Sprite struct's "uvRect"
vec2 extent; //The second part of the Sprite struct's "uvRect"
uint textureIndex;
float rotation;
} push;
This doesn't work: I get a black screen.
However, if I rearrange Sprite so that it goes:
struct Sprite
{
glm::vec2 position;
alignas(4) glm::uint32 textureIndex;
alignas(16) Rect uvRect; //a "Rect" is just 2 glm::vec2s
alignas(4) glm::float32 rotation;
};
...and then change the layout descriptor thing in the .vert file accordingly, suddenly it does work.
Does anyone know why this might be?
In your first structure you align the Rect to a 16 byte boundary, but the push constant block in Vulkan expects the next vec2 to be tightly packed. Assuming I'm reading the alignment spec correctly, structures are aligned on a multiple of a 16 byte boundary, whereas a 2-component vector has an alignment twice that of its base component:
An array or structure type has an extended alignment equal to the largest extended alignment of any of its members, rounded up to a multiple of 16.
A scalar or vector type has an extended alignment equal to its base alignment.
A two-component vector has a base alignment equal to twice its scalar alignment.
A scalar of size N has a scalar alignment of N.
Thus the offset of uvRect in C is 16, but in GLSL the offset of origin is 8 and of extent is 16.
By changing the order, Vulkan looks for origin at the next 8-byte-aligned offset after the three preceding dwords (12 bytes), i.e. at offset 16, which then matches what C is expecting.
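If you want to catch this kind of mismatch at compile time, you can assert the offsets the shader expects (a minimal sketch against the second, working Sprite layout above):
#include <cstddef> // offsetof
// Offsets the GLSL push-constant block resolves to under its packing rules.
static_assert(offsetof(Sprite, position)     == 0,  "vec2 offset");
static_assert(offsetof(Sprite, textureIndex) == 8,  "uint textureIndex");
static_assert(offsetof(Sprite, uvRect)       == 16, "vec2 origin / vec2 extent");
static_assert(offsetof(Sprite, rotation)     == 32, "float rotation");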

Determine if input attachment is valid within shader

For fragment shaders, it's possible to set color attachment indexes to VK_ATTACHMENT_UNUSED (from the C/C++ API); in that case, writes to those attachments are discarded. This is nice because it allows us to write shaders that unconditionally write to output attachments, and the writes may or may not be discarded, depending on what the renderer decided.
It's also possible to set input attachment indexes to VK_ATTACHMENT_UNUSED, but we're not allowed to read from such attachments. That means that if an input attachment could be VK_ATTACHMENT_UNUSED, the shader must know whether it should read from it or not.
Is there a GLSL/SPIR-V builtin way to check if an input attachment is bound to a valid image view vs. pointing to VK_ATTACHMENT_UNUSED? Otherwise, the app would have to pass data to the shader determining whether it can read or not. That's kind of a pain.
Something builtin like:
layout(input_attachment_index=0, binding=42) uniform subpassInput inputData;
vec4 color = vec4(0);
if (gl_isInputAttachmentValid(0)) {
color = subpassLoad(inputData).rgba;
}
Vulkan generally doesn't have convenience features. If the user is perfectly capable of doing a thing, then if the user wants that thing done, Vulkan won't do it for them. Since you can provide a value yourself that specifies whether a resource the shader wants to use is available, Vulkan is not going to provide a query for you.
So there is no such query in Vulkan. You can build one yourself quite easily, however.
In Vulkan, pipelines are compiled against a specific subpass of a specific renderpass. And whether a subpass of a renderpass uses an input attachment or not is something that is fixed to the renderpass. As such, at the moment your C++ code compiles the shader module(s) into a pipeline, it knows if the subpass uses an input attachment or not. There's no way it doesn't know.
Therefore, there is no reason your pipeline compilation code cannot provide a specialization constant for your shader to test to see if it should use the input attachment or not. Simply declare a particular specialization constant, check it in the shader, and provide the specialization to the pipeline creation step via VkPipelineShaderStageCreateInfo::pSpecializationInfo.
//In shader
layout(constant_id = 0) const bool use_input_attachment = false; // overridden at pipeline creation
...
if (use_input_attachment) {
color = subpassLoad(inputData).rgba;
}
//In C++
const VkSpecializationMapEntry entries[] =
{
{
0, // constantID
0, // offset
sizeof(VkBool32) // size
}
};
const VkBool32 data[] = { /*VK_TRUE or VK_FALSE, as needed*/ };
const VkSpecializationInfo info =
{
1, // mapEntryCount
entries, // pMapEntries
sizeof(VkBool32), // dataSize
data, // pData
};
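The remaining step is pointing the shader stage at that info when building the pipeline (a sketch; fragShaderModule stands in for whatever module you compiled):
VkPipelineShaderStageCreateInfo fragStage{};
fragStage.sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
fragStage.stage  = VK_SHADER_STAGE_FRAGMENT_BIT;
fragStage.module = fragShaderModule;
fragStage.pName  = "main";
fragStage.pSpecializationInfo = &info; // the VkSpecializationInfo above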

How to cut polyhedron with a plane or bounding box?

Within a polyhedron, how do I obtain the handle to any edge that intersects a given plane (purpose is that I can further cut it with CGAL::polyhedron_cut_plane_3)?
I currently have this snippet, but it doesn't work. I constructed it from pieces found in the CGAL documentation and examples (CGAL 4.14 - 3D Fast Intersection and Distance Computation (AABB Tree)):
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/AABB_tree.h>
#include <CGAL/AABB_traits.h>
#include <CGAL/AABB_face_graph_triangle_primitive.h>
typedef CGAL::Simple_cartesian<double> Kernel;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;
typedef CGAL::AABB_face_graph_triangle_primitive<Polyhedron> Primitive;
typedef CGAL::AABB_traits<Kernel, Primitive> Traits;
Polyhedron poly = load_obj(argv[1]); // load from file using a helper
Kernel::Plane_3 plane(1, 0, 0, 0); // I am certain this plane intersects the given mesh
CGAL::AABB_tree<Traits> tree(faces(poly).first, faces(poly).second, poly);
auto intersection = tree.any_intersection(plane);
if (intersection) {
if (boost::get<Kernel::Segment_3>(&(intersection->first))) {
// SHOULD enter here and I can do things with intersection->second
} else {
// BUT it enters here
}
} else {
std::cout << "No intersection." << std::endl;
}
Edit on 9/9/2019:
I changed the title; the original title was: "How to obtain the handle to some edge found in a plane-polyhedron intersection". With the methods provided in CGAL/Polygon_mesh_processing/clip.h, it is unnecessary to use AABB_tree to find the intersection.
To clip with one plane, one line is enough: CGAL::Polygon_mesh_processing::clip(poly, plane);
To clip within some bounding box, as suggested by @sloriot, there is an internal function CGAL::Polygon_mesh_processing::internal::clip_to_bbox. Here is an example.
The simplest way would be to use the undocumented function clip_to_bbox() from the header CGAL/Polygon_mesh_processing/clip.h to turn a plane into a clipping bbox, and then call corefine() to embed the plane intersection into your mesh. If you want to get the intersection edges, pass an edge-constrained map to corefine() in the named parameters.
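A minimal sketch of the one-plane case (assuming a recent CGAL where PMP::clip is public, a triangulated input mesh, and the load_obj helper from the question):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/Polygon_mesh_processing/clip.h>
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;
namespace PMP = CGAL::Polygon_mesh_processing;
Polyhedron poly = load_obj("mesh.obj"); // your loader, as in the question
// Keeps the part of the mesh on the negative side of the plane x = 0.
PMP::clip(poly, Kernel::Plane_3(1, 0, 0, 0));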

OpenGL 4.5 - Shader storage: write in vertex shader, read in fragment shader

Both my fragment and vertex shaders contain the following two declarations:
struct Light {
mat4 view;
mat4 proj;
vec4 fragPos;
};
layout (std430, binding = 0) buffer Lights {
Light lights[];
};
My problem is that the last field, fragPos, is computed by the vertex shader like this, but the fragment shader does not see the changes the vertex shader makes to fragPos (or any changes at all):
aLight.fragPos = bias * aLight.proj * aLight.view * vec4(vs_frag_pos, 1.0);
... where aLight is lights[i] in a loop. As you can imagine, I'm computing the position of the vertex in the coordinate system of each light present, to be used in shadow mapping. Any idea what's wrong here? Am I doing something fundamentally wrong?
Here is how I initialize my storage:
struct LightData {
glm::mat4 view;
glm::mat4 proj;
glm::vec4 fragPos;
};
glGenBuffers(1, &BBO);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, BBO);
glNamedBufferStorage(BBO, lights.size() * sizeof(LightData), NULL, GL_DYNAMIC_STORAGE_BIT);
// lights is a vector of a wrapper class for LightData.
for (unsigned int i = 0; i < lights.size(); i++) {
glNamedBufferSubData(BBO, i * sizeof(LightData), sizeof(LightData), &(lights[i]->data));
}
It may be worth noting that if I move fragPos to a fixed-size array out variable in the vertex shader (out fragPos[2]), leave the results there, and then add the fragment shader counterpart (in fragPos[2]) and use that for the rest of my stuff, then things are OK. So what I want to know here is why my fragment shader does not see the numbers crunched by the vertex shader.
I will not be very precise, but I will try to explain why your fragment shader does not see what your vertex shader writes:
When your vertex shader writes some values into the buffer, they are not guaranteed to land in video memory; they may sit in a cache. The same applies when your fragment shader reads the buffer: it may read values from a cache (and not the same cache the vertex shader used).
To avoid this problem you must do two things. First, declare your buffer as coherent (inside the GLSL): layout(std430) coherent buffer ...
Once you have that, after your writes you must issue a barrier (informally it says: careful, I wrote values into this buffer; any values you have cached may be stale, please fetch the ones I just wrote).
How do you do that? By calling the function memoryBarrierBuffer() after your writes: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/memoryBarrierBuffer.xhtml
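Note that this covers visibility between stages within the rendering commands; if you also consume the buffer from a separate, later draw or dispatch, you additionally need an API-side barrier (a sketch; vertexCount is a placeholder):
// Host side: after the draw that writes the SSBO, before a pass that reads it.
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
// Subsequent draws/dispatches now observe the values written above.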
BTW: don't forget to divide by w after your projection.

What is the correct way to structure a Cg program?

This tutorial uses explicit OUT structures, e.g.:
struct C3E1v_Output {
float4 position : POSITION;
float4 color : COLOR;
};
C3E1v_Output C3E1v_anyColor(float2 position : POSITION,
uniform float4 constantColor)
{
C3E1v_Output OUT;
OUT.position = float4(position, 0, 1);
OUT.color = constantColor; // Some RGBA color
return OUT;
}
But looking at one of my shaders I have explicit in/out parameters:
float4 slice_vp(
// Vertex Inputs
in float4 position : POSITION, // Vertex position in model space
out float4 oposition : POSITION,
// Model Level Inputs
uniform float4x4 worldViewProj) : TEXCOORD6
{
// Calculate output position
float4 p = mul(worldViewProj, position);
oposition = p;
return p;
}
I'm having some problems using HLSL2GLSL with this and wondered if my Cg format is to blame (even though it works fine as a Cg script). Is there a 'right' way or are the two simply different ways to the same end?
As you've seen, both ways work. However, I strongly endorse using structs -- especially for the output of vertex shaders (input of fragment shaders). The reasons are less to do with what the machine likes (it doesn't care), and more to do with creating code that can be safely re-used and shared between projects and people. The last thing you want to have to find and debug is a case where one programmer has assigned a value to TEXCOORD1 in some cases and is trying to read it from TEXCOORD2 in (some) other cases. Or any permutation of register mismatch. Use structs, your life will be better.