Rendering to a cube texture and then sampling it? - vulkan

I want to render a scene to the six faces of a cube texture (i.e. the same scene from six cameras, one per face direction) and then I want to sample from the cube texture in a fragment shader while rendering to the final framebuffer to be presented.
Is the best way to organize that a single render pass with 7 subpasses: one subpass for each face of the cube texture, and then a final subpass that samples the cube texture?
Or will that not work?
If it will work, roughly how do I describe the cube texture in the render pass attachments?
If it won't work, what's the best way to organize it?

You need to use two render passes. Sampling the cubemap means reading from it as a cubemap (arbitrary texels and faces), not as an input attachment (which only allows reads at the fragment's own pixel location), so you have to break the work up into two render passes to do that.
The cubemap rendering pass should just use a layered attachment and layered rendering functionality to send each primitive to the appropriate layer. The final pass just works as normal.
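One way this could look, as a rough sketch (the variable names are placeholders; device is assumed to be your VkDevice, and memory allocation/binding and error checking are omitted): create the cubemap image once, then create two views of it. The 2D_ARRAY view is the layered color attachment for the cubemap-rendering pass (with VkFramebufferCreateInfo::layers set to 6 and gl_Layer, or multiview, selecting the face); the CUBE view is what you bind to a combined image sampler and sample with samplerCube in the final pass.

    VkImage     cubeImage   = VK_NULL_HANDLE;
    VkImageView layeredView = VK_NULL_HANDLE;   // used as the framebuffer attachment
    VkImageView cubeView    = VK_NULL_HANDLE;   // used for sampling in the final pass

    VkImageCreateInfo imageInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
    imageInfo.flags       = VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT;  // required to make a CUBE view later
    imageInfo.imageType   = VK_IMAGE_TYPE_2D;
    imageInfo.format      = VK_FORMAT_R8G8B8A8_UNORM;
    imageInfo.extent      = { 512, 512, 1 };
    imageInfo.mipLevels   = 1;
    imageInfo.arrayLayers = 6;                                    // one layer per cube face
    imageInfo.samples     = VK_SAMPLE_COUNT_1_BIT;
    imageInfo.tiling      = VK_IMAGE_TILING_OPTIMAL;
    imageInfo.usage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
    vkCreateImage(device, &imageInfo, nullptr, &cubeImage);

    // 2D_ARRAY view over all six layers: attach this to the first pass's framebuffer
    // so gl_Layer (or multiview) selects the face being rendered.
    VkImageViewCreateInfo viewInfo = { VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO };
    viewInfo.image            = cubeImage;
    viewInfo.viewType         = VK_IMAGE_VIEW_TYPE_2D_ARRAY;
    viewInfo.format           = imageInfo.format;
    viewInfo.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 6 };
    vkCreateImageView(device, &viewInfo, nullptr, &layeredView);

    // CUBE view over the same six layers: bind this as a combined image sampler
    // descriptor in the second render pass and sample it with samplerCube.
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_CUBE;
    vkCreateImageView(device, &viewInfo, nullptr, &cubeView);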

Related

How do I render to multiple 3d targets in Vulkan?

I have some legacy DX11 code that renders to multiple 3d render targets. Destination target is passed via SV_TARGETxx and the slice is set via SV_RenderTargetArrayIndex in GS. Is there any way to do the same in Vulkan?
My plan is to create an individual view for each slice of each 3D target and pass them all together as attachments to a single framebuffer; then in the GS I can have something like gl_Layer = sliceNo + targetOffsets[xx]. Is there a better solution?
In Vulkan, the GS SV_RenderTargetArrayIndex is called Layer in SPIR-V or gl_Layer in GLSL. It behaves the same as in D3D. You create one view per 3D target, and attach that to the framebuffer. The Layer output from the GS will say which layer (of all the targets) the output primitive is drawn to.
In Vulkan there are no "true" 3D framebuffer attachments, in the sense that after projection to screen-space coordinates everything exists in a 2D plane. So attachment image views can have 2D_ARRAY dimensionality, but not 3D. The "Image and image view parameter compatibility requirements" table says that, given a 3D image, you can create a 2D_ARRAY image view with layerCount >= 1. Note that you have to create the image with the VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT flag.
So if you want to have N 3D render target images:
Create your N 3D images, with the VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT flag.
Create one image view for each image, with VK_IMAGE_VIEW_TYPE_2D_ARRAY and layerCount equal to the number of slices you want to be able to render to.
Create a VkRenderPass with one VkAttachmentDescription per 3D render target, plus whatever others you need for depth/stencil, resolve target, etc.
Create a VkFramebuffer based on that VkRenderPass, and pass your image views in the VkFramebufferCreateInfo::pAttachments array. Set VkFramebufferCreateInfo::layers to the number of layers/slices you want to be able to render to.
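A rough sketch of those steps for a single 3D render target (placeholder variable names; device and renderPass are assumed to exist, and image memory binding, other attachments, and error checking are omitted):

    VkImage       volumeImage    = VK_NULL_HANDLE;
    VkImageView   sliceArrayView = VK_NULL_HANDLE;
    VkFramebuffer framebuffer    = VK_NULL_HANDLE;
    const uint32_t width = 256, height = 256, depth = 64;           // depth = number of slices

    VkImageCreateInfo imageInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
    imageInfo.flags       = VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT; // allows 2D_ARRAY views of this 3D image
    imageInfo.imageType   = VK_IMAGE_TYPE_3D;
    imageInfo.format      = VK_FORMAT_R8G8B8A8_UNORM;
    imageInfo.extent      = { width, height, depth };
    imageInfo.mipLevels   = 1;
    imageInfo.arrayLayers = 1;                                       // 3D images always have one array layer
    imageInfo.samples     = VK_SAMPLE_COUNT_1_BIT;
    imageInfo.tiling      = VK_IMAGE_TILING_OPTIMAL;
    imageInfo.usage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
    vkCreateImage(device, &imageInfo, nullptr, &volumeImage);

    // One 2D_ARRAY view per 3D target; its layers correspond to the slices.
    VkImageViewCreateInfo viewInfo = { VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO };
    viewInfo.image            = volumeImage;
    viewInfo.viewType         = VK_IMAGE_VIEW_TYPE_2D_ARRAY;
    viewInfo.format           = imageInfo.format;
    viewInfo.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, depth };
    vkCreateImageView(device, &viewInfo, nullptr, &sliceArrayView);

    // Framebuffer: one attachment per target; layers = slices you want to render to.
    VkFramebufferCreateInfo fbInfo = { VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO };
    fbInfo.renderPass      = renderPass;          // VkRenderPass with matching attachments
    fbInfo.attachmentCount = 1;
    fbInfo.pAttachments    = &sliceArrayView;
    fbInfo.width           = width;
    fbInfo.height          = height;
    fbInfo.layers          = depth;
    vkCreateFramebuffer(device, &fbInfo, nullptr, &framebuffer);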
[Edit: Below paragraph can be ignored based on first comment. Leaving it for transparency.]
I'm confused about what you're trying to do with SV_Target[n]. In both D3D and Vulkan, if you've got multiple render targets / color attachments, the fragment shader writes to all of them -- if your fragment shader doesn't provide a value for a bound target, the value written is undefined. So SV_Target[n] tells you which shader output variable goes to which target, but it doesn't let you write to some targets without writing to others. Vulkan works the same way, with fragment shader output variables bound to attachments via layout(location = n) in GLSL.
If you're talking about having 1 draw call rendered from multiple points of view (but otherwise using the same pipeline) then you want VK_KHR_multiview. This is an extension in Vulkan 1.0, but core in 1.1.
There's an example of its usage here and the corresponding shader functionality is here. It functions similarly to what you describe: you attach multiple layers from a texture array to a single framebuffer ("render target" in D3D) and then in the vertex shader you can determine which layer you're rendering to via the gl_ViewIndex variable. There's no need for a geometry shader with this approach.
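If it helps, here is a minimal sketch of wiring up multiview when creating the render pass (Vulkan 1.1 core names; the attachments, subpasses, and device are assumed to be filled in as usual, and the attached image views must be 2D_ARRAY with enough layers):

    // viewMask bit i enables view i; view i writes to layer i of the attachments.
    const uint32_t viewMask        = 0b11;   // two views -> layers 0 and 1
    const uint32_t correlationMask = 0b11;   // hint that the views may be rendered concurrently

    VkRenderPassMultiviewCreateInfo multiviewInfo = { VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO };
    multiviewInfo.subpassCount         = 1;
    multiviewInfo.pViewMasks           = &viewMask;
    multiviewInfo.correlationMaskCount = 1;
    multiviewInfo.pCorrelationMasks    = &correlationMask;

    VkRenderPassCreateInfo renderPassInfo = { VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO };
    renderPassInfo.pNext = &multiviewInfo;   // chain the multiview info into the render pass
    // ... attachments, subpasses, dependencies as usual ...
    vkCreateRenderPass(device, &renderPassInfo, nullptr, &renderPass);

    // In the vertex shader, gl_ViewIndex then tells you which view/layer is being rendered.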

Rendering multiple objects with different textures, vertex buffers, and uniform values in Vulkan

My background is in OpenGL and I'm attempting to learn Vulkan. I'm having a little trouble with setting up a class so I can render multiple objects with different textures, vertex buffers, and UBO values. I've run into an issue where two of my images are drawn, but they flicker and alternate. I'm thinking it must be due to presenting the image after the draw call. Is there a way to delay presentation of an image? Or merge different images together before presenting? My code can be found here, I'm hoping it is enough for someone to get an idea of what I'm trying to do: https://gitlab.com/cwink/Ingin/blob/master/ingin.cpp
Thanks!
You call render twice per frame, and render calls vkQueuePresentKHR, so naturally the two renderings alternate on screen.
You can delay presentation simply by delaying the vkQueuePresentKHR call. Let's say you want to show each image for ~1 s: you can simply std::this_thread::sleep_for(std::chrono::seconds(1)); after each render call. (Not the best way to do it, but it shows where your problem lies.)
vkQueuePresentKHR does not do any kind of "merging" for you. Typically you "merge images" by simply drawing them into the same swapchain VkImage in the first place, and then presenting it once.
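For illustration, a sketch of recording both objects into one render pass instance and presenting once per frame; the pipelines, layouts, buffers, and counts are placeholders standing in for whatever your code already creates:

    // One command buffer, one render pass instance, two draws.
    vkBeginCommandBuffer(cmd, &beginInfo);
    vkCmdBeginRenderPass(cmd, &renderPassBegin, VK_SUBPASS_CONTENTS_INLINE);

    VkDeviceSize offset = 0;

    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineA);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layoutA, 0, 1, &setA, 0, nullptr);
    vkCmdBindVertexBuffers(cmd, 0, 1, &vertexBufferA, &offset);
    vkCmdDraw(cmd, vertexCountA, 1, 0, 0);

    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineB);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layoutB, 0, 1, &setB, 0, nullptr);
    vkCmdBindVertexBuffers(cmd, 0, 1, &vertexBufferB, &offset);
    vkCmdDraw(cmd, vertexCountB, 1, 0, 0);

    vkCmdEndRenderPass(cmd);
    vkEndCommandBuffer(cmd);
    // Submit the command buffer once, then call vkQueuePresentKHR once for the frame.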

How can I overlay my UI render target onto the back buffer using DirectX 11?

I have two render targets, the back buffer and a UI render target where all 2d UI will be drawn.
I have used the graphics debugger to confirm that both render targets are being written to with the correct data, but I'm having trouble combining the two right at the end.
Question:
My world objects are drawn directly to the backbuffer so there is no problem displaying these, but how do I now overlay the UI render target OVER the backbuffer?
Desired effect: the UI render target overlaid on the back buffer render target.
There are several ways to do this. The easiest is to render your UI elements to a texture that has both a RenderTargetView and a ShaderResourceView, then render that whole texture to the back buffer as a single quad in orthographic projection space. This effectively draws a 2D square containing your UI in screen space on the back buffer. It also has the benefit of allowing transparency.
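As a hedged sketch, the blend state you would typically set before drawing that quad looks something like this (standard straight-alpha blending; device, context, the UI texture's SRV, the back buffer RTV, and the quad/shader setup are assumed to already exist under these placeholder names):

    // Straight alpha blending: result = src.rgb * src.a + dst.rgb * (1 - src.a)
    D3D11_BLEND_DESC blendDesc = {};
    blendDesc.RenderTarget[0].BlendEnable           = TRUE;
    blendDesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    blendDesc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    blendDesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    blendDesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    blendDesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_INV_SRC_ALPHA;
    blendDesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* uiBlendState = nullptr;
    device->CreateBlendState(&blendDesc, &uiBlendState);

    // When compositing the UI: bind the back buffer RTV, the UI texture's SRV,
    // set the blend state, and draw the fullscreen/orthographic quad.
    context->OMSetRenderTargets(1, &backBufferRTV, nullptr);
    context->OMSetBlendState(uiBlendState, nullptr, 0xFFFFFFFF);
    context->PSSetShaderResources(0, 1, &uiSRV);
    context->Draw(4, 0);   // e.g. a triangle-strip quad; your quad setup may differ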
You could also use the OutputMerger stage to blend the UI render target with the back buffer during rendering of the world geometry. You'd need to be careful how you set up your blend operations, as it could result in items being drawn over the UI, or blending inappropriately.
If your UI is not transparent, you could do the UI rendering first and mark the area under the UI in the stencil buffer, then do your world rendering while the stencil test is enabled. This would cause the GPU to ignore any pixels underneath the UI, and not send them to the pixel shader.
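A sketch of the two depth-stencil states that approach implies (illustrative values; device, context, and the rest of the pipeline are assumed to exist):

    ID3D11DepthStencilState* uiStencilState    = nullptr;
    ID3D11DepthStencilState* worldStencilState = nullptr;

    // Pass 1 (UI): write 1 into the stencil buffer wherever a UI pixel is drawn.
    D3D11_DEPTH_STENCIL_DESC uiDesc = {};
    uiDesc.DepthEnable                  = FALSE;
    uiDesc.StencilEnable                = TRUE;
    uiDesc.StencilReadMask              = 0xFF;
    uiDesc.StencilWriteMask             = 0xFF;
    uiDesc.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
    uiDesc.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_REPLACE;
    uiDesc.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
    uiDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
    uiDesc.BackFace                     = uiDesc.FrontFace;
    device->CreateDepthStencilState(&uiDesc, &uiStencilState);
    context->OMSetDepthStencilState(uiStencilState, 1);    // stencil reference = 1

    // Pass 2 (world): only pass where the stencil is still 0, i.e. not covered by UI.
    D3D11_DEPTH_STENCIL_DESC worldDesc = {};
    worldDesc.DepthEnable                  = TRUE;
    worldDesc.DepthWriteMask               = D3D11_DEPTH_WRITE_MASK_ALL;
    worldDesc.DepthFunc                    = D3D11_COMPARISON_LESS;
    worldDesc.StencilEnable                = TRUE;
    worldDesc.StencilReadMask              = 0xFF;
    worldDesc.FrontFace.StencilFunc        = D3D11_COMPARISON_EQUAL;
    worldDesc.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_KEEP;
    worldDesc.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
    worldDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
    worldDesc.BackFace                     = worldDesc.FrontFace;
    device->CreateDepthStencilState(&worldDesc, &worldStencilState);
    context->OMSetDepthStencilState(worldStencilState, 0); // stencil reference = 0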
The above could also be modified to write the minimum depth value to the pixels within the UI render target, ensuring all geometry underneath it would fail the depth test. This modification would free up the stencil buffer for mirrors/shadows/etc.
The above all work for flat UIs drawn over the existing 3D world. To actually draw more complex UIs that appear to be a part of the world, you'll need to actually render the elements to 3D objects in the world space, or do complex projection operations to make it seem like they are.

Vulkan update descriptor every frame

I want to render my scene to a texture and then use that texture in a shader, so I created a framebuffer using an image view and recorded a command buffer for that. I successfully uploaded and executed the command buffer on the GPU, but the descriptor created from the image view samples black. I'm creating the descriptor from the image view before the rendering loop. Is it black because I create it before anything is rendered to the framebuffer? If so, will I have to update the descriptor every frame? Will I have to create a new descriptor from the image view every frame? Or is there another way I can do this?
I have read the other thread with this title. Please don't mark this as a duplicate, because that thread is about textures and this is about a texture from an image view.
Thanks.
#IAS0601 I will answer the questions from Your comment through an answer, as it allows for much longer text to be written and its formatting is much better. I hope this also answers Your original question, but You don't have to treat it as the answer. As I wrote, I'm not sure what You are asking about.
1) In practically all cases, the GPU accesses images through image views. They specify additional parameters which define how an image is accessed (for example, which part of the image is accessed), but it is still the original image that gets accessed. An image view, as the name suggests, is just a view, a list of access parameters. It doesn't have any memory bound to it and it doesn't contain any data (apart from the parameters specified during image view creation).
So when You create a framebuffer and render into it, You render into original images or, to be more specific, to those parts of original images which were specified in image views. For example, You have a 2D texture with 3 array layers. You create a 2D image view for the middle (second) layer. Then You use this image view during framebuffer creation. And now when You render into this framebuffer, in fact You are rendering into the second layer of the original 2D texture array.
Another thing - when You later access the same image, and when You use the same image view, You still access the original image. If You rendered something into the image, then You will get the updated data (provided You have done everything correctly, like performing the appropriate synchronization operations and layout transitions if necessary). I hope this is what You mean by updating the image view.
2) I'm not sure what You mean by updating descriptor set. In Vulkan when we update a descriptor set, this means that we specify handles of Vulkan resources that should be used through given descriptor set.
If I understand You correctly - You want to render something into an image. You create an image view for that image and provide that image view during framebuffer creation. Then You render something into that framebuffer. Now You want to read data from that image. You have two options. If You want to access only one sample location that is associated with fragment shader's location, You can do this through an input attachment in the next subpass of the same render pass. But this way You can only perform operations which don't require access to multiple texels, for example a color correction.
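For reference, here is a hypothetical pair of subpasses where the second one reads the first one's color output as an input attachment (attachment indices, layouts, and the surrounding render pass setup are placeholders):

    // Attachment 0: offscreen color image, attachment 1: final color target.
    VkAttachmentReference sceneColorWrite = { 0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };
    VkAttachmentReference sceneColorRead  = { 0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL };
    VkAttachmentReference finalColorWrite = { 1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };

    VkSubpassDescription subpasses[2] = {};
    subpasses[0].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpasses[0].colorAttachmentCount = 1;
    subpasses[0].pColorAttachments    = &sceneColorWrite;   // render the scene here

    subpasses[1].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpasses[1].inputAttachmentCount = 1;
    subpasses[1].pInputAttachments    = &sceneColorRead;    // read it at the same pixel (subpassLoad in GLSL)
    subpasses[1].colorAttachmentCount = 1;
    subpasses[1].pColorAttachments    = &finalColorWrite;
    // The shader binds it through a VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT descriptor.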
But if You want to do something more advanced, like blurring or shadow mapping, if You need access to several texels, You must end a render pass and start another one. In this second render pass, You can read data from the original image through a descriptor set. It doesn't matter when this descriptor set was created and updated (when the handle of image view was specified). If You don't change the handles of resources - meaning, if You don't create a new image or a new image view, You can use the same descriptor set and You will access the data rendered in the first render pass.
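As a small sketch, assuming a combined image sampler at binding 0 and that the image has been transitioned to a shader-readable layout after the first render pass (the handles below are placeholders for resources You already created):

    VkDescriptorImageInfo imageInfo = {};
    imageInfo.sampler     = sampler;
    imageInfo.imageView   = renderedImageView;   // the same image view the framebuffer used
    imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

    VkWriteDescriptorSet write = { VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET };
    write.dstSet          = descriptorSet;
    write.dstBinding      = 0;
    write.descriptorCount = 1;
    write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    write.pImageInfo      = &imageInfo;
    vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);

    // As long as the image view handle stays the same, this update does not need to be
    // repeated every frame; the descriptor will see whatever was last rendered into the image.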
If You have problems accessing the data, for example (as You wrote) You get only black colors, this suggests You didn't perform everything correctly - render pass load or store ops are incorrect, or initial and final layouts are incorrect. Or synchronization isn't performed correctly. Unfortunately, without access to Your project, we can't be sure what is wrong.

How can a 3D game render an object without having a sprite for every single angle?

When learning to program simple 2D games, each object would have a sprite sheet with little pictures of how a player would look in every frame/animation. 3D models don't seem to work this way or we would need one image for every possible view of the object!
For example, a rotating cube would need a lot images depicting how it would look on every single side. So my question is, how are 3D model "images" represented and rendered by the engine when viewed from arbitrary perspectives?
Multiple methods
There are a number of methods for rendering and storing 3D graphics and models. There are even different methods for rendering 2D graphics! In addition to 2D bitmaps, you also have SVG. SVG uses numbers to define points in an image. These points make shapes. The points can also define curves. This allows you to make images without the need for pixels. The result can be smaller file sizes, in addition to the ability to transform the image (scale and rotate) without causing distortion. Most 3D graphics use a similar technique, except in 3D. What these methods have in common, however, is that they all ultimately render the data to a 2D grid of pixels.
Projection
The most common method for rendering 3D models is projection. All of the shapes to be rendered are broken down into triangles before rendering. Why triangles? Because triangles are guaranteed to be coplanar. That saves a lot of work for the renderer since it doesn't have to worry about "coloring outside of the lines". One drawback to this is that most 3D graphics projection technologies don't support perfect spheres or other round surfaces. You have to use approximations and other tricks to make round surfaces (although there are some renderers which support round surfaces). The next step is to convert or project all of the 3D points into 2D points on the screen (as seen below).
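To make the idea concrete, here is a tiny self-contained sketch of the perspective divide at the heart of projection (the numbers and function names are purely illustrative, not from any particular engine):

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float x, y; };

    // Map a point in camera space to pixel coordinates by dividing by its distance.
    Vec2 project(Vec3 p, float fovY, float width, float height) {
        float f      = 1.0f / std::tan(fovY * 0.5f);   // focal length from the field of view
        float aspect = width / height;
        float ndcX   = (p.x * f / aspect) / p.z;        // perspective divide
        float ndcY   = (p.y * f) / p.z;
        // Map from normalized device coordinates [-1, 1] to pixel coordinates.
        return { (ndcX * 0.5f + 0.5f) * width,
                 (1.0f - (ndcY * 0.5f + 0.5f)) * height };
    }

    int main() {
        Vec3 corner{ 1.0f, 1.0f, 5.0f };                   // a cube corner 5 units in front of the camera
        Vec2 s = project(corner, 1.0472f, 800.0f, 600.0f); // ~60 degree vertical FOV, 800x600 screen
        std::printf("screen position: (%.1f, %.1f)\n", s.x, s.y);
        return 0;
    }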
From there, you essentially "color in" the triangles to make everything look solid. While this is pretty fast, another downside is that you can't really have things like reflections and refractions. Anytime you see a refractive or reflective surface in a game, they are only using trickery to make it look like a reflective or refractive material. The same goes for lighting and shading.
Here is an example of special coloring being used to make a sphere approximation look smooth. Notice that you can still see straight lines around the smoothed version:
Ray tracing
You also can render polygons using ray tracing. With this method, you basically trace the paths that the light takes to reach the camera. This allows you to make realistic reflections and refractions. However, I won't go into detail since it is too slow to realistically use in games currently. It is mainly used for 3D animations (like what Pixar makes). Simple scenes with low quality settings can be ray traced pretty quickly. But with complicated, realistic scenes, rendering can take several hours for a single frame (as is the case with Pixar movies). However, it does produce ultra realistic images:
Ray casting
Ray casting is not to be confused with the above-mentioned ray tracing. Ray casting does not trace the light paths. That means that you only have flat surfaces; not reflective. It also does not produce realistic light. However, this can be done relatively quickly, since in most cases you don't even need to cast a ray for every pixel. This is the method that was used for early games such as Doom and Wolfenstein 3D. In early games, ray casting was used for the maps, and the characters and other items were rendered using 2D sprites that were always facing the camera. The sprites were drawn from a few different angles to make them look 3D. Here is an image of Wolfenstein 3D:
Castle Wolfenstein with JavaScript and HTML5 Canvas: Image by Martin Kliehm
Storing the data
3D data can be stored using multiple methods. It is not necessarily dependent on the rendering method that is used. The stored data doesn't mean anything by itself, so you have to render it using one of the methods that have already been mentioned.
Polygons
This is similar to SVG. It is also the most common method for storing model data. You define the geometry using 3D points. These points can have other properties, such as texture data (in the form of UV mapping), color data, and whatever else you might want.
The data can be stored using a number of file formats. A common file format that is used is COLLADA, which is an XML file that stores the 3D data. There are a lot of other formats though. Fundamentally, however, all file formats are still storing the 3D data.
Here is an example of a polygon model:
Voxels
This method is pretty simple. You can think of voxel models like bitmaps, except they are a bunch of bitmaps layered together to make a 3D bitmap. So you have a 3D grid of pixels. One way of rendering voxels is to convert the voxel points to 3D cubes. Note that voxels do not have to be rendered as cubes, however; like pixels, they are only points that may have color data, which can be interpreted in different ways. I won't go into much detail since this isn't too common, and you generally render voxels with polygon methods anyway (like when you render them as cubes). Here is an example of a voxel model:
Image by Wikipedia user Vossman
In the 2D world with sprite sheets, you are drawing one of the sprites depending on the state of the actor (visual representation of your object). In the 3D world you are rendering a model for your actor that is a series of polygons with a texture mapped to it. There are standardized model files (I am mostly familiar with Autodesk 3DS Max), in which the model and the assigned textures can be packaged together (a .3DS or .MAX file), providing everything your graphics library needs to render the object and its textures.
In a nutshell, you don't use images for each view of a 3D object, you have a model with a texture rendered on it, creating a dynamic view as it is rendered by the graphics library.