How do I render to multiple 3D targets in Vulkan?

I have some legacy DX11 code that renders to multiple 3D render targets. The destination target is selected via SV_TARGETxx and the slice is set via SV_RenderTargetArrayIndex in the GS. Is there any way to do the same in Vulkan?
My plan is to create an individual view for each slice of each 3D target and pass them all together as attachments to a single framebuffer; then in the GS I can have something like gl_Layer = sliceNo + targetOffsets[xx]. Is there a better solution?

In Vulkan, the GS SV_RenderTargetArrayIndex is called Layer in SPIR-V, or gl_Layer in GLSL. It behaves the same as in D3D. You create one view per 3D target and attach that to the framebuffer. The Layer output from the GS determines which layer (of all the targets) the output primitive is drawn to.
In Vulkan there are no "true" 3D framebuffer attachments, in the sense that after projection to screen-space coordinates everything exists in a 2D plane. So attachment image views can have 2D_ARRAY dimensionality, but not 3D. The Image and image view parameter compatibility requirements table says that given a 3D image, you can create a 2D_ARRAY image view with layerCount >= 1. Note that you have to create the image with the VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT flag.
So if you want to have N 3D render target images (a rough code sketch follows this list):
1. Create your N 3D images, with the VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT flag.
2. Create one image view for each image, with VK_IMAGE_VIEW_TYPE_2D_ARRAY and layerCount equal to the number of slices you want to be able to render to.
3. Create a VkRenderPass with one VkAttachmentDescription per 3D render target, plus whatever others you need for depth/stencil, resolve targets, etc.
4. Create a VkFramebuffer based on that VkRenderPass, and pass your image views in the VkFramebufferCreateInfo::pAttachments array. Set VkFramebufferCreateInfo::layers to the number of layers/slices you want to be able to render to.
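A rough sketch of those steps in C for a single 3D target follows; names like device, width, height, depth and renderPass are assumptions, memory allocation and error handling are omitted, and the format is only an example:

    /* Hypothetical sketch: one 3D render target usable as a 2D array attachment. */
    VkImageCreateInfo imageInfo = {0};
    imageInfo.sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    imageInfo.flags         = VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT;
    imageInfo.imageType     = VK_IMAGE_TYPE_3D;
    imageInfo.format        = VK_FORMAT_R8G8B8A8_UNORM;   /* example format */
    imageInfo.extent.width  = width;
    imageInfo.extent.height = height;
    imageInfo.extent.depth  = depth;                       /* number of slices */
    imageInfo.mipLevels     = 1;
    imageInfo.arrayLayers   = 1;                           /* 3D images always have 1 array layer */
    imageInfo.samples       = VK_SAMPLE_COUNT_1_BIT;
    imageInfo.tiling        = VK_IMAGE_TILING_OPTIMAL;
    imageInfo.usage         = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
    imageInfo.sharingMode   = VK_SHARING_MODE_EXCLUSIVE;
    imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    VkImage image3d;
    vkCreateImage(device, &imageInfo, NULL, &image3d);
    /* ... allocate and bind device memory here ... */

    /* View the 3D image as a 2D array so it can be a framebuffer attachment. */
    VkImageViewCreateInfo viewInfo = {0};
    viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image    = image3d;
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D_ARRAY;
    viewInfo.format   = imageInfo.format;
    viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
    viewInfo.subresourceRange.baseMipLevel   = 0;
    viewInfo.subresourceRange.levelCount     = 1;
    viewInfo.subresourceRange.baseArrayLayer = 0;
    viewInfo.subresourceRange.layerCount     = depth;       /* one "layer" per depth slice */
    VkImageView attachmentView;
    vkCreateImageView(device, &viewInfo, NULL, &attachmentView);

    /* Framebuffer: one pAttachments entry per render target view. */
    VkFramebufferCreateInfo fbInfo = {0};
    fbInfo.sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
    fbInfo.renderPass      = renderPass;                    /* assumed to match the attachments */
    fbInfo.attachmentCount = 1;
    fbInfo.pAttachments    = &attachmentView;
    fbInfo.width           = width;
    fbInfo.height          = height;
    fbInfo.layers          = depth;                         /* gl_Layer may range over [0, depth) */
    VkFramebuffer framebuffer;
    vkCreateFramebuffer(device, &fbInfo, NULL, &framebuffer);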
[Edit: Below paragraph can be ignored based on first comment. Leaving it for transparency.]
I'm confused about what you're trying to do with SV_Target[n]. In both D3D and Vulkan, if you've got multiple render targets / color attachments, the fragment shader will write to all of them; if your fragment shader doesn't provide a value for a bound target, the value written is undefined. So SV_Target[n] is used to tell which shader output variables go to which target, but it doesn't let you write to some targets without writing to others. Vulkan works similarly, with fragment shader output variables assigned to attachments via layout(location = n) in GLSL.

If you're talking about having one draw call rendered from multiple points of view (but otherwise using the same pipeline), then you want VK_KHR_multiview. This is an extension in Vulkan 1.0, but it is core in 1.1.
There's an example of its usage here and the corresponding shader functionality is here. It works similarly to what you describe. You attach a layered image (texture array) to a single framebuffer ("render target" in D3D), and then in the vertex shader you can determine which layer you're rendering to via the gl_ViewIndex variable. There's no need for a geometry shader with this approach.
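If you go the multiview route, the render pass is the main piece that changes. A minimal sketch, assuming Vulkan 1.1 (where multiview is core), a single subpass, and a two-layer color attachment whose colorAttachment and subpass descriptions are filled in elsewhere:

    /* Sketch only: chain multiview info onto an otherwise normal render pass. */
    uint32_t viewMask        = 0x3;  /* broadcast each draw to layers 0 and 1 */
    uint32_t correlationMask = 0x3;  /* hint that the views are spatially correlated */

    VkRenderPassMultiviewCreateInfo multiviewInfo = {0};
    multiviewInfo.sType                = VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO;
    multiviewInfo.subpassCount         = 1;
    multiviewInfo.pViewMasks           = &viewMask;
    multiviewInfo.correlationMaskCount = 1;
    multiviewInfo.pCorrelationMasks    = &correlationMask;

    VkRenderPassCreateInfo renderPassInfo = {0};
    renderPassInfo.sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
    renderPassInfo.pNext           = &multiviewInfo;   /* enables multiview for this pass */
    renderPassInfo.attachmentCount = 1;
    renderPassInfo.pAttachments    = &colorAttachment; /* assumed VkAttachmentDescription */
    renderPassInfo.subpassCount    = 1;
    renderPassInfo.pSubpasses      = &subpass;         /* assumed VkSubpassDescription */

    VkRenderPass renderPass;
    vkCreateRenderPass(device, &renderPassInfo, NULL, &renderPass);
    /* In the shaders, gl_ViewIndex (0 or 1 here) tells you which view is being rendered. */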

Related

Vulkan render to texture

In an existing renderer which draws geometry to the swapchain, I need to render some parts of this geometry into a texture; other parts must remain on screen. All the geometry is recorded into one command buffer. I won't need to render this texture every time.
I created the destination image, image view and framebuffer, but I don't know what to do now.
I don't think I need a specific pipeline, nor a new specific descriptor set, as everything is correctly rendered on screen.
Do I need another render pass, or a subpass, or anything else?
Exactly, you need a separate render pass that fills your destination images. As the render pass stores a reference to the images (as attachments), a separate one is required.
Within that render pass you can then use subpass dependencies to transition the destination images to the proper layout. Your first dependency should go from VK_ACCESS_SHADER_READ_BIT to VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT for writing to the destination image, and once that's done you transition back from VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT to VK_ACCESS_SHADER_READ_BIT so you can e.g. sample your destination images in the visible pass. An alternative would be blitting them to the swapchain, if the device supports that.
If you need a reference, you can check out my offscreen rendering sample.
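A rough sketch of the two external subpass dependencies described above, assuming a single-subpass offscreen render pass (renderPassInfo) whose color attachment is later sampled in a fragment shader; the exact stage and access masks depend on how you consume the image:

    /* Sketch: dependencies for an offscreen render pass whose result is sampled later. */
    VkSubpassDependency dependencies[2] = {0};

    /* Wait for any previous fragment-shader reads before writing the attachment. */
    dependencies[0].srcSubpass      = VK_SUBPASS_EXTERNAL;
    dependencies[0].dstSubpass      = 0;
    dependencies[0].srcStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    dependencies[0].dstStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    dependencies[0].srcAccessMask   = VK_ACCESS_SHADER_READ_BIT;
    dependencies[0].dstAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    dependencies[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;

    /* Make the attachment writes visible to fragment-shader reads afterwards. */
    dependencies[1].srcSubpass      = 0;
    dependencies[1].dstSubpass      = VK_SUBPASS_EXTERNAL;
    dependencies[1].srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    dependencies[1].dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    dependencies[1].srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    dependencies[1].dstAccessMask   = VK_ACCESS_SHADER_READ_BIT;
    dependencies[1].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;

    /* Attach to the offscreen render pass; the attachment's finalLayout would be
     * VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL so it can be sampled afterwards. */
    renderPassInfo.dependencyCount = 2;
    renderPassInfo.pDependencies   = dependencies;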

Rendering to a cube texture and then sampling it?

I want to render a scene to the six faces of a cube texture (i.e. the same scene from six perpendicular camera orientations) and then I want to sample from the cube texture in a fragment shader while rendering to the final framebuffer to be presented.
Is the best way to organize that to have a render pass with 7 subpasses: one subpass for each face of the cube texture, and then a final subpass to sample the cube texture?
Or will that not work?
If it will work, roughly how do I describe the cube texture in the render pass attachments?
If it won't work, what's the best way to organize it?
You need to use two render passes. You would need to read from the cubemap as a cubemap rather than as an input attachment, so you need to break your render pass up to do that.
The cubemap rendering pass should just use a layered attachment and layered rendering functionality to send each primitive to the appropriate layer. The final pass just works as normal.
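A minimal sketch of the two image views this typically involves, assuming cubeImage was created with VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT, six array layers, and color-attachment plus sampled usage (format is an example):

    VkImageViewCreateInfo viewInfo = {0};
    viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image    = cubeImage;
    viewInfo.format   = VK_FORMAT_R8G8B8A8_UNORM;   /* example format */
    viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
    viewInfo.subresourceRange.baseMipLevel   = 0;
    viewInfo.subresourceRange.levelCount     = 1;
    viewInfo.subresourceRange.baseArrayLayer = 0;
    viewInfo.subresourceRange.layerCount     = 6;    /* all six faces */

    /* Layered attachment for the first (cubemap-filling) render pass:
     * gl_Layer in the geometry shader selects the face (0..5). */
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D_ARRAY;
    VkImageView faceAttachmentView;
    vkCreateImageView(device, &viewInfo, NULL, &faceAttachmentView);

    /* Cube view for sampling (samplerCube) in the second render pass. */
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_CUBE;
    VkImageView cubeSampleView;
    vkCreateImageView(device, &viewInfo, NULL, &cubeSampleView);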

Vulkan update descriptor every frame

I want to render my scene to a texture and then use that texture in a shader, so I created a framebuffer using an image view and recorded a command buffer for that. I successfully submitted and executed the command buffer on the GPU, but the image read through the descriptor for that image view is black. I'm creating the descriptor from the image view before the rendering loop. Is it black because I create it before anything is rendered to the framebuffer? If so, I will have to update the descriptor every frame. Will I have to create a new descriptor from the image view every frame? Or is there another way I can do this?
I have read the other thread with this title. Don't mark this as a duplicate, because that thread is about textures and this is a texture from an image view.
Thanks.
@IAS0601 I will answer the questions from Your comment through an answer, as it allows much longer text to be written and its formatting is much better. I hope this also answers Your original question, but You don't have to treat it as the answer. As I wrote, I'm not sure what You are asking about.
1) In practically all cases, the GPU accesses images through image views. They specify additional parameters which define how the image is accessed (for example, which part of the image is accessed), but it is still the original image that gets accessed. An image view, as the name suggests, is just a view, a list of access parameters. It doesn't have any memory bound to it and it doesn't contain any data (apart from the parameters specified during image view creation).
So when You create a framebuffer and render into it, You render into the original images or, to be more specific, into those parts of the original images which were specified in the image views. For example, say You have a 2D texture with 3 array layers. You create a 2D image view for the middle (second) layer. Then You use this image view during framebuffer creation. Now when You render into this framebuffer, You are in fact rendering into the second layer of the original 2D texture array.
Another thing: when You later access the same image, and when You use the same image view, You still access the original image. If You rendered something into the image, then You will get the updated data (provided You have done everything correctly: performed the appropriate synchronization operations, layout transitions if necessary, etc.). I hope this is what You mean by updating the image view.
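To illustrate the three-layer example above, such a "middle layer" view is just a matter of the subresource range. A sketch, assuming arrayImage is the 2D image with 3 array layers and an example format:

    /* Sketch: a 2D view of only the second layer of a 3-layer 2D array image.
     * Rendering through this view writes into layer 1 of the original image. */
    VkImageViewCreateInfo viewInfo = {0};
    viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image    = arrayImage;                  /* assumed 2D image, arrayLayers = 3 */
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
    viewInfo.format   = VK_FORMAT_R8G8B8A8_UNORM;    /* example format */
    viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
    viewInfo.subresourceRange.baseMipLevel   = 0;
    viewInfo.subresourceRange.levelCount     = 1;
    viewInfo.subresourceRange.baseArrayLayer = 1;    /* the middle (second) layer */
    viewInfo.subresourceRange.layerCount     = 1;
    VkImageView middleLayerView;
    vkCreateImageView(device, &viewInfo, NULL, &middleLayerView);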
2) I'm not sure what You mean by updating descriptor set. In Vulkan when we update a descriptor set, this means that we specify handles of Vulkan resources that should be used through given descriptor set.
If I understand You correctly - You want to render something into an image. You create an image view for that image and provide that image view during framebuffer creation. Then You render something into that framebuffer. Now You want to read data from that image. You have two options. If You want to access only one sample location that is associated with fragment shader's location, You can do this through an input attachment in the next subpass of the same render pass. But this way You can only perform operations which don't require access to multiple texels, for example a color correction.
But if You want to do something more advanced, like blurring or shadow mapping, if You need access to several texels, You must end a render pass and start another one. In this second render pass, You can read data from the original image through a descriptor set. It doesn't matter when this descriptor set was created and updated (when the handle of image view was specified). If You don't change the handles of resources - meaning, if You don't create a new image or a new image view, You can use the same descriptor set and You will access the data rendered in the first render pass.
If You have problems accessing the data, for example (as You wrote) You get only black colors, this suggests You didn't perform everything correctly - render pass load or store ops are incorrect, or initial and final layouts are incorrect. Or synchronization isn't performed correctly. Unfortunately, without access to Your project, we can't be sure what is wrong.
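To make the descriptor-set point concrete, here is a sketch of writing the image view into a combined image sampler descriptor once, before the rendering loop; sampler, offscreenView and descriptorSet are assumed to exist, and the same set can then be bound every frame without further updates:

    /* Sketch: write the offscreen image view into the descriptor once. As long as the
     * image and image view handles don't change, no per-frame update is needed. */
    VkDescriptorImageInfo imageInfo = {0};
    imageInfo.sampler     = sampler;                               /* assumed sampler */
    imageInfo.imageView   = offscreenView;                         /* view rendered into earlier */
    imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

    VkWriteDescriptorSet write = {0};
    write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    write.dstSet          = descriptorSet;                         /* allocated elsewhere */
    write.dstBinding      = 0;
    write.descriptorCount = 1;
    write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    write.pImageInfo      = &imageInfo;
    vkUpdateDescriptorSets(device, 1, &write, 0, NULL);
    /* Per frame: render to the offscreen framebuffer, then bind descriptorSet in the
     * second render pass and sample the texture; no vkUpdateDescriptorSets needed. */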

Understanding the image view parameter compatibility requirements

I am having some trouble understanding the Image and image view parameter compatibility requirements table in the VkImageViewCreateInfo documentation and VkImageViewCreateInfo::viewType. The VkImageViewCreateInfo properties seem flexible enough to create, for example, a single 1D or 1D array image view of a 2D image. I tried to create a 1D image view out of a 2D image with validation layers enabled and I got no warnings (I don't know exactly which row/column would be used if this were valid usage).
Is it correct to assume that there is a one-to-one mapping between VkImageCreateInfo::imageType + VkImageCreateInfo::arrayLayers in the image and VkImageViewCreateInfo::viewType in the view, i.e. that VkImageViewType is there to handle the special case of cube maps, and that otherwise viewType could have been inferred from the image type? If not, how does a 1D view of a 2D image work?
You can't create a 1D view of a 2D image, only the combinations listed in the table are valid.
It looks like the page you're looking at hasn't been regenerated recently, or doesn't include modifications made by the VK_KHR_maintenance1 extension.
Ignoring that extension and cubemaps for now, it's not quite true that there is a 1:1 correspondence between imageType+arrayLayers and viewType. A 2D image with multiple layers can be used with either 2D or 2D_ARRAY view types, and a 2D image with only one layer can still be used with a 2D_ARRAY view type. The view type corresponds to the SPIR-V resource types, and mostly determines how many coordinates are needed to identify a location in the view.
Then there is the cubemap complication, as you observed.
With VK_KHR_maintenance1, you can create 2D and 2D_ARRAY views of a subset of the slices in a 3D image. The extension adds two new rows to the table to describe that case.
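For illustration, a sketch of one of the cases those extra rows allow: a plain 2D view of a single slice of a 3D image (assuming the image was created with VK_IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT and an example format):

    /* Sketch (requires VK_KHR_maintenance1 or Vulkan 1.1): a 2D view of one slice of a 3D image. */
    VkImageViewCreateInfo viewInfo = {0};
    viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image    = volumeImage;                 /* assumed VK_IMAGE_TYPE_3D image */
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
    viewInfo.format   = VK_FORMAT_R8G8B8A8_UNORM;    /* example format */
    viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
    viewInfo.subresourceRange.baseMipLevel   = 0;
    viewInfo.subresourceRange.levelCount     = 1;
    viewInfo.subresourceRange.baseArrayLayer = 4;    /* depth slice 4 of the volume */
    viewInfo.subresourceRange.layerCount     = 1;
    VkImageView sliceView;
    vkCreateImageView(device, &viewInfo, NULL, &sliceView);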

How to plot a 2D vector field in a single picture using the hue and brightness method with a DigitalMicrograph script?

I would like to plot a 2D vector field in a single picture using the hue & brightness method, i.e., hue for direction (or, say, phase) and brightness for magnitude.
Such a method is often used to visualize e.g. magnetic domains, vortices, etc. which are reconstructed from Lorentz microscopy.
As input, I have two images of size 1024*1024 whose pixels contain the magnitudes of the X and Y components of the vector field.
Since DM does not natively support an HSL color scheme, I think one should first use a group of self-defined functions to convert HSL to RGB...
You can only use RGB images in DigitalMicrograph, so you will have to do the conversion from HSB to RGB in your script code, and then create the corresponding RGB image.
Luckily, there is a demonstration script on the Gatan script resources webpage which does exactly that! You can basically use the script as it is shown there.
Gatan Script Resources
Link to script-file:
Display as HSB
Note that the script uses complex images as input, simply as a convenient container to combine two images into a single one. The test function demonstrates this.
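For reference, the hue/brightness-to-RGB mapping that such a script has to implement looks roughly like the following (written here as a C sketch of the math rather than DM script; maxMag is an assumed normalization constant, and the linked Gatan example is the authoritative DM implementation):

    #include <math.h>

    /* Sketch: hue encodes the vector direction, brightness (value) encodes its
     * magnitude, and saturation is fixed at 1. */
    static void vector_to_rgb(double vx, double vy, double maxMag,
                              unsigned char *r, unsigned char *g, unsigned char *b)
    {
        const double PI = 3.14159265358979323846;
        double angle = atan2(vy, vx);                 /* direction in [-pi, pi] */
        if (angle < 0.0) angle += 2.0 * PI;
        double h = angle / (2.0 * PI) * 6.0;          /* hue sector in [0, 6) */
        double v = sqrt(vx * vx + vy * vy) / maxMag;  /* magnitude -> brightness */
        if (v > 1.0) v = 1.0;

        /* Standard HSV -> RGB conversion with saturation = 1. */
        int    i = (int)h;                            /* sector index 0..5 */
        double f = h - i;
        double p = 0.0, q = v * (1.0 - f), t = v * f;
        double rgb[6][3] = { {v, t, p}, {q, v, p}, {p, v, t},
                             {p, q, v}, {t, p, v}, {v, p, q} };
        *r = (unsigned char)(rgb[i][0] * 255.0 + 0.5);
        *g = (unsigned char)(rgb[i][1] * 255.0 + 0.5);
        *b = (unsigned char)(rgb[i][2] * 255.0 + 0.5);
    }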