Vulkan update descriptor every frame

I want to render my scene to a texture and then use that texture in a shader, so I created a framebuffer from an image view and recorded a command buffer for it. I successfully submitted and executed the command buffer on the GPU, but the descriptor created from the image view samples as black. I create the descriptor from the image view before the render loop. Is it black because I create it before anything has been rendered to the framebuffer? If so, will I have to update the descriptor every frame? Do I have to create a new descriptor from the image view every frame, or is there another way to do this?
I have read the other thread with this title. Please don't mark this as a duplicate, because that thread is about textures and this is about a texture backed by an image view.
Thanks.

@IAS0601 I will answer the questions from your comment in an answer, since it allows for much longer text and better formatting. I hope this also answers your original question, but you don't have to treat it as the answer. As I wrote, I'm not sure what you are asking about.
1) In practically all cases, the GPU accesses images through image views. They specify additional parameters that define how the image is accessed (for example, which part of the image is accessed), but it is still the original image that gets accessed. An image view, as the name suggests, is just a view, a list of access parameters. It doesn't have any memory bound to it and it doesn't contain any data (apart from the parameters specified during image view creation).
So when you create a framebuffer and render into it, you render into the original images or, more specifically, into those parts of the original images that were specified in the image views. For example, say you have a 2D texture with 3 array layers. You create a 2D image view for the middle (second) layer and use this image view during framebuffer creation. When you now render into this framebuffer, you are in fact rendering into the second layer of the original 2D texture array.
Another thing: when you later access the same image through the same image view, you still access the original image. If you rendered something into the image, you will get the updated data (provided you have done everything correctly, such as performing the appropriate synchronization operations and layout transitions where necessary). I hope this is what you mean by updating the image view.
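A minimal sketch of that example, assuming device is a valid VkDevice and texture is a VkImage created with arrayLayers = 3 (the names and the format are placeholders):

    // A 2D view of only the second layer (index 1) of the 2D array image.
    VkImageViewCreateInfo viewInfo{};
    viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image    = texture;                  // the original VkImage (arrayLayers = 3)
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;    // a plain 2D view of a single layer
    viewInfo.format   = VK_FORMAT_R8G8B8A8_UNORM; // must be compatible with the image's format
    viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
    viewInfo.subresourceRange.baseMipLevel   = 0;
    viewInfo.subresourceRange.levelCount     = 1;
    viewInfo.subresourceRange.baseArrayLayer = 1; // the middle (second) layer
    viewInfo.subresourceRange.layerCount     = 1;

    VkImageView layerView;
    vkCreateImageView(device, &viewInfo, nullptr, &layerView);
    // Using layerView as a framebuffer attachment renders into layer 1 of 'texture'.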
2) I'm not sure what you mean by updating a descriptor set. In Vulkan, updating a descriptor set means specifying the handles of the Vulkan resources that should be accessed through that descriptor set.
If I understand you correctly: you want to render something into an image. You create an image view for that image and provide it during framebuffer creation. Then you render something into that framebuffer. Now you want to read data from that image. You have two options. If you only need the single sample location associated with the fragment shader's position, you can read it through an input attachment in the next subpass of the same render pass. But this way you can only perform operations that don't require access to multiple texels, for example a color correction.
But if you want to do something more advanced, like blurring or shadow mapping, where you need access to several texels, you must end the render pass and start another one. In this second render pass you can read data from the original image through a descriptor set. It doesn't matter when this descriptor set was created and updated (when the handle of the image view was specified). As long as you don't change the handles of the resources, that is, as long as you don't create a new image or a new image view, you can keep using the same descriptor set and you will see the data rendered in the first render pass.
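A minimal sketch of such a one-time update, done once before the render loop; descriptorSet, sampler and renderedImageView (the view also used as the color attachment) are assumed to exist already:

    VkDescriptorImageInfo imageInfo{};
    imageInfo.sampler     = sampler;
    imageInfo.imageView   = renderedImageView;
    imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL; // layout at sampling time

    VkWriteDescriptorSet write{};
    write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    write.dstSet          = descriptorSet;
    write.dstBinding      = 0;
    write.descriptorCount = 1;
    write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    write.pImageInfo      = &imageInfo;

    vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);
    // As long as the image and image view are not recreated, this descriptor set
    // can be bound every frame and will always see the most recently rendered data.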
If you have problems accessing the data, for example (as you wrote) you only get black colors, this suggests that something wasn't set up correctly: the render pass load or store ops are wrong, the initial and final layouts are wrong, or synchronization isn't performed correctly. Unfortunately, without access to your project, we can't be sure what is wrong.
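For reference, a typical attachment setup for the "render to a texture, then sample it" case looks roughly like this (the format is a placeholder):

    VkAttachmentDescription color{};
    color.format         = VK_FORMAT_R8G8B8A8_UNORM;     // format of the offscreen image
    color.samples        = VK_SAMPLE_COUNT_1_BIT;
    color.loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR;  // clear instead of loading undefined contents
    color.storeOp        = VK_ATTACHMENT_STORE_OP_STORE; // keep the result so it can be sampled later
    color.stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
    color.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
    color.initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED;    // previous contents are discarded
    color.finalLayout    = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL; // ready for sampling after the pass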

Related

Vulkan render to texture

In an existing renderer that draws geometry to the swapchain, I need to render some parts of this geometry into a texture, while the other parts must remain on screen. All the geometry is recorded into one command buffer. I won't need to render to this texture every frame.
I created the destination image, image view and framebuffer, but I don't know what to do next.
I don't think I need a specific pipeline, nor a new specific descriptor set, as everything is already rendered correctly on screen.
Do I need another render pass, a subpass, or something else?
Exactly: you need a separate render pass that fills your destination images. Since a render pass stores references to its images (as attachments), a separate one is required.
Within that render pass you can then use subpass dependencies to transition the destination images to the proper layout. The first dependency should go from VK_ACCESS_SHADER_READ_BIT to VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT so you can write to the destination image, and once that's done you transition back from VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT to VK_ACCESS_SHADER_READ_BIT so you can, for example, sample the destination images in the visual pass. An alternative would be blitting them to the swapchain if the device supports that.
If you need a reference, you can check out my offscreen rendering sample.
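A sketch of such external dependencies for the offscreen render pass, assuming the result is sampled in a fragment shader afterwards (the stage masks are assumptions for that use case):

    VkSubpassDependency deps[2]{};

    // Before the pass: wait for any earlier fragment-shader reads of the image,
    // then allow color attachment writes.
    deps[0].srcSubpass      = VK_SUBPASS_EXTERNAL;
    deps[0].dstSubpass      = 0;
    deps[0].srcStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    deps[0].dstStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    deps[0].srcAccessMask   = VK_ACCESS_SHADER_READ_BIT;
    deps[0].dstAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    deps[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;

    // After the pass: make the color attachment writes visible to fragment-shader
    // reads in the pass that samples the texture.
    deps[1].srcSubpass      = 0;
    deps[1].dstSubpass      = VK_SUBPASS_EXTERNAL;
    deps[1].srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    deps[1].dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    deps[1].srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    deps[1].dstAccessMask   = VK_ACCESS_SHADER_READ_BIT;
    deps[1].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;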

Rendering multiple objects with different textures, vertex buffers, and uniform values in Vulkan

My background is in OpenGL and I'm attempting to learn Vulkan. I'm having a little trouble setting up a class so I can render multiple objects with different textures, vertex buffers, and UBO values. I've run into an issue where two of my images are drawn, but they flicker and alternate. I'm thinking it must be due to presenting the image after the draw call. Is there a way to delay presentation of an image? Or to merge different images together before presenting? My code can be found here; I'm hoping it is enough for someone to get an idea of what I'm trying to do: https://gitlab.com/cwink/Ingin/blob/master/ingin.cpp
Thanks!
You call render twice per frame, and render calls vkQueuePresentKHR, so naturally your two renderings alternate.
You can delay presentation simply by delaying the vkQueuePresentKHR call. Let's say you want to show each image for about a second: you could simply call std::this_thread::sleep_for(std::chrono::seconds(1)); after each render call. (Certainly not the best way to do it, but it shows where your problem lies.)
vkQueuePresentKHR does not do any kind of "merging" for you. Typically you "merge" images by simply drawing them into the same swapchain VkImage in the first place, and then presenting it once.
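A structural sketch of that approach; the helper functions and the objectA/objectB resources are hypothetical placeholders, not part of the linked code:

    // Both objects are recorded into the same command buffer and the same
    // swapchain image; vkQueuePresentKHR is called exactly once per frame.
    void drawFrame() {
        uint32_t imageIndex = acquireNextSwapchainImage();   // wraps vkAcquireNextImageKHR (hypothetical)

        VkCommandBuffer cmd = commandBuffers[imageIndex];
        beginCommandBufferAndRenderPass(cmd, imageIndex);    // hypothetical helper

        bindPipelineAndDescriptors(cmd, objectA);            // hypothetical helper
        vkCmdDrawIndexed(cmd, objectA.indexCount, 1, 0, 0, 0);

        bindPipelineAndDescriptors(cmd, objectB);            // hypothetical helper
        vkCmdDrawIndexed(cmd, objectB.indexCount, 1, 0, 0, 0);

        endRenderPassAndCommandBuffer(cmd);                  // hypothetical helper

        submitToGraphicsQueue(cmd);                          // wraps vkQueueSubmit (hypothetical)
        presentSwapchainImage(imageIndex);                   // the only vkQueuePresentKHR per frame
    }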

Metal -- skipping commandBuffer.present(drawable) to not display a frame?

In my Metal app for macOS, I have a situation where I only want to display the render results every so often. I want to complete the rendering pass every frame and save the drawable texture image to a file, but I only want to display the render every sixteenth frame or so. I tried just skipping commandBuffer.present(drawable) when I don't want to display, but it isn't working: after skipping a single call to commandBuffer.present(), it stops displaying any new frames, although it does continue to run.
Why would that happen? Once I commit a command buffer, is it required for it to be presented?
If I can't get this to work, then I will try to render into an offscreen buffer for these frames I don't want displayed. But it would be extra work and require more memory for the offscreen render buffer, so I'd rather just be able to use my regular onscreen render buffer if possible.
Thanks!
It's not required that a command buffer present a drawable. I think the issue is that, once you've obtained the drawable, it's not returned to the pool maintained by the CAMetalLayer (or, indirectly, MTKView) that provided it until it is presented.
Do not render to a drawable's texture if you don't plan on presenting. Rendering to an off-screen texture is the right approach. In fact, if you always render first to an off-screen texture and then, only for the frames you want to display, copy that to a drawable's texture, then you can leave the framebufferOnly property of the CAMetalLayer with its default true value. In that case, there's a decent chance that you won't increase the memory required (because the drawable's texture is really just part of the screen's backing store).

Cocos3D - background shown through meshes

I imported the .pod file created from Blender, and the blue background shows through the eyelash and eyebrow meshes. Does anyone know why I'm encountering this?
WITHOUT the additional material, it looks normal except for the root of the hair.
WITH a new green material added to her left shoulder, the eyebrows and eyelashes begin showing the background.
This issue is caused by the order in which the nodes are being rendered in your scene.
In the first model, the hair is drawn first, then the skin, then the eyebrows and eyelashes. In the second model, the hair, eyebrows and eyelashes are all drawn before the skin. By the time the skin under the hair or eyelashes is drawn, the depth buffer indicates that something closer to the camera has already been drawn, and the engine doesn't bother rendering those skin pixels. But because the eyelashes, eyebrows and hair all contain transparency, we end up looking right through them onto the backdrop.
This use of a depth buffer is fundamental to all 3D rendering. It's how the engine knows not to render pixels that are visually occluded by another object; otherwise all we'd ever see would be the last object rendered. However, when rendering overlapping objects that contain transparency, it's important to get the rendering order correct, so that more distant objects behind closer transparent objects are rendered first.
In Cocos3D, there are several tools available for ordering your transparent objects for rendering:
1. The first, and primary, tool is the drawingSequencer managed by the CC3Scene. You can configure several different types of drawing sequencers. The default sequencer is smart enough to render all opaque objects first, then render the objects that contain transparency in decreasing order of distance from the camera (rendering farther objects first). This works best for most scenes, in particular where objects move around and can pass in front of each other unpredictably. Unfortunately, in your custom CC3Scene initialization code (which you sent me per the question comments), you replaced the default drawing sequencer with one that does not sequence transparent objects based on distance. If you remove that change, everything works properly.
2. Objects that are not explicitly sequenced by distance (as in point 1 above) are rendered in the order in which they are added to the scene. You can therefore also define rendering order by ensuring that the objects are added to your scene in the order in which you want them rendered. This can work well for static models, such as your first character (if you change it to add the hair after the skin).
3. CC3Node also has a zOrder property, which allows you to override the rendering order explicitly, so that objects with larger zOrder values are rendered before those with smaller zOrder values. This is useful when you have a static model whose components cannot be added in rendering order, or to temporarily override the rendering order of two transparent objects that might be passing in front of each other. Using the zOrder property does depend on using a drawingSequencer that makes use of it (the default drawing sequencer does).
4. Finally, you can temporarily turn off depth testing or masking when rendering particular nodes, by setting the shouldDisableDepthTest and shouldDisableDepthMask properties to YES on those nodes.

glReadPixels read "out of frames" area

I'm drawing a 3200x2000 textured quad in OpenGL. The OpenGLView's frame size is set to 940x560, and the quad draws as it should. But when I try to save it as an image (using glReadPixels) and set the glReadPixels area from (0,0) to (3200,2000), I get 3200x2000 of pixel data, yet when I save it to a file I only see a small part of the image (the 940x560 region from the bottom-left corner) and the rest of the area is black. So how can I read the offscreen area? I tried using a framebuffer object, but it's complicated and I get errors while creating it. Is there any other solution?
Situation visualization:
Original image looks like this (3200x2000):
OpenGLView looks like this (940x560):
Saved image looks like this (3200x2000):
So you're rendering to the window. Well, the window has a particular size. And nothing exists outside of that size.
This is part of something OpenGL calls the "pixel ownership test". If a pixel is not owned by the context, then its contents are undefined. Pixels outside of the window are not owned by the context, and therefore their contents are undefined.
This is one reason why framebuffer objects exist: so that you can render outside the size of your window. Though be advised: there is a maximum viewport size limit.
Alternatively, you can render in screen-sized pieces, where you download each piece after each rendering, then move the camera to render the next piece.
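A minimal sketch of the FBO route, assuming an OpenGL 3.0+ (or equivalent) context is already current and the usual GL and standard headers are included; W and H are the desired offscreen size:

    const int W = 3200, H = 2000;
    GLuint fbo = 0, colorTex = 0;

    // Color texture that will receive the rendering.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // Framebuffer object with that texture as its color attachment.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error instead of reading back garbage
    }

    glViewport(0, 0, W, H);   // must not exceed GL_MAX_VIEWPORT_DIMS
    // ... draw the textured quad here, exactly as for the window ...

    std::vector<unsigned char> pixels(W * H * 4);
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the window framebuffer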
You haven't given many details in terms of code or platform.
But I think you should be using offscreen rendering, rather than just reading from the rendered window. If you are unfamiliar with using frame buffer objects, here is a minimal example:
https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/minimalfbo
Edit #1:
Since OP mentioned that the platform is OS X, I am posting my code below, which shows a minimal FBO example in iOS:
https://github.com/glman74/simpleFBO