Vulkan: how to write non-brittle test cases

I'm working on adding unit-tests to my toy Vulkan renderer.
The current approach that I'm thinking of implementing is like so:
Each unit-test function (the test) is called twice: once to set up the test data and render the frame, and again to validate the rendered frame.
My issue is that this setup seems brittle. I have to first render a frame, save the image to file, and then compare the rendered frame to what's on file. If anything about the test changes, then I have to resave the image for later comparison.
Question: Is there another, less brittle, way that I could set this up? Specifically, I'm not too crazy about having to compare raw images.
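One common way to make golden-image tests less brittle is to drop byte-exact comparison and instead allow a small per-channel tolerance plus a bounded fraction of outlier pixels, which absorbs driver-to-driver rounding differences. Here is a minimal sketch in C++; the function name, tolerances, and RGBA8 layout are illustrative assumptions, not part of any particular framework:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Hypothetical helper: returns true if two RGBA8 framebuffers match within
// a per-channel tolerance, allowing a small fraction of outlier pixels.
// This absorbs minor rasterization/rounding differences between drivers
// that break exact byte comparison against a saved golden image.
bool imagesMatch(const std::vector<uint8_t>& expected,
                 const std::vector<uint8_t>& actual,
                 int channelTolerance = 2,        // max per-channel delta
                 double maxBadPixelRatio = 0.001) // fraction allowed to exceed it
{
    if (expected.size() != actual.size() || expected.size() % 4 != 0)
        return false;

    size_t pixelCount = expected.size() / 4;
    size_t badPixels = 0;
    for (size_t p = 0; p < pixelCount; ++p) {
        for (int c = 0; c < 4; ++c) {
            int delta = std::abs(int(expected[p * 4 + c]) - int(actual[p * 4 + c]));
            if (delta > channelTolerance) {
                ++badPixels;
                break; // count each pixel at most once
            }
        }
    }
    return double(badPixels) / double(pixelCount) <= maxBadPixelRatio;
}
```

You would still read the rendered frame back (e.g. via vkCmdCopyImageToBuffer) and keep a golden image on disk, but small changes in rasterization no longer force a re-save.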

Related

Do we have to do two image layout transitions when creating a new image?

When I want to upload an image to device local memory I first create an image, then I issue a layout transition to transition from UNDEFINED to TRANSFER DESTINATION, then I do a copy buffer to image. Then I transition from TRANSFER DESTINATION to whatever layout I want. Is there a more direct way to do this? In vkCmdCopyBufferToImage there is an argument 'dstImageLayout'. I made the mistake of thinking that the argument tells Vulkan to transition the image automatically to that layout as it copies it. This 'would' seem to me to be more efficient and make more sense, but it's not what I thought it was.
Is there a way to do this without two layout transitions? It's OK if there isn't, I think this is the proper way to do it, I just wanted to make sure.
Strictly speaking, you do not have to perform two layout transitions. The GENERAL layout can be used with basically anything, so you could just transition the image once, copy into it, and use it from there.
However, this would be pointless for several reasons. First, it's reasonable to assume that any layout transition from UNDEFINED will be a no-op as far as actual GPU processing is concerned. Such transitions conceptually trash any of the contents of the image, so there's no point in having the GPU do anything to the image's bytes.
Second, in order to use an image you copied into, you will need some kind of explicit synchronization between the copy operation and the usage of it. Whatever that synchronization is, it may as well include a layout transition. The GPU is going to have to make sure the two don't overlap, so you may as well toss in a layout transition.
Lastly, using GENERAL like this is a premature optimization and therefore should be avoided unless you have profiling data telling you that layout transitions are an actual performance problem (or you have no other choice).
LAYOUT_TRANSFER_DST is by definition the most efficient target for copies. So no other layout can be more efficient.
Some actual GPUs might perform no layout transitions at all. The layout system is just a general API abstraction; it is not even defined what a "layout" actually is, and the GPU driver may use the API concept however it is beneficial for it.
If a particular picky GPU does need the image in such a specific layout when copying into it, then there's no way around it, and there would be two layout transitions no matter how you shape the API. If the GPU does not need it, then it will simply elide the layout transitions on its own.
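The semantics discussed above can be modeled in plain C++. This is a conceptual sketch with stand-in types (not the real Vulkan API): a transition whose old layout is UNDEFINED may discard the image's contents, which is why the driver can make it a no-op, and vkCmdCopyBufferToImage's dstImageLayout only describes the layout the image is already in:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Illustrative stand-ins for VkImageLayout values -- a model of the
// semantics, not the real enum.
enum class Layout { Undefined, TransferDst, ShaderReadOnly, General };

struct Image {
    Layout layout = Layout::Undefined;
    std::optional<std::string> contents; // nullopt == undefined contents
};

// Models a VkImageMemoryBarrier layout transition. When oldLayout is
// Undefined, the driver is free to discard the image's contents, which
// is why such a transition can cost nothing on the GPU.
void transition(Image& img, Layout oldLayout, Layout newLayout) {
    assert(img.layout == oldLayout);
    if (oldLayout == Layout::Undefined)
        img.contents.reset(); // contents trashed; no data movement needed
    img.layout = newLayout;
}

// Models vkCmdCopyBufferToImage: dstImageLayout describes the layout the
// image is ALREADY in -- it does not perform a transition itself.
void copyBufferToImage(Image& img, Layout dstImageLayout, const std::string& data) {
    assert(img.layout == dstImageLayout); // caller must have transitioned first
    img.contents = data;
}
```

The canonical upload is then: transition(img, Undefined, TransferDst), copyBufferToImage(img, TransferDst, pixels), transition(img, TransferDst, ShaderReadOnly).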

Rendering multiple objects with different textures, vertex buffers, and uniform values in Vulkan

My background is in OpenGL and I'm attempting to learn Vulkan. I'm having a little trouble with setting up a class so I can render multiple objects with different textures, vertex buffers, and UBO values. I've run into an issue where two of my images are drawn, but they flicker and alternate. I'm thinking it must be due to presenting the image after the draw call. Is there a way to delay presentation of an image? Or merge different images together before presenting? My code can be found here, I'm hoping it is enough for someone to get an idea of what I'm trying to do: https://gitlab.com/cwink/Ingin/blob/master/ingin.cpp
Thanks!
You call render twice per frame. And render calls vkQueuePresentKHR, so obviously the two renderings of yours alternate.
You can delay presentation simply by delaying the vkQueuePresentKHR call. Let's say you want to show each image for ~1 s: you could simply call std::this_thread::sleep_for(std::chrono::seconds(1)); after each render call. (Possibly not the best way to do it, but it shows where your problem lies.)
vkQueuePresentKHR does not do any kind of "merging" for you. Typically you "merge images" by simply drawing them into the same swapchain VkImage in the first place, and then present it once.
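A toy model of the fix, with illustrative names rather than real Vulkan calls: record every object's draw into the same frame, and present exactly once per frame-loop iteration instead of once per object:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of the fix (names are illustrative, not real Vulkan calls).
// In the original code, render() both drew an object AND called
// vkQueuePresentKHR, so each object got its own present and the two
// images alternated on screen.
struct Frame {
    std::vector<std::string> draws;  // draw calls recorded this frame
    int presents = 0;                // vkQueuePresentKHR calls this frame
};

void drawObject(Frame& frame, const std::string& name) {
    frame.draws.push_back(name);     // both objects land in one swapchain image
}

void presentFrame(Frame& frame) {
    frame.presents += 1;             // corresponds to vkQueuePresentKHR
}

Frame renderOneFrame() {
    Frame frame;
    drawObject(frame, "textured quad A");
    drawObject(frame, "textured quad B");
    presentFrame(frame);             // moved OUT of the per-object render()
    return frame;
}
```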

Moving image layouts with barrier or renderpasses

The Vulkan docs mention that moving image layouts in render passes (see the VkAttachmentDescription structure) is preferred to moving them using barriers (i.e. vkCmdPipelineBarrier). I can understand that, since the latter introduces sync points which constrain parallel execution.
Now consider a typical example: A transition from VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL. In this case the resource is going to be read in the shader, but in order to do that safely it is necessary to synchronize the writing of the color attachment with the reading in the shader. In other words we need to use a barrier anyway and moving the layout in the render pass doesn't seem to give any advantage at all.
Can someone explain how all this works in detail? In which situations does one gain a real advantage from moving layouts in render passes? Are there (practical) layout changes which do not require further synchronization?
Firstly, you are not given a choice. The API forces you to provide finalLayout, and intermediate VkAttachmentReference::layouts. You can use vkCmdPipelineBarrier inside the render pass conditionally (aka subpass self-dependency), but one of the rules is you are not allowed to change the layout of an attached image:
If a VkImageMemoryBarrier is used, the image and image subresource range specified in the barrier must be a subset of one of the image views used by the framebuffer in the current subpass. Additionally, oldLayout must be equal to newLayout, and both the srcQueueFamilyIndex and dstQueueFamilyIndex must be VK_QUEUE_FAMILY_IGNORED.
So during a render pass, you can only change layout using the render pass mechanism, or you must be outside the render pass. That leaves only the "outside render pass" case to discuss:
A good way to think of a render pass is that it (potentially, based on platform) copies the resource (using loadOp) to specialized memory, and when done copies it back (using storeOp) to general-purpose memory.
That being said, it is reasonable to assume you may get the layout transition to finalLayout for free as part of the storeOp (and similarly the transition from initialLayout to the first VkAttachmentReference::layout as part of the loadOp). So it makes sense to have the layout transition as part of the render pass, if possible/convenient enough.
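As a concrete illustration, here is a sketch of a VkAttachmentDescription that lets the render pass perform the transition: after the pass ends, the attachment is already in SHADER_READ_ONLY_OPTIMAL, so no separate vkCmdPipelineBarrier is needed for the layout change (a subpass/external dependency still provides the execution and memory ordering). The format and load/store ops are example choices, not requirements:

```cpp
#include <vulkan/vulkan.h>

// Sketch: let the render pass itself do the layout transition.
// initialLayout UNDEFINED discards old contents (cheap, per the answer
// above); finalLayout means the pass leaves the image ready for sampling.
VkAttachmentDescription colorAttachment = {
    0,                                        // flags
    VK_FORMAT_R8G8B8A8_UNORM,                 // format (example)
    VK_SAMPLE_COUNT_1_BIT,                    // samples
    VK_ATTACHMENT_LOAD_OP_CLEAR,              // loadOp
    VK_ATTACHMENT_STORE_OP_STORE,             // storeOp
    VK_ATTACHMENT_LOAD_OP_DONT_CARE,          // stencilLoadOp
    VK_ATTACHMENT_STORE_OP_DONT_CARE,         // stencilStoreOp
    VK_IMAGE_LAYOUT_UNDEFINED,                // initialLayout: contents discarded
    VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, // finalLayout: done by the pass
};
```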

Vulkan update descriptor every frame

I want to render my scene to a texture and then use that texture in a shader, so I created a framebuffer using an image view and recorded a command buffer for it. I successfully uploaded and executed the command buffer on the GPU, but the descriptor of the image view is black. I'm creating a descriptor from the image view before the rendering loop. Is it black because I create it before anything is rendered to the framebuffer? If so, will I have to update the descriptor every frame? Will I have to create a new descriptor from the image view every frame? Or is there another way I can do this?
I have read the other thread with this title. Please don't mark this as a duplicate, because that thread is about textures and this is about a texture from an image view.
Thanks.
#IAS0601 I will answer the questions from Your comment through an answer, as it allows much longer text to be written and its formatting is much better. I hope this also answers Your original question, but You don't have to treat it as the answer. As I wrote, I'm not sure what You are asking about.
1) In practically all cases, the GPU accesses images through image views. They specify additional parameters which define how the image is accessed (for example, which part of the image is accessed), but it is still the original image that gets accessed. An image view, as the name suggests, is just a view: a list of access parameters. It doesn't have any memory bound to it and it doesn't contain any data (apart from the parameters specified during image view creation).
So when You create a framebuffer and render into it, You render into original images or, to be more specific, to those parts of original images which were specified in image views. For example, You have a 2D texture with 3 array layers. You create a 2D image view for the middle (second) layer. Then You use this image view during framebuffer creation. And now when You render into this framebuffer, in fact You are rendering into the second layer of the original 2D texture array.
Another thing - when You later access the same image, and when You use the same image view, You still access the original image. If You rendered something into the image, then You will get the updated data (provided You have done everything correctly, like perform appropriate synchronization operations, layout transition if necessary etc.). I hope this is what You mean by updating image view.
2) I'm not sure what You mean by updating descriptor set. In Vulkan when we update a descriptor set, this means that we specify handles of Vulkan resources that should be used through given descriptor set.
If I understand You correctly - You want to render something into an image. You create an image view for that image and provide that image view during framebuffer creation. Then You render something into that framebuffer. Now You want to read data from that image. You have two options. If You want to access only one sample location that is associated with fragment shader's location, You can do this through an input attachment in the next subpass of the same render pass. But this way You can only perform operations which don't require access to multiple texels, for example a color correction.
But if You want to do something more advanced, like blurring or shadow mapping, if You need access to several texels, You must end a render pass and start another one. In this second render pass, You can read data from the original image through a descriptor set. It doesn't matter when this descriptor set was created and updated (when the handle of image view was specified). If You don't change the handles of resources - meaning, if You don't create a new image or a new image view, You can use the same descriptor set and You will access the data rendered in the first render pass.
If You have problems accessing the data, for example (as You wrote) You get only black colors, this suggests You didn't perform everything correctly - render pass load or store ops are incorrect, or initial and final layouts are incorrect. Or synchronization isn't performed correctly. Unfortunately, without access to Your project, we can't be sure what is wrong.
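A toy model of the key point in this answer, using stand-in types rather than real Vulkan ones: a descriptor stores the image view handle, not the pixel data, so a descriptor set written once before the loop keeps reflecting whatever the render pass writes into the image later:

```cpp
#include <cassert>
#include <string>

// Toy model (NOT real Vulkan types): a descriptor holds a handle to an
// image view, which in turn refers to the original image's memory.
// Sampling through the descriptor always reads the image's CURRENT
// contents, so no per-frame descriptor update is needed.
struct ImageData { std::string pixels = "black"; };
struct ImageView { ImageData* image = nullptr; }; // view: parameters + handle
struct Descriptor { ImageView* view = nullptr; }; // written once, before the loop

std::string sampleThrough(const Descriptor& d) {
    return d.view->image->pixels; // reads whatever is in the image NOW
}
```

If the shader still reads black, the descriptor is not the problem; the render into the image never landed correctly (load/store ops, layouts, or synchronization).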

Valueurl Binding On Large Arrays Causes Sluggish User Interface

I have a large data set (some 3500 objects) that is returned from a remote server via HTTP. Currently the data is presented in an NSCollectionView. One aspect of the data is a path back to the server for a small image that represents the data (think thumbnail, for simplicity).
Bindings work fantastically for the data that has already been returned, and binding the image via a valueurl binding is easy to do. However, the user interface is very sluggish when scrolling through the data set, which makes me think that the NSCollectionView is retrieving all the image data instead of just the image data needed to display the currently visible images.
I was under the impression that Cocoa controls were smart enough to only retrieve data for the information that is actually being output to the user interface through lazy loading. This certainly seems to be the case with NSTableView - but I could be misguided on this thought.
Should valueurl binding act lazily and, moreover, should it act lazily in an NSCollectionView?
I could create a caching mechanism (in fact I already have such a thing in place for another application; see my post here if you are interested: Populating NSImage with data from an asynchronous NSURLConnection), but I really don't want to go this route if I don't have to for this specific implementation, as the user could potentially change data sets often and may only want small sub-sets of the data.
Any suggested approaches?
Thanks!
Update
After some more testing it seems that the problem arises because a scroll action through the data set causes each image to be requested from the server. Once all the images have been passed over in the data set the response is very fast.
So question... is there any way of turning off the valueurl fetch while scrolling and turning it back on when scrolling has finished?
My solution is to use a custom caching mechanism like the one I already use for another application. The problem manifests because, as you scroll past images that have not yet been downloaded, the control triggers a fetch for each not-yet-downloaded file.
Once downloaded the images are available locally and therefore scrolling speed normalizes. The solution is to check to see if the image is available locally and present an alternate app-bundle graphic while the image is being downloaded in the background. Once the image has been downloaded, update the model with the image replacing the stub image that came from the bundle.
This leaves the UI in a very responsive state throughout, leaves the user with the ability to interact and allows for a custom background management of the images.
Of course it would have been nice if Cocoa did all this for me, but then what would I be left to do? :-)