How do Swapchain minImageCount and VK_PRESENT_MODE_IMMEDIATE_KHR relate to each other?

If we assign VkSwapchainCreateInfoKHR.presentMode = VK_PRESENT_MODE_IMMEDIATE_KHR; then, as I understand it, we have a single buffer and the image is presented to the surface immediately,
but at the same time SwapChainDetails.surfaceCapabilities.minImageCount = 2 on my GTX 1070.
Does minImageCount mean that the minimum number of buffers should be 2?
How does that even work?

They don't relate to each other at all. How many buffers you have has nothing to do with how they get displayed.
Immediate presentation means that, when you present an image, it is displayed immediately. As stated in the standard:
the presentation engine does not wait for a vertical blanking period to update the current image
If the presentation engine only allowed you one swapchain image, then this would mean that the acquire would have to wait until the image is finished being shown. This would make immediate presentation pointless.
Now, you might think that acquiring such an image ought to happen immediately. But that would mean that you could render to the image while the presentation engine is still reading from it, causing potential corruption. But that's not what immediate presentation is for. Immediate presentation changes nothing about the ownership dynamics between the presentation engine and user code; it only changes the timing of how swapchain images are presented.
The ability to write to an image that is actually being presented is governed by the "shared" present modes (VK_PRESENT_MODE_SHARED_DEMAND_REFRESH_KHR and VK_PRESENT_MODE_SHARED_CONTINUOUS_REFRESH_KHR).
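To make that independence concrete, here is a minimal creation sketch (a hypothetical helper; it assumes the surface capabilities and format were already queried and that IMMEDIATE mode was confirmed supported via vkGetPhysicalDeviceSurfacePresentModesKHR). The image count comes from the surface capabilities, while presentMode only controls display timing:

```cpp
#include <vulkan/vulkan.h>

// Hypothetical helper: `device`, `surface`, `caps` and `format` are assumed to
// have been created/queried already, and IMMEDIATE mode confirmed supported.
VkSwapchainKHR createImmediateSwapchain(VkDevice device,
                                        VkSurfaceKHR surface,
                                        const VkSurfaceCapabilitiesKHR& caps,
                                        VkSurfaceFormatKHR format)
{
    VkSwapchainCreateInfoKHR info{};
    info.sType            = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    info.surface          = surface;
    // Buffer count: at least what the driver demands (2 on the GTX 1070 above);
    // you may request more, up to caps.maxImageCount (0 = no upper limit).
    info.minImageCount    = caps.minImageCount;
    info.imageFormat      = format.format;
    info.imageColorSpace  = format.colorSpace;
    info.imageExtent      = caps.currentExtent;     // assumes the WSI defines the extent
    info.imageArrayLayers = 1;
    info.imageUsage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
    info.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
    info.preTransform     = caps.currentTransform;
    info.compositeAlpha   = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
    // Presentation timing only: no vblank wait. This does not change the buffer count.
    info.presentMode      = VK_PRESENT_MODE_IMMEDIATE_KHR;
    info.clipped          = VK_TRUE;

    VkSwapchainKHR swapchain = VK_NULL_HANDLE;
    vkCreateSwapchainKHR(device, &info, nullptr, &swapchain);
    return swapchain;
}
```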

Related

In Vulkan, using FIFO, what happens when you don't have an image ready to present when the vertical blank arrives?

I'm new to graphics, and I've been looking at Vulkan presentation modes. I was wondering: in a situation where we've only got 2 images in our swapchain (one that the screen is currently reading from and one that's free), what happens if we don't manage to finish drawing to the currently free image before the next vertical blank? Do we do the presentation and get weird tearing, or do we skip the presentation and draw the same image again (I guess giving a "stuttering" effect)? Do we need to define what happens, or is it automatic?
As a side note, is this why people use longer swap chains? i.e. so that if you managed to draw out 2 images to your swap chain while the screen was displaying the last image but now you're running late, at least you can present the newer of the 2 images from before?
I'm not sure how much of this is specific to FIFO or mailbox mode: I guess with mailbox you'll already have used the newest image you've got, so you're stuck again?
[Image: 2-image swapchain (https://i.stack.imgur.com/rxe51.png)]
Tearing never happens in regular FIFO (or mailbox) mode. When you present an image, that image keeps being used for every subsequent vblank until a new image is presented. And since FIFO disallows tearing, in your case the currently displayed image will simply be shown in full for a second refresh cycle (the "stuttering" you describe).
If you are using a 2-deep swapchain with FIFO, you have to produce each image on time in order to avoid stuttering. With longer swapchains and FIFO, you have more leeway to avoid visible stuttering. With longer swapchains and mailbox, you can get a similar effect, but there will be less visible latency when your application is running on-time.
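If it helps to see how that translates into swapchain setup, here is a small sketch (the helper names are hypothetical; only the FIFO fallback is guaranteed by the spec): prefer mailbox when the driver exposes it, otherwise use FIFO, and request one image beyond the driver's minimum so a late frame has somewhere to go.

```cpp
#include <vulkan/vulkan.h>
#include <vector>

VkPresentModeKHR pickPresentMode(VkPhysicalDevice gpu, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfacePresentModesKHR(gpu, surface, &count, nullptr);
    std::vector<VkPresentModeKHR> modes(count);
    vkGetPhysicalDeviceSurfacePresentModesKHR(gpu, surface, &count, modes.data());

    for (VkPresentModeKHR mode : modes)
        if (mode == VK_PRESENT_MODE_MAILBOX_KHR)
            return mode;                    // lower latency, still no tearing
    return VK_PRESENT_MODE_FIFO_KHR;        // always available per the spec
}

uint32_t pickImageCount(const VkSurfaceCapabilitiesKHR& caps)
{
    uint32_t count = caps.minImageCount + 1;          // extra leeway vs. a 2-deep chain
    if (caps.maxImageCount != 0 && count > caps.maxImageCount)
        count = caps.maxImageCount;                   // 0 means "no upper limit"
    return count;
}
```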

3 vs 2 VkSwapchain images?

So a Vulkan swapchain basically has a pool of images, user-defined in number, that it allocates when it is created, and images from the pool cycle like this:
1. An unused image is acquired by the program from the swapchain
2. The image is rendered by the program
3. The image is delivered for presentation to the surface.
4. The image is returned to the pool.
Given that, of what advantage is having 3 images in the swapchain rather than 2?
(I'm asking because I received a BestPractice validation complaint that I was using 2 rather than 3.)
With 2 images, isn't it the case that one will be being presented (3.) while the other is rendered (2.), and then they alternate those roles back and forth?
With 3 images, isn't it the case that one of the three will be idle? Particularly if the loop is locked to the refresh rate anyway?
I'm probably missing something.
So what happens if one image is being presented, one has finished being rendered to by the GPU, and you have more work to render for a later frame? Well, if that work targets a swapchain image, it cannot start until the image currently being presented is handed back to you.
Double buffering will therefore stall the GPU whenever the time to present a frame is greater than the time to render a frame. Yes, triple buffering will eventually do the same if the render time is consistently shorter than the present time, but if your frame time sits right on the edge of the present time, double buffering has a greater chance of wasting GPU time.
Of course, the downside is latency; triple buffering means that the image you display will be seen one frame later than double buffering.
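A sketch of the frame loop this answer is reasoning about may help locate the stall (the names and synchronization are simplified assumptions; per-frame fences and error handling are omitted). With a 2-deep swapchain, the acquire call below is where you end up waiting for the presented image to come back whenever presenting takes longer than rendering:

```cpp
#include <vulkan/vulkan.h>

// Hypothetical per-frame function: acquire an image, submit pre-recorded work
// for it, and queue it for presentation.
VkResult drawFrame(VkDevice device, VkSwapchainKHR swapchain, VkQueue queue,
                   VkSemaphore imageAvailable, VkSemaphore renderFinished,
                   const VkCommandBuffer* cmdPerImage /* one per swapchain image */)
{
    uint32_t imageIndex = 0;
    // With only two images, this call (or a fence wait tied to it) is where the
    // CPU/GPU sit idle until the presentation engine releases the on-screen image.
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                          imageAvailable, VK_NULL_HANDLE, &imageIndex);

    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    VkSubmitInfo submit{};
    submit.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.waitSemaphoreCount   = 1;
    submit.pWaitSemaphores      = &imageAvailable;
    submit.pWaitDstStageMask    = &waitStage;
    submit.commandBufferCount   = 1;
    submit.pCommandBuffers      = &cmdPerImage[imageIndex];
    submit.signalSemaphoreCount = 1;
    submit.pSignalSemaphores    = &renderFinished;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);

    VkPresentInfoKHR present{};
    present.sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
    present.waitSemaphoreCount = 1;
    present.pWaitSemaphores    = &renderFinished;
    present.swapchainCount     = 1;
    present.pSwapchains        = &swapchain;
    present.pImageIndices      = &imageIndex;
    return vkQueuePresentKHR(queue, &present);
}
```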

NSTableView infinite scroll or pagination

In relation to How to determine if a user has scrolled to the end of an NSTableView
Thanks Josh.
Is there a way to use this mechanism to implement an NSTableView that provides some sort of infinite scroll or pagination?
The idea is to tell NSTableView to load up to a certain number of records, say 1k records at once, and then, as the user scrolls closer to the end, pull another 1k records and maybe forget the first 1k records.
This pattern is well established in web applications and Java. Only the visible number of rows is loaded initially and the rest is pulled in asynchronously as the user scrolls up and down the table.
I am interested in some Obj-C code or tips on how to code this.
I know about filtering/limiting the number of records that go into the table view, but let's ignore that for a moment.
Thanks.
Given the details you've provided, I'll generalize a bit but here's how I might solve it:
First, I'd set a MUCH SMALLER batch size than 1000 records. If the result count or "the most anybody is ever going to want to see" is indeterminate (and it sounds like it is in your case), the user probably doesn't even care past the first 100 or so. If your user often requests a large, expensive list and immediately wants to see stuff so far away from the beginning they hurl the scroller downward for two minutes straight before they stop and look around, perhaps a more intuitive sort order is needed instead of asking Google Image for 1000 more animated kitten gifs. ;-)
The controller behind the (definitely view-based for view reuse) table view will need some sort of request queue since I assume you're batching things in because they're expensive to retrieve individually. This will manage the asynchronous requesting/okay-now-it's-loaded machinery (I know that's vague but more detail is needed to get more specific). You'll make sure any "currently alive" views will somehow get this "it's ready" notification and will go from some "busy" UI state to displaying the ready item (since we NEVER want to keep the table waiting for a ready-to-display view for the object at a given row, so the view should at least show some "still waiting for details" indication so quick scrolls over lots of rows won't stall anything).
Using a view-based NSTableView and associated data source methods will let the table view handle only keeping enough copies of your custom NSTableCellView around to reuse during scrolling. Since you have to provide a configured view when asked, the view's default state can either be "draw nothing if not ready" or some visually generic placeholder until the object is realized and ready (then you can just reload that row instead of the whole table). This way the table keeps scrolling and drawing rapidly because it doesn't care about what your controller is doing to fulfill the promise of updating the visible rows (that custom cell view of yours will observe its represented object's updates).
You probably want the scroller to reflect only the total number of rows batched in so far if the upper bound is astronomical, since reflecting that full size would make the scroll grip both tiny and very sensitive. Instead, just grow the scroller (via the table view's row count) by what the user has "requested" so far, all the way back to the beginning of the list. Any time more rows are batched in, add the batch size to your controller's total batched row count. This still lets the scroller zoom by rows the user couldn't distinguish at that speed anyway. You communicate the row count change to the table view by sending it -noteNumberOfRowsChanged and replying to its resulting data source request (-numberOfRowsInTableView:) with the updated total row count you stashed in a property of your controller. It'll ask for views for the newly visible rows as needed (which will be in the same neutral, unfulfilled visual state until the object is realized, as before), update the scroll view, lather, rinse, repeat.
You could use NSCache to keep memory usage low. Set its countLimit to several times your batch size and let it drop previous batches if it decides it needs to dump the first n model objects, then batch them back in if the table view suddenly asks for a view for a row no longer in the batch window's range.
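The bookkeeping described above (batch window, placeholder state, eviction, re-fetch on a "page fault") is framework-agnostic, so here is a rough sketch of that shape in C++ with all names hypothetical, just to make the moving parts explicit. In the Cocoa version, the commit step is followed by -noteNumberOfRowsChanged and reloading the affected rows, and the eviction is played by NSCache's countLimit.

```cpp
#include <algorithm>
#include <cstdlib>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical record type and async fetch callback standing in for your model
// objects and request queue.
struct Record { std::string title; };
using FetchBatchFn = std::function<void(std::size_t firstRow, std::size_t count)>;

class BatchedDataSource {
public:
    BatchedDataSource(std::size_t batchSize, std::size_t maxBatchesKept, FetchBatchFn fetch)
        : batchSize_(batchSize), maxBatchesKept_(maxBatchesKept), fetch_(std::move(fetch)) {}

    // Equivalent of -numberOfRowsInTableView:; grows as batches are committed.
    std::size_t rowCount() const { return rowCount_; }

    // Called when a view for `row` is needed. Returns nullptr for "still loading",
    // the cue to show a placeholder cell (a real queue would de-duplicate requests).
    const Record* recordForRow(std::size_t row) {
        std::size_t batch = row / batchSize_;
        auto it = batches_.find(batch);
        if (it == batches_.end()) {
            fetch_(batch * batchSize_, batchSize_);   // asynchronous in real code
            return nullptr;
        }
        std::size_t offset = row % batchSize_;
        return offset < it->second.size() ? &it->second[offset] : nullptr;
    }

    // Called by the fetch machinery when a batch arrives.
    void commitBatch(std::size_t firstRow, std::vector<Record> records) {
        std::size_t batch = firstRow / batchSize_;
        batches_[batch] = std::move(records);
        rowCount_ = std::max(rowCount_, firstRow + batches_[batch].size());
        evictIfNeeded(batch);
    }

private:
    void evictIfNeeded(std::size_t keepNear) {
        // Drop the batch farthest from the one just touched; scrolled-back rows
        // simply fault the batch back in via recordForRow.
        while (batches_.size() > maxBatchesKept_) {
            auto farthest = batches_.begin();
            for (auto it = batches_.begin(); it != batches_.end(); ++it)
                if (std::llabs((long long)it->first - (long long)keepNear) >
                    std::llabs((long long)farthest->first - (long long)keepNear))
                    farthest = it;
            batches_.erase(farthest);
        }
    }

    std::size_t batchSize_, maxBatchesKept_, rowCount_ = 0;
    FetchBatchFn fetch_;
    std::map<std::size_t, std::vector<Record>> batches_;
};
```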
Without knowing more about your requirements and architecture, it's hard to get more specific. If I haven't hit the mark, consider editing your question to include more detail. If I'm totally off base from what you're asking for, please clarify. :-)
I know more about iOS, but I think the answer is similar. Table views are intrinsically finite, but you can roll your own custom scroll view to do this. The trick is to set a large content size and implement layout in your subclass (which will get called on every scroll change). In that method, check to see if the content offset is near zero or near the content size. If it is, then translate the content offset back to the center of the content size and translate all the subviews (keep them on one parent content view) by the same distance so the user doesn't see any motion. Make a datasource protocol and keep asking your datasource for "cells" that tile the visible part of the view.
It should be up to the datasource to recognize what we would have called a page-fault in the olden days, to decide that some of the model in memory should be discarded in favor of the model where the user is scrolling.
I poked around for an NS equivalent but didn't see one on a cursory search. Here's a decent-looking reference on the idea done in iOS.

Is there any way to reset the device in (SlimDX, DX9) without disposing all device-related objects?

I am rendering using SlimDX to a control in a form. Since the size of that control might change very often, and there are lots of complex meshes, the traditional free-reset-construct method may be too slow for my taste. Any way to speed it up?
Create an additional swap chain linked to your current window using the IDirect3DDevice9::CreateAdditionalSwapChain method.
Then get the back buffer of the new swap chain and use the IDirect3DDevice9::SetRenderTarget method
to set that back buffer as the render target.
When you have finished your drawing, call the new swap chain's Present method instead of IDirect3DDevice9::Present.
When your window is resized, just release the additional swap chain and re-create it with the new back buffer size, then set the render target again. Now you don't have to do the device reset, which is very slow.
If you have any more questions, email me: xux660#hotmail.com
I am Chinese, so my English is not so good; forgive me.
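A minimal sketch of the approach against the native Direct3D 9 interfaces the answer names (SlimDX wraps these same interfaces; the variable names here are assumptions and error handling is omitted):

```cpp
#include <d3d9.h>
#include <wrl/client.h>  // Microsoft::WRL::ComPtr

using Microsoft::WRL::ComPtr;

// Create (or re-create after a resize) an additional swap chain sized to the
// control's client area and route rendering into its back buffer.
ComPtr<IDirect3DSwapChain9> recreateSwapChain(IDirect3DDevice9* device,
                                              HWND hWnd, UINT width, UINT height)
{
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;   // match the current display mode
    pp.BackBufferWidth  = width;            // the new client-area size
    pp.BackBufferHeight = height;
    pp.hDeviceWindow    = hWnd;

    ComPtr<IDirect3DSwapChain9> swapChain;
    device->CreateAdditionalSwapChain(&pp, &swapChain);

    // Set the new swap chain's back buffer as the render target.
    ComPtr<IDirect3DSurface9> backBuffer;
    swapChain->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
    device->SetRenderTarget(0, backBuffer.Get());
    return swapChain;
}

// On resize: release the old chain (the ComPtr does that when reassigned) and
// call recreateSwapChain again with the new size. Each frame, present with
// swapChain->Present(nullptr, nullptr, nullptr, nullptr, 0) instead of
// device->Present(...), so no device reset is needed.
```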

In a double-buffered OpenGL context, is it possible for the front and back buffer to be the same?

I have a situation in which I ask for and get a double-buffered OpenGL context, but when I draw in it, both the front and back buffer are affected. The draw buffer is set to the back buffer (and only the back buffer). If I look in OpenGL Profiler, I do see all of that: the value for GL_DRAW_BUFFER (GL_BACK) and the actual back and front buffers being drawn to.
Since I'm working with an NSWindow that has a backing store, we do not see any of this happening on the screen. The problem is that I'm getting screenshots of this window with CGWindowListCreateImage. This function seems to be fetching the image from the front buffer, and not from the screen buffer (wherever that is...). So the image returned is incomplete: it only contains the elements that have been drawn at the moment it is grabbed, even if no flush has been called.
There is a utility in the Mac developer package called Pixie. It basically grabs the screen at the mouse position and displays it zoomed in so you can analyze it. This program has the same behavior as calling CGWindowListCreateImage: you can see incomplete images. So I guess the problem is not with the way I use CGWindowListCreateImage, but rather with my window or my display...
Also, it does not seem to happen all the time. Not every window shows this behavior, and even for a given window, it seems to come and go, especially if I move the window to a different screen (in a dual-display setup).
Anyone faced this before?