In a double-buffered OpenGL context, is it possible for the front and back buffers to be the same? - objective-c

I have a situation in which I request and get a double-buffered OpenGL context, but when I draw into it, both the front and back buffers are affected. The draw buffer is set to the back buffer (and only the back buffer). If I look in OpenGL Profiler, I do see all of that: the value for GL_DRAW_BUFFER is GL_BACK, yet both the back and front buffers are actually being drawn to.
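As a sanity check, the draw-buffer state the profiler shows can also be queried from code (plain OpenGL C API, shown here as a small C++ helper):

```cpp
#include <OpenGL/gl.h>

// True when the context is set to draw only to the back buffer,
// i.e. GL_DRAW_BUFFER == GL_BACK, matching what OpenGL Profiler reports.
bool drawsToBackBufferOnly() {
    GLint drawBuffer = 0;
    glGetIntegerv(GL_DRAW_BUFFER, &drawBuffer);
    return drawBuffer == GL_BACK;
}
```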
Since I'm working with an NSWindow that has a backing store, none of this is visible on screen. The problem is that I'm taking screenshots of this window with CGWindowListCreateImage. This function seems to fetch the image from the front buffer rather than from the screen buffer (wherever that is...), so the returned image is incomplete: it contains only the elements that were drawn at the moment it was grabbed, even if no flush has been called.
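For context, this is roughly the capture call involved; the window ID would come from CGWindowListCopyWindowInfo or from the app itself (e.g. [NSWindow windowNumber]), and the flags shown are just common choices:

```cpp
#include <ApplicationServices/ApplicationServices.h>

// Grab a single window's current contents as seen by the window server.
// Note: this reflects whatever the window server has for the window,
// not necessarily what the GL back buffer contains at that moment.
CGImageRef captureWindow(CGWindowID windowID) {
    return CGWindowListCreateImage(CGRectNull, // tightest window bounds
                                   kCGWindowListOptionIncludingWindow,
                                   windowID,
                                   kCGWindowImageBoundsIgnoreFraming);
}
```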
There is a utility in the Mac developer package called Pixie. It grabs the screen at the mouse position and displays it zoomed in so you can analyze it. This program shows the same behavior as calling CGWindowListCreateImage: you can see incomplete images. So I guess the problem is not with the way I use CGWindowListCreateImage, but rather with my window or my display...
Also, it does not seem to happen all the time. Not every window shows this behavior, and even for a given window it comes and goes, especially if I move the window to a different screen (in a dual-display setup).
Anyone faced this before?

Related

In Vulkan, using FIFO, what happens when you don't have an image ready to present when the vertical blank arrives?

I'm new to graphics, and I've been looking at Vulkan presentation modes. I was wondering: in a situation where we've only got 2 images in our swapchain (one that the screen's currently reading from and one that's free), what happens if we don't manage to finish drawing to the currently free image before the next vertical blank? Do we do the presentation and get weird tearing, or do we skip the presentation and draw the same image again (I guess giving a "stuttering" effect)? Do we need to define what happens, or is it automatic?
As a side note, is this why people use longer swap chains? i.e. so that if you managed to draw out 2 images to your swap chain while the screen was displaying the last image but now you're running late, at least you can present the newer of the 2 images from before?
I'm not sure how much of this is specific to FIFO or mailbox mode: I guess with mailbox you'll already have used the newest image you've got, so you're stuck again?
![2-image swapchain](https://i.stack.imgur.com/rxe51.png)
Tearing never happens in regular FIFO (or mailbox) mode. When you present an image, this image will be used for all subsequent vblanks until a new image is presented. And since FIFO disallows tearing, in your case, the image will be fully displayed twice.
If you are using a 2-deep swapchain with FIFO, you have to produce each image on time in order to avoid stuttering. With longer swapchains and FIFO, you have more leeway to avoid visible stuttering. With longer swapchains and mailbox, you can get a similar effect, but with less visible latency when your application is running on time.
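To make the swapchain-length advice concrete, here is a minimal sketch (C++, against the standard Vulkan C API) of clamping a requested image count to what the surface supports; everything else about swapchain creation is assumed to happen elsewhere:

```cpp
#include <vulkan/vulkan.h>
#include <algorithm>

// Pick a swapchain length: ask for `requested` images (e.g. 3 for more
// leeway under FIFO), but respect the surface's reported limits.
uint32_t chooseImageCount(VkPhysicalDevice physicalDevice,
                          VkSurfaceKHR surface, uint32_t requested) {
    VkSurfaceCapabilitiesKHR caps{};
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &caps);
    uint32_t count = std::max(requested, caps.minImageCount);
    if (caps.maxImageCount > 0)          // 0 means "no upper limit"
        count = std::min(count, caps.maxImageCount);
    return count;
}

// Later, when filling VkSwapchainCreateInfoKHR:
//   createInfo.minImageCount = chooseImageCount(physicalDevice, surface, 3);
//   createInfo.presentMode   = VK_PRESENT_MODE_FIFO_KHR;  // never tears
```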

Metal -- skipping commandBuffer.present(drawable) to not display a frame?

In my Metal app for macOS, I have a situation where I only want to display the render results every so often. I want to complete the rendering pass every frame and save the drawable texture image to a file, but I only want to display the render every sixteenth frame or so. I tried just skipping commandBuffer.present(drawable) when I don't want to display, but it is not working: after skipping one call to commandBuffer.present(), it simply stops displaying any new frames. It does continue to run, however.
Why would that happen? Once I commit a command buffer, is it required for it to be presented?
If I can't get this to work, I will try rendering into an offscreen buffer for the frames I don't want displayed. But that would be extra work and require more memory for the offscreen render buffer, so I'd rather just use my regular onscreen render buffer if possible.
Thanks!
It's not required that a command buffer present a drawable. I think the issue is that, once you've obtained the drawable, it's not returned to the pool maintained by the CAMetalLayer (or, indirectly, MTKView) that provided it until it is presented.
Do not render to a drawable's texture if you don't plan on presenting it. Rendering to an off-screen texture is the right approach. In fact, if you always render first to an off-screen texture and then, only for the frames you want to display, draw that texture to a drawable's texture in a lightweight render pass, you can leave the framebufferOnly property of the CAMetalLayer at its default true value (copying with a blit instead would require setting framebufferOnly to false). In that case, there's a decent chance that you won't increase the memory required, because the drawable's texture is really just part of the screen's backing store.
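Here is a rough sketch of that pattern using metal-cpp (Metal's C++ wrapper); encodeScenePass is a placeholder for your own render-pass encoding, and because this version blits into the drawable's texture it assumes framebufferOnly has been set to false and that the offscreen texture matches the drawable's size and pixel format:

```cpp
#include <Metal/Metal.hpp>
#include <QuartzCore/QuartzCore.hpp>

// Placeholder: encode your scene's render pass into `target`.
void encodeScenePass(MTL::CommandBuffer* cmd, MTL::Texture* target);

// Render every frame into `offscreen`; acquire and present a drawable
// only on the frames that should actually reach the screen.
void frame(MTL::CommandQueue* queue, CA::MetalLayer* layer,
           MTL::Texture* offscreen, bool shouldDisplay) {
    MTL::CommandBuffer* cmd = queue->commandBuffer();
    encodeScenePass(cmd, offscreen);

    if (shouldDisplay) {
        // Taking a drawable only when we intend to present guarantees
        // each one is returned to the layer's pool.
        CA::MetalDrawable* drawable = layer->nextDrawable();
        MTL::BlitCommandEncoder* blit = cmd->blitCommandEncoder();
        blit->copyFromTexture(offscreen, drawable->texture());
        blit->endEncoding();
        cmd->presentDrawable(drawable);
    }
    cmd->commit();
}
```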

SDL window shows the final frame from the last run of the program in the background of a new window when I start up a new instance

I'm making a simple game and just messing around with SDL. I currently have two images, and I am practicing making them the background. I make one the background by calling RenderCopy, then DestroyTexture to clear it from memory, and then I present it. I then changed the file path from one image to the other to change the background. When I ran the program, the new image was layered on top of the old one. I can fix the problem if I manually clear my computer's memory, after which it renders properly. For some reason, SDL is not clearing the previously rendered image. There is no mention of the old image anywhere else in the code, and all the Destroy functions are called. What is going on?
Edit: I did a little bit of snooping, and it's not that it loads the old image: whatever final frame was last rendered by any program using SDL becomes the background. It is literally like a TV burning an image into the screen; it's always there.
The problem was fixed by simply clearing the frame between frames. It didn't have any performance impact.
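For illustration, a minimal version of the fixed frame loop (C++ with SDL2; the clear color is an arbitrary choice):

```cpp
#include <SDL.h>

// Draw one frame: clear first so nothing from a previous frame (or, with
// some drivers, a previous process's leftover buffer contents) survives.
void drawFrame(SDL_Renderer* renderer, SDL_Texture* background) {
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255); // opaque black
    SDL_RenderClear(renderer);                      // the missing call
    SDL_RenderCopy(renderer, background, nullptr, nullptr);
    SDL_RenderPresent(renderer);
}
```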

Is there any way to reset the device (SlimDX, DX9) without disposing all device-related objects?

I am rendering with SlimDX to a control in a form. Since the size of that control might change very often, and there are lots of complex meshes, the traditional free-reset-construct method may be too slow for my taste. Is there any way to speed it up?
1. Create an additional swap chain linked to your current window with the IDirect3DDevice9::CreateAdditionalSwapChain method.
2. Get the back buffer of the new swap chain and call IDirect3DDevice9::SetRenderTarget to make it the render target.
3. When you have finished drawing, call the new swap chain's Present method instead of IDirect3DDevice9::Present.
4. When your window is resized, just release the additional swap chain and re-create it with the new back-buffer size, then set the render target again.

Now you don't have to do the device reset, which is very slow.
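A sketch of that sequence against the native D3D9 interfaces (SlimDX wraps these same calls; the device and a filled D3DPRESENT_PARAMETERS are assumed to exist, and error handling is omitted):

```cpp
#include <d3d9.h>

void renderViaExtraSwapChain(IDirect3DDevice9* device,
                             D3DPRESENT_PARAMETERS& d3dpp) {
    IDirect3DSwapChain9* swapChain = nullptr;
    IDirect3DSurface9* backBuffer = nullptr;

    // 1. Extra swap chain tied to the control's HWND (taken from d3dpp).
    device->CreateAdditionalSwapChain(&d3dpp, &swapChain);

    // 2. Its back buffer becomes the render target.
    swapChain->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
    device->SetRenderTarget(0, backBuffer);

    // ... issue draw calls here ...

    // 3. Present through the extra swap chain, not through the device.
    swapChain->Present(nullptr, nullptr, nullptr, nullptr, 0);

    // 4. On resize, Release() both objects and re-create the swap chain
    //    with the new back-buffer size; no slow device Reset() needed.
    backBuffer->Release();
    swapChain->Release();
}
```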
If you have any more questions, email me: xux660#hotmail.com
I am Chinese, so my English is not so good; forgive me.

I want to animate the movement of a foreign OS X app's window

Background: I recently got two monitors and wanted a way to move the focused window to the other screen and vice versa. I've achieved this by using the Accessibility API. (Specifically, I get an AXUIElementRef that holds the AXUIElement associated with the focused window, then I set the NSAccessibilityPositionAttribute value to move the window.)
I have this working almost exactly the way I want it to, except I want to animate the movement of windows. I thought that if I could get the NSWindow somehow, I could get its layer and use CoreAnimation to animate the window movement.
Unfortunately, I found out that this isn't possible. (Correct me if I'm wrong, though; if there's a way to do it like this, that would be great!) So I'm asking you all for help: how should I go about animating the movement of the focused window, given that I have access to its AXUIElementRef?
-R
EDIT: I was able to get a crude animation going by creating a while loop that moves the window by a small amount each iteration. However, the results are pretty sub-par. As you can guess, it takes a lot of unnecessary processing power and is still very choppy. There must be a better way.
The best way I can imagine would be to perform some hacky property comparison between the AXUIElement info values for the window and the info returned from the CGWindow API. Once you're able to ascertain which windows in the CGWindow API match your AXUIElementRefs, you could grab bitmaps of the current window contents, overlay the screen with your own custom animated drawing of the faux windows, then, as you drop the overlay, set the real AXUIElementRefs to the desired end-of-animation positions.
Hacky, tho.
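For reference, here is a minimal sketch of the step-loop approach from the question with simple easing added (the C Accessibility API from ApplicationServices, shown as C++); the step count and sleep interval are illustrative, and the AXUIElementRef is assumed to come from the Accessibility lookup described above:

```cpp
#include <ApplicationServices/ApplicationServices.h>
#include <unistd.h>

// Step-animate a window (via its AXUIElementRef) from one position to
// another, easing with smoothstep. Requires Accessibility trust.
void moveWindowAnimated(AXUIElementRef window,
                        CGPoint from, CGPoint to, int steps) {
    for (int i = 1; i <= steps; ++i) {
        float t = (float)i / (float)steps;
        t = t * t * (3.0f - 2.0f * t);          // smoothstep easing
        CGPoint p = CGPointMake(from.x + (to.x - from.x) * t,
                                from.y + (to.y - from.y) * t);
        AXValueRef value = AXValueCreate(kAXValueCGPointType, &p);
        AXUIElementSetAttributeValue(window, kAXPositionAttribute, value);
        CFRelease(value);
        usleep(16000);                           // ~60 updates per second
    }
}
```

This is still the same polling-style approach the question already uses, just with easing; the overlay idea above is what you would need for genuinely smooth compositing.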