Vulkan Rendering - Portion of Surface

How can I render to a Vulkan framebuffer (VkImage) in only a portion of the surface?
When I draw to the framebuffer, Vulkan clears the entire surface with the clear color.
The surface is 800x600, but I would like Vulkan to render a 300x200 region at an offset of 100x100, for example.

When you begin a render pass, you provide the VkRenderPassBeginInfo object. In this object is the renderArea rectangle, which defines the area of each of the attachment images that the render pass will affect. Any pixels of attachments outside of this area are unaffected by render pass operations, including the clear load op and vkCmdClearAttachments.
Note that the renderArea is subject to the limitations of the render area granularity, as queried from vkGetRenderAreaGranularity.
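As an illustration, here is a minimal sketch of beginning a render pass restricted to a 300x200 region at offset (100, 100); the renderPass, framebuffer and commandBuffer handles are assumed to come from your own setup:

```cpp
// Sketch: restrict the render pass (and its clear load op) to a sub-rectangle.
// renderPass, framebuffer and commandBuffer are assumed to come from your setup.
VkClearValue clearValue{};
clearValue.color = { { 0.0f, 0.0f, 0.0f, 1.0f } };

VkRenderPassBeginInfo beginInfo{};
beginInfo.sType             = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
beginInfo.renderPass        = renderPass;
beginInfo.framebuffer       = framebuffer;
beginInfo.renderArea.offset = { 100, 100 };   // x, y
beginInfo.renderArea.extent = { 300, 200 };   // width, height
beginInfo.clearValueCount   = 1;
beginInfo.pClearValues      = &clearValue;

vkCmdBeginRenderPass(commandBuffer, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);
// ... draw calls ...
vkCmdEndRenderPass(commandBuffer);
```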

You can restrict rendering to a subregion of the window by setting the viewport and scissor rectangle in the pipeline's VkGraphicsPipelineCreateInfo (via its viewport state) to the subregion you wish to render. You can also configure them dynamically at draw time using vkCmdSetViewport() and vkCmdSetScissor().
For vkCmdClearAttachments() you set the clear area via the pRects argument (it ignores the viewport).
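As a rough sketch, assuming the pipeline was created with VK_DYNAMIC_STATE_VIEWPORT and VK_DYNAMIC_STATE_SCISSOR enabled, the same 300x200 region can be set at draw time, and a clear of just that area issued with vkCmdClearAttachments():

```cpp
// Sketch: dynamic viewport/scissor plus an explicit clear of the sub-region.
// Assumes the pipeline enables VK_DYNAMIC_STATE_VIEWPORT and VK_DYNAMIC_STATE_SCISSOR.
VkViewport viewport{};
viewport.x        = 100.0f;
viewport.y        = 100.0f;
viewport.width    = 300.0f;
viewport.height   = 200.0f;
viewport.minDepth = 0.0f;
viewport.maxDepth = 1.0f;
vkCmdSetViewport(commandBuffer, 0, 1, &viewport);

VkRect2D scissor{ { 100, 100 }, { 300, 200 } };
vkCmdSetScissor(commandBuffer, 0, 1, &scissor);

// vkCmdClearAttachments clears exactly the rectangles passed in pRects,
// independent of the current viewport.
VkClearAttachment clearAttachment{};
clearAttachment.aspectMask       = VK_IMAGE_ASPECT_COLOR_BIT;
clearAttachment.colorAttachment  = 0;
clearAttachment.clearValue.color = { { 0.0f, 0.0f, 0.0f, 1.0f } };

VkClearRect clearRect{};
clearRect.rect           = scissor;
clearRect.baseArrayLayer = 0;
clearRect.layerCount     = 1;
vkCmdClearAttachments(commandBuffer, 1, &clearAttachment, 1, &clearRect);
```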

Related

Vulkan render to texture

In an existing renderer that draws geometry to the swapchain, I need to render some parts of this geometry to a texture, while other parts must remain on screen. All the geometry is recorded into one command buffer. I won't need to render to this texture every time.
I created destination image, image view and framebuffer, but I don't know what to do now.
I don't think I need a specific pipeline, nor a new specific descriptor set, as everything is already rendered correctly on screen.
Do I need another render pass, or a subpass, or anything else?
Exactly, you need a separate render pass that fills your destination images. Since a render pass stores references to the images it uses (as attachments), a separate one is required.
Within that render pass you can then use subpass dependencies to transition the destination images to the proper layout. Your first transition should go from VK_ACCESS_SHADER_READ_BIT to VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT for writing to the destination image, and once that's done you transition back from VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT to VK_ACCESS_SHADER_READ_BIT so that you can, e.g., sample your destination images in the visible (swapchain) pass. An alternative would be blitting them to the swapchain if the device supports that.
If you need a reference, you can check out my offscreen rendering sample.
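To make the description above concrete, here is a rough sketch (not taken from that sample) of the two subpass dependencies for an offscreen color attachment that is later sampled in a shader; the attachment layout transitions themselves are assumed to be declared in the VkAttachmentDescription of the same render pass:

```cpp
// Sketch: subpass dependencies for an offscreen color attachment that is
// sampled after the render pass. Goes into VkRenderPassCreateInfo::pDependencies.
VkSubpassDependency dependencies[2] = {};

// External -> subpass 0: wait for previous shader reads before writing the attachment.
dependencies[0].srcSubpass      = VK_SUBPASS_EXTERNAL;
dependencies[0].dstSubpass      = 0;
dependencies[0].srcStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[0].dstStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[0].srcAccessMask   = VK_ACCESS_SHADER_READ_BIT;
dependencies[0].dstAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependencies[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;

// Subpass 0 -> external: make the attachment writes visible to later sampling.
dependencies[1].srcSubpass      = 0;
dependencies[1].dstSubpass      = VK_SUBPASS_EXTERNAL;
dependencies[1].srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[1].dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[1].srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependencies[1].dstAccessMask   = VK_ACCESS_SHADER_READ_BIT;
dependencies[1].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
```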

How can I overlay my UI render target onto the back buffer using DirectX 11?

I have two render targets, the back buffer and a UI render target where all 2d UI will be drawn.
I have used the graphics debugger to confirm that both render targets are being written to with the correct data, but I'm having trouble combining the two right at the end.
Question:
My world objects are drawn directly to the back buffer, so there is no problem displaying these, but how do I now overlay the UI render target OVER the back buffer?
Desired effect:
Back buffer render target
UI render target
There are several ways to do this. The easiest is to render your UI elements to a texture that has both a RenderTargetView and a ShaderResourceView, then render the whole texture to the back buffer as a single quad in orthographic projection space. This effectively draws a 2D square containing your UI in screen space on the back buffer. It also has the benefit of allowing transparency.
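As a sketch of that approach, assuming device, context, backBufferRTV and uiTextureSRV already exist in your setup, the compositing pass typically uses a straight alpha blend state like this:

```cpp
// Sketch: standard "straight alpha" blend state for compositing the UI quad
// over the back buffer. device, context, backBufferRTV and uiTextureSRV are
// assumed to come from your existing setup.
D3D11_BLEND_DESC blendDesc = {};
blendDesc.RenderTarget[0].BlendEnable           = TRUE;
blendDesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState* uiBlendState = nullptr;
device->CreateBlendState(&blendDesc, &uiBlendState);

// When compositing: bind the back buffer, enable blending, bind the UI
// texture's SRV, and draw a full-screen (or orthographic) quad.
context->OMSetRenderTargets(1, &backBufferRTV, nullptr);
float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
context->OMSetBlendState(uiBlendState, blendFactor, 0xFFFFFFFF);
context->PSSetShaderResources(0, 1, &uiTextureSRV);
// ... set the quad's vertex buffer and shaders, then issue the draw call ...
```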
You could also use the OutputMerger stage to blend the UI render target with the back buffer during rendering of the world geometry. You'd need to be careful how you set up your blend operations, as it could result in items being drawn over the UI, or blending inappropriately.
If your UI is not transparent, you could do the UI rendering first and mark the area under the UI in the stencil buffer, then do your world rendering while the stencil test is enabled. This would cause the GPU to ignore any pixels underneath the UI, and not send them to the pixel shader.
The above could also be modified to write the minimum depth value to the pixels within the UI render target, ensuring all geometry underneath it would fail the depth test. This modification would free up the stencil buffer for mirrors/shadows/etc.
The above all work for flat UIs drawn over the existing 3D world. To actually draw more complex UIs that appear to be a part of the world, you'll need to actually render the elements to 3D objects in the world space, or do complex projection operations to make it seem like they are.

Blender border render internals

I would like to know how Blender's border render works internally. How can Blender compute lighting if it has no information about the lights in the tiles it won't render? I have not found any reference (source code excluded) on how this feature of Blender works. Can somebody explain it (or give me some references)?
The render border setting only alters what part of the image is rendered, it does not alter what data is sent to the render engine to generate the image.
You can test this by placing an object with a reflective surface in front of the camera and another object behind the camera: the object behind the camera will show in the reflection. The border setting doesn't change the reflection in the object, it only changes what part of the image is rendered.
Rendering an image starts at the pixel that will be visible in the final image and sends a "ray" into the scene to determine what colour the specific pixel will be. Each ray will bounce around in the scene from object to object to light source based on render settings to calculate the final result. While the render border will reduce the pixels used as the starting point for each ray, it does not reduce the objects or lights in the scene that each ray may come into contact with. Each ray going through the scene will see every visible object and light in the scene that can influence the final result for each pixel.
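Conceptually (this is not Blender's actual code; the types and helpers below are hypothetical stand-ins), border rendering only restricts which pixels are used as ray starting points:

```cpp
#include <vector>

// Conceptual sketch only: the border restricts which pixels are traced,
// but each ray still traverses the complete scene, so objects and lights
// outside the border can still influence the result.
struct Color { float r, g, b; };
struct Ray   { float origin[3], dir[3]; };
struct Scene { /* all objects and lights, regardless of the border */ };

Color trace_ray(const Ray&, const Scene&);   // hypothetical: bounces through the whole scene
Ray   camera_ray(int x, int y);              // hypothetical: primary ray for pixel (x, y)

void render_border(const Scene& full_scene,
                   int xmin, int xmax, int ymin, int ymax,
                   int width, std::vector<Color>& image)
{
    for (int y = ymin; y < ymax; ++y)
        for (int x = xmin; x < xmax; ++x)
            // Only border pixels are used as ray starting points,
            // but trace_ray() still sees every object and light.
            image[y * width + x] = trace_ray(camera_ray(x, y), full_scene);
}
```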
This conference video explains ray types and might give you a better grasp of how a ray goes through a scene to get the final image.

glReadPixels read "out of frames" area

I draw a 3200x2000 textured quad in OpenGL. The OpenGLView frame size is set to 940x560. The quad is drawn as it should be. But when I try to save it as an image (using glReadPixels) with a read area from (0,0) to (3200,2000), I get 3200x2000 of pixel data, but when I save it to a file I only see a small part of the image (940x560 in the bottom-left corner) and the rest of the area is black. So how can I read the offscreen area? I tried using a framebuffer object, but it's very complicated and I get errors while creating it, etc. Is there any other solution?
Situation visualization:
Original image looks like this (3200x2000):
OpenGLView looks like this (940x560):
Saved image looks like that (3200x2000):
So you're rendering to the window. Well, the window has a particular size, and nothing exists outside of that size.
This is part of something OpenGL calls the "pixel ownership test". If a pixel is not owned by the context, then its contents are undefined. Pixels outside of the window are not owned by the context, and therefore their contents are undefined.
This is one reason why framebuffer objects exist: so that you can render outside the size of your window. Though be advised: there is a maximum viewport size limit.
Alternatively, you can render in screen-sized pieces, where you download each piece after each rendering, then move the camera to render the next piece.
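For illustration, here is a rough sketch of that FBO approach in desktop OpenGL, assuming core framebuffer object support and that 3200x2000 fits within the implementation's GL_MAX_RENDERBUFFER_SIZE and GL_MAX_VIEWPORT_DIMS limits:

```cpp
#include <vector>

// Sketch: render offscreen at 3200x2000 and read the pixels back.
// Assumes an OpenGL context is current and the size fits the limits above.
const int W = 3200, H = 2000;

GLuint fbo = 0, colorRb = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, W, H);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error instead of reading back undefined data
}

glViewport(0, 0, W, H);   // the viewport must match the offscreen size
// ... draw the textured quad here ...

std::vector<unsigned char> pixels(W * H * 4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the window framebuffer
```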
You haven't given much detail in terms of code, or the platform.
But I think you should be using offscreen rendering, rather than just reading from the rendered window. If you are unfamiliar with using frame buffer objects, here is a minimal example:
https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/minimalfbo
Edit #1:
Since OP mentioned that the platform is OS X, I am posting my code below, which shows a minimal FBO example in iOS:
https://github.com/glman74/simpleFBO

Using WebGL or OpenGL ES 2, how do I render the contents of an RBO onscreen?

Using WebGL (which is constrained to the OpenGL ES 2 API), I am successfully rendering to texture and then displaying that texture onscreen. Because it is a texture, it is not being antialiased. If I were rendering to an RBO and then displaying that onscreen, I would be able to take advantage of AA.
My render target setup looks like this:
Create FBO
Bind FBO
Create texture (to be rendered to)
Create and bind depth buffer as RBO
Attach texture and RBO to FBO
And my rendering update loop looks like this:
Render the scene to the FBO created in step #2 above
Render a screen aligned quad with the texture created in step #3 above
With desktop OpenGL, I would call glBlitFramebuffer() instead of drawing the screen aligned quad.
How do I render my scene with antialiasing? Do I need to replace the texture with an RBO? If so, what calls do I use to bind the RBO to draw a screen-aligned quad?
You cannot blit the contents of an RBO to screen in WebGL unless you perform a readback and re-upload to texture to blit, which is rather slow.
WebGL has no support for MSAA on FBOs in any form (neither as RBO nor as RTT).
You can implement your own antialiasing in a variety of ways.
Render at 2x the size in each dimension and scale down (Google Maps with WebGL does this; see the sketch after this list)
Render at 1:1 size, run a Sobel or Laplace edge detection on color and depth, and run a bilateral Gaussian blur using edge strength as the weight (I've used this technique in some of my demos; it works well: http://codeflow.org/entries/2011/apr/11/advanced-webgl-part-1/ )
Use the morphological antialiasing recipe from GPU Pro 2 (I've yet to try that)
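As a sketch of the first option, written here with OpenGL ES 2-style C++ calls (the WebGL equivalents are one-to-one), the scene is rendered into a texture at twice the window size and then drawn as a screen-aligned quad with linear filtering; the window dimensions below are hypothetical:

```cpp
// Sketch: 2x supersampling with an OpenGL ES 2-style FBO.
const int winW = 800, winH = 600;          // hypothetical window size
const int ssW = winW * 2, ssH = winH * 2;  // 2x in each dimension

GLuint fbo, colorTex, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color target: a texture we can sample from when scaling down.
// CLAMP_TO_EDGE and no mipmaps keep non-power-of-two sizes legal in ES 2.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ssW, ssH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Depth target as an RBO (it never needs to be sampled).
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ssW, ssH);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

// Per frame: render the scene at 2x, then downsample to the screen.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, ssW, ssH);
// ... draw the scene ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, winW, winH);
glBindTexture(GL_TEXTURE_2D, colorTex);
// ... draw the screen-aligned quad sampling colorTex ...
```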