Read stencil texture in shader in WebGL2 - fragment-shader

I have a draw call which is updating the stencil buffer as a DEPTH24_STENCIL8 texture, which I bound to my FBO as a depth-stencil attachment; this works fine. Now I wish to read back the final stencil values in another pass to do something with them. It's my understanding that you can bind such a texture to a shader and be able to sample the depth component... how would I go about reading the stencil component instead though?
Reading the ES 3.1 spec (sections 8.19 and 11.1.3.5) reveals a special texture parameter, DEPTH_STENCIL_TEXTURE_MODE, which controls whether the depth or the stencil component is returned by a texture lookup. Unfortunately, this parameter appears to be absent from ES 3.0 and WebGL2, which would seem to lock shaders out of ever reading the stencil data.
How do I access the stencil component from my shader? I am using WebGL2. Thanks.
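For reference, the setup described in the question can be sketched like this, assuming a WebGL2 context `gl` (the function name is hypothetical):

```javascript
// Create a DEPTH24_STENCIL8 texture and attach it to an FBO as the
// depth-stencil attachment. Sampling this texture later with a plain
// sampler2D returns the depth component in .r; WebGL2 exposes no
// DEPTH_STENCIL_TEXTURE_MODE, so there is no way to select the stencil
// component instead.
function createDepthStencilTarget(gl, width, height) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texStorage2D(gl.TEXTURE_2D, 1, gl.DEPTH24_STENCIL8, width, height);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT,
                          gl.TEXTURE_2D, tex, 0);
  return { fbo, tex };
}
```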

Related

Is it possible to clear stencil buffer within indirect draw call?

I want to implement stencil shadows without having to work with individual lights on the CPU side (recording buffers that alternate pipelines for each light); I want to do all the lights in one go. I can see it being possible with compute shaders; however, I don't have access to ROPs from them, and while using atomics should be possible, it doesn't feel right (transforming an R32UINT image into B8G8R8A8UNORM or whatever vkGetPhysicalDeviceSurfaceFormatsKHR may output). Having to do software rasterisation of the shadow volumes also feels wrong. Simply using the stencil buffer, outputting zero color when drawing the shadow volumes, and then drawing a quad for the actual light would be nice; however, I don't see any way to clear the stencil in between draws. I've also thought of using blending and the alpha value, but the only way I could think of requires special clamping behaviour (not clamping the blending inputs, but clamping the outputs), and as far as I'm aware it's not possible to read pixels from the framebuffer being drawn to in the very same draw call.
I was planning to draw the lights one by one: fill the stencil buffer with one draw, draw a light quad with a second draw from the same indirect draw command, somehow clear the stencil, and continue.
You have a problem before the "somehow clear it" part. Namely, drawing the "light quad" would require changing the stencil parameters from writing stencil values to testing them. Which of course you can't do in the middle of a drawing command.
While bundling geometry into a few draw commands is always good, it's important to remember that Vulkan is not OpenGL. State changes aren't free, and full pipeline changes aren't remarkably cheap, but they're not as costly as they would be under OpenGL. So you shouldn't feel bad about having to break drawing commands up in this manner.
Clearing the stencil buffer within a draw command is not possible. However, I was able to achieve the desired result with a special stencil state, late depth-stencil tests, discard, and some extra work in the shader, at the cost of doing those very things and of some flexibility.
How it works in my case (depth-fail shadows):
To differentiate between passes I use gl_DrawID from GL_ARB_shader_draw_parameters, but it should be possible through other means.
Shadow pass:
In the fragment shader, if the depth test is going to pass, discard; // thus, no color writes from this pass are ever done
In the stencil state: front-face fail (both depth and stencil) -> increment; back-face fail -> decrement; // this is where the volumes are counted
Light pass:
If the light triangle is back-facing, output zero color; stencil state: back-face pass -> replace with reference; // this is where the stencil is cleared
Else, calculate the color; stencil state: front-face pass -> doesn't matter.
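As an illustration only, the two stencil states above can be translated into WebGL-style calls (Vulkan's per-face VkStencilOpState maps onto stencilOpSeparate); this is a hypothetical sketch of the state, not the author's Vulkan code:

```javascript
// Shadow pass: count volumes in the stencil buffer, write no color.
function setShadowPassStencilState(gl) {
  gl.enable(gl.STENCIL_TEST);
  gl.colorMask(false, false, false, false); // the shader also discards depth-passing fragments
  // front-face fail (stencil or depth) -> increment
  gl.stencilOpSeparate(gl.FRONT, gl.INCR_WRAP, gl.INCR_WRAP, gl.KEEP);
  // back-face fail -> decrement
  gl.stencilOpSeparate(gl.BACK, gl.DECR_WRAP, gl.DECR_WRAP, gl.KEEP);
}

// Light pass: back-facing light triangles clear the stencil by replacing it
// with the reference value (set via gl.stencilFunc); front-face ops don't matter.
function setLightPassStencilState(gl) {
  gl.colorMask(true, true, true, true);
  gl.stencilOpSeparate(gl.BACK, gl.KEEP, gl.KEEP, gl.REPLACE);
  gl.stencilOpSeparate(gl.FRONT, gl.KEEP, gl.KEEP, gl.KEEP);
}
```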

How can I overlay my UI render target onto the back buffer using DirectX 11?

I have two render targets, the back buffer and a UI render target where all 2d UI will be drawn.
I have used the graphics debugger to confirm that both render targets are being written to with the correct data, but I'm having trouble combining the two right at the end.
Question:
My world objects are drawn directly to the back buffer so there is no problem displaying these, but how do I now overlay the UI render target OVER the back buffer?
Desired effect:
Back buffer render target
UI render target
There are several ways to do this. The easiest is to render your UI elements to a texture that has both a RenderTargetView and a ShaderResourceView, then render the whole texture to the back buffer as a single quad in orthographic projection space. This effectively draws a 2D square containing your UI in screen space on the back buffer. It also has the benefit of allowing transparency.
You could also use the OutputMerger stage to blend the UI render target with the back buffer during rendering of the world geometry. You'd need to be careful how you set up your blend operations, as it could result in items being drawn over the UI, or blending inappropriately.
If your UI is not transparent, you could do the UI rendering first and mark the area under the UI in the stencil buffer, then do your world rendering while the stencil test is enabled. This would cause the GPU to ignore any pixels underneath the UI, and not send them to the pixel shader.
The above could also be modified to write the minimum depth value to the pixels within the UI render target, ensuring all geometry underneath it would fail the depth test. This modification would free up the stencil buffer for mirrors/shadows/etc.
The above all work for flat UIs drawn over the existing 3D world. To actually draw more complex UIs that appear to be a part of the world, you'll need to actually render the elements to 3D objects in the world space, or do complex projection operations to make it seem like they are.
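The first approach (render-to-texture plus a fullscreen quad) is essentially API-agnostic. As a hedged illustration, here it is in WebGL-style calls, where `quad` is a hypothetical object holding a fullscreen-quad program and vertex state; the D3D11 version is the same idea with the UI texture's ShaderResourceView bound and a BlendState of SRC_ALPHA / INV_SRC_ALPHA:

```javascript
// Composite the UI texture over the back buffer as a fullscreen quad with
// standard "over" alpha blending, so transparent UI pixels show the world.
function composeUiOverBackbuffer(gl, quad, uiTexture) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);   // default framebuffer = back buffer
  gl.disable(gl.DEPTH_TEST);                  // the UI always draws on top
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
  gl.useProgram(quad.program);                // program samples the UI texture
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, uiTexture);
  gl.bindVertexArray(quad.vao);
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
```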

Is it possible to process a FBO's color attachment (texture) using a fragment/pixel shader without a vertex shader?

I'm currently playing around with some terrain-generation stuff using OpenGL ES 2.0 on iOS devices. I have a texture and a heightmap. What I want to do is blur the terrain's texture using a fragment shader, but not on every draw call (just on demand and at the beginning). This is why I decided to do the blurring offscreen inside an FBO and then use that FBO's texture on the terrain. Now I'm wondering: is it possible to just add the image (texture) as a color attachment to a newly generated FBO and process it with a fragment shader alone? Or is there a better approach? No projection, lighting, etc. is needed.
You can't circumvent a vertex shader and have your fragment shader do anything. There are plenty of ways to minimize how much the vertex shader does - you can just pass the geometry right through to the fragment shader. Shaders like that are usually called (unsurprisingly) "pass-through shaders" because they just shuffle information on to the next piece of the pipeline without doing a whole lot.
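A pass-through vertex shader for this kind of offscreen filtering pass can be as small as the following (GLSL ES 2.0, shown as JavaScript source strings; the attribute/uniform names are illustrative):

```javascript
// Vertex shader: forward the fullscreen-quad corners unchanged and derive
// texture coordinates from them; no projection or lighting involved.
const passThroughVS = `
attribute vec2 aPosition;               // quad corners in clip space, [-1, 1]
varying vec2 vTexCoord;
void main() {
  vTexCoord = aPosition * 0.5 + 0.5;    // map [-1, 1] to [0, 1]
  gl_Position = vec4(aPosition, 0.0, 1.0);
}`;

// Fragment shader skeleton: this is where the actual blur kernel would
// sample the FBO's color attachment around vTexCoord.
const blurFS = `
precision mediump float;
uniform sampler2D uTexture;             // the color attachment to process
varying vec2 vTexCoord;
void main() {
  gl_FragColor = texture2D(uTexture, vTexCoord); // replace with blur kernel
}`;
```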

Algorithms where a colour value is not written by fragment shader

I've been reading through the OpenGL ES Shading language specification and there is a section that puzzles me:
7.2 Fragment Shader Special Variables
...
It is not a requirement for the fragment shader to write to either gl_FragColor or gl_FragData. There are many algorithms, such as shadow volumes, that include rendering passes where a color value is not written.
I've looked at plenty of articles on shadow volumes and shaders and I can't find any information on how these algorithms can do anything without writing a colour value as there does not seem to be a way of returning data from the vertex shader alone on the ES platform. Desktop GL has geometry shaders which seem to be for this kind of effect, but there is no such thing in ES 2.0 Core.
Is this something that was inadvertently left in from the desktop specification, allowing for extensions or have I just missed something?
A few weeks ago I wrote a shadow volume algorithm with OpenGL ES 2.0.
For this purpose, in some passes you don't write any color.
For example, you must work with the stencil buffer, incrementing/decrementing the stencil based on the visible/not-visible faces and the silhouette. While doing this, you must disable color writes (GLES20.glColorMask(false, false, false, false);).
If you don't, you will get a lot of artefacts.
The goal here is to update the stencil buffer without updating the color buffer.
More detailed information on shadow volumes and why you need to disable color writes:
http://http.developer.nvidia.com/GPUGems/gpugems_ch09.html

Using WebGL or OpenGL ES 2, how do I render the contents of an RBO onscreen?

Using WebGL (which is constrained to the OpenGL ES 2 API), I am successfully rendering to texture and then displaying that texture onscreen. Because it is a texture, it is not being antialiased. If I were rendering to an RBO and then displaying that onscreen, I would be able to take advantage of AA.
My render target setup looks like this:
1. Create FBO
2. Bind FBO
3. Create texture (to be rendered to)
4. Create and bind depth buffer as RBO
5. Attach texture and RBO to FBO
And my rendering update loop looks like this:
Render the scene to the FBO created in step #1 above
Render a screen-aligned quad with the texture created in step #3 above
With desktop OpenGL, I would call glBlitFramebuffer() instead of drawing the screen-aligned quad.
How do I render my scene with antialiasing? Do I need to replace the texture with an RBO? If so, what calls do I use to bind the RBO to draw a screen-aligned quad?
You cannot blit the contents of an RBO to the screen in WebGL unless you perform a readback and re-upload it to a texture to blit, which is rather slow.
WebGL has no support for MSAA on FBOs in any form (neither as RBO nor as RTT).
You can implement your own antialiasing in a variety of ways.
Render at twice the canvas size in each dimension and scale down (Google Maps with WebGL does this)
Render at 1:1 size, run a Sobel or Laplace edge detection on color and depth, and run a bilateral Gaussian blur using edge strength as the weight (I've used this technique in some of my demos and it works well: http://codeflow.org/entries/2011/apr/11/advanced-webgl-part-1/ )
Use the morphological antialiasing (MLAA) recipe from GPU Pro 2 (I've yet to try that)
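A sketch of the first option (supersampling), assuming a WebGL context `gl`; the function name is hypothetical:

```javascript
// Create a render target 'scale' times larger than the canvas in each
// dimension. Render the scene into it, then draw it to the canvas as a
// fullscreen quad; the LINEAR filter averages samples on the way down.
function createSupersampleTarget(gl, canvasWidth, canvasHeight, scale) {
  const w = canvasWidth * scale, h = canvasHeight * scale; // e.g. scale = 2
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  return { fbo, tex, width: w, height: h };
}
```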