I've been reading through the OpenGL ES Shading language specification and there is a section that puzzles me:
7.2 Fragment Shader Special Variables
...
It is not a requirement for the fragment shader to write to either gl_FragColor or gl_FragData. There are
many algorithms, such as shadow volumes, that include rendering passes where a color value is not
written.
I've looked at plenty of articles on shadow volumes and shaders, and I can't find any information on how these algorithms can do anything without writing a colour value, as there does not seem to be a way of returning data from the vertex shader alone on the ES platform. Desktop GL has geometry shaders, which seem to be aimed at this kind of effect, but there is no such thing in ES 2.0 core.
Is this something that was inadvertently left in from the desktop specification (allowing for extensions), or have I just missed something?
A few weeks ago I wrote a shadow volume algorithm with OpenGL ES 2.0.
For this purpose, some passes do not write color at all.
For example, you work with the stencil buffer, incrementing/decrementing the stencil count based on the front-facing/back-facing volume faces along the occluder's silhouette. While doing this work, you must disable color writes (GLES20.glColorMask(false, false, false, false);).
If you don't, you will get a lot of artifacts.
The goal here is to update the stencil buffer without updating the color (the color buffer).
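As a sketch, the state setup for such a stencil-only pass might look like the following (assuming the depth-fail variant and the two-sided stencil calls in ES 2.0 core; drawShadowVolumes is a hypothetical helper, not from the original answer):

```c
/* Sketch: stencil-only shadow-volume pass (OpenGL ES 2.0, depth-fail variant).
   Assumes the scene's depth buffer was filled by a previous pass. */
glEnable(GL_STENCIL_TEST);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);                               /* don't touch depth */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* don't touch color */
glStencilFunc(GL_ALWAYS, 0, 0xFF);
/* Back faces that fail the depth test increment the count... */
glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
/* ...front faces that fail it decrement the count. */
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
drawShadowVolumes();  /* hypothetical helper: issues the volume geometry */
```

Pixels whose final stencil count is non-zero are the ones in shadow; a later pass tests the stencil while drawing the lit scene.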
More detailed information on shadow volumes, and on why you need to disable color writes, can be found here:
http://http.developer.nvidia.com/GPUGems/gpugems_ch09.html
(Sorry for my poor english) :-)
I want to implement stencil shadows without handling individual lights on the CPU side (recording buffers that alternate pipelines for each light); I want to do all the lights in one go. I can see it being possible with compute shaders; however, I don't have access to ROPs from them, and while using atomics should be possible, it doesn't feel right (transforming an R32UINT image into B8G8R8A8UNORM or whatever vkGetPhysicalDeviceSurfaceFormatsKHR may report). Having to do software rasterisation of the shadow volumes also feels wrong. Simply using the stencil buffer, outputting zero color when drawing the shadow volumes, and then drawing a quad for the actual light would be nice; however, I don't see any way to clear the stencil in between draws. I've also thought of using blending and the alpha value, but the only approach I could think of requires special clamping behaviour: not clamping the blending inputs, but clamping the outputs. And as far as I'm aware, it's not possible to read pixels from the framebuffer being drawn to in the very same draw call.
I was planning to draw the lights one by one: fill the stencil buffer with one draw, draw a light quad with a second draw from the same indirect draw command, somehow clear the stencil, and continue.
You have a problem before the "somehow clear it" part. Namely, drawing the "light quad" would require changing the stencil parameters from writing stencil values to testing them. Which of course you can't do in the middle of a drawing command.
While bundling geometry into a few draw commands is always good, it's important to remember that Vulkan is not OpenGL. State changes aren't free, and full pipeline changes aren't remarkably cheap, but they're not as costly as they would be under OpenGL. So you shouldn't feel bad about having to break drawing commands up in this manner.
Clearing the stencil buffer within a draw command is not possible. However, I was able to achieve the desired result with a special stencil state, late depth-stencil tests, discard, and some extra work in the shader, at the cost of doing those very things and of some flexibility.
How it works in my case (depth-fail shadows):
For differentiating between the passes, I use GL_ARB_shader_draw_parameters for gl_DrawID, but it should be possible through other means.
Shadow pass:
In the fragment shader, if the depth test is going to pass, discard; // thus, no color writes from it are ever done
In the stencil state, front-face fail (both depth and stencil) -> increment; back-face fail -> decrement; // this is where the volumes are counted
Light pass:
If the light triangle is back-facing, output zero color; stencil state: back-face pass -> replace with reference; // this is where the stencil is cleared
Else, calculate the color; stencil state: front-face pass -> doesn't matter.
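A GLSL-style pseudocode sketch of the fragment-shader side of this scheme (the pass flag, the sceneDepth copy, and the output name are illustrative assumptions, not from the original answer):

```glsl
// Sketch only. isShadowPass would be derived from gl_DrawID.
if (isShadowPass) {
    // Discard fragments that WOULD pass the late depth test, so nothing is
    // written for them; fragments that would fail are kept, the depth test
    // kills their color write, and the stencil "fail" op counts the volume.
    if (gl_FragCoord.z < texture(sceneDepth, uv).r)  // assumed depth copy
        discard;
    outColor = vec4(0.0);  // never visible: the late depth test fails here
} else {
    // Light draw: back faces write zero, and their "back-face pass ->
    // replace with reference" stencil op clears the stencil for the next light.
    outColor = gl_FrontFacing ? shadeLight() : vec4(0.0);
}
```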
Currently, I am using SpriteKit for all of the graphics in my programs. Recently, I've been interested in drawing things like the Mandelbrot set, the bifurcation curve, etc.
To draw these on the screen, I use one node per pixel… which obviously means my program has very low performance, with over 100,000 nodes on the screen.
I want to find a way of colouring in pixels directly with some command, without drawing any nodes. (But I want to stick to Obj-C and Xcode.)
Is there some way of doing this by accessing Core Graphics, or something similar?
Generally you would use OpenGL ES or Metal to do this.
Here is a tutorial that describes using OpenGL ES shaders with SpriteKit to draw the Mandelbrot set:
https://www.weheartswift.com/fractals-xcode-6/
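Whichever API ends up doing the drawing, the per-pixel work is just the escape-time iteration; here is a plain C sketch of that kernel (the kind of computation a fragment shader would run once per pixel, with the iteration count then mapped to a color):

```c
/* Escape-time iteration for the Mandelbrot set: returns how many steps
   z = z*z + c takes to leave the radius-2 disk, or max_iter if it stays
   inside (i.e. the point is treated as belonging to the set). */
int mandelbrot_iters(double cx, double cy, int max_iter) {
    double x = 0.0, y = 0.0;  /* z starts at 0 */
    for (int i = 0; i < max_iter; i++) {
        if (x * x + y * y > 4.0)      /* |z| > 2: the orbit has escaped */
            return i;
        double nx = x * x - y * y + cx;  /* z^2 + c, real part */
        y = 2.0 * x * y + cy;            /* z^2 + c, imaginary part */
        x = nx;
    }
    return max_iter;
}
```

Looping this over a width-by-height grid of c values and writing the resulting colors into a pixel buffer (or a texture) replaces the one-node-per-pixel approach entirely.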
I'm having a weird issue in OpenGL. I'm designing a 2D engine, and so far I have coded routines that let you draw sprites, rectangles, and boxes, and translate and scale them. However, when I run a small demo of my engine, I notice that when gradually scaling rectangles in an animation (drawn using 4 vertices and GL_LINE_LOOP), the rectangle edges seem to bounce between the two neighboring pixels.
I can't determine the source of the problem, or even formulate a proper search query for Google; perhaps someone can shed some light on this matter. If my question is not clear, please let me know.
Building a 2D library on OpenGL ES is going to be problematic for several reasons. First of all, the Khronos specifications state that it is not intended to produce "pixel perfect" rendering. Every OpenGL ES renderer is allowed some variation in rendered results. This is because the actual rendering is implemented in hardware and floating point rounding can be a little different from platform to platform. Even the shader compilers are completely different from one GPU to the next.
Another issue is that most of the GPUs on mobile devices today are tile-based deferred renderers, and they do not typically support partial screen rendering. In other words, every screen update requires replacing the entire frame.
I'm a novice at OpenGL ES 1.1 (for iOS) texturing, and I have a problem implementing a motion blur effect. While googling, I found that I should render my scene at several moments in time to several textures, and then draw all those textures on the screen with different alpha values. But the problem is that I don't know how to implement any of this! So, my questions are:
How do I draw a 2D texture on the screen? Should I make a quad and put my texture on it? Or maybe there is a way to draw a texture on the screen directly?
How do I draw several textures (one on top of another) on the screen with different alpha values?
I've already come up with some ideas, but I'm not sure if they are correct or not.
Thanks in advance!
Well, of course the first piece of advice is: understand the basics before trying to do advanced stuff. Other than that:
Yes indeed, to draw a full-screen texture you just draw a textured screen-sized quad. An orthographic projection would be a good idea in this case, making the screen-alignment of the quad and its proper sizing easier. For getting the textures in the first place (by rendering into them), FBOs might be of help, but I'm not sure they are supported on ES 1 devices, otherwise the good old glCopyTexSubImage2D will do, too, albeit requiring a copy operation.
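A hedged sketch of the copy route under ES 1.1 (frameTex, texWidth, and texHeight are assumed names; the texture is assumed to be already created at an adequate, typically power-of-two, size):

```c
/* After rendering a frame you want to reuse, copy the framebuffer
   contents into the currently bound texture (level 0, offset 0,0). */
glBindTexture(GL_TEXTURE_2D, frameTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,        /* target, mip level          */
                    0, 0,                    /* x/y offset in the texture  */
                    0, 0,                    /* lower-left of the read rect */
                    texWidth, texHeight);    /* size of the copied region  */
```

Doing this once per stored frame gives you the set of textures to composite for the blur.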
Well, you just draw multiple textured quads (see 1) one over the other. You might configure the texture environment to scale the texture's color with the quad's base color (glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)) and give your quads a color of (1, 1, 1, alpha) (of course lighting should be disabled). Additionally you have to enable alpha blending (glEnable(GL_BLEND)) and use an appropriate blending function (glBlendFunc(GL_SRC_ALPHA, GL_ONE) should do).
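Put together, the fixed-function state for compositing the quads might look like this (ES 1.1, using the calls named in the answer; alpha is the per-frame blur weight):

```c
/* Scale the texture color by the quad's base color... */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glDisable(GL_LIGHTING);               /* so lighting doesn't override the base color */
/* ...and blend each quad onto what's already there. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);    /* additive, weighted by quad alpha */
glColor4f(1.0f, 1.0f, 1.0f, alpha);   /* (1, 1, 1, alpha) as described above */
/* ...then draw the textured screen-sized quad for each stored frame. */
```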
But if all these terms don't tell you anything, you should rather first learn the basics using a good learning resource before delving into more advanced effects.
I vaguely remember seeing something in OpenGL (not ES, which was still at v1.0 on the iPhone when I came across this, which is why I never used it) that let me specify which edges of my polygons were considered outlines versus those that made up the interior of faces. As such, this isn't the same as the outline of the entire model (which I know how to do), but rather the outline of a planar face with all its tris basically blended into one poly. For instance, in a cube made up of tris, each face is actually two tris. I want to render the outline of the square, but not the diagonal across the face. Same thing with a hexagon: that takes four tris, but just one outline for the face.
Now yes, I know I can simply test all the edges to see if they share coplanar faces, but I could have sworn I remember seeing somewhere when you're defining the tri mesh data where you could say 'this line outlines a face whereas this one is inside a face.' That way when rendering, you could set a flag that basically says 'Give me a wireframe, but only the wires around the edges of complete faces, not around the tris that make them up.'
BTW, my target is all platforms that support OpenGL ES 2.0, but my dev platform is iOS. Again, I'm pretty sure this was originally in OpenGL and may have been deprecated once shaders came on the scene, but I can't even find a reference to the feature to check whether that's the case.
The only way I know now is to have one set of vertices, but two separate sets of indices... one for rendering tris, and another for rendering the wireframes of the faces. It's a real pain since I end up hand-coding a lot of this, which again, I'm 99% sure you can define when rendering the lines.
GL_QUADS, glEdgeFlag and glPolygonMode are not supported in OpenGL ES.
You could use GL_LINES to draw the wireframe. To get hidden-line removal, first draw black filled triangles (with depth testing and depth writes on), then draw the edges you are interested in with GL_LINES; a small glPolygonOffset on the filled pass helps avoid z-fighting between the lines and the triangles.
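The "second index set" for the face outlines doesn't have to be hand-coded: for a planar face triangulated as a fan or strip, an edge used by exactly one triangle is an outline edge, while an edge shared by two triangles (such as the quad's diagonal) is interior. A small sketch of that extraction, assuming 16-bit indices (the O(n^2) scan is fine for tool-time mesh preparation; note it would also drop shared edges between non-coplanar faces, so a real version would additionally compare face normals):

```c
/* Collect outline edges of a triangle list: edges used by exactly one
   triangle are kept, edges shared by two triangles are skipped.
   Writes index pairs into out; returns the number of edges written. */
int outline_edges(const unsigned short *tri, int tri_count,
                  unsigned short *out) {
    int n = tri_count * 3, edges = 0;
    for (int e = 0; e < n; e++) {
        int t = e / 3;
        unsigned short a = tri[t * 3 + e % 3];
        unsigned short b = tri[t * 3 + (e % 3 + 1) % 3];
        /* Order-independent form of the edge for comparison. */
        unsigned short lo = a < b ? a : b, hi = a < b ? b : a;
        int count = 0;
        for (int f = 0; f < n; f++) {   /* count occurrences of this edge */
            int u = f / 3;
            unsigned short c = tri[u * 3 + f % 3];
            unsigned short d = tri[u * 3 + (f % 3 + 1) % 3];
            unsigned short l2 = c < d ? c : d, h2 = c < d ? d : c;
            if (l2 == lo && h2 == hi) count++;
        }
        if (count == 1) {               /* unique -> part of the outline */
            out[edges * 2]     = a;
            out[edges * 2 + 1] = b;
            edges++;
        }
    }
    return edges;
}
```

For a quad built as two tris with indices {0,1,2, 0,2,3}, this keeps the four perimeter edges and drops the 0-2 diagonal; the result can be fed straight to glDrawElements with GL_LINES.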