Passing variables between GLSL ES vertex & fragment shaders - opengl-es-2.0

Whenever I look at sample shaders, this type of thing seems to happen almost by magic; sometimes information is written into special places like position or color, but other times a fragment shader uses parameters, and I can't follow how the fragment shader knows where to get that data.
Can anyone provide a medium-simple GLES shader which does this, and explain how it works?

Have a look at the OpenGL ES quick reference card.
You're interested in the "Built-In Inputs, Outputs, and Constants" part of the pages where GLSL is described, in particular the vertex shader outputs and fragment shader inputs.
Additional vertex shader outputs (that become fragment shader inputs) should be declared in both shaders using the varying keyword.
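For example, here is a minimal GLSL ES 2.0 pair (the names a_position, a_color, u_mvp and v_color are my own; bind the attributes and set the uniform from your application code). The vertex shader writes into a varying, and the fragment shader declares a varying with the same name and type and receives the interpolated value; the two are matched up by name when the program is linked.

// Vertex shader
attribute vec4 a_position;  // per-vertex inputs supplied through vertex attributes
attribute vec3 a_color;
uniform mat4 u_mvp;         // set from the application with glUniformMatrix4fv

varying vec3 v_color;       // handed on to the fragment shader

void main()
{
    v_color = a_color;                 // write the value to pass along
    gl_Position = u_mvp * a_position;  // built-in output: clip-space position
}

// Fragment shader
precision mediump float;

varying vec3 v_color;       // same name and type as in the vertex shader; arrives interpolated

void main()
{
    gl_FragColor = vec4(v_color, 1.0); // built-in output: the fragment's color
}

So the "magic" is mostly name matching: gl_Position and gl_FragColor are built-in outputs, and everything else you want to pass from the vertex stage to the fragment stage goes through a varying declared in both shaders.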

Related

Is there a way to get the buffer size of a uniform or buffer storage block in a shader/pipeline from my host-side code? [duplicate]

Looking for a Vulkan alternative for this: in OpenGL, is there a way to get a list of all uniforms & attribs used by a shader program?
Vulkan, as a general rule, does not have querying APIs for any information you have provided to the API. If you give something to the API, and you need to know something about that data, then you're expected to remember what it was.
SPIR-V contains all of the definitions of the various resources and interfaces used by a shader. And SPIR-V is a pretty well-specified format. Since you gave the SPIR-V to Vulkan, you therefore have ample opportunity to know what all of the "uniforms & attribs" in that shader are. So Vulkan has no shader querying API.
There are several tools for introspecting into SPIR-V binaries to extract this kind of information. But Vulkan itself isn't one of them.
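As a small illustration of why that information is already in your hands: in Vulkan-flavoured GLSL every resource is declared with explicit location/set/binding qualifiers, so the complete interface is known when you author the shader (the names and numbers below are just an example of mine):

#version 450

layout(location = 0) in vec3 inPosition;           // vertex input at location 0

layout(set = 0, binding = 0) uniform SceneUBO {    // uniform block at set 0, binding 0
    mat4 viewProj;
} scene;

void main()
{
    gl_Position = scene.viewProj * vec4(inPosition, 1.0);
}

If you'd rather not track it by hand, reflection tools such as SPIRV-Reflect or spirv-cross can recover the same interface information from the compiled SPIR-V.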

What is the "layers" field of the framebuffer create info for?

If the "layers" member of this struct is 3 for example, and I have an image view attachment in the framebuffer at [0] with three layers, will the shader run three times and will I get an in-shader variable telling me which layer it is?
I know that multi-layer rendering was something available in OpenGL, but the catch is that it only works with a geometry shader. Is this what the "layers" field of the framebuffer create info does?
If I want to write to multiple layers without the geometry shader, what are my options? Does VK_EXT_shader_viewport_index_layer help? I know that VK_KHR_multiview will do this, but just want to find out if it's possible without it.

Converting fragment shader into compute shader

I'm learning compute shaders after several years of experience with fragment and vertex shaders. I'd like to convert the algorithms from one of my procedural fragment shaders into a compute shader that uses the same algorithms but outputs the resulting procedural map to a texture and sends it to the CPU. Does anyone know of a tutorial or sample code that will point me in the right direction? I just need a generic framework.
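Not a tutorial, but here is a rough sketch of what the compute-shader half can look like (the binding, the format and the name uOutput are assumptions of mine, and the work group size is arbitrary). Roughly, gl_FragCoord becomes gl_GlobalInvocationID and writing to gl_FragColor becomes an imageStore into an image bound with glBindImageTexture:

#version 430

layout(local_size_x = 16, local_size_y = 16) in;              // each work group covers a 16x16 tile
layout(rgba8, binding = 0) uniform writeonly image2D uOutput; // the target texture, bound as an image

void main()
{
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
    vec2 uv = vec2(texel) / vec2(imageSize(uOutput));         // plays the role of your fragment UVs
    vec4 color = vec4(uv, 0.5, 1.0);                          // placeholder for your procedural algorithm
    imageStore(uOutput, texel, color);                        // replaces writing to gl_FragColor
}

On the application side you dispatch with glDispatchCompute, issue a glMemoryBarrier so the image writes become visible, and then read the texture back (e.g. with glGetTexImage, or by attaching it to an FBO and using glReadPixels).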

Translating the OpenGL ES 1.1 fixed-function pipeline to the programmable pipeline on the fly

Is it possible to emulate the complete fixed-function pipeline with shaders on the fly? By "on the fly" I mean not rewriting the fixed-function code to use shaders, but a sort of intermediate driver which receives fixed-function GLES calls (possibly caching them for a full frame, as there is no direct one-to-one translation from the fixed to the programmable pipeline) and outputs equivalent GLES 2.0 calls.
And even if it is possible, how much work would it really be?
For most of ES 1.1, that looks pretty straightforward. All the typical fixed functionality like transformations, lights, and materials translates directly into shader code.
For a complete replacement, you would obviously have to implement all the functionality. From skimming over the ES 1.1 entry points, I spotted a few items that would not directly translate to ES 2.0, the last of which looks particularly problematic:
Arbitrary clipping planes. These are not available in ES 2.0, but not terribly hard to emulate in shaders by calculating a distance in the vertex shader and then discarding the clipped fragments in the fragment shader (a sketch follows this list).
ES 1.1 has something called "palette textures". From my understanding, it looks somewhat painful to implement in ES 2.0, but possible. You would probably need two textures, one for the indices, and one for the palette, with two levels of sampling in the fragment shader.
ES 1.1 supports logical operations (glLogicOp) as part of the per-fragment operations that are executed after the fragment shader. ES 2.0 does not have this, and I can't think of a good way to replicate it. The only thing that comes to mind is to render, read back the result, do the logical operation on the CPU, and then render the resulting image. And you would have to do that every time the operation is changed.
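Here is a rough sketch of the clip-plane emulation mentioned above, in ES 2.0 style GLSL (the uniform and varying names are my own; u_clipPlane holds the plane equation in whatever space a_position is tested in):

// Vertex shader
attribute vec4 a_position;
uniform mat4 u_mvp;
uniform vec4 u_clipPlane;      // plane coefficients (a, b, c, d)

varying float v_clipDist;      // signed distance to the plane

void main()
{
    v_clipDist = dot(u_clipPlane, a_position);
    gl_Position = u_mvp * a_position;
}

// Fragment shader
precision mediump float;

varying float v_clipDist;

void main()
{
    if (v_clipDist < 0.0)
        discard;               // drop fragments on the clipped side of the plane
    gl_FragColor = vec4(1.0);  // your normal shading would go here
}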

OpenGL lights, textures, etc. correct way?

Until now I've implemented all effects in GLSL shaders using inputs, outputs and uniforms, except for a couple of really essential built-ins like gl_Position. I've read several tutorials and had a lecture on computer graphics, and every time they implement things by looking at the physical model and calculating everything from input values and uniforms. That's roughly how I thought it all works.
Now I've run into the fact that there are many more GLSL things, like the glLight* API functions and the gl_LightSource and gl_Texture constants in GLSL, with a big set of predefined light types and lighting models. It seems to be a rather different way of programming shaders.
I wonder if there are any advantages or disadvantages to using one way or the other? Did I miss something very important? It looks like I'm doing a lot of redundant work.
All the glLight* calls you might find in both GLSL and the OpenGL API are from the old and deprecated fixed-function pipeline!
Now you must do all the calculations yourself through shaders, as I guess you're already doing.
Why did they "remove" all the awesome stuff?
They "removed" (deprecated) the Matrix Stack, Light calls, Immediate Mode Rendering, etc. etc. etc. and the list goes one for various reason. But the overall reason is that it's better to implement and control those things yourself.
It requires more work on our side to implement and control all those things, but you're in total control of everything, and of when you actually want to use something.
Using the fixed-function pipeline, OpenGL would allocate and load various things you might never even want to use.
Also, taking the Matrix Stack as an example, you would usually (the lazy way) make OpenGL re-calculate the Matrix Stack each render call using the old glPushMatrix(), glPopMatrix(), glTranslate*(), etc. functions. Now, because YOU HAVE TO, you are forced to do all those calculations and handle the matrices yourself. And then you realize that most of the matrices, and much more, could simply be allocated and calculated once, or at least not every render call.
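For example, instead of rebuilding the stack with glTranslate/glRotate every frame, you can compute the combined matrix once (or only when it actually changes) and hand it to the shader as an ordinary uniform; the name u_modelViewProjection below is my own:

#version 330

layout(location = 0) in vec4 a_position;

uniform mat4 u_modelViewProjection;  // computed on the CPU, uploaded with glUniformMatrix4fv

void main()
{
    // takes the place of the old gl_ModelViewProjectionMatrix from the matrix stack
    gl_Position = u_modelViewProjection * a_position;
}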
Of course they didn't deprecate Immediate Mode Rendering just so that we would have to reimplement it ourselves; we're simply meant to use buffers now, because they are so much better in every way.
Extra
If you want a great spreadsheet that shows which functions are deprecated, which are core functions, which are extension functions, etc., then take a look here, though be aware that this spreadsheet is made by people who use OpenGL and not by the Khronos Group (the current developers of OpenGL) nor Silicon Graphics (the creators of OpenGL).
Ignore the glLightXXX functions, the related gl_LightXXX variables and all the documentation associated with them. It's all deprecated, and if you look closely at the docs you'll probably find that they're several years old or written specifically for versions of OpenGL <= 2.x. Instead, continue to work with your own vertex attributes and set up your lighting configuration in your own uniforms however you please, based on the lighting model you want to implement. It's more work, but it's more flexible in the long run.
The OpenGL lighting model that uses glLight pre-dates the programmable shader pipeline and represents a particular way of doing lighting in the fixed-function pipeline.
Once GLSL entered the scene it became possible to use the OpenGL lighting model in conjunction with shaders. You could use the same glLight function and its related functions to set up your lighting parameters, but then write shaders that used the same information in different ways, allowing per-pixel lighting calculations.
Textures are a little more murky, because OpenGL still has a texture model and many of the GL functions relating to textures are still valid, though some are deprecated. However, any documentation that refers to GLSL variables like gl_Texture is similarly out of date. Current OpenGL uses sampler objects for texture access.
If you want to make sure you're doing it the 'modern' way, create a forward-compatible OpenGL context with a core profile of version 3.3 or higher, and make sure your shaders declare the appropriate version number as their first line, like so:
#version 330
This will cause the use of any deprecated OpenGL function or deprecated shader variable to generate an error so that you know to avoid them.
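As a rough sketch of what "your own uniforms" can look like in place of gl_LightSource and gl_FrontMaterial (all the names are mine, and this is just simple Lambert diffuse):

#version 330

in vec3 v_normal;            // interpolated from the vertex shader
in vec3 v_worldPos;

uniform vec3 u_lightPos;     // replaces gl_LightSource[0].position
uniform vec3 u_lightColor;   // replaces gl_LightSource[0].diffuse
uniform vec3 u_albedo;       // replaces the glMaterial diffuse color

out vec4 fragColor;          // replaces gl_FragColor, which is likewise deprecated in core GLSL

void main()
{
    vec3 N = normalize(v_normal);
    vec3 L = normalize(u_lightPos - v_worldPos);
    float diffuse = max(dot(N, L), 0.0);
    fragColor = vec4(u_albedo * u_lightColor * diffuse, 1.0);
}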
Current graphics hardware offers an interface to customize each rendering step, e.g. vertex shading, tessellation, geometry shading, fragment shading and so on. GLSL is the language used to program, or influence, the rendering steps of the graphics hardware through this interface.
The predefined functions glLight, glTexture and so on belong to the deprecated fixed-function graphics pipeline of OpenGL. Modern OpenGL still supports the functions of this fixed pipeline, but it is strongly recommended to use GLSL for the different rendering steps.
The glLight function is fixed functionality that only influences vertex processing, so you can only achieve per-vertex shading, which does not look very realistic.
When you program the lighting yourself within the fragment shader using GLSL, you can directly influence every pixel.
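To make the contrast concrete: the fixed-function glLight path evaluates the lighting equation once per vertex and only interpolates the resulting color, whereas with your own shaders you forward the normal and position and run the lighting per fragment. A matching vertex shader for a per-fragment lighting setup like the one sketched earlier might look like this (again, all names are mine):

#version 330

layout(location = 0) in vec3 a_position;
layout(location = 1) in vec3 a_normal;

uniform mat4 u_model;
uniform mat4 u_viewProjection;
uniform mat3 u_normalMatrix;   // replaces gl_NormalMatrix

out vec3 v_normal;             // no lighting here: just forward the data
out vec3 v_worldPos;

void main()
{
    v_normal    = u_normalMatrix * a_normal;
    v_worldPos  = vec3(u_model * vec4(a_position, 1.0));
    gl_Position = u_viewProjection * vec4(v_worldPos, 1.0);
}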
So, to summarize, the main advantage is that the programmer is more flexible and able to influence every rendering step, which enables sophisticated and realistic 3D graphics. The main disadvantage is that you need much more knowledge (GLSL, the graphics pipeline) and much more programming effort to achieve the same result as with the fixed functions.
Best regards