Difference between gl_Position and varying variables? - opengl-es-2.0

Hi all, I am new to OpenGL ES 2.0. I am confused about gl_Position and varying variables: both are outputs of the vertex shader. A varying variable is passed on to the fragment shader, but what about gl_Position? Does gl_Position influence the varying variables in the fragment shader?
Also, what is the meaning of gl_Position = vec4(-1);?
Please help me understand these things in a better way.

gl_Position is a special output variable. It gives the vertex's clip-space position, which the rasterizer uses to work out which fragments the fragment shader will be run for. All the other varyings are simply interpolated across the primitive.
gl_Position is not available in the fragment shader. There is, however, a gl_FragCoord variable, which is derived from gl_Position: its x/y components are the fragment's window coordinates (in pixels), z is the depth from 0 (near plane) to 1 (far plane), and w is 1/gl_Position.w (feel free to look up the exact definition in the OpenGL ES 2.0 spec).
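As for gl_Position = vec4(-1): a single-argument vector constructor in GLSL fills every component, so it is just shorthand for vec4(-1.0, -1.0, -1.0, -1.0). And here is a tiny fragment-shader sketch of the gl_FragCoord side (uViewportSize is an assumed uniform, not something built in):
precision mediump float;
uniform vec2 uViewportSize;   // assumed: viewport size in pixels, set by the application

void main()
{
    // gl_FragCoord.xy is in window coordinates (pixels); divide to get a 0..1 range
    vec2 uv = gl_FragCoord.xy / uViewportSize;
    gl_FragColor = vec4(uv, gl_FragCoord.z, 1.0);   // z is the 0..1 depth value
}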

Related

OpenGL ES 2.0 texture on a single triangle in a mesh

I am trying to draw a texture on a quad that consists of two triangles. My objective is to draw the texture on a single triangle only (within the mesh); the other triangle should be left empty.
How can I achieve this? Any sample program or pseudo code would be of a lot of help.
Follow the steps below:
1. Check that the vertices are correct by using a solid color in the fragment shader:
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // the rectangle must come out red
2. If step 1 is okay, check the UV values (see the sketch after this list).
3. If step 1 is not okay, use these vertices and UV values:
vertices = -1.0,-1.0, 1.0,-1.0, -1.0,1.0, 1.0,1.0
UVs = 0.0,0.0, 1.0,0.0, 0.0,1.0, 1.0,1.0
That's it. You are all set for the next step.
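If step 1 passes but the texture still looks wrong, a common way to check the UV values (a sketch; uTexture and vUv are assumed names, not from the question) is to visualize them as a color before sampling the texture:
precision mediump float;
uniform sampler2D uTexture;
varying vec2 vUv;

void main()
{
    // Debug: visualize the interpolated UVs; you should see a red/green gradient
    // gl_FragColor = vec4(vUv, 0.0, 1.0);

    // Normal path: sample the texture
    gl_FragColor = texture2D(uTexture, vUv);
}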

In WebGL what are the differences between an attribute, a uniform, and a varying variable?

Is there an analogy I can use to compare these different types, or some intuition for how they work?
Also, what does uniforming a matrix mean?
Copied directly from http://www.lighthouse3d.com/tutorials/glsl-tutorial/data-types-and-variables/. The actual site has much more detailed information and would be worthwhile to check out.
Variable Qualifiers
Qualifiers give a special meaning to the variable. The following qualifiers are available:
const – The declaration is of a compile time constant.
attribute – Global variables that may change per vertex, that are passed from the OpenGL application to vertex shaders. This qualifier can only be used in vertex shaders. For the shader this is a read-only variable. See Attribute section.
uniform – Global variables that may change per primitive [...], that are passed from the OpenGL application to the shaders. This qualifier can be used in both vertex and fragment shaders. For the shaders this is a read-only variable. See Uniform section.
varying – used for interpolated data between a vertex shader and a fragment shader. Available for writing in the vertex shader, and read-only in a fragment shader. See Varying section.
As for an analogy, const and uniform are like global variables in C/C++, one is constant and the other can be set. Attribute is a variable that accompanies a vertex, like color or texture coordinates. Varying variables can be altered by the vertex shader, but not by the fragment shader, so in essence they are passing information down the pipeline.
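As a small sketch of the four qualifiers together in a single GLSL ES vertex shader (the names are made up for illustration):
const float PI = 3.14159;            // compile-time constant
uniform mat4 uModelViewProjection;   // same value for every vertex of a draw call
attribute vec3 aPosition;            // a different value for each vertex
attribute vec2 aTexCoord;
varying vec2 vTexCoord;              // written here, interpolated, read by the fragment shader

void main()
{
    vTexCoord = aTexCoord;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}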
uniform variables are per-primitive parameters (constant during an entire draw call);
attribute variables are per-vertex parameters (typically positions, normals, colors, UVs, ...);
varying variables are per-fragment (or per-pixel) parameters: they vary from pixel to pixel.
It's important to understand how varying works to program your own shaders.
Let's say you define a varying parameter v for each vertex of a triangle inside the vertex shader. When this varying parameter is sent to the fragment shader, its value is automatically interpolated based on the position of the pixel to draw.
A pixel inside the triangle (the red pixel in the original illustration) then receives an interpolated value of the varying parameter v. That's why we call them "varying".
For the sake of simplicity the example given above uses bilinear interpolation, which assumes that all the pixels drawn have the same distance from the camera. For accurate 3D rendering, graphic devices use perspective-correct interpolation which takes into account the depth of a pixel.
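A minimal sketch of that interpolation (aColor and vColor are example names): give each vertex of the triangle its own color, and the fragment shader receives a smoothly blended value.
// Vertex shader
attribute vec4 aPosition;
attribute vec4 aColor;     // e.g. red, green and blue for the three vertices
varying vec4 vColor;

void main()
{
    vColor = aColor;       // written once per vertex
    gl_Position = aPosition;
}

// Fragment shader
precision mediump float;
varying vec4 vColor;       // interpolated across the triangle

void main()
{
    gl_FragColor = vColor; // pixels inside the triangle get blended colors
}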
In OpenGL, a "program" is a collection of "shaders" (smaller programs), which are connected to each other in a pipeline.
// "program" contains a shader pipeline:
// vertex shader -> other shaders -> fragment shader
//
const program = initShaders(gl, "vertex-shader", "fragment-shader");
gl.useProgram(program);
Shaders process vertices (vertex shader), geometry (geometry shader), tessellation (tessellation shaders), fragments (fragment/pixel shader), and other batch-processing tasks (compute shader) needed to rasterize a 3D model.
OpenGL (WebGL) shaders are written in GLSL (a text-based shader language compiled on the GPU).
// Note: As of 2017, WebGL only supports Vertex and Fragment shaders
<!-- Vertex Shader -->
<script id="shader-vs" type="x-shader/x-vertex">
// <-- Receive from WebGL application
uniform vec3 vertexVariableA;
// attribute is supported in Vertex Shader only
attribute vec3 vertexVariableB;
// --> Pass to Fragment Shader
varying vec3 variableC;
</script>
<!-- Fragment Shader -->
<script id="shader-fs" type="x-shader/x-fragment">
// <-- Receive from WebGL application
uniform vec3 fragmentVariableA;
// <-- Receive from Vertex Shader
varying vec3 variableC;
</script>
Keeping these concepts in mind:
Shaders can pass data to the next shader in the pipeline (out, inout), and they can also accept data from the WebGL application or a previous shader (in).
The Vertex and Fragment shaders (any shader really) can use a uniform variable, to receive data from the WebGL application.
// Pass data from WebGL application to shader
const uniformHandle = gl.getUniformLocation(program, "vertexVariableA");
gl.uniform3fv(uniformHandle, [0.1, 0.2, 0.3]); // vertexVariableA is a vec3
The Vertex Shader can also receive data from the WebGL application with the attribute variable, which can be enabled or disabled as needed.
// Pass data from WebGL application to Vertex Shader
const attributeHandle = gl.getAttribLocation(program, "vertexVariableB");
gl.enableVertexAttribArray(attributeHandle);
gl.vertexAttribPointer(attributeHandle, 3, gl.FLOAT, false, 0, 0);
The Vertex Shader can pass data to the Fragment Shader using the varying variable. See GLSL code above (varying vec3 variableC;).
Uniforms are another way to pass data from our application on the CPU to the shaders on the GPU, but uniforms are slightly different from vertex attributes. First of all, uniforms are global: a uniform variable is unique per shader program object, and can be accessed from any shader at any stage in that program. Second, whatever you set a uniform's value to, it will keep that value until it is either reset or updated.
I like the description from https://learnopengl.com/Getting-started/Shaders, because the term per-primitive is not intuitive.
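As a small illustration of the "global per program object" point (a sketch; uTint is just an example name), the same uniform can be declared in both stages of one program and set once from the application:
// Vertex shader
uniform vec4 uTint;                // one value per program object, visible to both stages
attribute vec4 aPosition;
varying vec4 vColor;

void main()
{
    vColor = uTint;                // read the shared uniform in the vertex stage
    gl_Position = aPosition;
}

// Fragment shader
precision mediump float;
uniform vec4 uTint;                // same uniform; it keeps its value until updated
varying vec4 vColor;

void main()
{
    gl_FragColor = 0.5 * (vColor + uTint);   // both stages see the same value
}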

Specular Mapping in OpenGL ES 2.0

I am a newbie in the world of OpenGL ES 2.0. I am trying to implement specular mapping using OpenGL ES 2.0 on the iOS platform. As far as I know, in specular mapping we take the value for the specular component of the light from a specular map texture. What I am doing in the vertex shader is as follows:
vec3 N = NormalMatrix * Normal;
vec3 L = normalize(LightPosition);
vec3 E = normalize(EyePosition);
vec3 H = normalize(L + E);
vec4 Specular=(texture2D(sampler_spec, TextureCoordIn)).rgba;
float df = max(0.0, dot(N, L));
float sf = max(0.0, dot(N, H));
sf = pow(sf, Specular.a);
vec3 color = AmbientMaterial + df * DiffuseMaterial + sf * Specular.rgb * SpecularMaterial;
DestinationColor = vec4(color, 1.0);
But I can't see any specular effect in my game. I don't know where I am going wrong. Please give your valuable suggestions.
Well, your computations look quite reasonable. The problem is that you're doing per-vertex lighting. This means the lighting is computed per vertex (as you're doing it in the vertex shader) and then interpolated across the triangles, so your lighting quality depends heavily on how finely tessellated your mesh is.
If you have rather large triangles, high-frequency effects like specular highlights won't really show, especially when using textures. Keep in mind that the reason for using textures is to provide surface detail at a sub-triangle level, but at the moment you're reading the texture per vertex, so the specular value could just as well be a vertex attribute.
So the first step is to move the lighting computation into the fragment shader. In the vertex shader you just compute N, L and E (don't forget to normalize) and pass them out as varyings. In the fragment shader you do the rest of the computation, based on the interpolated N, L and E (don't forget to renormalize them there).
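A minimal sketch of that split, reusing the names from the question (treat the exact attribute and uniform setup as an assumption about your code, not a drop-in replacement):
// Vertex shader: only pass the vectors along
attribute vec4 Position;
attribute vec3 Normal;
attribute vec2 TextureCoordIn;
uniform mat4 ModelViewProjection;   // assumed: your combined model-view-projection matrix
uniform mat3 NormalMatrix;
uniform vec3 LightPosition;
uniform vec3 EyePosition;

varying vec3 vN;
varying vec3 vL;
varying vec3 vE;
varying vec2 vTexCoord;

void main()
{
    vN = NormalMatrix * Normal;
    vL = LightPosition;
    vE = EyePosition;
    vTexCoord = TextureCoordIn;
    gl_Position = ModelViewProjection * Position;
}

// Fragment shader: do the lighting per pixel
precision mediump float;
uniform sampler2D sampler_spec;
uniform vec3 AmbientMaterial;
uniform vec3 DiffuseMaterial;
uniform vec3 SpecularMaterial;

varying vec3 vN;
varying vec3 vL;
varying vec3 vE;
varying vec2 vTexCoord;

void main()
{
    vec3 N = normalize(vN);         // renormalize the interpolated vectors
    vec3 L = normalize(vL);
    vec3 E = normalize(vE);
    vec3 H = normalize(L + E);

    vec4 Specular = texture2D(sampler_spec, vTexCoord);
    float df = max(0.0, dot(N, L));
    float sf = max(0.0, dot(N, H));
    sf = pow(sf, Specular.a);       // as in the question; very small exponents give broad, faint highlights

    vec3 color = AmbientMaterial + df * DiffuseMaterial + sf * Specular.rgb * SpecularMaterial;
    gl_FragColor = vec4(color, 1.0);
}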
If these concepts of varyings and per-fragment lighting are a bit much at the moment, you should delve a little deeper into the basics of shaders and look for tutorials on simple per-fragment lighting shaders. These can then easily be adapted for things like specular mapping, bump mapping, and so on.

Using multiple vertex shaders on the same program

I'm trying to implement projection using a vertex shader.
Is there a way to have one vertex shader that handles setting gl_Position, and another vertex shader that sets the values required by the fragment shader?
The problem I have is that only the main() function of the first vertex shader is called.
Edit:
I found a way to make it work by combining the shader sources instead of using multiple independent shaders. I'm not sure if this is the best way to do it, but it seems to work nicely.
main_shader.vsh
attribute vec4 src_color;
varying vec4 dst_color;

void transform(void); // forward declaration, implemented in transform_2d.vsh

void main(void)
{
    dst_color = src_color;
    transform();
}
transform_2d.vsh
attribute vec4 position;
void transform(void)
{
gl_Position = position;
}
Then use it as such:
char merged[2048];
strcpy(merged, main_shader_src);       // copy the first source, then append the second
strcat(merged, transform_shader_src);
// create and compile shader with merged as source
In OpenGL ES, the only way is to concatenate the shader sources (note that glShaderSource accepts an array of strings, so you can pass both sources in one call instead of concatenating them yourself). In desktop OpenGL, however, there are some interesting features that allow you to do what you want:
GL_ARB_shader_subroutine (part of OpenGL 4.0 core)
- That does pretty much what you wanted.
GL_ARB_separate_shader_objects (part of OpenGL 4.1 core)
- This extension allows you to mix and match vertex and fragment shaders from different programs, so if you have one vertex shader and several fragment shaders (e.g. for different effects), then this extension is for you.
I admit this is slightly off-topic, but I think it's good to know (it might also be useful for someone).

OpenGL Diffuse Lighting Shader Bug?

The Orange book, section 16.2, lists implementing diffuse lighting as:
void main()
{
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);
    vec4 V = gl_ModelViewMatrix * gl_Vertex;
    vec3 L = normalize(lightPos - V.xyz);
    gl_FrontColor = gl_Color * vec4(max(0.0, dot(N, L)));
}
However, when I run this, the lighting changes when I move my camera.
On the other hand, when I change
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
to
vec3 N = normalize(gl_Normal);
I get diffuse lighting that works like the fixed pipeline.
What is this gl_NormalMatrix, what did removing it do, and is this a bug in the Orange Book, or am I setting up my OpenGL code improperly?
[For completeness, the fragment shader just copies the color]
OK, I hope there's nothing wrong with answering your question after over half a year? :)
So there are two things to discuss here:
a) What should the shader look like
You SHOULD transform your normals by the modelview matrix - that's a given. Consider what would happen if you don't - your modelview matrix can contain some kind of rotation. Your cube would be rotated, but the normals would still point in the old direction! This is clearly wrong.
So: when you transform your vertices by the modelview matrix, you should also transform the normals. Your normals are vec3, not vec4, and you're not interested in translations (normals only contain direction), so you can just multiply your normal by mat3(gl_ModelViewMatrix), which is the upper-left 3×3 submatrix.
Then: This is ALMOST correct, but still a bit wrong - the reasons are well-described on Lighthouse 3D - go have a read. Long story short, instead of mat3(gl_ModelViewMatrix), you have to multiply by an inverse transpose of that.
And OpenGL 2 is very helpful and precalculates this for you as gl_NormalMatrix. Hence, the correct code is:
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
b) But it's different from fixed pipeline, why?
The first thing which comes to my mind is that "something's wrong with your usage of fixed pipeline".
I'm not really keen on the fixed pipeline (long live shaders!), but as far as I can remember, when you specify your lights via glLightfv(GL_LIGHT0, GL_POSITION, ...), the position is transformed by the current modelview matrix. It was easy (at least for me :)) to make the mistake of specifying the light position (or light direction for directional lights) in the wrong coordinate system.
I'm not sure if I remember exactly how that worked back then, since I use GL3 and shaders nowadays, but let me try... what was the state of your modelview matrix? I think it just might be possible that you specified the directional light's direction in object space instead of eye space, so that your light rotates together with your object. I don't know if that's relevant here, but make sure to pay attention to it when using the fixed-function pipeline. That's a mistake I remember making often myself when I was still using GL 1.1.
Depending on the modelview state, you could specify the light in:
eye (camera) space,
world space,
object space.
Make sure which one it is.
Huh.. I hope that makes the topic more clear for you. The conclusions are:
always transform your normals along with your vertices in your vertex shaders, and
if it looks different from what you expect, think about how you specify your light positions. (Maybe you want to multiply the light position vector in a shader too? See the sketch below. The remarks about light position coordinate systems still hold.)
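If you take the shader route for the light position, here is a minimal sketch in the same legacy GLSL style as the book's listing (viewMatrix and lightPosWorld are assumed names, not from the book): keep the light in world space and bring it into eye space yourself, so it ends up in the same space as N and V.
uniform mat4 viewMatrix;      // assumed: the camera (view) transform only, no model part
uniform vec3 lightPosWorld;   // assumed: light position specified in world space

void main()
{
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);
    vec4 V = gl_ModelViewMatrix * gl_Vertex;

    // Transform the light into eye space before lighting
    vec3 lightPosEye = (viewMatrix * vec4(lightPosWorld, 1.0)).xyz;
    vec3 L = normalize(lightPosEye - V.xyz);

    gl_FrontColor = gl_Color * vec4(max(0.0, dot(N, L)));
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}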