I'm trying to implement projection using a vertex shader.
Is there a way to have one vertex shader set gl_Position, and another vertex shader set the values required by the fragment shader?
The problem I have is that only the main() function of the first vertex shader is called.
Edit:
I found a way to make it work by combining the shader sources instead of using multiple independent shaders. I'm not sure if this is the best way to do it, but it seems to work nicely.
main_shader.vsh
attribute vec4 src_color;
varying vec4 dst_color;
void transform(void); // forward declaration; defined in transform_2d.vsh
void main(void)
{
dst_color = src_color;
transform();
}
transform_2d.vsh
attribute vec4 position;
void transform(void)
{
gl_Position = position;
}
Then use it as such:
char merged[2048];
strcpy(merged, main_shader_src);        // copy first so the buffer starts as a valid string
strcat(merged, transform_shader_src);   // then append the transform source
// create and compile the shader with merged as its source
In OpenGL ES, the only way is to concatenate shader sources, but in OpenGL there are some interesting extensions that allow you to do what you want:
GL_ARB_shader_subroutine (part of OpenGL 4.0 core)
- That does pretty much what you want (a small GLSL sketch follows after this list)
GL_ARB_separate_shader_objects (part of OpenGL 4.1 core)
- This extension allows you to use (mix) vertex and fragment shaders from different programs, so if you have one vertex shader and several fragment shaders (e.g. for different effects), then this extension is for you.
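For illustration, here is a minimal vertex-shader sketch of what GL_ARB_shader_subroutine looks like on the GLSL side (the names u_transform, u_projection, transform_2d and transform_3d are made up). The application selects the active implementation with glGetSubroutineIndex() and glUniformSubroutinesuiv():
#version 400
// Declare a subroutine type: any function with this signature can be plugged in.
subroutine vec4 transform_func(vec4 pos);
// The application selects which implementation this uniform points to.
subroutine uniform transform_func u_transform;
uniform mat4 u_projection;
in vec4 position;
subroutine(transform_func)
vec4 transform_2d(vec4 pos)
{
    return pos;                 // already in clip space
}
subroutine(transform_func)
vec4 transform_3d(vec4 pos)
{
    return u_projection * pos;  // apply a projection matrix
}
void main()
{
    gl_Position = u_transform(position);
}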
I admit this is slightly off-topic, but I think it's good to know (it might also be useful for someone).
Related
I'm working on a project based on BGFX and I'm trying to determine, within a fragment shader, whether BGFX is running on OpenGL or DirectX.
gl_FragColor = texture2D(color_tex, UV0);
I need this information to access a texture, as the texture coordinate (UV0) is different between GL and DirectX.
I could create a specific version of the shader for each API, but there must be a cleverer way to handle this. I looked in the BGFX documentation but couldn't find anything about this point.
Furthermore, isn't the whole point of BGFX to abstract away this kind of API difference?
BGFX provides a set of preprocessor macros that let the shader know in which context it is being compiled.
You will find an example here: https://github.com/bkaradzic/bgfx/blob/69fb21f50e1136d9f3828a134bb274545b4547cb/examples/41-tess/matrices.sh#L22
In your case, your shader code could read like this:
#if BGFX_SHADER_LANGUAGE_GLSL
vec2 UV0_corrected = vec2(1.0, 1.0) + vec2(-1.0, -1.0) * UV0;
#else
vec2 UV0_corrected = vec2(1.0, 0.0) + vec2(-1.0, 1.0) * UV0;
#endif
Given an arbitrary polyhedron in CGAL (one that can be convex, concave, or even have holes), how can I triangulate its faces so that I can create OpenGL buffers for rendering?
I have seen that convex_hull_3() returns a polyhedron with triangulated faces, but it won't do what I want for arbitrary polyhedra.
The header file <CGAL/triangulate_polyhedron.h> contains an undocumented function
template <typename Polyhedron>
void triangulate_polyhedron(Polyhedron& p)
which works with CGAL::Exact_predicates_inexact_constructions_kernel, for example.
The Polygon Mesh Processing package provides the function CGAL::Polygon_mesh_processing::triangulate_faces with multiple overloads. The simplest thing to do would be
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/Polygon_mesh_processing/triangulate_faces.h>

typedef CGAL::Simple_cartesian<float> Kernel;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron_3;
Polyhedron_3 polyhedron = load_my_polyhedron();
CGAL::Polygon_mesh_processing::triangulate_faces(polyhedron);
After that, all faces in polyhedron are triangles.
The function modifies the model in-place, so one has to use a HalfedgeDS that supports removal. This is the default, but, for example, HalfedgeDS_vector won't do.
See also an official example that uses Surface_mesh instead of Polyhedron_3:
Polygon_mesh_processing/triangulate_faces_example.cpp
Is there an analogy I can use when comparing these different types, or a simple explanation of how they work?
Also, what does "uniforming" a matrix mean?
Copied directly from http://www.lighthouse3d.com/tutorials/glsl-tutorial/data-types-and-variables/. The actual site has much more detailed information and is worth checking out.
Variable Qualifiers
Qualifiers give a special meaning to the variable. The following qualifiers are available:
const – The declaration is of a compile time constant.
attribute – Global variables that may change per vertex, that are passed from the OpenGL application to vertex shaders. This qualifier can only be used in vertex shaders. For the shader this is a read-only variable. See Attribute section.
uniform – Global variables that may change per primitive [...], that are passed from the OpenGL application to the shaders. This qualifier can be used in both vertex and fragment shaders. For the shaders this is a read-only variable. See Uniform section.
varying – used for interpolated data between a vertex shader and a fragment shader. Available for writing in the vertex shader, and read-only in a fragment shader. See Varying section.
As for an analogy, const and uniform are like global variables in C/C++, one is constant and the other can be set. Attribute is a variable that accompanies a vertex, like color or texture coordinates. Varying variables can be altered by the vertex shader, but not by the fragment shader, so in essence they are passing information down the pipeline.
uniforms are per-primitive parameters (constant during an entire draw call);
attributes are per-vertex parameters (typically: positions, normals, colors, UVs, ...);
varyings are per-fragment (or per-pixel) parameters: they vary from pixel to pixel.
It's important to understand how varying works to program your own shaders.
Let's say you define a varying parameter v for each vertex of a triangle inside the vertex shader. When this varying parameter is sent to the fragment shader, its value is automatically interpolated based on the position of the pixel to draw.
For example, a pixel in the middle of the triangle receives an interpolated value of the varying parameter v. That's why we call them "varying".
For the sake of simplicity the example given above uses bilinear interpolation, which assumes that all the pixels drawn have the same distance from the camera. For accurate 3D rendering, graphic devices use perspective-correct interpolation which takes into account the depth of a pixel.
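As a small, made-up GLSL ES example that ties the three qualifiers together (a_position, a_color, u_mvp and v_color are just illustrative names):
// Vertex shader
attribute vec4 a_position;   // per-vertex, read from a vertex buffer
attribute vec4 a_color;      // per-vertex
uniform mat4 u_mvp;          // per draw call, same for every vertex
varying vec4 v_color;        // written per vertex, interpolated per fragment
void main()
{
    v_color = a_color;
    gl_Position = u_mvp * a_position;
}
// Fragment shader
precision mediump float;
varying vec4 v_color;        // receives the interpolated value
void main()
{
    gl_FragColor = v_color;
}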
In WebGL, what are the differences between an attribute, a uniform, and a varying variable?
In OpenGL, a "program" is a collection of "shaders" (smaller programs), which are connected to each other in a pipeline.
// "program" contains a shader pipeline:
// vertex shader -> other shaders -> fragment shader
//
const program = initShaders(gl, "vertex-shader", "fragment-shader");
gl.useProgram(program);
Shaders process vertices (vertex shader), geometry (geometry shader), tessellation (tessellation shaders), fragments (pixel shader), and other batch-processing tasks (compute shader) needed to rasterize a 3D model.
OpenGL (WebGL) shaders are written in GLSL (a text-based shader language compiled on the GPU).
// Note: As of 2017, WebGL only supports Vertex and Fragment shaders
<!-- Vertex Shader -->
<script id="shader-vs" type="x-shader/x-vertex">
// <-- Receive from WebGL application
uniform vec3 vertexVariableA;
// attribute is supported in Vertex Shader only
attribute vec3 vertexVariableB;
// --> Pass to Fragment Shader
varying vec3 variableC;
</script>
<!-- Fragment Shader -->
<script id="shader-fs" type="x-shader/x-fragment">
// <-- Receive from WebGL application
uniform vec3 fragmentVariableA;
// <-- Receive from Vertex Shader
varying vec3 variableC;
</script>
Keeping these concepts in mind:
Shaders can pass data to the next shader in the pipeline (out, inout), and they can also accept data from the WebGL application or a previous shader (in).
The Vertex and Fragment shaders (any shader really) can use a uniform variable, to receive data from the WebGL application.
// Pass data from WebGL application to shader
const uniformHandle = gl.getUniformLocation(program, "vertexVariableA");
gl.uniform3fv(uniformHandle, [0.1, 0.2, 0.3]); // vertexVariableA is a vec3
The Vertex Shader can also receive data from the WebGL application with the attribute variable, which can be enabled or disabled as needed.
// Pass data from WebGL application to Vertex Shader
const attributeHandle = gl.getAttribLocation(program, "vertexVariableB");
gl.enableVertexAttribArray(attributeHandle);
gl.vertexAttribPointer(attributeHandle, 3, gl.FLOAT, false, 0, 0);
The Vertex Shader can pass data to the Fragment Shader using the varying variable. See GLSL code above (varying vec3 variableC;).
Uniforms are another way to pass data from our application on the CPU to the shaders on the GPU, but uniforms are slightly different compared to vertex attributes. First of all, uniforms are global. Global, meaning that a uniform variable is unique per shader program object, and can be accessed from any shader at any stage in the shader program. Second, whatever you set the uniform value to, uniforms will keep their values until they're either reset or updated.
I like this description from https://learnopengl.com/Getting-started/Shaders, because the term "per-primitive" is not intuitive.
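As a small illustration of the "global per program object" part, the same uniform can be declared in both stages of one program and refers to a single value, so one gl.uniform1f call updates it for both shaders (u_time here is an invented name; note that a uniform shared across stages must be declared with matching precision):
// Vertex shader
uniform mediump float u_time;   // one uniform, set once from the application
attribute vec4 a_position;
void main()
{
    // bob the vertices up and down over time
    gl_Position = a_position + vec4(0.0, 0.1 * sin(u_time), 0.0, 0.0);
}
// Fragment shader
precision mediump float;
uniform mediump float u_time;   // same uniform, same value, no extra upload
void main()
{
    // pulse the brightness with the same clock
    gl_FragColor = vec4(vec3(0.5 + 0.5 * sin(u_time)), 1.0);
}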
Hi all, I am new to OpenGL ES 2.0. I am confused about gl_Position and varying variables: both are outputs of the vertex shader. A varying variable is passed to the fragment shader, but what about gl_Position? Does gl_Position influence the varying variables in the fragment shader?
What is the meaning of gl_Position = vec4(-1);?
Please help me understand these things better.
gl_Position is a special variable. It determines which fragments the fragment shader will be run for (it gives the vertex's position in clip space, and thus where the primitive lands on screen). All other varyings are interpolated directly across the primitive.
gl_Position is not available in the fragment shader. But there is the gl_FragCoord variable, which is derived from gl_Position: its x/y values are the fragment's window coordinates in pixels, z is the depth from 0 (near plane) to 1 (far plane), and w is roughly 1/gl_Position.w (see the OpenGL ES 2 spec for the exact definition).
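For example, a fragment shader can turn gl_FragCoord back into 0..1 screen coordinates by dividing by the viewport size; u_resolution is an assumed uniform that the application would have to set to the viewport dimensions:
precision mediump float;
uniform vec2 u_resolution;   // viewport size in pixels, set by the application (assumed)
void main()
{
    vec2 uv = gl_FragCoord.xy / u_resolution;   // now 0..1 across the screen
    // visualize: red/green follow screen position, blue shows the depth
    gl_FragColor = vec4(uv, gl_FragCoord.z, 1.0);
}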
I am a newbie in the world of OpenGL ES 2.0. I am trying to implement specular mapping using OpenGL ES 2.0 on the iOS platform. As far as I know, in specular mapping we extract the value for the specular component of light from a specular map texture. What I am doing in the vertex shader is as follows:
vec3 N = NormalMatrix * Normal;
vec3 L = normalize(LightPosition);
vec3 E = normalize(EyePosition);
vec3 H = normalize(L + E);
vec4 Specular=(texture2D(sampler_spec, TextureCoordIn)).rgba;
float df = max(0.0, dot(N, L));
float sf = max(0.0, dot(N, H));
sf = pow(sf, Specular.a);
vec3 color = AmbientMaterial + df * DiffuseMaterial + sf * Specular.rgb * SpecularMaterial;
DestinationColor = vec4(color, 1);
But I can't see any specular effect in my game. I don't know where I am going wrong. Please give your valuable suggestions.
Well, your computations look quite reasonable. The problem is that you're doing per-vertex lighting. This means the lighting is computed per vertex (as you're doing it in the vertex shader) and interpolated across the triangles. Therefore your lighting quality highly depends on the tessellation quality of your mesh.
If you have rather large triangles, high-frequency effects like specular highlights won't really show, especially when using textures. Keep in mind that the reason for using textures is to provide surface detail at a sub-triangle level, but at the moment you're reading the texture per vertex, so the specular value could just as well be a vertex attribute.
So the first step would be to move the lighting computations into the fragment shader. In the vertex shader you just compute N, L and E (don't forget to normalize) and pass them out as varyings. In the fragment shader you do the rest of the computation, based on the interpolated N, L and E (don't forget to renormalize them again); a minimal sketch follows below.
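A minimal sketch of that split, reusing the names from the question where possible (the ModelViewProjection uniform and the Position attribute are assumptions, since the full vertex shader isn't shown in the question):
// Vertex shader: no lighting here, just pass the interpolants.
attribute vec4 Position;
attribute vec3 Normal;
attribute vec2 TextureCoordIn;
uniform mat4 ModelViewProjection;   // assumed name
uniform mat3 NormalMatrix;
uniform vec3 LightPosition;
uniform vec3 EyePosition;
varying vec3 v_N;
varying vec3 v_L;
varying vec3 v_E;
varying vec2 v_UV;
void main()
{
    v_N = NormalMatrix * Normal;
    v_L = normalize(LightPosition);
    v_E = normalize(EyePosition);
    v_UV = TextureCoordIn;
    gl_Position = ModelViewProjection * Position;
}
// Fragment shader: renormalize the interpolated vectors, then light per pixel.
precision mediump float;
uniform sampler2D sampler_spec;
uniform vec3 AmbientMaterial;
uniform vec3 DiffuseMaterial;
uniform vec3 SpecularMaterial;
varying vec3 v_N;
varying vec3 v_L;
varying vec3 v_E;
varying vec2 v_UV;
void main()
{
    vec3 N = normalize(v_N);
    vec3 L = normalize(v_L);
    vec3 H = normalize(L + normalize(v_E));
    vec4 Specular = texture2D(sampler_spec, v_UV);
    float df = max(0.0, dot(N, L));
    float sf = pow(max(0.0, dot(N, H)), Specular.a);   // exponent taken from the map, as in the question
    vec3 color = AmbientMaterial
               + df * DiffuseMaterial
               + sf * Specular.rgb * SpecularMaterial;
    gl_FragColor = vec4(color, 1.0);
}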
If all these concepts of varyings and per-fragment lighting are a bit over your head at the moment, you should delve a little deeper into the basics of shaders and look for tutorials on simple per-fragment lighting shaders. These can then easily be adapted for things like specular mapping, bump mapping, and so on.