Problem with precision qualifiers in a shader for android - love2d

I had a problem rendering with a shader on Android in Löve2d due to float precision, so I wanted to add these preprocessor directives so that floats are automatically highp where supported.
#ifdef GL_ES
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
#endif
But I get this error:
ERROR: highp : overloaded functions must have the same parameter precision qualifiers for argument 1
ERROR: highp : overloaded functions must have the same parameter precision qualifiers for argument 3
ERROR: highp : overloaded functions must have the same parameter precision qualifiers for argument 4
Which matches this line:
vec4 effect(vec4 color, Image texture, vec2 texture_coords, vec2 screen_coords)
My question is why this happens and how to fix it, because in the examples I saw on the Löve2d forum this approach seemed to work.
The version of OpenGL ES on the device that produces this error is 3.2 if that helps.
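One plausible reading of this error, offered as an assumption rather than a confirmed diagnosis: if the engine prepends its own declaration of effect before user code, then changing the default float precision afterwards gives the definition parameter precisions that no longer match the earlier declaration. The following standalone GLSL ES sketch (f and its parameters are made up for illustration) reproduces the rule the error message refers to:
// Unqualified float parameters take the default precision in effect where
// the function is declared, so these two declarations of f() disagree on
// parameter precision, and some compilers reject them as mismatched overloads.
precision mediump float;
vec4 f(vec4 color, vec2 uv);   // parameters implicitly mediump

precision highp float;         // new default precision for floats
vec4 f(vec4 color, vec2 uv)    // parameters now implicitly highp
{
    return color;              // ERROR: overloaded functions must have the
}                              // same parameter precision qualifiers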

Related

How do I know from within a BGFX shader it is running on the OpenGL API?

I'm working on a project based on BGFX, and I'm trying to determine from within a fragment shader whether BGFX is running on OpenGL or DirectX.
gl_FragColor = texture2D(color_tex, UV0);
I need this information to access a texture, as the texture coordinate (UV0) is different between GL and DirectX.
I could create a specific version of the shader for each API, but there must be a more clever way to handle this. I looked in the BGFX documentation but couldn't find anything about this point.
Furthermore, isn't the whole point of BGFX to abstract away this kind of API difference?
BGFX provides a series of macros that let the shader preprocessor know in what context it is working.
You will find an example here: https://github.com/bkaradzic/bgfx/blob/69fb21f50e1136d9f3828a134bb274545b4547cb/examples/41-tess/matrices.sh#L22
In your case, your shader code could read like this:
#if BGFX_SHADER_LANGUAGE_GLSL
// OpenGL path: flip both coordinates, (u, v) -> (1 - u, 1 - v)
vec2 UV0_corrected = vec2(1.0, 1.0) + vec2(-1.0, -1.0) * UV0;
#else
// non-GL path (e.g. DirectX): flip only the first coordinate, (u, v) -> (1 - u, v)
vec2 UV0_corrected = vec2(1.0, 0.0) + vec2(-1.0, 1.0) * UV0;
#endif

In WebGL what are the differences between an attribute, a uniform, and a varying variable?

Is there an analogy that I can think of when comparing these different types, or how these things work?
Also, what does uniforming a matrix mean?
Copied directly from http://www.lighthouse3d.com/tutorials/glsl-tutorial/data-types-and-variables/. The actual site has much more detailed information and would be worthwhile to check out.
Variable Qualifiers
Qualifiers give a special meaning to the variable. The following qualifiers are available:
const – The declaration is of a compile-time constant.
attribute – Global variables that may change per vertex, that are passed from the OpenGL application to vertex shaders. This qualifier can only be used in vertex shaders. For the shader this is a read-only variable. See Attribute section.
uniform – Global variables that may change per primitive [...], that are passed from the OpenGL application to the shaders. This qualifier can be used in both vertex and fragment shaders. For the shaders this is a read-only variable. See Uniform section.
varying – Used for interpolated data between a vertex shader and a fragment shader. Available for writing in the vertex shader, and read-only in a fragment shader. See Varying section.
As for an analogy, const and uniform are like global variables in C/C++, one is constant and the other can be set. Attribute is a variable that accompanies a vertex, like color or texture coordinates. Varying variables can be altered by the vertex shader, but not by the fragment shader, so in essence they are passing information down the pipeline.
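To make these qualifiers concrete, here is a minimal sketch of each declaration as it could appear in a GLSL ES vertex shader (the names are made up for illustration):
const float PI = 3.14159265;  // const: compile-time constant
attribute vec3 a_normal;      // attribute: per-vertex input (vertex shaders only)
uniform mat4 u_model;         // uniform: set by the application, same for the whole draw call
varying vec2 v_uv;            // varying: written per vertex, interpolated per fragment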
uniform variables are per-primitive parameters (constant during an entire draw call);
attribute variables are per-vertex parameters (typically positions, normals, colors, UVs, ...);
varying variables are per-fragment (or per-pixel) parameters: they vary from pixel to pixel.
It's important to understand how varying works to program your own shaders.
Let's say you define a varying parameter v for each vertex of a triangle inside the vertex shader. When this varying parameter is sent to the fragment shader, its value is automatically interpolated based on the position of the pixel to draw.
A pixel in the middle of the triangle, for example, receives a blend of the values written at the three vertices. That's why these parameters are called "varying".
For the sake of simplicity, the example above assumes bilinear interpolation, which treats all the pixels drawn as being the same distance from the camera. For accurate 3D rendering, graphics devices use perspective-correct interpolation, which takes the depth of each pixel into account.
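As a concrete sketch of this interpolation (the identifier names are illustrative), a vertex shader can write a per-vertex color into a varying, and the fragment shader reads the blended result:
// Vertex shader: each vertex writes its own color into the varying.
attribute vec4 a_position;
attribute vec4 a_color;
varying vec4 v_color;
void main(void)
{
    v_color = a_color;        // set per vertex
    gl_Position = a_position;
}
// Fragment shader: v_color arrives already interpolated, so a triangle
// with one red, one green and one blue vertex shades into a gradient.
precision mediump float;
varying vec4 v_color;
void main(void)
{
    gl_FragColor = v_color;
}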
In OpenGL, a "program" is a collection of "shaders" (smaller programs), which are connected to each other in a pipeline.
// "program" contains a shader pipeline:
// vertex shader -> other shaders -> fragment shader
//
const program = initShaders(gl, "vertex-shader", "fragment-shader");
gl.useProgram(program);
Shaders process vertices (vertex shader), geometry (geometry shader), tessellation (tessellation shaders), fragments (pixel shader), and other batch-processing tasks (compute shader) needed to rasterize a 3D model.
OpenGL (WebGL) shaders are written in GLSL (a text-based shader language compiled on the GPU).
// Note: As of 2017, WebGL only supports Vertex and Fragment shaders
<!-- Vertex Shader -->
<script id="shader-vs" type="x-shader/x-vertex">
// <-- Receive from WebGL application
uniform vec3 vertexVariableA;
// attribute is supported in Vertex Shader only
attribute vec3 vertexVariableB;
// --> Pass to Fragment Shader
varying vec3 variableC;
</script>
<!-- Fragment Shader -->
<script id="shader-fs" type="x-shader/x-fragment">
// <-- Receive from WebGL application
uniform vec3 fragmentVariableA;
// <-- Receive from Vertex Shader
varying vec3 variableC;
</script>
Keeping these concepts in mind:
Shaders can pass data to the next shader in the pipeline (out, inout), and they can also accept data from the WebGL application or a previous shader (in).
The Vertex and Fragment shaders (any shader really) can use a uniform variable to receive data from the WebGL application.
// Pass data from the WebGL application to a shader
const uniformHandle = gl.getUniformLocation(program, "vertexVariableA");
gl.uniform3fv(uniformHandle, [0.1, 0.2, 0.3]); // vertexVariableA is a vec3
The Vertex Shader can also receive data from the WebGL application with the attribute variable, which can be enabled or disabled as needed.
// Pass data from WebGL application to Vertex Shader
const attributeHandle = gl.getAttribLocation(program, "vertexVariableB");
gl.enableVertexAttribArray(attributeHandle);
gl.vertexAttribPointer(attributeHandle, 3, gl.FLOAT, false, 0, 0);
The Vertex Shader can pass data to the Fragment Shader using the varying variable. See GLSL code above (varying vec3 variableC;).
Uniforms are another way to pass data from our application on the CPU to the shaders on the GPU, but uniforms are slightly different compared to vertex attributes. First of all, uniforms are global: a uniform variable is unique per shader program object, and can be accessed from any shader at any stage in the shader program. Second, whatever you set the uniform value to, uniforms will keep their values until they're either reset or updated.
I like the description from https://learnopengl.com/Getting-started/Shaders, because the phrase "per-primitive" is not intuitive.
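As a minimal sketch of that "uniforms keep their value" behavior (u_tint is an illustrative name), a fragment shader can read the same uniform across as many draw calls as you like until the application updates it:
precision mediump float;
uniform vec4 u_tint;   // stays at whatever the application last set it to
void main(void)
{
    gl_FragColor = u_tint;
}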

difference between gl_position and varying variable?

Hi all, I am new to OpenGL ES 2.0. I am confused about gl_Position and varying variables: both are outputs of the vertex shader. Varying variables are passed on to the fragment shader, but what about gl_Position? Does gl_Position influence the varying variables in the fragment shader?
Also, what is the meaning of gl_Position = vec4(-1.0);?
Please help me understand these things better.
gl_Position is a special variable. It is used to determine where the primitive lands on screen, and therefore which fragments the fragment shader will run for. All other varyings are directly interpolated across the primitive.
gl_Position is not available in the fragment shader, but the built-in variable gl_FragCoord is, and it is derived from gl_Position: its x/y values are the fragment's window coordinates in pixels, z is the depth from 0 (near plane) to 1 (far plane), and w is roughly 1/gl_Position.w (feel free to look up exactly what it is in the OpenGL ES 2 spec).
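As a minimal fragment-shader sketch of gl_FragCoord in use (u_resolution is an assumed uniform that the application would set to the viewport size):
precision mediump float;
uniform vec2 u_resolution;   // viewport size in pixels
void main(void)
{
    // gl_FragCoord.xy is in window coordinates (pixels), so dividing by
    // the viewport size gives a 0..1 gradient across the screen.
    vec2 uv = gl_FragCoord.xy / u_resolution;
    gl_FragColor = vec4(uv, gl_FragCoord.z, 1.0);   // z is the 0..1 depth
}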

CGFloat: round, floor, abs, and 32/64 bit precision

TLDR: How do I call standard floating-point functions in a way that compiles without warnings whether CGFloat is 32-bit or 64-bit?
CGFloat is defined as either double or float, depending on the compiler settings and platform. I'm trying to write code that works well in both situations, without generating a lot of warnings.
When I use functions like floor, abs, ceil, and other simple floating point operations, I get warnings about values being truncated. For example:
warning: implicit conversion shortens 64-bit value into a 32-bit value
I'm not concerned about correctness or loss of precision in calculations, as I realize that I could just use the double-precision versions of all functions all of the time (floor instead of floorf, etc.); however, I would then have to tolerate these warnings.
Is there a way to write this code cleanly so that it supports both 32-bit and 64-bit floats, without having to either use a lot of #ifdef __LP64__ blocks or write wrapper functions for all of the standard floating-point functions?
You may use those functions from tgmath.h.
#include <tgmath.h>
...
double d = 1.5;
double e = floor(d); // will choose the 64-bit version of 'floor'
float f = 1.5f;
float g = floor(f); // will choose the 32-bit version of 'floorf'.
If you only need a few functions you can use this instead:
#if CGFLOAT_IS_DOUBLE
#define roundCGFloat(x) round(x)
#define floorCGFloat(x) floor(x)
#define ceilCGFloat(x) ceil(x)
#else
#define roundCGFloat(x) roundf(x)
#define floorCGFloat(x) floorf(x)
#define ceilCGFloat(x) ceilf(x)
#endif

Using multiple vertex shaders on the same program

I'm trying to implement projection using a vertex shader.
Is there a way to have one vertex shader set gl_Position, and another vertex shader set the values required by the fragment shader?
The problem I have is that only the main() function of the first vertex shader is called.
Edit:
I found a way to make it work by combining the shader sources instead of using multiple independent shaders. I'm not sure if this is the best way to do it, but it seems to work nicely.
main_shader.vsh
attribute vec4 src_color;
varying vec4 dst_color;
void transform(void); // forward declaration; defined in transform_2d.vsh
void main(void)
{
dst_color = src_color;
transform();
}
transform_2d.vsh
attribute vec4 position;
void transform(void)
{
gl_Position = position;
}
Then use it as such:
char merged[2048] = ""; // buffer must start as an empty string before strcat
strcat(merged, main_shader_src);
strcat(merged, transform_shader_src);
// create and compile the shader with merged as its source
In OpenGL ES, the only way is to concatenate the shader sources, but in desktop OpenGL there are some interesting features that allow you to do what you want:
GL_ARB_shader_subroutine (part of OpenGL 4.0 core)
- That does pretty much what you wanted (see the sketch below).
GL_ARB_separate_shader_objects (part of OpenGL 4.1 core)
- This extension allows you to use (mix) vertex and fragment shaders from different programs, so if you have one vertex shader and several fragment shaders (e.g. for different effects), this extension is for you.
I admit this is slightly off-topic, but I think it's good to know (it might also be useful for someone).
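For completeness, a rough sketch of what the shader-subroutine approach could look like in desktop GLSL 4.00; the type and function names are made up for illustration, and the application selects the active subroutine with glUniformSubroutinesuiv:
#version 400
subroutine void transform_t(void);          // declare a subroutine type
subroutine uniform transform_t transform;   // implementation chosen by the app
in vec4 position;
uniform mat4 projection;
subroutine(transform_t)
void transform_2d(void)
{
    gl_Position = position;
}
subroutine(transform_t)
void transform_3d(void)
{
    gl_Position = projection * position;
}
void main(void)
{
    transform();   // dispatches to the selected subroutine
}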