No uniform with name in shader exclusively on Raspberry Pi - opengl-es-2.0

Similar Questions:
No uniform with name in shader,
Fragment shader: No uniform with name in shader
LibGDX: libgdx.badlogicgames.com
LibOnPi: www.habitualcoder.com/?page_id=257
I am attempting to run LibGDX on a Raspberry Pi with little luck. After some trial and error it eventually started throwing the error "no uniform with name 'mvp' in shader". The problem looks much like the similar questions; however, in my situation it seems to me that 'mvp' is actually being used by the shader to set the vertex positions.
The really strange part is that it runs just fine on the PC (Windows 7 x64 in Eclipse ADT), but not on the Pi. Does the Pi handle shaders differently? If not, what is causing this error to be thrown exclusively on the Pi?
Vertex_Shader =
"attribute vec3 a_position; \n"
+ "attribute vec4 a_color; \n"
+ "attribute vec2 a_texCoords; \n"
+ "uniform mat4 mvp; \n"
+ "varying vec4 v_color; \n" + "varying vec2 tCoord; \n"
+ "void main() { \n"
+ " v_color = a_color; \n"
+ " tCoord = a_texCoords; \n"
+ " gl_Position = mvp * vec4(a_position, 1f); \n"
+ "}";
Fragment_Shader =
"precision mediump float; \n"
+ "uniform sampler2D u_texture; \n"
+ "uniform int texture_Enabled; \n"
+ "varying vec4 v_color; \n"
+ "varying vec2 tCoord; \n"
+ "void main() { \n"
+ " vec4 texColor = texture2D(u_texture, tCoord); \n"
+ " gl_FragColor = ((texture_Enabled == 1)?texColor:v_color); \n"
+ "}";
...
shader = new ShaderProgram(Vertex_Shader, Fragment_Shader);
...
shader.setUniformMatrix("mvp", camera.combined);
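For diagnosing this kind of problem it can help to dump the compile log and ask the ShaderProgram whether the uniform survived compilation at all. A minimal sketch using LibGDX's ShaderProgram API (camera and the shader strings come from the snippet above; Gdx.app logging is assumed to be available):
// Turn off pedantic mode so setting a uniform the driver optimized away
// does not throw while we investigate (static flag on ShaderProgram).
ShaderProgram.pedantic = false;
shader = new ShaderProgram(Vertex_Shader, Fragment_Shader);
if (!shader.isCompiled()) {
    Gdx.app.error("Shader", "compile failed:\n" + shader.getLog());
}
if (shader.hasUniform("mvp")) {
    shader.setUniformMatrix("mvp", camera.combined);
} else {
    Gdx.app.error("Shader", "no uniform 'mvp', log:\n" + shader.getLog());
}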
I also noticed this question:
c++ OpenGL glGetUniformLocation for Sampler2D returns -1 on Raspberry PI but works on Windows
which is quite similar; however, implementing the proposed solution of putting "#version 150" at the top of the shader just broke it on the PC too (it then stated that there was no uniform with name 'mvp').
EDIT:
1 - Added fragment shader at the request of keaukraine
2 - Fix found by keaukraine and ArttuPeltonen. The Raspberry Pi requires a version directive in the shader; OpenGL ES 2.0 uses version 100.

Answer provided by keaukraine and ArttuPeltonen
The Raspberry Pi requires a version directive in the shader, and OpenGL ES 2.0 uses version 100 (GLSL ES 1.00). It did not work when I initially tried it because I forgot the newline after the directive: "#version 100attribute..." is not the same as "#version 100\nattribute".
Example Final Shader:
Vertex_Shader =
"#version 100\n"
+ "attribute vec3 a_position; \n"
+ "attribute vec4 a_color; \n"
+ "attribute vec2 a_texCoords; \n"
+ "uniform mat4 mvp; \n"
+ "varying vec4 v_color; \n" + "varying vec2 tCoord; \n"
+ "void main() { \n"
+ " v_color = a_color; \n"
+ " tCoord = a_texCoords; \n"
+ " gl_Position = mvp * vec4(a_position, 1f); \n"
+ "}";
Thank you both.

Related

In Vulkan, the output color of vertices in the vertex shader is different from what I am getting in the fragment shader

In Vulkan, I wrote a simple program to draw lines with a fixed color, using a simple vertex shader and fragment shader. But the colors arriving at the fragment shader are different from what is set on the vertices. I checked with RenderDoc: the colors passed to the vertex shader are correct, (1,1,1,1) for both vertices of a line, and its output is the same. But in the fragment shader the colors I am getting are (1,1,0,1). I don't understand why this is happening. Irrespective of what colors the vertex shader emits, the input in the fragment shader is always yellow.
Vertex shader:
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texcoord;
out vec4 io_color;
out vec2 io_uv;
out vec4 io_position2;
layout(std140, binding = 0) uniform UniformBlock_uTransform
{
mat4 uTransform;
};
layout(std140, binding = 1) uniform UniformBlock_uTransform2
{
mat4 uTransform2;
};
void main ()
{
io_uv = texcoord;
io_color = vec4(1,1,1,1); //Just to debug it
gl_Position = uTransform * position;
io_position2 = uTransform2 * position;
}
// Fragment shader:
in vec4 io_color;
layout(location = 0) out vec4 result;
void main ()
{
result = io_color;
}
Try adding output and input layout qualifiers to the values you pass from one shader to the other to ensure that they actually point to the same location:
VS:
layout (location = 0) out vec4 io_color;
FS:
layout (location = 0) in vec4 io_color;
I recommend always using that syntax to connect shader out- and inputs.
Also check that the color write mask is not disabling the blue channel.

What are the differences between glVertexAttribPointer and glVertexAttrib1f

In OpenGL ES 2.0, when I wanted to change the attribute named "a_degree" in the vertex shader, at first I used glVertexAttribPointer and glEnableVertexAttribArray with the proper parameters, but the behaviour is totally different when I use glVertexAttrib1f. Why?
here is my shaders code:
const char* CGraphic::VERTEX_SHADER_SOURCE =
"attribute vec4 a_position; \n"
"attribute vec2 a_texCoord; \n"
"attribute vec4 a_color; \n"
"attribute float a_degree; \n"
"varying lowp vec4 v_color; \n"
"varying vec2 v_texCoord; \n"
"void main() \n"
"{ \n"
" float radianS = a_degree* "
" (3.14159265/180.0); \n"
" float s = sin(radianS); \n"
" float c = cos(radianS); \n"
" mat4 mvpMatrix=mat4( \n"
" c,-s,0,0, "
" s,c,0,0, "
" 0,0,1,0, "
" 0,0,0,1); \n"
" v_color = a_color; \n"
" gl_Position = a_position*mvpMatrix; \n"
" v_texCoord = a_texCoord; \n"
"} \n";
const char* CGraphic::FRAGMENT_SHADER_SOURCE =
"precision mediump float; \n"
" \n"
"varying vec4 vColor; \n"
"varying vec2 v_texCoord; \n"
"uniform sampler2D s_texture; \n"
" \n"
"void main() \n"
"{ \n"
" gl_FragColor = texture2D( s_texture, v_texCoord );\n"
"} \n";
use with:
glEnableVertexAttribArray ( m_shaderData.rotateLoc );
glVertexAttribPointer ( m_shaderData.rotateLoc, 1, GL_FLOAT,
GL_FALSE, 0, &degree );
vs
glVertexAttrib1f(m_shaderData.rotateLoc,degree);
In fact glVertexAttrib1f works fine in this situation and my texture rotates correctly, but with glVertexAttribPointer just one point of the texture rotates, which isn't what I want.
glVertexAttrib allows you to specify a fixed value for the attribute.
In contrast, glVertexAttribPointer when enabled via glEnableVertexAttribArray allows you to specify a unique value for each vertex.
Read more here:
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glVertexAttrib.xml
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glVertexAttribPointer.xml
So, if you are drawing a triangle with multiple points, you would need to specify a separate degree for each point when using glVertexAttribPointer. Thus, degree would need to be a float[], while it looks like you're only specifying a single value right now as a float.
Most likely the values after degree in memory are zeros, which is why the other points are not rotating.
If you want the value to be the same, you CAN use glVertexAttrib. If you're never going to specify it per vertex, using a uniform value is likely better.
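To make the distinction concrete, here is a rough sketch of the two paths using the Android GLES20 Java bindings (the question's code is C++, but the entry points are the same; vertexCount and rotateLoc are assumed from the question's setup):
// Array path: one angle per vertex, so every vertex can rotate differently.
float[] degrees = new float[vertexCount];
java.util.Arrays.fill(degrees, 45.0f);
FloatBuffer degreeBuffer = ByteBuffer.allocateDirect(degrees.length * 4)
    .order(ByteOrder.nativeOrder()).asFloatBuffer();
degreeBuffer.put(degrees).position(0);
GLES20.glEnableVertexAttribArray(rotateLoc);
GLES20.glVertexAttribPointer(rotateLoc, 1, GLES20.GL_FLOAT, false, 0, degreeBuffer);

// Constant path: the same 45 degrees for every vertex, no array enabled.
GLES20.glDisableVertexAttribArray(rotateLoc);
GLES20.glVertexAttrib1f(rotateLoc, 45.0f);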

GPUImage and Shader Compile Errors

I'm implementing a custom filter by creating two strings, one as a vertex shader and the other as a fragment shader.
I'm at an early stage and was just trying to compile and run a program with custom shaders copied from the ones used in GPUImage.
Here the shaders:
NSString *const CustomShaderV = SHADER_STRING
(
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
attribute vec4 inputTextureCoordinate2;
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
textureCoordinate2 = inputTextureCoordinate2.xy;
}
);
NSString *const CustomShaderF = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform highp float bVal;
uniform highp float hVal;
uniform highp float sVal;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 alphaColor = texture2D(inputImageTexture2, textureCoordinate2);
lowp vec4 outputColor;
outputColor = textureColor;
gl_FragColor = outputColor;
}
);
However, whenever I initialize a filter with these shaders my app crashes with this log.
2013-11-24 17:58:40.243[865:60b] Shader compile log:
ERROR: 0:1: 'CustomShaderV' : syntax error syntax error
2013-11-24 17:58:40.245[865:60b] Failed to compile vertex shader
2013-11-24 17:58:40.248[865:60b] Shader compile log:
ERROR: 0:1: 'CustomShaderF' : syntax error syntax error
2013-11-24 17:58:40.249[865:60b] Failed to compile fragment shader
2013-11-24 17:58:40.250[865:60b] Program link log: ERROR: One or more attached shaders not successfully compiled
2013-11-24 17:58:40.252[865:60b] Fragment shader compile log: (null)
2013-11-24 17:58:40.253[865:60b] Vertex shader compile log: (null)
The shaders look correct to me since they are just a cut-and-paste of other working shaders included in the GPUImage project.
The call inside the code to the two shaders above is as following:
customFilter = [[GPUImageFilter alloc] initWithVertexShaderFromString:@"CustomShaderV" fragmentShaderFromString:@"CustomShaderF"];
My goal is to blend two videos that's why two textures are present in the shaders.
Based on the comment by the OP, the call was:
customFilter = [[GPUImageFilter alloc] initWithVertexShaderFromString:@"CustomShaderV" fragmentShaderFromString:@"CustomShaderF"];
The error was that the variable names were quoted and passed as string literals rather than as the NSString variables holding the shader source. Change it to:
[[GPUImageFilter alloc] initWithVertexShaderFromString:CustomShaderV fragmentShaderFromString:CustomShaderF];

OpenGL ES 2 squeeze distortion filter like photobooth app

I'm trying to code a squeeze effect like the one in the Photo Booth iOS or OS X app; I'm new to shaders, but I was able to integrate the GPUImage library and tweak some shaders. However, all I was able to obtain is a spherical distortion, and the final result is a bit different from what I would like to achieve.
here is some code I modded from #import "GPUImageBulgeDistortionFilter.h"
in particular I was using this code
NSString *const kGPUImageBulgeDistortionFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec2 center;
uniform highp float radius;
uniform highp float scale;
void main()
{
highp vec2 textureCoordinateToUse = textureCoordinate;
highp float dist = distance(center, textureCoordinate);
highp float PI = 3.14159265358979323846264;//stone added
textureCoordinateToUse -= center;
textureCoordinateToUse.x = 0.5+textureCoordinateToUse.x*cos(textureCoordinateToUse.x*PI/radius)*scale;
textureCoordinateToUse.y = 0.5 + textureCoordinateToUse.y*cos(textureCoordinateToUse.y*PI/radius)*scale;
gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse );
}
);
This code is using cos, sin, and PI, so it's definitely in the spherical range; any hint on making it more planar, with a small unstretched part in the middle, would be a great help!

OpenGL ES 2.0 GL_POINTS disappear

I am using GL_POINTS as sprites. When points reach the screen boundary they disappear; the problem is that my points are larger than 1 pixel, so they disappear as soon as half of the point crosses the screen bound. I am assuming there is some kind of culling turned on that removes points that are off-screen; the question is how to turn it off.
GLES20Renderer.programLight = GLES20.glCreateProgram();
int vertexShaderLight = GLES20Renderer.loadShader(GLES20.GL_VERTEX_SHADER, GLES20Renderer.vertexShaderCodeLight);
int fragmentShaderLight = GLES20Renderer.loadShader(GLES20.GL_FRAGMENT_SHADER, GLES20Renderer.fragmentShaderCodeLight);
GLES20.glAttachShader(GLES20Renderer.programLight, vertexShaderLight);
GLES20.glAttachShader(GLES20Renderer.programLight, fragmentShaderLight);
GLES20.glLinkProgram(GLES20Renderer.programLight);
uPLocationLight = GLES20.glGetUniformLocation(GLES20Renderer.programLight, "uP");
uVPositionLocationLight = GLES20.glGetUniformLocation(GLES20Renderer.programLight, "uVPosition");
GLES20.glUseProgram(GLES20Renderer.programLight);
GLES20.glUniform4f(uVPositionLocationLight, LightPosInEyeSpace[0], LightPosInEyeSpace[1], LightPosInEyeSpace[2], LightPosInEyeSpace[3]);
GLES20.glUniformMatrix4fv(uPLocationLight, 1, false, ProjectionMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);
private static final String vertexShaderCodeLight =
"uniform vec4 uVPosition; \n"
+ "uniform mat4 uP; \n"
+ "void main(){ \n"
+ " gl_PointSize = 15.0; \n"
+ " gl_Position = uP * uVPosition; \n"
+ "} \n";
private static final String fragmentShaderCodeLight =
"#ifdef GL_FRAGMENT_PRECISION_HIGH \n"
+ "precision highp float; \n"
+ "#else \n"
+ "precision mediump float; \n"
+ "#endif \n"
+ "void main(){ \n"
+ " gl_FragColor = vec4(1.0,1.0,1.0,1.0); \n"
+ "} \n";
GL_POINTS are clipped at their center; this is just a limitation of GL_POINTS. If you can't live with this, just use regular quads (assuming performance can deal with it).
Increasing viewport size helps, or you can render it to texture and zoom :)
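For the quad route, one option is to expand each point into a small triangle strip on the CPU; a rough sketch in the same Java/GLES20 style as the question, where x, y, halfW, halfH and positionLoc are assumed values for the sprite's center, its half extents in clip space, and the position attribute of a plain textured-quad shader:
// A centered quad instead of a point: clipping now happens per triangle
// edge, so the sprite stays partially visible while it leaves the screen.
float[] quad = {
    x - halfW, y - halfH,   // bottom-left
    x + halfW, y - halfH,   // bottom-right
    x - halfW, y + halfH,   // top-left
    x + halfW, y + halfH    // top-right
};
FloatBuffer quadBuffer = ByteBuffer.allocateDirect(quad.length * 4)
    .order(ByteOrder.nativeOrder()).asFloatBuffer();
quadBuffer.put(quad).position(0);
GLES20.glEnableVertexAttribArray(positionLoc);
GLES20.glVertexAttribPointer(positionLoc, 2, GLES20.GL_FLOAT, false, 0, quadBuffer);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);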