Normalising vertex and normal coordinates OpenGL ES 2.0 - Objective-C

I have a model created in Blender and exported to an .obj file. I have written a parser that reads in the coordinates of the vertices, texture coordinates, and normals. I have been dividing all the coordinates by a constant applicable to the program to reduce the size of the model so that it fits the screen (this is a temporary measure). This works fine except for the lighting, which doesn't work: I'm left with a black 3D object when it should be coloured. After researching it on the web, I think this could be because the normals are no longer of length one? If this is true, how can I scale my coordinates so that they fit the screen and still get the lighting to work?
Vertex shader:
//
// Created by Jake Cunningham on 13/10/2012.
// Copyright (c) 2012 Jake Cunningham. All rights reserved.
//
attribute vec4 position;
attribute vec3 normal;
varying lowp vec4 colorVarying;
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
attribute vec2 TextCo;
varying vec2 textCoOut;
void main()
{
vec3 eyeNormal = normalize(normalMatrix * normal);
vec3 lightPosition = vec3(0.0, 0.0, 1.0);
vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);
float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));
colorVarying = diffuseColor * nDotVP;
gl_Position = modelViewProjectionMatrix * position;
textCoOut = TextCo;
}
Fragment Shader:
// Created by Jake Cunningham on 13/10/2012.
// Copyright (c) 2012 Jake Cunningham. All rights reserved.
//
varying lowp vec4 colorVarying;
varying lowp vec2 textCoOut;
uniform sampler2D texture;
void main()
{
gl_FragColor = colorVarying * texture2D(texture, textCoOut);
}
Code from the view controller:
glEnable(GL_DEPTH_TEST);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, loader.currentCountOfVerticies * sizeof(GLfloat) * 3, arrayOfVerticies, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glGenVertexArraysOES(1, &_normalArray);
glBindVertexArrayOES(_normalArray);
glGenBuffers(1, &_normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _normalBuffer);
glBufferData(GLKVertexAttribNormal, loader.currentCountOfNormals * sizeof(GLfloat) * 3,loader.arrayOfNormals , GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glGenVertexArraysOES(1, &_textureArray);
glBindVertexArrayOES(_textureArray);
glGenBuffers(1, &_textureBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _textureBuffer);
glBufferData(GL_ARRAY_BUFFER, loader.currentCountOfTextureCoordinates * sizeof(GLfloat) * 2, loader.arrayOftextureCoOrdinates, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 8, BUFFER_OFFSET(0));
glBindVertexArrayOES(0);

If you are using shaders, you can apply the normalize() operation to your vertices and normals within your GLSL code.
You could also have a look at the obj2opengl script, which scales, centers, and normalizes your model, converting OBJ files into header files ready for iOS use. I've also extended that script into mtl2opengl to add support for MTL files and make it a bit more lightweight (with an Xcode example, too).
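The scale-and-center step those scripts perform can be sketched in plain C. A minimal version, assuming the vertices are a flat x,y,z array (the function name and parameters are illustrative, not the scripts' actual API):

```c
#include <math.h>
#include <stddef.h>

/* Center the model at the origin and scale it uniformly so its largest
 * bounding-box extent becomes `target_size`. Uniform scaling preserves
 * normal directions, so the normals can be left untouched. */
static void fit_model(float *verts, size_t count, float target_size) {
    float min[3] = { verts[0], verts[1], verts[2] };
    float max[3] = { verts[0], verts[1], verts[2] };
    for (size_t i = 0; i < count; i++) {
        for (int a = 0; a < 3; a++) {
            float v = verts[3*i + a];
            if (v < min[a]) min[a] = v;
            if (v > max[a]) max[a] = v;
        }
    }
    float extent = 0.0f;
    float center[3];
    for (int a = 0; a < 3; a++) {
        center[a] = 0.5f * (min[a] + max[a]);
        float e = max[a] - min[a];
        if (e > extent) extent = e;
    }
    float s = (extent > 0.0f) ? target_size / extent : 1.0f;
    for (size_t i = 0; i < count; i++)
        for (int a = 0; a < 3; a++)
            verts[3*i + a] = (verts[3*i + a] - center[a]) * s;
}
```

Because the scale is uniform, the normals keep their directions and only their lengths change, which normalize() in the shader already handles.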

Related

shader value conversion error when passing value between vertex and fragment shader

I have the following fragment and vertex shader.
Vertex:
#version 450
layout(location = 0) in vec2 Position;
layout(location = 1) in vec4 Color;
layout(location = 0) out vec2 fPosition;
void main()
{
gl_Position = vec4(Position, 0, 1);
fPosition = Position;
}
Fragment:
#version 450
layout(location = 0) in vec2 fPosition;
layout(location = 0) out vec4 fColor;
void main() {
vec4 colors[4] = vec4[](
vec4(1.0, 0.0, 0.0, 1.0),
vec4(0.0, 1.0, 0.0, 1.0),
vec4(0.0, 0.0, 1.0, 1.0),
vec4(0.0, 0.0, 0.0, 1.0)
);
fColor = vec4(1.0);
for(int row = 0; row < 2; row++) {
for(int col = 0; col < 2; col++) {
float dist = distance(fPosition, vec2(-0.50 + col, 0.50 - row));
float delta = fwidth(dist);
float alpha = smoothstep(0.45-delta, 0.45, dist);
fColor = mix(colors[row*2+col], fColor, alpha);
}
}
}
But when compiling this I am getting the following error:
cannot convert from ' gl_Position 4-component vector of float Position' to 'layout( location=0) smooth out highp 2-component vector of float'
And I have no clue how to fix it (this is my first time doing shader programming).
If additional information is needed please let me know.
1.
You do not need to specify layouts when passing variables between the vertex shader and the fragment shader. Remove the layout(location = 0) qualifier from the fPosition variable in both the vertex and the fragment shader.
2.
You only need to specify a layout if you are passing variables (your position buffers) into the vertex shader through buffers. To add on: variables like positions, normals, and texture coordinates must always pass through the vertex shader first and then on to the fragment shader.
3.
When exporting your final colour (fColor in your case) from the fragment shader, you do not need to specify a location; just declare the variable as out vec4 fColor; and OpenGL detects it automatically.
4.
The error you actually got was telling you that you were assigning a vec4 variable (fColor) to location "0", where your vec2 variable (fPosition) was already stored. Note: in your vertex shader, attribute (location) "0" holds the vertices you loaded, but you then tried to assign a vec4 to the same location in the fragment shader. OpenGL does not automatically overwrite data like that.

Generating Vertices in OpenGL ES 2.0 Vertex Shader

I am trying to draw a triangle with three vertices a, b, c. Typically, I would have these three vertex coordinates in an array and pass them as an attribute to my vertex shader.
But is it possible to generate these vertex coordinates within the vertex shader itself rather than passing them as an attribute (i.e. one coordinate value per vertex position)? If yes, is there any sample program to support it?
Thanks.
You can create variables in the vertex shader and pass them to the fragment shader. An example:
Vertex shader
precision highp float;
uniform float u_time;
uniform float u_textureSize;
uniform mat4 u_mvpMatrix;
attribute vec4 a_position;
// This will be passed into the fragment shader.
varying vec2 v_textureCoordinate0;
void main()
{
// Create texture coordinate
v_textureCoordinate0 = a_position.xy / u_textureSize;
gl_Position = u_mvpMatrix * a_position;
}
And the fragment shader:
precision mediump float;
uniform float u_time;
uniform float u_pixel_amount;
uniform sampler2D u_texture0;
// Interpolated texture coordinate per fragment.
varying vec2 v_textureCoordinate0;
void main(void)
{
vec2 size = vec2( 1.0 / u_pixel_amount, 1.0 / u_pixel_amount);
vec2 uv = v_textureCoordinate0 - mod(v_textureCoordinate0,size);
gl_FragColor = texture2D( u_texture0, uv );
gl_FragColor.a=1.0;
}
As you can see, the 2D vector named v_textureCoordinate0 is created in the vertex shader and its interpolated value is used in the fragment shader.
I hope it helps.

texture coordinates in OpenGL ES 2

I have a PNG file with different sprites. In OpenGL ES 1, I selected the picture with:
// the png dimensions are 512x512
gl.glMatrixMode(GL10.GL_TEXTURE);
// x and y are the coordinates of the selected drawing
gl.glTranslatef(x/512f, y/512f, 0);
// w and h are the width and height of the selected drawing
gl.glScalef(w/512f, h/512f, 0);
I have no idea how to do this in OpenGL ES 2. I read this tutorial:
http://www.learnopengles.com/android-lesson-four-introducing-basic-texturing/
It is not difficult, but you can only change the values of w and h (the equivalent of
gl.glScalef(w/512f, h/512f, 0);
).
Is there any other tutorial or solution?
So the tutorial you've read is what you need. Read the previous tutorials from that website. The main difference in GLES 2 from GLES 1 is that all drawing happens inside shaders (fragment and vertex). Here is the texture-binding part from my code and a fragment shader source.
GLuint textureId;
// Generate a texture object
glGenTextures ( 1, &textureId );
// Bind the texture object
glBindTexture ( GL_TEXTURE_2D, textureId );
// Load the texture
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, where_you_store_unpacked_texture_data );
// Set the filtering mode
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
// Bind the texture
glActiveTexture ( GL_TEXTURE0 );
glBindTexture ( GL_TEXTURE_2D, textureId );
Then, after you have bound a texture, you can pass its texture unit into the fragment shader.
Fragment shader is something like this:
const char* pszFragShader_text = "\
precision mediump float;\
\
varying vec3 v_texCoord_text;\
uniform sampler2D s_texture_text;\
void main (void)\
{\
gl_FragColor = texture2D( s_texture_text, v_texCoord_text.xy );\
}";
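The texture-matrix transform from the GLES 1 snippet (translate, then scale) can be reproduced in GLES 2 by transforming the UVs yourself, on the CPU or in the vertex shader. A C sketch of the equivalent mapping for the question's 512x512 atlas (struct and function names are illustrative):

```c
/* Map a sprite-local coordinate (0..1) into atlas coordinates, matching
 * glTranslatef(x/512, y/512, 0) followed by glScalef(w/512, h/512, 0)
 * on the texture matrix: uv' = uv * scale + offset. */
typedef struct { float u, v; } UV;

static UV sprite_uv(UV local, float x, float y, float w, float h) {
    const float atlas = 512.0f; /* atlas dimensions from the question */
    UV out;
    out.u = local.u * (w / atlas) + (x / atlas);
    out.v = local.v * (h / atlas) + (y / atlas);
    return out;
}
```

Baking this into the vertex data (or a vec4 uniform holding offset and scale) selects a sprite without any texture matrix.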

opengl es 2.0 - optimizing fragment shader

I am developing a game for Android/iOS and need to optimize the rendering.
The game enables the user to deform terrain, so I am using a grayscale image for the terrain (a value of 1 means solid ground, and 0 means no ground) and applying a fragment shader to it (there is also a background image). This works very well at a constant 60 fps, but the problem is that I also need to render a border on the terrain's edge. To do so, I blur the edges when deforming, and in the fragment shader I draw the border based on the terrain's density/transparency (the border is a 1x64 texture).
The problem is that when rendering the border I need to do a dependent texture read, which drops the frame rate to 20. Is there any way I could optimize this? If I replaced the border texture with a uniform float array, would it help, or would it be the same as reading from a 2D texture?
The shader code:
varying mediump vec2 frag_background_texcoord;
varying mediump vec2 frag_density_texcoord;
varying mediump vec2 frag_terrain_texcoord;
uniform sampler2D density_texture;
uniform sampler2D terrain_texture;
uniform sampler2D mix_texture;
uniform sampler2D background_texture;
void main()
{
lowp vec4 background_color = texture2D(background_texture, frag_background_texcoord);
lowp vec4 terrain_color = texture2D(terrain_texture, frag_terrain_texcoord);
highp float density = texture2D(density_texture, frag_density_texcoord).a;
if(density > 0.5)
{
lowp vec4 mix_color = texture2D(mix_texture, vec2(density, 1.0)); // <- dependent texture read (FPS drops to 20); would replacing this with a uniform float array help (I would also need to calculate the index into the array)?
gl_FragColor = mix(terrain_color, mix_color, mix_color.a);
} else
{
gl_FragColor = background_color;
}
}
Figured it out. The way I fixed it was to remove all branching. It runs at ~60 fps now.
The optimized code:
varying mediump vec2 frag_background_texcoord;
varying mediump vec2 frag_density_texcoord;
varying mediump vec2 frag_terrain_texcoord;
uniform sampler2D density_texture;
uniform sampler2D terrain_texture;
uniform sampler2D mix_texture;
uniform sampler2D background_texture;
void main()
{
lowp vec4 background_color = texture2D(background_texture, frag_background_texcoord);
lowp vec4 terrain_color = texture2D(terrain_texture, frag_terrain_texcoord);
lowp float density = texture2D(density_texture, frag_density_texcoord).a;
lowp vec4 mix_color = texture2D(mix_texture, vec2(density, 0.0));
gl_FragColor = mix(mix(background_color, terrain_color, mix_color.r), mix_color, mix_color.a);
}
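The branch removal can be checked numerically. With GLSL-style mix() and step(), the original "use the background when density <= 0.5" conditional collapses into pure arithmetic; a scalar C sketch of that idea (one common branch-free formulation, not the answerer's exact shader; names are illustrative):

```c
/* GLSL mix(a, b, t) = a*(1-t) + b*t; step(edge, x) = x < edge ? 0 : 1.
 * The if/else in the first shader can then be written branch-free as
 * mix(background, blended, step(0.5, density)). On a GPU, step() is a
 * hardware select rather than a real branch. */
static float mixf(float a, float b, float t) { return a * (1.0f - t) + b * t; }
static float stepf(float edge, float x) { return x < edge ? 0.0f : 1.0f; }

static float shade(float background, float terrain, float mix_color,
                   float mix_alpha, float density) {
    float blended = mixf(terrain, mix_color, mix_alpha);
    return mixf(background, blended, stepf(0.5f, density));
}
```

Above the density threshold the result is the terrain/border blend; below it, the background passes through unchanged, exactly as the branchy version behaved.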

Shader not showing up properly

I've been playing with shaders in a tool called ShaderToy, trying to create a top-down-view water effect for a 2D game, based on the shader code from Jonas Wagner. You can easily copy/paste this code into ShaderToy and see the effect.
The shader looks cool in ShaderToy, but when I try to replicate it in my code, something goes wrong; see the image below:
http://ivan.org.es/temp/shader_problems.png
My vertex shader (I don't know which one ShaderToy uses):
uniform mat4 Projection;
attribute vec2 Position;
void main(){
gl_Position = Projection*vec4(Position, 0.0, 1.0);
}
The Fragment shader:
precision lowp float;
vec3 sunDirection = normalize(vec3(0.0, -1.0, 0.0));
vec3 sunColor = vec3(1.0, 0.8, 0.7);
vec3 eye = vec3(0.0, 1.0, 0.0);
vec4 getNoise(vec2 uv){
vec2 uv0 = (uv/103.0)+vec2(iGlobalTime/17.0, iGlobalTime/29.0);
vec2 uv1 = uv/107.0-vec2(iGlobalTime/-19.0, iGlobalTime/31.0);
vec2 uv2 = uv/vec2(897.0, 983.0)+vec2(iGlobalTime/101.0, iGlobalTime/97.0);
vec2 uv3 = uv/vec2(991.0, 877.0)-vec2(iGlobalTime/109.0, iGlobalTime/-113.0);
vec4 noise = (texture2D(iChannel0, uv0)) +
(texture2D(iChannel0, uv1)) +
(texture2D(iChannel0, uv2)) +
(texture2D(iChannel0, uv3));
return noise*0.5-1.0;
}
void sunLight(const vec3 surfaceNormal, const vec3 eyeDirection, float shiny, float spec, float diffuse, inout vec3 diffuseColor, inout vec3 specularColor){
vec3 reflection = normalize(reflect(-sunDirection, surfaceNormal));
float direction = max(0.0, dot(eyeDirection, reflection));
specularColor += pow(direction, shiny)*sunColor*spec;
diffuseColor += max(dot(sunDirection, surfaceNormal),0.0)*sunColor*diffuse;
}
void main(){
vec2 uv = gl_FragCoord.xy / iResolution.xy;
uv *= 100.0;
vec4 noise = getNoise(uv);
vec3 surfaceNormal = normalize(noise.xzy*vec3(2.0, 1.0, 2.0));
vec3 diffuse = vec3(0.3);
vec3 specular = vec3(0.0);
vec3 worldToEye = vec3(0.0, 1.0, 0.0);//eye-worldPosition;
vec3 eyeDirection = normalize(worldToEye);
sunLight(surfaceNormal, eyeDirection, 100.0, 1.5, 0.5, diffuse, specular);
gl_FragColor = vec4((diffuse+specular+vec3(0.1))*vec3(0.3, 0.5, 0.9), 1.0);
}
Please notice that the fragment shader code is exactly the same in ShaderToy and in my engine. It seems to me that the uv coords from gl_FragCoord are somehow wrong, or there is a precision problem, because after a while the effect gets worse and worse. I'm using an orthographic projection, but that shouldn't have much to do with this, since I'm getting the uv coordinates directly from the screen.
Any insights on what's going on?
It turns out that I was loading my textures with
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The shader noise function was expecting GL_REPEAT instead.