I am a newbie in the world of OpenGL ES 2.0. I am trying to implement specular mapping with OpenGL ES 2.0 on the iOS platform. As far as I understand, in specular mapping we read the value for the specular component of the light from a specular map texture. What I am doing in the vertex shader is as follows:
vec3 N = NormalMatrix * Normal;
vec3 L = normalize(LightPosition);
vec3 E = normalize(EyePosition);
vec3 H = normalize(L + E);
vec4 Specular = texture2D(sampler_spec, TextureCoordIn).rgba;
float df = max(0.0, dot(N, L));
float sf = max(0.0, dot(N, H));
sf = pow(sf, Specular.a);
vec3 color = AmbientMaterial + df * DiffuseMaterial + sf * Specular.rgb * SpecularMaterial;
DestinationColor = vec4(color, 1);
But I can't see any specular effect in my game. I don't know where I am going wrong. Please give your valuable suggestions.
Well, your computations look quite reasonable. The problem is that you're doing per-vertex lighting. This means the lighting is computed per vertex (as you're doing it in the vertex shader) and interpolated across the triangles. Therefore your lighting quality depends heavily on the tessellation quality of your mesh.
If you have rather large triangles, high-frequency effects such as specular highlights won't really show, especially when using textures. Keep in mind that the point of using textures is to provide surface detail at a sub-triangle level, but at the moment you're reading the texture per vertex, so the specular map might as well be a vertex attribute.
So the first step would be to move the lighting computations into the fragment shader. In the vertex shader you just compute N, L and E (don't forget to normalize) and put them out as varyings. In the fragment shader you do the rest of the computation, based on the interpolated N, L and E (don't forget to renormalize again).
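To make that concrete, here is a rough sketch of the split, reusing the names from your question; the attribute/uniform declarations are my assumptions about how the rest of your setup looks:
// Vertex shader: only set up the interpolants.
attribute vec3 Position;
attribute vec3 Normal;
attribute vec2 TextureCoordIn;
uniform mat4 ModelViewProjection;   // assumed combined matrix
uniform mat3 NormalMatrix;
uniform vec3 LightPosition;
uniform vec3 EyePosition;
varying vec3 vN;
varying vec3 vL;
varying vec3 vE;
varying vec2 vTexCoord;
void main()
{
    vN = normalize(NormalMatrix * Normal);
    vL = normalize(LightPosition);
    vE = normalize(EyePosition);
    vTexCoord = TextureCoordIn;
    gl_Position = ModelViewProjection * vec4(Position, 1.0);
}
// Fragment shader: do the actual lighting per pixel.
precision mediump float;
uniform sampler2D sampler_spec;
uniform vec3 AmbientMaterial;
uniform vec3 DiffuseMaterial;
uniform vec3 SpecularMaterial;
varying vec3 vN;
varying vec3 vL;
varying vec3 vE;
varying vec2 vTexCoord;
void main()
{
    vec3 N = normalize(vN);   // renormalize after interpolation
    vec3 L = normalize(vL);
    vec3 E = normalize(vE);
    vec3 H = normalize(L + E);
    vec4 Specular = texture2D(sampler_spec, vTexCoord);
    float df = max(0.0, dot(N, L));
    float sf = pow(max(0.0, dot(N, H)), Specular.a);   // exponent from the map's alpha, as in your code
    vec3 color = AmbientMaterial + df * DiffuseMaterial + sf * Specular.rgb * SpecularMaterial;
    gl_FragColor = vec4(color, 1.0);
}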
If these concepts of varyings and per-fragment lighting feel a bit far-fetched at the moment, you should delve a little deeper into the basics of shaders and look for tutorials on simple per-fragment lighting. Those can then easily be adapted for things like specular mapping, bump mapping, and so on.
First, before I go on, I have read through: SceneKit painting on texture with texture coordinates which seems to suggest I'm on the right track.
I have a complex SCNGeometry representing a hexasphere. It's rendering really well, and with a full 60fps on all of my test devices.
At the moment, all of the hexagons are being rendered with a single material, because, as I understand it, every SCNMaterial I add to my geometry adds another draw call, which I can't afford.
Ultimately, I want to be able to color each of the almost 10,000 hexagons individually, so adding another material for each one is not going to work.
I had been planning to limit the color range to (say) 100 colors and then move hexagons between different geometries, each with its own colored material, but that won't work because an SCNGeometry is built from an immutable set of vertices.
So, my current thought/plan is to use a shader modifier, as suggested by @rickster in the above-mentioned question, to somehow modify the color of individual hexagons (or sets of 4 triangles).
The thing is, I sort of understand the Apple doco referred to, but I don't understand how to provide the shader with what I think must essentially be an array of colour information, somehow indexed so that the shader knows which triangles to give what colors.
The code I have now that creates the geometry reads as follows:
NSData *indiceData = [NSData dataWithBytes:oneMeshIndices length:sizeof(UInt32) * indiceIndex];
SCNGeometryElement *oneMeshElement =
    [SCNGeometryElement geometryElementWithData:indiceData
                                  primitiveType:SCNGeometryPrimitiveTypeTriangles
                                 primitiveCount:indiceIndex / 3
                                  bytesPerIndex:sizeof(UInt32)];
[oneMeshElements addObject:oneMeshElement];

SCNGeometrySource *oneMeshNormalSource =
    [SCNGeometrySource geometrySourceWithNormals:oneMeshNormals count:normalIndex];
SCNGeometrySource *oneMeshVerticeSource =
    [SCNGeometrySource geometrySourceWithVertices:oneMeshVertices count:vertexIndex];

SCNGeometry *oneMeshGeom =
    [SCNGeometry geometryWithSources:[NSArray arrayWithObjects:oneMeshVerticeSource, oneMeshNormalSource, nil]
                            elements:oneMeshElements];

SCNMaterial *mat1 = [SCNMaterial material];
mat1.diffuse.contents = [UIColor greenColor];
oneMeshGeom.materials = @[mat1];

SCNNode *node = [SCNNode nodeWithGeometry:oneMeshGeom];
If someone can shed some light on how to provide the shader with a way to color each triangle indexed by the indices in indiceData, that would be fantastic.
EDIT
I've tried looking at providing the shader with a texture as a container for color information that would be indexed by the VertexID; however, it seems that SceneKit doesn't make the VertexID available. My thought was to provide this texture (actually just an array of bytes, 1 per hexagon on the hexasphere) via the SCNMaterialProperty class and then, in the shader, pull out the appropriate byte based on the vertex number. That byte would be used to index an array of fixed colors, and the resultant color for each vertex would then give the desired result.
Without a VertexID, this idea won't work, unless there is some other, similarly useful piece of data...
EDIT 2
Perhaps I am stubborn. I've been trying to get this to work, and as an experiment I created an image that is basically a striped rainbow and wrote the following shader, thinking it would colour my sphere with the rainbow.
It doesn't work. The entire sphere is drawn using the colour in the top left corner of the image.
My shaderModifier code is:
#pragma arguments
sampler2D colorMap;
uniform sampler2D colorMap;
#pragma body
vec4 color = texture2D(colorMap, _surface.diffuseTexcoord);
_surface.diffuse.rgba = color;
and I apply this using the code:
SCNMaterial *mat1 = [SCNMaterial material];
mat1.locksAmbientWithDiffuse = YES;
mat1.doubleSided = YES;
mat1.shaderModifiers = @{SCNShaderModifierEntryPointSurface :
    @"#pragma arguments\nsampler2D colorMap;\nuniform sampler2D colorMap;\n#pragma body\nvec4 color = texture2D(colorMap, _surface.diffuseTexcoord);\n_surface.diffuse.rgba = color;"};
SCNMaterialProperty *colorMap = [SCNMaterialProperty materialPropertyWithContents:[UIImage imageNamed:@"rainbow.png"]];
[mat1 setValue:colorMap forKeyPath:@"colorMap"];
I had thought that _surface.diffuseTexcoord would be appropriate, but I'm beginning to think I need to map it to a coordinate in the image by knowing the dimensions of the image and interpolating somehow.
But if this is the case, what units are _surface.diffuseTexcoord in? How do I know the min/max range of this so that I can map it to the image?
Once again, I'm hoping someone can steer me in the right direction if these attempts are wrong.
EDIT 3
OK, so I know I'm on the right track now. I've realised that by using _surface.normal instead of _surface.diffuseTexcoord I can treat it as a latitude/longitude on my sphere and map it to an x,y in the image, and I now see the hexagons being colored based on the color in the colorMap. However, no matter what I do (so far), the normal angles seem to be fixed relative to the camera position, so when I move the camera to look at a different part of the sphere, the colorMap doesn't rotate with it. (In a SceneKit surface shader modifier, _surface.normal is expressed in view space, which would explain why it follows the camera.)
Here is the latest shader code:
#pragma arguments
sampler2D colorMap;
uniform sampler2D colorMap;
#pragma body
float x = ((_surface.normal.x * 57.29577951) + 180.0) / 360.0;
float y = 1.0 - ((_surface.normal.y * 57.29577951) + 90.0) / 180.0;
vec4 color = texture2D(colorMap, vec2(x, y));
_output.color.rgba = color;
ANSWER
So I solved the problem. It turned out that there was no need for a shader to achieve my desired results.
The answer was to use a mappingChannel to provide the geometry with a set of texture coordinates for each vertex. These texture coordinates are used to pull color data from the appropriate texture (it all depends on how you set up your material).
So, whilst I did manage to get a shader working, there were performance issues on older devices, and using a mappingChannel was much much better, working at 60fps on all devices now.
I did find, though, that although the documentation says a mapping channel is a series of CGPoint objects, that wouldn't work on 64-bit devices, because CGPoint uses doubles rather than floats there.
I needed to define my own struct:
typedef struct {
    float x;
    float y;
} MyPoint;

MyPoint oneMeshTextureCoordinates[vertexCount];
and then, having built up an array of these, one for each vertex, I created the mappingChannel source as follows:
SCNGeometrySource *textureMappingSource =
    [SCNGeometrySource geometrySourceWithData:[NSData dataWithBytes:oneMeshTextureCoordinates
                                                              length:sizeof(MyPoint) * vertexCount]
                                     semantic:SCNGeometrySourceSemanticTexcoord
                                  vectorCount:vertexCount
                              floatComponents:YES
                          componentsPerVector:2
                            bytesPerComponent:sizeof(float)
                                   dataOffset:0
                                   dataStride:sizeof(MyPoint)];
EDIT:
In response to a request, here is a project that demonstrates how I use this. https://github.com/pkclsoft/HexasphereDemo
I am trying to draw a texture on a quad made of two triangles. My objective is to draw the texture on a single triangle only (within the mesh); the other triangle is to be left empty.
How can I achieve this? Any sample program or pseudocode would be a lot of help.
Follow the steps below.
1. Check whether the vertices are correct using the fragment shader:
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // the rectangle must be red
2. If step 1 is okay, check the UV values.
3. If step 1 is not okay, use these vertices and UVs:
vertices = -1.0,-1.0, 1.0,-1.0, -1.0,1.0, 1.0,1.0
UVs = 0.0,0.0, 1.0,0.0, 0.0,1.0, 1.0,1.0
That's it. You are all set for the next step.
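If it helps, a minimal shader pair for those checks could look like this (the attribute/uniform names are placeholders, not from any particular framework):
// Vertex shader: pass the quad positions and UVs through.
attribute vec2 a_position;   // the 4 vertices from step 3 (drawn as a triangle strip)
attribute vec2 a_uv;         // the matching UVs
varying vec2 v_uv;
void main()
{
    v_uv = a_uv;
    gl_Position = vec4(a_position, 0.0, 1.0);
}
// Fragment shader.
precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_uv;
void main()
{
    // Step 1: uncomment this line to verify the geometry (a solid red quad).
    // gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    // Steps 2/3: sample the texture with the UVs.
    gl_FragColor = texture2D(u_texture, v_uv);
}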
Hi all, I am new to OpenGL ES 2.0. I am confused about gl_Position and varying variables, since both are outputs of the vertex shader. A varying variable is passed to the fragment shader, but what about gl_Position? Does gl_Position influence the varying variables in the fragment shader?
Also, what does gl_Position = vec4(-1); mean?
Please help me understand these things better.
gl_Position is a special variable. It determines where the primitive ends up on screen, and therefore which fragments the fragment shader will be run for. All other varyings are simply interpolated across the primitive.
gl_Position is not available in the fragment shader. However, there is a gl_FragCoord variable, which is derived from gl_Position: its x/y values are the fragment's window coordinates in pixels, z is the depth from 0 (near plane) to 1 (far plane), and w is 1/gl_Position.w (feel free to look up the exact definition in the OpenGL ES 2.0 spec). As for gl_Position = vec4(-1);: that is just shorthand for vec4(-1.0, -1.0, -1.0, -1.0), i.e. all four clip-space components set to -1, which places the vertex outside the clip volume (clipping requires -w <= x, y, z <= w, which cannot hold for w = -1).
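A small illustration (the names here are just placeholders):
// Vertex shader: gl_Position is the mandatory output that places the primitive.
attribute vec4 a_position;
uniform mat4 u_mvp;
varying vec3 v_color;   // an ordinary varying, interpolated across the primitive
void main()
{
    v_color = a_position.xyz * 0.5 + 0.5;
    gl_Position = u_mvp * a_position;
}
// Fragment shader: gl_Position itself is not visible here,
// but gl_FragCoord (derived from it) is.
precision mediump float;
varying vec3 v_color;
void main()
{
    // gl_FragCoord.z is the fragment's depth in the 0..1 range
    gl_FragColor = vec4(v_color * (1.0 - gl_FragCoord.z), 1.0);
}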
Is there a way to include aspect ratio correction without using matrices in OpenGL ES?
I am writing a simple shader to rotate a texture.
attribute vec4 a_position;
uniform float cosA;
uniform float sinA;

void main()
{
    mat2 rotZ = mat2(cosA, sinA, -sinA, cosA);   // rotation about the Z axis
    vec4 a_pos = a_position;
    a_pos.xy = a_position.xy * rotZ;
    gl_Position = a_pos;
}
But the problem is that the image is getting skewed when rotating.
In desktop OpenGL, we would use something like gluPerspective(fov, (float)windowWidth/(float)windowHeight, zNear, zFar);
How do I do the same with shaders?
Note: I'd prefer not to use a matrix.
Include the aspect ratio fix in the geometry of the rendered object? I did that in my font rendering tool: the position of the vertices in each rect is corrected by the aspect ratio of the screen. Yes, I know it is better and easier to use a matrix for this, but I didn't know that when I was writing the tool, and it works fine :)
You can manually translate the code of gluPerspective into your shader:
http://www.opengl.org/wiki/GluPerspective_code
But it is not efficient to calculate this matrix for each vertex, so you can instead calculate it once for your device's screen and pass it in.
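For reference, a literal GLSL translation of that matrix could look something like this (the u_* uniforms and a_position are assumptions; a_position is assumed to already be in eye space, and in practice you would compute the matrix once on the CPU and pass it in as a uniform):
attribute vec4 a_position;   // assumed to be in eye space already
uniform float u_fovy;        // field of view in radians
uniform float u_aspect;      // width / height
uniform float u_zNear;
uniform float u_zFar;

void main()
{
    float f = 1.0 / tan(u_fovy * 0.5);
    // column-major, equivalent to gluPerspective(fovy, aspect, zNear, zFar)
    mat4 proj = mat4(
        f / u_aspect, 0.0, 0.0,  0.0,
        0.0,          f,   0.0,  0.0,
        0.0,          0.0, (u_zFar + u_zNear) / (u_zNear - u_zFar), -1.0,
        0.0,          0.0, (2.0 * u_zFar * u_zNear) / (u_zNear - u_zFar), 0.0);
    gl_Position = proj * a_position;
}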
Just change the matrix multiplication order for rotation:
a_pos.xy = rotZ * a_position.xy;
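And putting the aspect-ratio advice directly into the vertex shader, without a full projection matrix, a minimal sketch could be (u_aspect, u_cosA and u_sinA are hypothetical uniforms set by the application):
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
uniform float u_cosA;
uniform float u_sinA;
uniform float u_aspect;   // screen width / height

void main()
{
    mat2 rotZ = mat2(u_cosA, u_sinA, -u_sinA, u_cosA);
    vec2 p = a_position.xy;
    p.x *= u_aspect;          // convert to a space where x and y units match
    p = rotZ * p;
    p.x /= u_aspect;          // back to normalized device coordinates
    gl_Position = vec4(p, a_position.zw);
    v_texCoord = a_texCoord;
}
The idea is to rotate in a space where x and y use the same units and then map back to normalized device coordinates, which avoids the skew.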
The Orange Book, section 16.2, implements diffuse lighting as:
void main()
{
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);
    vec4 V = gl_ModelViewMatrix * gl_Vertex;
    vec3 L = normalize(lightPos - V.xyz);
    gl_FrontColor = gl_Color * vec4(max(0.0, dot(N, L)));
    gl_Position = ftransform();   // plus the usual position output
}
However, when I run this, the lighting changes when I move my camera.
On the other hand, when I change
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
to
vec3 N = normalize(gl_Normal);
I get diffuse lighting that works like the fixed pipeline.
What is this gl_NormalMatrix, what did removing it do, and is this a bug in the Orange Book, or am I setting up my OpenGL code improperly?
[For completeness, the fragment shader just copies the color]
OK, I hope there's nothing wrong with answering your question after over half a year? :)
So there are two things to discuss here:
a) What should the shader look like
You SHOULD transform your normals by the modelview matrix - that's a given. Consider what would happen if you don't - your modelview matrix can contain some kind of rotation. Your cube would be rotated, but the normals would still point in the old direction! This is clearly wrong.
So: When you transform your vertices by the modelview matrix, you should also transform the normals. Your normals are vec3, not vec4, and you're not interested in translations (normals only contain direction), so you can just multiply your normal by mat3(gl_ModelViewMatrix), which is the upper-left 3×3 submatrix.
Then: This is ALMOST correct, but still a bit wrong - the reasons are well-described on Lighthouse 3D - go have a read. Long story short, instead of mat3(gl_ModelViewMatrix), you have to multiply by an inverse transpose of that.
And OpenGL 2 is very helpful and precalculates this for you as gl_NormalMatrix. Hence, the correct code is:
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
b) But it's different from fixed pipeline, why?
The first thing which comes to my mind is that "something's wrong with your usage of fixed pipeline".
I'm not really keen on the FP (long live shaders!), but as far as I can remember, when you specify your light via glLightfv(GL_LIGHT0, GL_POSITION, ...), the position is transformed by the modelview matrix that is current at that moment. It was easy (at least for me :)) to make the mistake of specifying the light position (or light direction for directional lights) in the wrong coordinate system.
I'm not sure if I remember correctly how that worked back then, since I use GL3 and shaders nowadays, but let me try... what was the state of your modelview matrix? I think it just might be possible that you specified the directional light direction in object space instead of eye space, so that your light rotates together with your object. I don't know if that's relevant here, but make sure to pay attention to it when using the FF pipeline. That's a mistake I remember making often back when I was still using GL 1.1.
Depending on the modelview state, you could specify the light in:
eye (camera) space,
world space,
object space.
Make sure you know which one it is.
Huh... I hope that makes the topic clearer for you. The conclusions are:
always transform your normals along with your vertices in your vertex shaders, and
if it looks different from what you expect, think about how you specify your light positions. (Maybe you want to transform the light position vector in the shader too? The remarks about light position coordinate systems still hold; see the sketch below.)
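If you do keep specifying the light through the fixed-function state, a sketch of a matching vertex shader using the old GLSL built-ins (so the coordinate systems line up) could be:
// gl_LightSource[0].position has already been transformed by the modelview
// matrix that was current when glLightfv(GL_LIGHT0, GL_POSITION, ...) was
// called, so it is in eye space, just like V below.
void main()
{
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);
    vec4 V = gl_ModelViewMatrix * gl_Vertex;
    vec3 L = normalize(gl_LightSource[0].position.xyz - V.xyz);   // positional light
    gl_FrontColor = gl_Color * vec4(vec3(max(0.0, dot(N, L))), 1.0);
    gl_Position = ftransform();
}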