Texture coordinates in OpenGL ES 2

I have a PNG file with different sprites. In OpenGL ES 1 I selected the sprite with:
// the png dimensions are 512x512
gl.glMatrixMode(GL10.GL_TEXTURE);
// x and y are the coordinates of the selected drawing
gl.glTranslatef(x/512f, y/512f, 0);
// w and h are the width and height of the selected drawing
gl.glScalef(w/512f, h/512f, 0);
I have no idea how to do this in OpenGL ES 2. I read this tutorial:
http://www.learnopengles.com/android-lesson-four-introducing-basic-texturing/
It is not difficult, but it only lets you change the values of w and h (the equivalent of gl.glScalef(w/512f, h/512f, 0)).
Is there any other tutorial or solution?

So the tutorial you've read is what you need; read the earlier tutorials from that website as well. The main difference between GLES2 and GLES1 is that all drawing happens inside shaders (vertex and fragment). Here is the texture-binding part from my code, followed by a fragment shader source.
GLuint textureId;
// Generate a texture object
glGenTextures ( 1, &textureId );
// Bind the texture object
glBindTexture ( GL_TEXTURE_2D, textureId );
// Load the texture
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, where_you_store_unpacked_texture_data );
// Set the filtering mode
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
// Bind the texture
glActiveTexture ( GL_TEXTURE0 );
glBindTexture ( GL_TEXTURE_2D, textureId );
Then, after you have bound the texture to a texture unit, you point the fragment shader's sampler uniform at that unit (e.g. glUniform1i(samplerLocation, 0) for GL_TEXTURE0).
The fragment shader is something like this:
const char* pszFragShader_text = "\
precision mediump float;\
\
varying vec3 v_texCoord_text;\
uniform sampler2D s_texture_text;\
void main (void)\
{\
gl_FragColor = texture2D( s_texture_text, v_texCoord_text.xy );\
}";

Related

shader value conversion error when passing value between vertex and fragment shader

I have the following vertex and fragment shaders.
Vertex:
#version 450
layout(location = 0) in vec2 Position;
layout(location = 1) in vec4 Color;
layout(location = 0) out vec2 fPosition;
void main()
{
    gl_Position = vec4(Position, 0, 1);
    fPosition = Position;
}
Fragment:
#version 450
layout(location = 0) in vec2 fPosition;
layout(location = 0) out vec4 fColor;
void main() {
    vec4 colors[4] = vec4[](
        vec4(1.0, 0.0, 0.0, 1.0),
        vec4(0.0, 1.0, 0.0, 1.0),
        vec4(0.0, 0.0, 1.0, 1.0),
        vec4(0.0, 0.0, 0.0, 1.0)
    );
    fColor = vec4(1.0);
    for(int row = 0; row < 2; row++) {
        for(int col = 0; col < 2; col++) {
            float dist = distance(fPosition, vec2(-0.50 + col, 0.50 - row));
            float delta = fwidth(dist);
            float alpha = smoothstep(0.45 - delta, 0.45, dist);
            fColor = mix(colors[row*2+col], fColor, alpha);
        }
    }
}
But when compiling this I am getting the following error:
cannot convert from ' gl_Position 4-component vector of float Position' to 'layout( location=0) smooth out highp 2-component vector of float'
And I have no clue how to fix it (this is my first time doing shader programming). If additional information is needed, please let me know.
1. You do not need to specify layouts when passing variables between the vertex shader and the fragment shader. Remove the layout(location = 0) qualifier from the fPosition variable in both the vertex and the fragment shader.
2. You only need to specify a layout when you are feeding variables (your position buffer) into the vertex shader through buffers. In addition, variables such as positions, normals and texture coordinates always pass through the vertex shader first and only then reach the fragment shader.
3. When writing your final colour (fColor in your case) from the fragment shader, you do not need to give it a location; just declare it as out vec4 fColor; and OpenGL detects it automatically.
4. The error you got was telling you that you were assigning a vec4 variable (fColor) to a location already occupied by a vec2 (fPosition). Note: in your vertex shader you had used attribute location 0 to access the vertices you loaded, but then tried to put a vec4 at the same location later in the fragment shader. OpenGL does not automatically overwrite data like that.
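Putting points 1-3 together, a sketch of how the shader interfaces would look after that change (the fragment shader body is trimmed to a placeholder; the rest of the question's code is unchanged):
// Vertex shader
#version 450
layout(location = 0) in vec2 Position;   // layout kept: this one is fed from a vertex buffer
layout(location = 1) in vec4 Color;

out vec2 fPosition;                      // no layout qualifier on the varying

void main()
{
    gl_Position = vec4(Position, 0, 1);
    fPosition = Position;
}

// Fragment shader
#version 450
in vec2 fPosition;                       // matched to the vertex-shader output by name

out vec4 fColor;                         // no layout qualifier needed here either

void main()
{
    // ... the circle-drawing loop from the question goes here ...
    fColor = vec4(1.0);
}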

Vulkan compute shader. Smooth uv coordinates

I have this shader:
#version 450
layout(binding = 0) buffer b0 { vec2 src[]; };
layout(binding = 1) buffer b1 { vec2 dst[]; };
layout(binding = 2) buffer b2 { int index[]; };

layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in;

void main()
{
    int ind = int(gl_GlobalInvocationID.x);
    vec2 norm;

    norm = src[index[ind*3+2]] - src[index[ind*3]] + src[index[ind*3+1]] - src[index[ind*3]];
    norm /= 2.0;
    dst[index[ind*3]] += norm;

    norm = src[index[ind*3+0]] - src[index[ind*3+1]] + src[index[ind*3+2]] - src[index[ind*3+1]];
    dst[index[ind*3+1]] += norm;

    norm = src[index[ind*3+1]] - src[index[ind*3+2]] + src[index[ind*3+0]] - src[index[ind*3+2]];
    norm /= 2.0;
    dst[index[ind*3+2]] += norm;
}
Because the dst buffer is not written atomically, the summation is incorrect.
Is there any way to solve this problem? My guess is no, but maybe I have missed something.
For each vertex in a polygon I am calculating a vector from the vertex to the centre of the polygon. Different polygons share the same vertices.
dst is a vertex buffer holding the result of summing those shifts from vertex to polygon centre.
Each time I run it I get different results.
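One common workaround (a sketch, not from the question): GLSL's atomicAdd only operates on int/uint buffer members, so the sums can be accumulated into a fixed-point integer buffer and converted back afterwards. The buffer layout, the helper and the scale factor below are illustrative; the arithmetic is copied from the question unchanged.
#version 450

layout(binding = 0) buffer b0 { vec2 src[]; };
layout(binding = 1) buffer b1 { int dst_fixed[]; };   // 2 ints per vertex (x, y), zeroed before dispatch
layout(binding = 2) buffer b2 { int index[]; };

layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in;

const float FIXED_SCALE = 65536.0;   // 16.16 fixed point; choose a scale that fits your value range

void addFixed(int vertex, vec2 v)
{
    // atomicAdd is only defined for integers, hence the fixed-point encoding
    atomicAdd(dst_fixed[vertex * 2 + 0], int(v.x * FIXED_SCALE));
    atomicAdd(dst_fixed[vertex * 2 + 1], int(v.y * FIXED_SCALE));
}

void main()
{
    int ind = int(gl_GlobalInvocationID.x);
    vec2 norm;

    norm = src[index[ind*3+2]] - src[index[ind*3]] + src[index[ind*3+1]] - src[index[ind*3]];
    norm /= 2.0;
    addFixed(index[ind*3], norm);

    norm = src[index[ind*3+0]] - src[index[ind*3+1]] + src[index[ind*3+2]] - src[index[ind*3+1]];
    addFixed(index[ind*3+1], norm);

    norm = src[index[ind*3+1]] - src[index[ind*3+2]] + src[index[ind*3+0]] - src[index[ind*3+2]];
    norm /= 2.0;
    addFixed(index[ind*3+2], norm);
}
After the dispatch, the host divides each integer by the same scale factor to recover the float values. Where it is available, the VK_EXT_shader_atomic_float extension provides atomicAdd directly on floats instead.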

normalising vertex and normal coordinates OpenGL ES 2.0

I have a model created in Blender and exported to an .obj file. I have written a parser that reads in the coordinates of the vertices, texture coordinates and normals. I have been dividing all the coordinates by a constant specific to the program to reduce the size of the model so that it fits the screen (this is a temporary measure). This works fine except for the lighting, which doesn't work: I'm left with a black 3D object when it should be coloured. After researching it on the web, I think this could be because the normals are no longer of length one. If that is true, how can I scale my coordinates so that they fit the screen and still get the lighting to work?
Vertex Shader
//
// Created by Jake Cunningham on 13/10/2012.
// Copyright (c) 2012 Jake Cunningham. All rights reserved.
//
attribute vec4 position;
attribute vec3 normal;
varying lowp vec4 colorVarying;
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
attribute vec2 TextCo;
varying vec2 textCoOut;
void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 1.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);

    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    colorVarying = diffuseColor * nDotVP;
    gl_Position = modelViewProjectionMatrix * position;
    textCoOut = TextCo;
}
Fragment Shader:
// Created by Jake Cunningham on 13/10/2012.
// Copyright (c) 2012 Jake Cunningham. All rights reserved.
//
varying lowp vec4 colorVarying;
varying lowp vec2 textCoOut;
uniform sampler2D texture;
void main()
{
    gl_FragColor = colorVarying * texture2D(texture, textCoOut);
}
Code from the view controller:
glEnable(GL_DEPTH_TEST);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, loader.currentCountOfVerticies * sizeof(GLfloat) * 3, arrayOfVerticies, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glGenVertexArraysOES(1, &_normalArray);
glBindVertexArrayOES(_normalArray);
glGenBuffers(1, &_normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _normalBuffer);
glBufferData(GLKVertexAttribNormal, loader.currentCountOfNormals * sizeof(GLfloat) * 3,loader.arrayOfNormals , GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glGenVertexArraysOES(1, &_textureArray);
glBindVertexArrayOES(_textureArray);
glGenBuffers(1, &_textureBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _textureBuffer);
glBufferData(GL_ARRAY_BUFFER, loader.currentCountOfTextureCoordinates * sizeof(GLfloat) * 2, loader.arrayOftextureCoOrdinates, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 8, BUFFER_OFFSET(0));
glBindVertexArrayOES(0);
If you are using shaders, you can use the normalize() operation on your vertices and normals within your GLSL code.
You could also have a look at the obj2opengl script which scales, centers, and normalizes your model, converting OBJ files into header files ready for iOS implementation. I've also extended that script into mtl2opengl to include support for MTL files and make it a bit more light-weight (with an Xcode example too).
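As an illustration of that suggestion, here is a sketch of the question's vertex shader with the screen-fitting scale applied as a uniform instead of being baked into the parsed data, and the normal renormalised so the scale cannot affect the lighting. u_scale is an illustrative name, and the texture coordinates are omitted for brevity:
attribute vec4 position;
attribute vec3 normal;

uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
uniform float u_scale;          // screen-fitting factor, e.g. 1.0 / largest model extent

varying lowp vec4 colorVarying;

void main()
{
    // scaling the position uniformly does not require touching the normals at all
    vec4 scaledPosition = vec4(position.xyz * u_scale, 1.0);

    // normalize() keeps the normal at unit length regardless of any scaling
    vec3 eyeNormal = normalize(normalMatrix * normal);

    vec3 lightPosition = vec3(0.0, 0.0, 1.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);
    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    colorVarying = diffuseColor * nDotVP;
    gl_Position = modelViewProjectionMatrix * scaledPosition;
}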

How to separate y-planar, u-planar and uv-planar from YUV bi-planar in iOS?

In my application I use AVCaptureVideo. I get the video in kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format.
Now I am getting the Y plane and the UV plane from the image buffer:
CVPlanarPixelBufferInfo_YCbCrBiPlanar *planar = CVPixelBufferGetBaseAddress(imageBuffer);
size_t y_offset = NSSwapBigLongToHost(planar->componentInfoY.offset);
size_t uv_offset = NSSwapBigLongToHost(planar->componentInfoCbCr.offset);
Here YUV is a bi-planar format (Y + UV).
What is the UV plane? Is it in uuuu,vvvv format or uvuvuvuv format?
How do I get the U plane and V plane separately?
Can anyone please help me?
The Y plane represents the luminance component, and the UV plane represents the Cb and Cr chroma components.
In the case of kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format, you will find the luma plane is 8bpp with the same dimensions as your video, your chroma plane will be 16bpp, but only a quarter of the size of the original video. You will have one Cb and one Cr component per pixel on this plane.
so if your input video is 352x288, your Y plane will be 352x288 8bpp, and your CbCr 176x144 16bpp. This works out to be about the same amount of data as a 12bpp 352x288 image, half what would be required for RGB888 and still less than RGB565.
So in the buffer, Y will look like this
[YYYYY . . . ]
and UV
[UVUVUVUVUV . . .]
vs RGB being, of course,
[RGBRGBRGB . . . ]
The code below copies the YUV data from a pixelBuffer whose format is kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
size_t pixelWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t pixelHeight = CVPixelBufferGetHeight(pixelBuffer);
// y byte size
size_t y_size = pixelWidth * pixelHeight;
// uv byte size
size_t uv_size = y_size / 2;
uint8_t *yuv_frame = malloc(uv_size + y_size);
// get base address of y
uint8_t *y_frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// copy y data
memcpy(yuv_frame, y_frame, y_size);
// get base address of uv
uint8_t *uv_frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// copy uv data
memcpy(yuv_frame + y_size, uv_frame, uv_size);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
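If the frame is ultimately drawn with OpenGL ES 2 (the topic of the rest of this page), the interleaving never has to be undone on the CPU: one way is to upload the Y plane as a GL_LUMINANCE texture and the CbCr plane as a half-resolution GL_LUMINANCE_ALPHA texture, so a fragment shader sees Cb and Cr as separate channels. A sketch with illustrative sampler names; the constants are the standard BT.601 full-range conversion:
precision mediump float;

varying vec2 v_texCoord;

uniform sampler2D s_lumaTexture;     // Y plane uploaded as GL_LUMINANCE
uniform sampler2D s_chromaTexture;   // interleaved CbCr plane uploaded as GL_LUMINANCE_ALPHA

void main()
{
    float y  = texture2D(s_lumaTexture,   v_texCoord).r;
    float cb = texture2D(s_chromaTexture, v_texCoord).r - 0.5;   // Cb lands in the luminance channel
    float cr = texture2D(s_chromaTexture, v_texCoord).a - 0.5;   // Cr lands in the alpha channel

    // BT.601 full-range YCbCr -> RGB
    vec3 rgb = vec3(y + 1.402 * cr,
                    y - 0.344 * cb - 0.714 * cr,
                    y + 1.772 * cb);

    gl_FragColor = vec4(rgb, 1.0);
}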

Calculate ray direction vector from screen coordinate

I'm looking for a better way (or a note that this is the best way) to convert a pixel coordinate into its corresponding ray direction from an arbitrary camera position/direction.
My current method is as follows. I define a "camera" as a position vector, lookat vector, and up vector, named as such. (Note that the lookat vector is a unit vector in the direction the camera is facing, NOT a target point as in XNA's Matrix.CreateLookAt, where the direction would be target - position.) These three vectors uniquely define the camera. Here's the actual code (well, not really the actual code, a simplified abstracted version). The language is HLSL:
float xPixelCoordShifted = (xPixelCoord / screenWidth * 2 - 1) * aspectRatio;
float yPixelCoordShifted = yPixelCoord / screenHeight * 2 - 1;
float3 right = cross(lookat, up);
float3 actualUp = cross(right, lookat);
float3 rightShift = mul(right, xPixelCoordShifted);
float3 upShift = mul(actualUp, yPixelCoordShifted);
return normalize(lookat + rightShift + upShift);
(the return value is the direction of the ray)
So what I'm asking is this: what's a better way to do this, maybe using matrices, etc.? The problem with this method is that if you have too wide a viewing angle, the edges of the screen get sort of "radially stretched".
You can calculate the ray in the pixel shader. HLSL code:
float4x4 WorldViewProjMatrix; // World*View*Proj
float4x4 WorldViewProjMatrixInv; // (World*View*Proj)^(-1)
void VS( float4 vPos : POSITION,
         out float4 oPos : POSITION,
         out float4 pos : TEXCOORD0 )
{
    oPos = mul(vPos, WorldViewProjMatrix);
    pos = oPos;
}

float4 PS( float4 pos : TEXCOORD0 ) : COLOR0
{
    // reconstruct the world-space position of this pixel;
    // the ray direction is this position minus the camera position
    float4 posWS = mul(pos, WorldViewProjMatrixInv);
    float3 ray = posWS.xyz / posWS.w;
    return float4(0, 0, 0, 1);
}
The information about your camera's position and direction is in the View matrix (Matrix.CreateLookAt).
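The same unprojection idea written with the column-vector GLSL conventions used elsewhere on this page; uInvViewProj, uCameraPos and uScreenSize are illustrative names, and the inverse view-projection matrix is assumed to be computed on the CPU:
uniform mat4 uInvViewProj;   // inverse of (projection * view)
uniform vec3 uCameraPos;     // world-space camera position
uniform vec2 uScreenSize;    // viewport size in pixels

vec3 rayFromPixel(vec2 pixelCoord)
{
    // pixel -> normalized device coordinates in [-1, 1]
    // (flip y here if your pixel origin is the top-left corner)
    vec2 ndc = pixelCoord / uScreenSize * 2.0 - 1.0;

    // unproject a point on the far plane and take the direction from the camera to it
    vec4 farPoint = uInvViewProj * vec4(ndc, 1.0, 1.0);
    farPoint /= farPoint.w;

    return normalize(farPoint.xyz - uCameraPos);
}
Because the rays come from the same projection matrix that renders the scene, they automatically match whatever field of view and aspect ratio that matrix encodes.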