Initialising a C Struct Array - Objective-C - OpenGL ES

I have the following Vertex struct in my OpenGL ES app:
typedef struct Vertex {
    float Position[3];
    float Color[4];
} Vertex;
In my header I then declare:
Vertex *Vertices;
Then in my init method:
int array = 4;
Vertices = (Vertex *)malloc(array * sizeof(Vertex));
Later I set up the mesh as follows, where verticesArray in this case holds 4 vertices:
- (void)setupMesh {
    int count = 0;
    for (VerticeObject *object in verticesArray) {
        Vertices[count].Position[0] = object.x;
        Vertices[count].Position[1] = object.y;
        Vertices[count].Position[2] = object.z;
        Vertices[count].Color[0] = 0.9f;
        Vertices[count].Color[1] = 0.9f;
        Vertices[count].Color[2] = 0.9f;
        Vertices[count].Color[3] = 1.0f;
        count++;
    }
}
Can anyone spot what I am doing wrong here? When I pass this Vertices object to OpenGL nothing is drawn, whereas if I hard-code the Vertices array as:
Vertex Vertices[] = {
    {{0.0, 0.0, 0}, {0.9, 0.9, 0.9, 1}},
    {{0.0, 0.0, 0}, {0.9, 0.9, 0.9, 1}},
    {{0.0, 0.0, 0}, {0.9, 0.9, 0.9, 1}},
    {{0.0, 0.0, 0}, {0.9, 0.9, 0.9, 1}},
};
everything works?

I think the problem is that before you had an array allocated on the stack, whereas now you have a pointer (a memory address) to a block of memory on the heap. So when you write something like sizeof(Vertices), the original array version gives you the size of 4 vertices, each holding 3 position floats and 4 color floats: 4 * (3 + 4) * 4 bytes (a float is 4 bytes) = 112 bytes. With the pointer, sizeof(Vertices) is just sizeof(aPointer) = 4 bytes. OpenGL is a C library and not especially easy to work with, so you should really brush up on your C skills before trying to get it running. Also, there is a GLKView class nowadays that makes all the setup a lot easier. Presumably you are uploading the data like this:
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
Try passing the same size as the array of vertices instead; in your case that is 4 * sizeof(Vertex):
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 4, Vertices, GL_STATIC_DRAW);
If that doesn't work, you can easily fix the problem by replacing your dynamically allocated array with a statically allocated one, since you know at compile time how big it needs to be.
Vertex Vertices[4];
Then set the values in your loop as you do.
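For reference, here is a minimal sketch of how the heap-allocated version fits together, with the byte count tracked explicitly (vertexCount is a hypothetical name, not from the original code):
int vertexCount = 4;
Vertex *Vertices = (Vertex *)malloc(vertexCount * sizeof(Vertex));
// ... fill Vertices[0..vertexCount-1] in the loop, as in setupMesh ...
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), Vertices, GL_STATIC_DRAW);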

Related

MacOS MTKView metal self.device.newBufferWithBytes crashes with assert

I want to draw a simple triangle, and it crashes when I try to create the MTLBuffer.
static float vertexes[] = {
     0.0,   0.5,  0.0,
    -0.5f, -0.5f, 0.0,
     0.5,  -0.5f, 0.0
};
id <MTLBuffer> buffer = [self.device newBufferWithBytes:vertexes
                                                 length:sizeof(vertexes)
                                                options:MTLResourceStorageModePrivate];
Here is the assert:
-[MTLDebugDevice newBufferWithBytes:length:options:]:392: failed assertion `storageModePrivate incompatible with ...WithBytes variant of newBuffer'
So how to create a buffer from the vertexes using MTLResourceStorageModePrivate option?
Buffers created with MTLResourceStorageModePrivate live in GPU-only memory that the CPU cannot write to, so the ...WithBytes variants (which copy initial data from CPU memory) cannot target them. You must create a temporary blit buffer and use it to copy the contents into the private buffer. Here's example code:
// Destination buffer in private (GPU-only) storage.
buffer = [self.device newBufferWithLength:sizeof( vertexes )
                                   options:MTLResourceStorageModePrivate];
// Temporary CPU-accessible staging buffer holding the vertex data.
id<MTLBuffer> blitBuffer = [self.device newBufferWithBytes:vertexes
                                                    length:sizeof( vertexes )
                                                   options:MTLResourceCPUCacheModeDefaultCache];
// Copy from the staging buffer into the private buffer on the GPU.
id <MTLCommandBuffer> cmd_buffer = [commandQueue commandBuffer];
id <MTLBlitCommandEncoder> blit_encoder = [cmd_buffer blitCommandEncoder];
[blit_encoder copyFromBuffer:blitBuffer
                sourceOffset:0
                    toBuffer:buffer
           destinationOffset:0
                        size:sizeof( vertexes )];
[blit_encoder endEncoding];
[cmd_buffer commit];
[cmd_buffer waitUntilCompleted];
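If the data does not actually need to live in GPU-only storage, a simpler sketch is to create the buffer with a CPU-accessible storage mode, in which case the ...WithBytes initializer works directly:
// Shared (CPU-accessible) buffers can be initialised straight from bytes.
id <MTLBuffer> sharedBuffer = [self.device newBufferWithBytes:vertexes
                                                       length:sizeof( vertexes )
                                                      options:MTLResourceStorageModeShared];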

UIImageEffects: white image when Gaussian radius above 280, vImageBoxConvolve_ARGB8888 issue?

I'm using the Gaussian blur algorithm found in Apple's UIImageEffects example:
CGFloat inputRadius = blurRadius * inputImageScale;
if (inputRadius - 2. < __FLT_EPSILON__)
    inputRadius = 2.;
uint32_t radius = floor((inputRadius * 3. * sqrt(2 * M_PI) / 4 + 0.5) / 2);
radius |= 1; // force radius to be odd so that the three box-blur methodology works.
NSInteger tempBufferSize = vImageBoxConvolve_ARGB8888(inputBuffer, outputBuffer, NULL, 0, 0, radius, radius, NULL, kvImageGetTempBufferSize | kvImageEdgeExtend);
void *tempBuffer = malloc(tempBufferSize);
vImageBoxConvolve_ARGB8888(inputBuffer, outputBuffer, tempBuffer, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
vImageBoxConvolve_ARGB8888(outputBuffer, inputBuffer, tempBuffer, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
vImageBoxConvolve_ARGB8888(inputBuffer, outputBuffer, tempBuffer, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
free(tempBuffer);
vImage_Buffer *temp = inputBuffer;
inputBuffer = outputBuffer;
outputBuffer = temp;
I'm also working with some fairly large images. Unfortunately, when the radius gets over 280, the blurred image suddenly becomes almost completely blank, regardless of the resolution. What's going on here? Does vImageBoxConvolve_ARGB8888 have an undocumented kernel width/height limit? Or does it have to do with the way the box kernel width is computed from the radius?
EDIT:
Found a similar question here: vImageBoxConvolve: errors when kernel size > 255. A Gaussian radius of 280 roughly translates to a 260 size kernel, so that part matches up.
The box and tent convolves can run into a problem where the accumulated value overflows the 31-bit accumulator and wraps around. However, 255 seems a bit narrow for that; there should be at least another 7 bits of headroom for a 255x255 kernel. Certainly check the error code returned by the function. If it says everything is fine, then this seems worth filing as a bug. Attach some sample code so Apple can reproduce the problem and make sure it gets fixed.
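For example, a minimal sketch of checking the return value, using the buffers and radius from the snippet above:
vImage_Error err = vImageBoxConvolve_ARGB8888(inputBuffer, outputBuffer, tempBuffer,
                                              0, 0, radius, radius, NULL, kvImageEdgeExtend);
if (err != kvImageNoError) {
    // e.g. kvImageInvalidKernelSize or kvImageMemoryAllocationError
    NSLog(@"vImageBoxConvolve_ARGB8888 failed with error %ld", (long)err);
}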

How can i read data by glReadPixels?

I'm working with OpenGL ES on Android.
Now I've run into a problem. I defined a float array that I want to pass to the fragment shader.
float[] data = new float[texWidth * texHeight];
// test data
for (int i = 0; i < data.length; i++) {
    data[i] = 0.123f;
}
1. initTexture:
glGenTextures...
glBindTexture...
glTexParameteri...
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, fb);
2. FBO:
glGenBuffers...
glBindFramebuffer...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);
3. onDrawFrame:
glUseProgram(mProgram);...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);...
IntBuffer fb = BufferUtils.iBufferAllocateDirect(texWidth*texHeight);
glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_UNSIGNED_BYTE, fb);
System.out.println(Integer.toHexString(fb.get(0)));
System.out.println(Integer.toHexString(fb.get(1)));
System.out.println(Integer.toHexString(fb.get(2)));
fragment shader:
precision mediump float;
uniform sampler2D sTexture;
varying vec2 vTexCoord;
void main()
{
    vec4 tex = texture2D(sTexture, vTexCoord.st);
    vec4 color = tex;
    gl_FragColor = color;
}
So, how can I get back the float data (the 0.123f I defined before) with glReadPixels? What I get now is ff000000 (ABGR), so I suspect the shader doesn't receive the data this way. Can someone tell me why, and how to deal with it? I'm a newbie at this and would really appreciate any help.
Your main problem happens before glReadPixels(). The primary issue is with the way you use glTexImage2D():
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
GL_RGBA, GL_UNSIGNED_BYTE, fb);
The GL_UNSIGNED_BYTE value for the 8th argument specifies that the data passed in consists of unsigned bytes. However, the values in your buffer are floats. So your float values are interpreted as bytes, which can't possibly end well because they are completely different formats, with different sizes and memory layouts.
Now, you might be tempted to do this instead:
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
GL_RGBA, GL_FLOAT, fb);
This would work in desktop OpenGL, which supports implicit format conversions as part of specifying texture data. But it is not supported in OpenGL ES. In ES 2.0, GL_FLOAT is not even a legal value for the format argument. In ES 3.0, it is legal, but only for internal formats that actually store floats, like GL_RGBA16F or GL_RGBA32F. It is an error to use it in combination with the GL_RGBA internal format (3rd argument).
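If you do target ES 3.0, the float-texture variant would look something like the sketch below (this assumes an ES 3.0 context; note the sized float internal format as the 3rd argument, with fb being the buffer holding your float data):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, texWidth, texHeight, 0,
             GL_RGBA, GL_FLOAT, fb);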
So unless you use float textures in ES 3.0 (which consume much more memory), you need to convert your original data to bytes. If you have float values between 0.0 and 1.0, you can do that by multiplying them by 255 and rounding to the nearest integer.
Then you can read them back also as bytes with glReadPixels(), and should get the same values again.
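For illustration, a rough C-style sketch of that conversion (the same idea carries over to the Java code in the question; data, texWidth and texHeight are taken from the question, and the single value per pixel is simply replicated into the RGB channels):
unsigned char *bytes = (unsigned char *)malloc(texWidth * texHeight * 4);
for (int i = 0; i < texWidth * texHeight; i++) {
    unsigned char v = (unsigned char)(data[i] * 255.0f + 0.5f);  // scale and round
    bytes[4 * i + 0] = v;    // R
    bytes[4 * i + 1] = v;    // G
    bytes[4 * i + 2] = v;    // B
    bytes[4 * i + 3] = 255;  // A
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, bytes);
free(bytes);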

How to handle the orthographic projection when auto-rotating screen?

I have this method for performing the ortho projection:
void myGL::ApplyOrtho(float maxX, float maxY) const
{
    float a = 1.0f / maxX;
    float b = 1.0f / maxY;
    float ortho[16] = {
        a, 0, 0, 0,
        0, b, 0, 0,
        0, 0, -1, 0,
        0, 0, 0, 1};
    GLint projectionUniform = glGetUniformLocation(m_simpleProgram, "Projection");
    glUniformMatrix4fv(projectionUniform, 1, 0, &ortho[0]);
}
It works fine for the iPad screen when I do this:
ApplyOrtho(2, 2*1024/768);
Here's my rendered image:
However, when I rotate to landscape, it looks like this:
My assumption is that this happens because ApplyOrtho sets up a fixed projection, and that projection does not rotate while the image rotates within it, so the image ends up displayed fatter.
Incidentally, this is the rotation:
void myGL::ApplyRotation(float degrees) const
{
    float radians = degrees * 3.14159f / 180.0f;
    float s = std::sin(radians);
    float c = std::cos(radians);
    float zRotation[16] = {
         c, s, 0, 0,
        -s, c, 0, 0,
         0, 0, 1, 0,
         0, 0, 0, 1
    };
    GLint modelviewUniform = glGetUniformLocation(m_simpleProgram, "Modelview");
    glUniformMatrix4fv(modelviewUniform, 1, 0, &zRotation[0]);
}
It is used right before drawing.
So I experimented and tried this at the same time I rotate:
ApplyOrtho(2*1024/768, 2);
However this has no effect whatsoever, even though the rotation is definitely happening at the same time. My image remains "fat".
Is my interpretation of why the fatness is happening correct?
How to handle the orthographic projection when auto-rotating screen?
UPDATE: I also tried this on iPhone using the 2/3 dimensions of the screen (not iPhone 5), with ApplyOrtho(2, 3) and ApplyOrtho(3, 2), but the "fat" triangle in landscape remains.
Also: the viewport is setup just once, before the first Ortho:
glViewport(0, 0, width, height);
Where width and height are the dimensions of the Portrait screen.
The cause of the above discrepancies is that the orthographic projection does not match the width-to-height ratio of the screen, so the X and Y axes are not drawn at the same screen scale. Making the orthographic ratio match the viewport ratio resolves the issue; once it does, the image keeps exactly the same shape and size when you rotate.
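For example, a minimal sketch, assuming width and height are the drawable's current dimensions (updated after each rotation) and ApplyOrtho is the method from the question:
glViewport(0, 0, width, height);
if (width < height)
    ApplyOrtho(2.0f, 2.0f * (float)height / (float)width);   // portrait
else
    ApplyOrtho(2.0f * (float)width / (float)height, 2.0f);   // landscape
This keeps the visible range along the shorter screen axis fixed at ±2 world units, matching the original portrait call ApplyOrtho(2, 2*1024/768).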

glreadpixel gl_depth_component returns 0?

I am looking for the intersection point of a cube and a line, so I used:
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_DEPTH_COMPONENT, GLES20.GL_FLOAT, zz);
and I printed zz, but the result was 0. So how can I get the depth buffer value of a cube when I touch it (on the 2D screen)? I'm using GLES20 and Android API level 15. My code is below.
ByteBuffer PixelBuffer = ByteBuffer.allocateDirect(4);
ByteBuffer zBuffer = ByteBuffer.allocateDirect(4);
PixelBuffer.order(ByteOrder.nativeOrder());
PixelBuffer.position(0);
zBuffer.order(ByteOrder.nativeOrder());
zBuffer.position(0);
FloatBuffer zz;
zz = zBuffer.asFloatBuffer();
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, PixelBuffer);
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_DEPTH_COMPONENT, GLES20.GL_FLOAT, zz);
By the way, color picking works fine.
Thanks!
You forgot to prepare the target framebuffer for reading... Try it like this:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, deviceWidth, deviceHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
Or just write a simple shader and render your z-buffer data into your FBO as color, something like:
gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
and then read the color information back from this FBO...
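A rough sketch of that read-back (viewportHeight is a hypothetical name for the height of the GL surface; glReadPixels uses window coordinates with the origin at the bottom-left, so the touch Y coordinate usually needs flipping):
GLubyte pixel[4];
glReadPixels(touchX, viewportHeight - touchY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
float depth = pixel[0] / 255.0f;   // 8-bit approximation of gl_FragCoord.z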