I am developing an iPhone game using OpenGL ES 1.1 and need to use vertex buffer objects (VBOs) to render 500+ particles without a drop in performance.
My game was able to draw successfully using the non-VBO method, but now that I've attempted to incorporate VBOs, nothing is drawing anymore.
Please help me to identify what I am doing wrong and provide a correct example.
I have a class called TexturedQuad which consists of the following:
// TexturedQuad.h
enum {
ATTRIB_POSITION,
ATTRIB_TEXTURE_COORD
};
@interface TexturedQuad : NSObject {
GLfloat *textureCoords; // 8 elements for 4 points (u and v)
GLfloat *vertices; // 8 elements for 4 points (x and y)
GLubyte *indices; // how many elements does this need?
// vertex buffer array IDs generated using glGenBuffers(..)
GLuint vertexBufferID;
GLuint textureCoordBufferID;
GLuint indexBufferID;
// ...other ivars
}
// @synthesize in .m file for each property
@property (nonatomic, readwrite) GLfloat *textureCoords;
@property (nonatomic, readwrite) GLfloat *vertices;
@property (nonatomic, readwrite) GLubyte *indices;
@property (nonatomic, readwrite) GLuint vertexBufferID;
@property (nonatomic, readwrite) GLuint textureCoordBufferID;
@property (nonatomic, readwrite) GLuint indexBufferID;
// vertex buffer object methods
- (void) createVertexBuffers;
- (void) createTextureCoordBuffers;
- (void) createIndexBuffer;
In TexturedQuad.m, the vertex buffers are created:
- (void) createVertexBuffers {
glGenBuffers(1, &vertexBufferID);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
}
- (void) createTextureCoordBuffers {
glGenBuffers(1, &textureCoordBufferID);
glBindBuffer(GL_ARRAY_BUFFER, textureCoordBufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(textureCoords), textureCoords, GL_STATIC_DRAW);
}
- (void) createIndexBuffer {
glGenBuffers(1, &indexBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLubyte) * 16, indices, GL_STATIC_DRAW);
}
The above VBO creation methods are invoked by a custom AtlasLibrary class which initializes each TexturedQuad instance.
Firstly, the vertices are arranged in the following format:
// bottom left
quad.vertices[0] = xMin;
quad.vertices[1] = yMin;
// bottom right
quad.vertices[2] = xMax;
quad.vertices[3] = yMin;
// top left
quad.vertices[4] = xMin;
quad.vertices[5] = yMax;
// top right
quad.vertices[6] = xMax;
quad.vertices[7] = yMax;
Secondly, texture coordinates are arranged in the following format (flipped to account for OpenGL ES's tendency to mirror images):
// top left (of texture)
quad.textureCoords[0] = uMin;
quad.textureCoords[1] = vMax;
// top right
quad.textureCoords[2] = uMax;
quad.textureCoords[3] = vMax;
// bottom left
quad.textureCoords[4] = uMin;
quad.textureCoords[5] = vMin;
// bottom right
quad.textureCoords[6] = uMax;
quad.textureCoords[7] = vMin;
...next, the VBO-creation methods are called (in AtlasLibrary)
[quad createVertexBuffers];
[quad createTextureCoordBuffers];
[quad createIndexBuffer];
Now the meat and potatoes: the SceneObject class. SceneObjects are renderable objects in the game. They reference a TexturedQuad instance and contain information about rotation, translation, and scale.
Here is the render method in SceneObject:
- (void) render {
// binds texture in OpenGL ES if not already bound
[[AtlasLibrary sharedAtlasLibrary] ensureContainingTextureAtlasIsBoundInOpenGLES:self.containingAtlasKey];
glPushMatrix();
glTranslatef(translation.x, translation.y, translation.z);
glRotatef(rotation.x, 1, 0, 0);
glRotatef(rotation.y, 0, 1, 0);
glRotatef(rotation.z, 0, 0, 1);
glScalef(scale.x, scale.y, scale.z);
// change alpha
glColor4f(1.0, 1.0, 1.0, alpha);
// vertices
glBindBuffer(GL_ARRAY_BUFFER, texturedQuad.vertexBufferID);
glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, sizeof(GL_FLOAT), &texturedQuad.vertices[0]);
// texture coords
glBindBuffer(GL_ARRAY_BUFFER, texturedQuad.textureCoordBufferID);
glVertexAttribPointer(ATTRIB_TEXTURE_COORD, 2, GL_FLOAT, GL_FALSE, sizeof(GL_FLOAT), &texturedQuad.textureCoords[0]);
// bind index buffer array
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, texturedQuad.indexBufferID);
// draw
glDrawElements(GL_TRIANGLE_STRIP, sizeof(texturedQuad.indices) / sizeof(texturedQuad.indices[0]), GL_UNSIGNED_BYTE, texturedQuad.indices);
glPopMatrix();
}
I have a strong feeling that either my indices array is structured incorrectly or that the glDrawElements(..) function is called incorrectly.
To answer this question, please:
identify what I am doing incorrectly that would cause OpenGL ES to not draw my SceneObjects.
provide the correct way to do what I am trying to do (according to my framework, please)
provide any suggestions or links which may help (optional)
Thanks so much!
I'm not experienced in the differences between OpenGL and OpenGL ES, but I'm not seeing any calls to glEnableVertexAttribArray() in your code.
The function is suspiciously absent in the OpenGL ES 1.1 docs, but is in 2.0, and is being used in Apple's OpenGL ES VBO article (thanks JSPerfUnkn0wn).
Here are some other good (though, non-ES) tutorials on Vertex Buffer Objects, and Index Buffer Objects.
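For illustration, here is a minimal sketch of how the render path might look with the attribute arrays enabled (assuming the ES 2.0-style entry points are actually available, as in Apple's article); note that once a VBO is bound, the last argument of glVertexAttribPointer is a byte offset into that buffer, not a client-memory pointer:
// hedged sketch, not the original code
glBindBuffer(GL_ARRAY_BUFFER, texturedQuad.vertexBufferID);
glEnableVertexAttribArray(ATTRIB_POSITION);
glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);
glBindBuffer(GL_ARRAY_BUFFER, texturedQuad.textureCoordBufferID);
glEnableVertexAttribArray(ATTRIB_TEXTURE_COORD);
glVertexAttribPointer(ATTRIB_TEXTURE_COORD, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, texturedQuad.indexBufferID);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, (const GLvoid *)0); // 4 indices assumed for one quad strip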
Unless I'm missing something, where are you loading the textures? You're setting texture coordinates but I see no glBindTexture. And I'm also assuming alpha is a valid value.
See Apple OpenGL ES VBO article, namely from Listing 9-1, and this OpenGL ES VBO tutorial.
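For reference, binding a texture in ES 1.1 before the draw call looks roughly like this (a sketch; textureID would be whatever name the atlas-loading code generated with glGenTextures/glTexImage2D):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID); // assumed texture name from the atlas loader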
This is the best I've come up with for blitting a 24-bit BGR image out to an NSView.
I did trim a significant amount of CPU time by ensuring that the NSWindow host also had the same colorSpace.
I think there are 4 or 5 pixel copies going on here:
in the vImage conversion (required)
calling CGDataProviderCreateWithData
calling CGImageCreate
creating the NSBitmapImageRep bitmap
in the final blit with drawInRect (required)
Anyone want to chime in on improving it?
Any help would be much appreciated.
{
// one-time setup code
CGColorSpaceRef useColorSpace = nil;
int w = 1920;
int h = 1080;
[theWindow setColorSpace: [NSColorSpace genericRGBColorSpace]];
// setup vImage buffers (not listed here)
// srcBuffer is my 24-bit BGR image (malloc-ed to be w*h*3)
// dstBuffer is for the resulting 32-bit RGBA image (malloc-ed to be w*h*4)
...
// this is called @ 30-60fps
if (!useColorSpace)
useColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
vImage_Error err = vImageConvert_BGR888toRGBA8888(srcBuffer, NULL, 0xff, dstBuffer, NO, 0);
CGDataProviderRef newProvider = CGDataProviderCreateWithData(NULL,dstBuffer->data,w*h*4,myReleaseProvider);
CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, newProvider, NULL, false, kCGRenderingIntentDefault);
CGDataProviderRelease(newProvider);
// store myImageRGBA in an array of frames (using NSObject wrappers) for later access (setNeedsDisplay:)
...
}
- (void)drawRect:(NSRect)dirtyRect
{
// this is called @ 30-60fps
CGImageRef storedImage = ...; // retrieve from array
NSBitmapImageRep *repImg = [[NSBitmapImageRep alloc] initWithCGImage:storedImage];
CGRect myFrame = CGRectMake(0,0,CGImageGetWidth(storedImage),CGImageGetHeight(storedImage));
[repImg drawInRect:myFrame fromRect:myFrame operation:NSCompositeCopy fraction:1.0 respectFlipped:TRUE hints:nil];
// free image from array (not listed here)
}
// this is called when the CGDataProvider is ready to release its data
void myReleaseProvider (void *info, const void *data, size_t size)
{
if (data) {
free((void *)data);
data=nil;
}
}
Use CGColorSpaceCreateDeviceRGB instead of genericRGB to avoid colorspace conversion inside CG. Use kCGImageAlphaNoneSkipLast instead of kCGImageAlphaLast since we know alpha is opaque to allow for a copy instead of a blend.
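Concretely, the two changes would look something like this (a sketch of the relevant lines from the posted setup code, not tested):
if (!useColorSpace)
    useColorSpace = CGColorSpaceCreateDeviceRGB(); // device RGB avoids a colorspace conversion inside CG
CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace,
                                       kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast, // alpha known opaque: copy, not blend
                                       newProvider, NULL, false, kCGRenderingIntentDefault);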
After you make those changes, it would be useful to run an Instruments time profile on it to show where the time is going.
I'm desperately trying to draw a filled square with Cocos2D, and I can't manage to find an example of how to do it.
Here is my draw method. I succeeded in drawing a square, but I can't manage to fill it!
I've read that I need to use an OpenGL method called glDrawArrays with the parameter GL_TRIANGLE_FAN in order to draw a filled square, and that's what I tried.
-(void) draw
{
// Disable textures - we want to draw with plain colors
ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position | kCCVertexAttribFlag_Color );
float l_fRedComponent = 0;
float l_fGreenComponent = 0;
float l_fBlueComponent = 0;
float l_fAlphaComponent = 0;
[mpColor getRed:&l_fRedComponent green:&l_fGreenComponent blue:&l_fBlueComponent alpha:&l_fAlphaComponent];
ccDrawColor4F(l_fRedComponent, l_fGreenComponent, l_fBlueComponent, l_fAlphaComponent);
glLineWidth(10);
CGPoint l_bottomLeft, l_bottomRight, l_topLeft, l_topRight;
l_bottomLeft.x = miPosX - miWidth / 2.0f;
l_bottomLeft.y = miPosY - miHeight / 2.0f;
l_bottomRight.x = miPosX + miWidth / 2.0f;
l_bottomRight.y = miPosY - miHeight / 2.0f;
l_topRight.x = miPosX + miWidth / 2.0f;
l_topRight.y = miPosY + miHeight / 2.0f;
l_topLeft.x = miPosX - miWidth / 2.0f;
l_topLeft.y = miPosY + miHeight / 2.0f;
CGPoint vertices[] = { l_bottomLeft, l_bottomRight, l_topRight, l_topLeft, l_bottomLeft };
int l_arraySize = sizeof(vertices) / sizeof(CGPoint) ;
// My old way of doing this, it draws a square, but not filled.
//ccDrawPoly( vertices, l_arraySize, NO);
// Deprecated method :(
//glVertexPointer(2, GL_FLOAT, 0, vertices);
// I've found something related to this method to replace the deprecated one, but I can't understand this method!
glVertexAttribPointer(kCCVertexAttrib_Position, 3, GL_FLOAT, GL_FALSE, 0, vertices);
glDrawArrays(GL_TRIANGLE_FAN, 0, l_arraySize);
}
I've found some examples with the old version of Cocos2D (1.0), but since it's been upgraded to version 2.0 "lately", all the examples I find give me compilation errors!
Could anyone enlighten me here, please?
I didn't know today is "Reinvent the Wheel" day. :)
ccDrawSolidRect(CGPoint origin, CGPoint destination, ccColor4F color);
If you were to go all crazy and wanted to draw filled polygons, there's also:
ccDrawSolidPoly(const CGPoint *poli, NSUInteger numberOfPoints, ccColor4F color);
The "solid" methods are new in Cocos2D 2.x.
You can simply create a CCLayerColor instance with the needed content size and use it as a filled square. Otherwise, you have to triangulate your polygon (two triangles in the case of a square) and draw it using OpenGL.
---EDIT
I didn't test this code (found it with Google), but it seems to work fine.
http://www.deluge.co/?q=cocos-2d-custom-filled-polygon
I've been looking at the new OpenGL framework for iOS, aptly named GLKit, and have been playing around with porting some existing OpenGL 1.0 code to OpenGL ES 2.0 just to dip my toe in the water and get to grips with things.
After reading the API and a whole ream of other best practices provided by Apple and the OpenGL documentation, I've had it pretty much ingrained into me that I should be using Vertex Buffer Objects and using "elements", or rather, vertex indices. There seems to be a lot of mention of optimising memory storage by using padding where necessary too, but that's a conversation for another day perhaps ;)
I read on SO a while ago about the benefits of using NSMutableData over classic malloc/free and wanted to take this approach when writing my VBO. So far I've managed to bungle together a snippet that looks like I'm heading down the right track, but I'm not entirely sure about how much data a VBO should contain. Here's what I've got so far:
//import headers
#import <GLKit/GLKit.h>
#pragma mark -
#pragma mark InterleavingVertexData
//vertex buffer object struct
struct InterleavingVertexData
{
//vertices
GLKVector3 vertices;
//normals
GLKVector3 normal;
//color
GLKVector4 color;
//texture coordinates
GLKVector2 texture;
};
typedef struct InterleavingVertexData InterleavingVertexData;
#pragma mark -
#pragma mark VertexIndices
//vertex indices struct
struct VertexIndices
{
//vertex indices
GLuint a;
GLuint b;
GLuint c;
};
typedef struct VertexIndices VertexIndices;
//create and return a vertex index with specified indices
static inline VertexIndices VertexIndicesMake(GLuint a, GLuint b, GLuint c)
{
//declare vertex indices
VertexIndices vertexIndices;
//set indices
vertexIndices.a = a;
vertexIndices.b = b;
vertexIndices.c = c;
//return vertex indices
return vertexIndices;
}
#pragma mark -
#pragma mark VertexBuffer
//vertex buffer struct
struct VertexBuffer
{
//vertex data
NSMutableData *vertexData;
//vertex indices
NSMutableData *indices;
//total number of vertices
NSUInteger totalVertices;
//total number of indices
NSUInteger totalIndices;
};
typedef struct VertexBuffer VertexBuffer;
//create and return a vertex buffer with allocated data
static inline VertexBuffer VertexBufferMake(NSUInteger totalVertices, NSUInteger totalIndices)
{
//declare vertex buffer
VertexBuffer vertexBuffer;
//set vertices and indices count
vertexBuffer.totalVertices = totalVertices;
vertexBuffer.totalIndices = totalIndices;
//set vertex data and indices
vertexBuffer.vertexData = nil;
vertexBuffer.indices = nil;
//check vertices count
if(totalVertices > 0)
{
//allocate data
vertexBuffer.vertexData = [[NSMutableData alloc] initWithLength:(sizeof(InterleavingVertexData) * totalVertices)];
}
//check indices count
if(totalIndices > 0)
{
//allocate data
vertexBuffer.indices = [[NSMutableData alloc] initWithLength:(sizeof(VertexIndices) * totalIndices)];
}
//return vertex buffer
return vertexBuffer;
}
//grow or shrink a vertex buffer
static inline void VertexBufferResize(VertexBuffer *vertexBuffer, NSUInteger totalVertices, NSUInteger totalIndices)
{
//check adjusted vertices count
if(vertexBuffer->totalVertices != totalVertices && totalVertices > 0)
{
//set vertices count
vertexBuffer->totalVertices = totalVertices;
//check data is valid
if(vertexBuffer->vertexData)
{
//allocate data
[vertexBuffer->vertexData setLength:(sizeof(InterleavingVertexData) * totalVertices)];
}
else
{
//allocate data
vertexBuffer->vertexData = [[NSMutableData alloc] initWithLength:(sizeof(InterleavingVertexData) * totalVertices)];
}
}
//check adjusted indices count
if(vertexBuffer->totalIndices != totalIndices && totalIndices > 0)
{
//set indices count
vertexBuffer->totalIndices = totalIndices;
//check data is valid
if(vertexBuffer->indices)
{
//allocate data
[vertexBuffer->indices setLength:(sizeof(VertexIndices) * totalIndices)];
}
else
{
//allocate data
vertexBuffer->indices = [[NSMutableData alloc] initWithLength:(sizeof(VertexIndices) * totalIndices)];
}
}
}
//release vertex buffer data
static inline void VertexBufferRelease(VertexBuffer *vertexBuffer)
{
//set vertices and indices count
vertexBuffer->totalVertices = 0;
vertexBuffer->totalIndices = 0;
//check vertices are valid
if(vertexBuffer->vertexData)
{
//clean up
[vertexBuffer->vertexData release];
vertexBuffer->vertexData = nil;
}
//check indices are valid
if(vertexBuffer->indices)
{
//clean up
[vertexBuffer->indices release];
vertexBuffer->indices = nil;
}
}
Currently, the interleaved vertex data contains enough to store the position, normal, color, and texture coordinates for each vertex. I was under the impression that there would be an equal number of vertices and indices, but in practice this obviously isn't the case, so the indices are part of the vertex buffer struct rather than of InterleavingVertexData.
Question Updated:
I've updated the code above after managing to wrangle it into a working state. Hopefully it will come in useful to someone in the future.
Now that I've managed to set everything up, I'm having trouble getting the expected results from rendering the content bound to the VBO. Here's the code I've got so far for loading my data into OpenGL:
//generate buffers
glGenBuffers(2, buffers);
//bind vertices buffer
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, (sizeof(InterleavingVertexData) * vertexBuffer.totalVertices), self.vertexData, GL_STATIC_DRAW);
//bind indices buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, (sizeof(VertexIndices) * vertexBuffer.totalIndices), self.vertexIndices, GL_STATIC_DRAW);
//reset buffers
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
And the code for rendering everything:
//enable required attributes
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glEnableVertexAttribArray(GLKVertexAttribColor);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
//bind buffers
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
//set shape attributes
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, vertices));
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, normal));
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, color));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, texture));
//draw shape
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices, GL_UNSIGNED_INT, (void *)0);
//reset buffers
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
//disable atttributes
glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
glDisableVertexAttribArray(GLKVertexAttribColor);
glDisableVertexAttribArray(GLKVertexAttribNormal);
glDisableVertexAttribArray(GLKVertexAttribPosition);
Whilst my iPhone hasn't yet exploded with awesome graphics of unicorns shooting rainbows from their eyes, I haven't been able to render a simple shape in its entirety without tearing my hair out.
From the rendering it looks as though only a third of each shape is being drawn, perhaps half depending on the viewing angle. It seems the culprit is the count parameter passed to glDrawElements, as fiddling with it gives differing results, but I've read the documentation and checked the value over and over again, and it does indeed expect the total number of indices (which is what I'm passing currently).
As I mentioned in my original question, I'm quite confused by VBOs currently, or rather, confused by the implementation rather than the concept. If anyone would be so kind as to cast an eye over my implementation, that would be super awesome, as I'm sure I've made a rookie error somewhere along the way, but you know how it is when you stare at something for hours on end with no progress.
Thanks for reading!
I think I see your problem.
You've got a struct, VertexIndices which contains three indices, or the indices for one triangle. When you bind your IBO (Index Buffer Object, the buffer object containing your indices), you do this:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, (sizeof(VertexIndices) * vertexBuffer.totalIndices), self.vertexIndices, GL_STATIC_DRAW);
Which is fine. The size parameter in glBufferData is in bytes, so you're multiplying sizeof(3 GLuints) by the number of groups of 3 indices that you have. Great.
But then when you actually call glDrawElements, you do this:
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices, GL_UNSIGNED_INT, (void *)0);
However, the vertexBuffer.totalIndices is equal to the number of VertexIndices structs you've got, which is equal to the total number of indices / 3 (or total number of triangles). So you need to do one of the following:
Easy fix yet stupid: glDrawElements(..., vertexBuffer.totalIndices * 3, ...);
Proper yet more work: vertexBuffer.totalIndices should contain the actual total number of indices that you've got, not the total number of triangles you're rendering.
You need to do one of these because right now totalIndices contains the total number of VertexIndices structs you've got, and each one holds 3 indices. The right thing to do here is either rename totalIndices to totalTriangles, or keep track of the actual total number of indices somewhere.
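In code, the two options look roughly like this (a sketch; the totalTriangles rename is hypothetical):
// Quick fix: each VertexIndices struct holds 3 indices
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices * 3, GL_UNSIGNED_INT, (void *)0);
// Cleaner fix: make totalIndices count individual indices (or rename the field to totalTriangles),
// and size the index buffer with sizeof(GLuint) * totalIndices instead of sizeof(VertexIndices) * totalIndices.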
I want to draw polygons with a triangle fan. I get the polygons as a data structure with a count of the number of edges followed by an array of coordinates. I figured out that it should work something like this:
-(void) fillarea:(int16_t) count vertices:(int16_t*) pxyarray {
int valueCount = count*2;
GLfloat vertexBuffer[valueCount];
for (int i=0; i<valueCount; i++) {
vertexBuffer[i] = pxyarray[i];
}
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_TRIANGLE_FAN, 0, valueCount);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
It seems to work perfectly with triangles; however, as soon as I use polygons with more edges (squares, pentagons, ...), they all draw an extra triangle to the origin at 0,0. Can someone explain to me what is happening here?
If it helps, here are some example polygons I defined to be drawn with this method:
int16_t verticesTriangle[6] = {50,50,100,50,100,100};
[self fillarea:3 vertices:verticesTriangle];
int16_t verticesSquare[8] = {100,100,150,100,150,150,100,150};
[self fillarea:4 vertices:verticesSquare];
int16_t vertices5[10] = {150,50,175,25,200,50,200,100,150,100};
[self fillarea:5 vertices:vertices5];
int16_t vertices6[12] = {250,50,275,25,300,50,300,75,275,100,250,75};
[self fillarea:6 vertices:vertices6];
The answer to the problem is actually very simple. glDrawArrays wants to know the number of vertices, not the number of values handed over to it. So instead of writing:
glDrawArrays(GL_TRIANGLE_FAN, 0, valueCount); // valueCount = 6 for a triangle
I need to write:
glDrawArrays(GL_TRIANGLE_FAN, 0, count); // count = 3 for a triangle
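For example, the square from the question would be drawn like this (a minimal sketch of the corrected call):
GLfloat square[8] = {100,100, 150,100, 150,150, 100,150}; // 4 vertices, 8 floats
glVertexPointer(2, GL_FLOAT, 0, square);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4); // pass the vertex count (4), not the float count (8)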
You've got the wrong type: to draw squares and rectangles you need to use GL_TRIANGLE_STRIP.
With GL_TRIANGLE_FAN the first vertex is your center, and each triangle is generated from that center, the previously inserted vertex, and your current vertex.
I am trying to learn OpenGL ES 2.0 to do some iPhone game development. I have read through multiple tutorials and some of the OpenGL ES 2.0 spec. All of the examples I have seen have created a single mesh, loaded it into a vertex buffer and then rendered it (with the expected translation, rotation, gradient, etc.)
My question is this: how do you render multiple objects in your scene that have different meshes and are moving independently? If I have a car and a motorcycle, for example, can I create 2 vertex buffers and keep the mesh data for both around for each render call, and then just send in different matrices to the shader for each object? Or do I need to somehow translate the meshes and then combine them into a single mesh so that they can be rendered in one pass? I'm looking for more of the high-level strategy / program structure rather than code examples. I think I just have the wrong mental model of how this works.
Thanks!
The best way I found to do this is using VAOs in addition to VBOs.
I'll first answer your question using VBOs only.
First of all, assume you have the two meshes of your two objects stored in the following arrays:
GLfloat gCubeVertexData1[36] = {...};
GLfloat gCubeVertexData2[36] = {...};
And you also have two vertex buffers:
GLuint _vertexBufferCube1;
GLuint _vertexBufferCube2;
Now, to draw those two cubes (without VAOs), you have to do something like this in the draw function (from the OpenGL ES template):
//Draw first object, bind VBO, adjust your attributes then call DrawArrays
glGenBuffers(1, &_vertexBufferCube1);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferCube1);
glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData1), gCubeVertexData1, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glDrawArrays(GL_TRIANGLES, 0, 36);
//Repeat for second object:
glGenBuffers(1, &_vertexBufferCube2);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferCube2);
glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData2), gCubeVertexData2, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glUseProgram(_program);
glDrawArrays(GL_TRIANGLES, 0, 36);
This will answer your question. But now, to use VAOs, your draw function code is much simpler (which is good, because it is the repeated function):
First you will define two VAOs:
GLuint _vertexArray1;
GLuint _vertexArray2;
and then you will do all the steps previously done in the draw method, but in the setupGL function, after binding the VAO. Then in your draw function you just bind the VAO you want.
A VAO here is like a profile that contains a lot of properties (imagine a smart-device profile). Instead of changing the color, desktop, fonts, etc. every time you wish to change them, you do that once and save it under a profile name. Then you just switch the profile.
So you do that once, inside setupGL, then you switch between them in draw.
Of course you may say that you could have put the code (without VAOs) in a function and called it. That's true, but VAOs are more efficient according to Apple:
http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html#//apple_ref/doc/uid/TP40008793-CH107-SW1
Now to the code:
In setupGL:
glGenVertexArraysOES(1, &_vertexArray1); //Bind to first VAO
glBindVertexArrayOES(_vertexArray1);
glGenBuffers(1, &_vertexBufferCube1); //All steps from this one are done to first VAO only
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferCube1);
glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData1), gCubeVertexData1, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glGenVertexArraysOES(1, &_vertexArray2); // now bind to the second
glBindVertexArrayOES(_vertexArray2);
glGenBuffers(1, &_vertexBufferCube2); //repeat with the second mesh
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferCube2);
glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData2), gCubeVertexData2, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glBindVertexArrayOES(0);
Then finally in your draw method:
glBindVertexArrayOES(_vertexArray1);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArrayOES(_vertexArray2);
glDrawArrays(GL_TRIANGLES, 0, 36);
You maintain separate vertex/index buffers for different objects, yes. For example, you might have a RenderedObject class, and each instance would have its own vertex buffer. One RenderedObject might take its vertices from a house mesh, one might come from a character mesh, etc.
During rendering you set the appropriate transform/rotation/shading for the vertex buffer you're working with, perhaps something like:
void RenderedObject::render()
{
...
//set textures/shaders/transformations
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);
...
}
As mentioned in the other answer, the bufferID is just a GLuint, not the entire contents of the buffer. If you need more details on creating vertex buffers and filling them with data, I'm happy to add those as well.
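For completeness, creating and filling such a buffer is only a few GL calls, done once at load time rather than per frame (a sketch with assumed names for the mesh data):
glGenBuffers(1, &bufferID);                       // bufferID is the GLuint member mentioned above
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glBufferData(GL_ARRAY_BUFFER,
             vertexCount * 3 * sizeof(GLfloat),   // assuming x,y,z per vertex
             meshVertices,                        // assumed pointer to the mesh's float data
             GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);                 // unbind until render()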
I realize this is an older post, but I was trying to find instructions on how to render multiple objects within OpenGL. I found a great tutorial, which describes how to render multiple objects and could be easily extended to render objects of different types (i.e. one cube, one pyramid).
The tutorial I'm posting also describes how to render objects using GLKit. I found it helpful and thought I'd repost it here. I hope it helps you too!
http://games.ianterrell.com/opengl-basics-with-glkit-in-ios5-encapsulated-drawing-and-animation/
If the meshes are different, you keep them in different vertex buffers. If they are similar (e.g. animation, color), you pass arguments to the shader. You only have to keep the handles to the VBOs, not the vertex data itself, if you don't plan on animating the object on the application side. Device-side animation is possible.
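For instance, reusing one mesh for several similar objects could look like this (a minimal sketch, assuming an ES 2.0 program with a per-object model-view-projection uniform and attribute pointers already configured):
glUseProgram(program);                                            // same program for every instance
glUniformMatrix4fv(mvpUniform, 1, GL_FALSE, objectTransform.m);   // per-object GLKMatrix4 (assumed)
glBindBuffer(GL_ARRAY_BUFFER, sharedMeshVBO);                     // handle kept from setup; data stays on the GPU
glDrawArrays(GL_TRIANGLES, 0, vertexCount);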
I am hopefully contributing to this older post, because I undertook to solve this problem a different way. Like the question asker, I have seen lots of "one-object" examples. I undertook to place all vertices into a single VBO, and then save the offset to that object's position (per object), rather than a buffer handle. It worked. The offset can be given as a parameter to glDrawElements as below. It seems obvious in retrospect, but I was not convinced until I saw it work. Please note that I have been working with "vertex pointer" rather than the more current "vertex attribute pointer"; I am working towards the latter so I can leverage shaders.
All the objects "bind" to the same vertex buffer, prior to calling "draw elements".
gl.glVertexPointer( 3, GLES20.GL_FLOAT, 0, vertexBufferOffset );
GLES20.glDrawElements(
GLES20.GL_TRIANGLES, indicesCount,
GLES20.GL_UNSIGNED_BYTE, indexBufferOffset
);
I did not find the purpose of this offset spelled out anywhere, so I took a chance. Also, one gotcha: you have to specify the offset in bytes, not vertices or floats. That is, multiply by four to get the correct position.
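On iOS the same idea looks roughly like this (a sketch; the buffer names and offsets are assumptions, and the last arguments are byte offsets because buffers are bound):
glBindBuffer(GL_ARRAY_BUFFER, sharedVertexBufferID);        // one VBO holding every object's vertices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, sharedIndexBufferID); // one IBO holding every object's indices
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)(firstVertex * 3 * sizeof(GLfloat)));
glDrawElements(GL_TRIANGLES, indicesCount, GL_UNSIGNED_BYTE, (const GLvoid *)firstIndexByteOffset);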
It is possible, when using shaders, to use the same program for all objects without having to compile, link, and create one for each. To do this, simply store the program's GLuint value and then call glUseProgram(programId); for each object. From personal experience, I use a singleton to manage GL program structures (included below :))
@interface TDShaderSet : NSObject {
NSMutableDictionary *_attributes;
NSMutableDictionary *_uniforms;
GLuint _program;
}
@property (nonatomic, readonly, getter=getUniforms) NSMutableDictionary *uniforms;
@property (nonatomic, readonly, getter=getAttributes) NSMutableDictionary *attributes;
@property (nonatomic, readonly, getter=getProgram) GLuint program;
- (GLint) uniformLocation:(NSString*)name;
- (GLint) attribLocation:(NSString*)name;
@end
@interface TDProgamManager : NSObject
+ (TDProgamManager *) sharedInstance;
+ (TDProgamManager *) sharedInstanceWithContext:(EAGLContext*)context;
@property (nonatomic, readonly, getter=getAllPrograms) NSArray *allPrograms;
- (BOOL) loadShader:(NSString*)shaderName referenceName:(NSString*)refName;
- (TDShaderSet*) getProgramForRef:(NSString*)refName;
@end
@interface TDProgamManager () {
NSMutableDictionary *_glPrograms;
EAGLContext *_context;
}
@end
@implementation TDShaderSet
- (GLuint) getProgram
{
return _program;
}
- (NSMutableDictionary*) getUniforms
{
return _uniforms;
}
- (NSMutableDictionary*) getAttributes
{
return _attributes;
}
- (GLint) uniformLocation:(NSString*)name
{
NSNumber *number = [_uniforms objectForKey:name];
if (!number) {
GLint location = glGetUniformLocation(_program, name.UTF8String);
number = [NSNumber numberWithInt:location];
[_uniforms setObject:number forKey:name];
}
return number.intValue;
}
- (GLint) attribLocation:(NSString*)name
{
NSNumber *number = [_attributes objectForKey:name];
if (!number) {
GLint location = glGetAttribLocation(_program, name.UTF8String);
number = [NSNumber numberWithInt:location];
[_attributes setObject:number forKey:name];
}
return number.intValue;
}
- (id) initWithProgramId:(GLuint)program
{
self = [super init];
if (self) {
_attributes = [[NSMutableDictionary alloc] init];
_uniforms = [[NSMutableDictionary alloc] init];
_program = program;
}
return self;
}
@end
@implementation TDProgamManager {
@private
}
static TDProgamManager *_sharedSingleton = nil;
- (NSArray *) getAllPrograms
{
return _glPrograms.allValues;
}
- (TDShaderSet*) getProgramForRef:(NSString *)refName
{
return (TDShaderSet*)[_glPrograms objectForKey:refName];
}
- (BOOL) loadShader:(NSString*)shaderName referenceName:(NSString*)refName
{
NSAssert(_context, @"No Context available");
if ([_glPrograms objectForKey:refName]) return YES;
[EAGLContext setCurrentContext:_context];
GLuint vertShader, fragShader;
NSString *vertShaderPathname, *fragShaderPathname;
// Create shader program.
GLuint _program = glCreateProgram();
// Create and compile vertex shader.
vertShaderPathname = [[NSBundle mainBundle] pathForResource:shaderName ofType:@"vsh"];
if (![self compileShader:&vertShader type:GL_VERTEX_SHADER file:vertShaderPathname]) {
NSLog(#"Failed to compile vertex shader");
return NO;
}
// Create and compile fragment shader.
fragShaderPathname = [[NSBundle mainBundle] pathForResource:shaderName ofType:@"fsh"];
if (![self compileShader:&fragShader type:GL_FRAGMENT_SHADER file:fragShaderPathname]) {
NSLog(#"Failed to compile fragment shader");
return NO;
}
// Attach vertex shader to program.
glAttachShader(_program, vertShader);
// Attach fragment shader to program.
glAttachShader(_program, fragShader);
// Bind attribute locations.
// This needs to be done prior to linking.
glBindAttribLocation(_program, GLKVertexAttribPosition, "a_position");
glBindAttribLocation(_program, GLKVertexAttribNormal, "a_normal");
glBindAttribLocation(_program, GLKVertexAttribTexCoord0, "a_texCoord");
// Link program.
if (![self linkProgram:_program]) {
NSLog(#"Failed to link program: %d", _program);
if (vertShader) {
glDeleteShader(vertShader);
vertShader = 0;
}
if (fragShader) {
glDeleteShader(fragShader);
fragShader = 0;
}
if (_program) {
glDeleteProgram(_program);
_program = 0;
}
return NO;
}
// Release vertex and fragment shaders.
if (vertShader) {
glDetachShader(_program, vertShader);
glDeleteShader(vertShader);
}
if (fragShader) {
glDetachShader(_program, fragShader);
glDeleteShader(fragShader);
}
TDShaderSet *_newSet = [[TDShaderSet alloc] initWithProgramId:_program];
[_glPrograms setValue:_newSet forKey:refName];
return YES;
}
- (BOOL) compileShader:(GLuint *)shader type:(GLenum)type file:(NSString *)file
{
GLint status;
const GLchar *source;
source = (GLchar *)[[NSString stringWithContentsOfFile:file encoding:NSUTF8StringEncoding error:nil] UTF8String];
if (!source) {
NSLog(#"Failed to load vertex shader");
return NO;
}
*shader = glCreateShader(type);
glShaderSource(*shader, 1, &source, NULL);
glCompileShader(*shader);
#if defined(DEBUG)
GLint logLength;
glGetShaderiv(*shader, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0) {
GLchar *log = (GLchar *)malloc(logLength);
glGetShaderInfoLog(*shader, logLength, &logLength, log);
NSLog(#"Shader compile log:\n%s", log);
free(log);
}
#endif
glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
if (status == 0) {
glDeleteShader(*shader);
return NO;
}
return YES;
}
- (BOOL) linkProgram:(GLuint)prog
{
GLint status;
glLinkProgram(prog);
#if defined(DEBUG)
GLint logLength;
glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0) {
GLchar *log = (GLchar *)malloc(logLength);
glGetProgramInfoLog(prog, logLength, &logLength, log);
NSLog(#"Program link log:\n%s", log);
free(log);
}
#endif
glGetProgramiv(prog, GL_LINK_STATUS, &status);
if (status == 0) {
return NO;
}
return YES;
}
- (BOOL) validateProgram:(GLuint)prog
{
GLint logLength, status;
glValidateProgram(prog);
glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0) {
GLchar *log = (GLchar *)malloc(logLength);
glGetProgramInfoLog(prog, logLength, &logLength, log);
NSLog(#"Program validate log:\n%s", log);
free(log);
}
glGetProgramiv(prog, GL_VALIDATE_STATUS, &status);
if (status == 0) {
return NO;
}
return YES;
}
#pragma mark - Singleton stuff... Don't mess with this other than proxyInit!
- (void) proxyInit
{
_glPrograms = [[NSMutableDictionary alloc] init];
}
- (id) init
{
Class myClass = [self class];
@synchronized(myClass) {
if (!_sharedSingleton) {
if (self = [super init]) {
_sharedSingleton = self;
[self proxyInit];
}
}
}
return _sharedSingleton;
}
+ (TDProgamManager *) sharedInstance
{
@synchronized(self) {
if (!_sharedSingleton) {
_sharedSingleton = [[self alloc] init];
}
}
return _sharedSingleton;
}
+ (TDProgamManager *) sharedInstanceWithContext:(EAGLContext*)context
{
@synchronized(self) {
if (!_sharedSingleton) {
_sharedSingleton = [[self alloc] init];
}
_sharedSingleton->_context = context;
}
return _sharedSingleton;
}
+ (id) allocWithZone:(NSZone *)zone
{
@synchronized(self) {
if (!_sharedSingleton) {
return [super allocWithZone:zone];
}
}
return _sharedSingleton;
}
+ (id) copyWithZone:(NSZone *)zone
{
return self;
}
@end
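A rough usage sketch (the shader and reference names here are hypothetical, and self.context is assumed to be a valid EAGLContext):
// during GL setup
TDProgamManager *manager = [TDProgamManager sharedInstanceWithContext:self.context];
[manager loadShader:@"BasicShader" referenceName:@"basic"]; // loads BasicShader.vsh / BasicShader.fsh
// later, per object
TDShaderSet *shaderSet = [manager getProgramForRef:@"basic"];
glUseProgram(shaderSet.program);
GLint mvpLocation = [shaderSet uniformLocation:@"u_modelViewProjection"]; // hypothetical uniform name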
Note that once data-spaces (attributes/uniforms) are passed in, you don't have to pass them in each render cycle, only when invalidated. This results in a serious GPU performance gain.
On the VBO side of things, the answer above spells out how best to deal with this. On the orientation side of the equation, you'll need a mechanism to nest tdobjects inside each other (similar to UIView and its children under iOS) and then evaluate relative rotations to parents, etc.
Good luck!