I'm rendering an interleaved VBO using the following code, which works fine.
glVertexPointer(3, GL_FLOAT, sizeof(InterleavedVertexData), (GLvoid*)((char*)0));
glNormalPointer(GL_FLOAT, sizeof(InterleavedVertexData), (GLvoid*)((char*)0+3*sizeof(GLfloat)));
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(InterleavedVertexData), (GLvoid*)((char*)0+6*sizeof(GL_UNSIGNED_BYTE)));
When I change glColorPointer's pointer parameter to use GLubyte, I don't see anything rendered on the screen. I'm also defining colour as GLubyte in my struct.
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(InterleavedVertexData), (GLvoid*)((char*)0+6*sizeof(GLubyte)));
GLubyte is a type. GL_UNSIGNED_BYTE is an integer constant that is often used to tell OpenGL that the data you are passing consists of GLubytes.
sizeof(GLubyte) is always 1 by definition. sizeof(GL_UNSIGNED_BYTE) will typically return 4, because GL_UNSIGNED_BYTE is just an integer constant and therefore has the size of an int on your system.
GL_UNSIGNED_BYTE is a symbolic constant, while GLubyte is a type. GLubyte is commonly implemented as a typedef of unsigned char; you can confirm this by looking at your gl.h.
Use GL_UNSIGNED_BYTE inside your OpenGL calls to specify the type of data you are passing along, and use GLubyte to compute the size of your data.
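For what it's worth, the original (working) call used 6*sizeof(GL_UNSIGNED_BYTE), which with a 4-byte int works out to 24 bytes, the offset of the colour field after six floats; 6*sizeof(GLubyte) is only 6 bytes, which points into the middle of the position/normal data. A less error-prone sketch, assuming the struct is laid out as three position floats, three normal floats and four colour bytes (my assumption, not your actual header), is to let offsetof compute the offsets:
#include <stddef.h>   // for offsetof

// Assumed layout, inferred from the question's offsets; adjust to match your real struct.
typedef struct {
    GLfloat position[3];   // offset 0
    GLfloat normal[3];     // offset 12
    GLubyte color[4];      // offset 24
} InterleavedVertexData;

glVertexPointer(3, GL_FLOAT, sizeof(InterleavedVertexData),
                (GLvoid*)((char*)0 + offsetof(InterleavedVertexData, position)));
glNormalPointer(GL_FLOAT, sizeof(InterleavedVertexData),
                (GLvoid*)((char*)0 + offsetof(InterleavedVertexData, normal)));
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(InterleavedVertexData),
               (GLvoid*)((char*)0 + offsetof(InterleavedVertexData, color)));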
I'm working with a C API in Objective-C. The API needs some specific structures, and I'm having trouble figuring out how to work with arrays of structs. Here's some code to illustrate:
I have the class Sprite where I define the struct Vertex.
Every instance of Sprite has its own array of Vertices.
#import <Foundation/Foundation.h>

typedef struct {
    float Position[3];
    float Color[4];
} Vertex;

@interface Sprite : NSObject {
    Vertex *_vertices;
}

// Getter/setter methods
- (void)setVertices:(Vertex *)vx;
- (Vertex *)vertices;

@end
In another class I use the Sprite class like this: I create a Vertex array and assign it to an instance of Sprite:
Sprite *spr = [[Sprite alloc] init];

Vertex vertices2[] = {
    {{1, -1, 0}, {1, 0, 0, 1}},
    {{1, 1, 0}, {1, 0, 0, 1}},
    {{-1, 1, 0}, {1, 1, 0, 1}},
    {{-1, -1, 0}, {1, 1, 0, 1}}
};

spr.vertices = vertices2;
Now if I apply sizeof directly to vertices2 I get 112, but sizeof on spr.vertices gives 4. Why? It's the same data with the same values.
My suspicion is that I'm mixing up arrays (array[]) and pointers (pointer*) somehow.
How can I modify the Sprite class to use an array of structs the correct way?
sizeof computes the size of its argument at compile time. If you pass it a pointer, it returns the size of the pointer type (usually 4 or 8 bytes). Even if you pass a dereferenced pointer, i.e. sizeof(*spr.vertices), you will get the size of the Vertex type, not the size of your whole array.
If you need to know the size of the instance at runtime, put it (the size) in another ivar.
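A rough sketch of that idea (the extra method and ivar names here are mine, not from the question):
@interface Sprite : NSObject {
    Vertex *_vertices;
    NSUInteger _vertexCount;   // how many Vertex structs _vertices points to
}

- (void)setVertices:(Vertex *)vx count:(NSUInteger)count;
- (Vertex *)vertices;
- (NSUInteger)vertexCount;

@end

// Caller side, using the vertices2 array from the question:
[spr setVertices:vertices2 count:sizeof(vertices2) / sizeof(vertices2[0])];
// Later, the buffer's size in bytes is [spr vertexCount] * sizeof(Vertex).
// Note: Sprite only stores the pointer, so either keep vertices2 alive or have the setter copy the data.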
spr.vertices is a pointer; vertices2 is an actual array of Vertex structs. They are different types, so sizeof treats them differently.
I'm confused about one strange thing. I have an unsigned char array that I allocate with calloc and write some bytes of data into. But when I free this buffer and allocate it again, it gets the same address in memory that was allocated the previous time. I understand why. What I can't understand is why the data I try to write there the second time isn't written; the buffer still contains the data that was written the first time. Can anybody explain this?
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
This is how I allocate it:
Actually my problem is that this allocation happens once every 2 seconds, which gives me a memory leak. But when I try to free the allocated block, the thing described above happens.
Here is the code...
- (unsigned char*) createBitmapContext:(UIImage*)anImage{
CGImageRef imageRef = [anImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
bytesPerPixel = 4;
bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
imageRef=nil;
    return rawData;
}
This code doesn't include the part where I free(rawData). Since I can't free it inside this method, I tried defining rawData globally and freeing it after calling the method, but that didn't help.
Please, if anybody can help me, I would be so glad.
Ok, so this method is rendering a UIImage into a freshly allocated byte buffer and returning the buffer to the caller. Since you're allocating it with calloc, it will be initialised to 0, then overwritten with the image contents.
when I free this unsigned char and allocate it again, I see that it reserves the same address in memory which was allocated previous time
Yes, there are no guarantees about the location of the buffer in memory. Assuming you call free() on the returned memory, requesting the exact same size is quite likely to give you the same buffer back. But - how are you verifying the contents are not written over a second time? What is in the buffer?
my problem is that because of this allocation , which happens once every 2 secs I have memory leak...But when I try to free the allocated memory sector happens thing described above....:(
If there is a leak, it is likely in the code that calls this method, since there is no obvious leakage here. The semantics are obviously such that the caller is responsible for freeing the buffer. So how is that done?
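For illustration, the usage pattern those semantics imply would look something like this (someImage is a placeholder, not code from the question):
unsigned char *rawData = [self createBitmapContext:someImage];
// ... use the width * height * 4 bytes pointed to by rawData ...
free(rawData);   // the caller owns the buffer, so the caller must free it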
Also, are you verifying that the CGBitmapContext is being correctly created? It is possible that some creation flags or parameters may result in an error. So add a check for context being valid (at least not nil). That could explain why the content is not being overwritten.
One easy way to ensure your memory is being freshly updated is to write your own data to it. You could fill the buffer with a counter, and verify this outside the method. For example, just before you return rawData:
static unsigned updateCounter = 0;
updateCounter++;   // advance the fill value on every call
memset(rawData, updateCounter & 0xff, width * height * 4);
This will cycle through writing 0-255 into the buffer, which you can easily verify.
Another thing - what are you trying to accomplish with this code? There might be an easier way to do it. Returning bare buffers devoid of metadata is not necessarily the best way to manage your images.
So guys, I solved this issue. First, I changed the createBitmapContext method to this:
- (void)createBitmapContext:(UIImage *)anImage andRawData:(unsigned char *)theRawData
{
    CGImageRef imageRef = [anImage CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    bytesPerPixel = 4;
    bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(theRawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    imageRef = nil;
    // return theRawData;
}
Then, besides this, I had missed the part where I assigned newRawData to oldRawData; that left me with two pointers to the same memory address, and that's where the issue came from. I changed that assignment to memcpy(rawDataForOldImage, rawDataForNewImage, newCapturedImage.size.width * newCapturedImage.size.height * 4); and the problem is solved. Thanks to all.
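To spell out the difference described here (rawDataForOldImage, rawDataForNewImage and newCapturedImage are the poster's names; the rest is illustrative):
// Plain assignment only copies the pointer; both names then refer to the same buffer:
rawDataForOldImage = rawDataForNewImage;

// memcpy copies the bytes into the separately allocated buffer instead:
memcpy(rawDataForOldImage, rawDataForNewImage,
       newCapturedImage.size.width * newCapturedImage.size.height * 4);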
I've been looking at the new OpenGL framework for iOS, aptly named GLKit, and have been playing around with porting some existing OpenGL 1.0 code to OpenGL ES 2.0 just to dip my toe in the water and get to grips with things.
After reading the API and a whole ream of other best practices provided by Apple and the OpenGL documentation, I've had it pretty much ingrained into me that I should be using Vertex Buffer Objects and using "elements", or rather, vertex indices. There seems to be a lot of mention of optimising memory storage by using padding where necessary too, but that's a conversation for another day perhaps ;)
I read on SO a while ago about the benefits of using NSMutableData over classic malloc/free and wanted to take this approach when writing my VBO. So far I've managed to bungle together a snippet that looks like I'm heading down the right track, but I'm not entirely sure about how much data a VBO should contain. Here's what I've got so far:
//import headers
#import <GLKit/GLKit.h>
#pragma mark -
#pragma mark InterleavingVertexData
//vertex buffer object struct
struct InterleavingVertexData
{
    //vertices
    GLKVector3 vertices;
    //normals
    GLKVector3 normal;
    //color
    GLKVector4 color;
    //texture coordinates
    GLKVector2 texture;
};
typedef struct InterleavingVertexData InterleavingVertexData;
#pragma mark -
#pragma mark VertexIndices
//vertex indices struct
struct VertexIndices
{
    //vertex indices
    GLuint a;
    GLuint b;
    GLuint c;
};
typedef struct VertexIndices VertexIndices;

//create and return a vertex index with specified indices
static inline VertexIndices VertexIndicesMake(GLuint a, GLuint b, GLuint c)
{
    //declare vertex indices
    VertexIndices vertexIndices;
    //set indices
    vertexIndices.a = a;
    vertexIndices.b = b;
    vertexIndices.c = c;
    //return vertex indices
    return vertexIndices;
}
#pragma mark -
#pragma mark VertexBuffer
//vertex buffer struct
struct VertexBuffer
{
    //vertex data
    NSMutableData *vertexData;
    //vertex indices
    NSMutableData *indices;
    //total number of vertices
    NSUInteger totalVertices;
    //total number of indices
    NSUInteger totalIndices;
};
typedef struct VertexBuffer VertexBuffer;
//create and return a vertex buffer with allocated data
static inline VertexBuffer VertexBufferMake(NSUInteger totalVertices, NSUInteger totalIndices)
{
    //declare vertex buffer
    VertexBuffer vertexBuffer;
    //set vertices and indices count
    vertexBuffer.totalVertices = totalVertices;
    vertexBuffer.totalIndices = totalIndices;
    //set vertex data and indices
    vertexBuffer.vertexData = nil;
    vertexBuffer.indices = nil;
    //check vertices count
    if(totalVertices > 0)
    {
        //allocate data
        vertexBuffer.vertexData = [[NSMutableData alloc] initWithLength:(sizeof(InterleavingVertexData) * totalVertices)];
    }
    //check indices count
    if(totalIndices > 0)
    {
        //allocate data
        vertexBuffer.indices = [[NSMutableData alloc] initWithLength:(sizeof(VertexIndices) * totalIndices)];
    }
    //return vertex buffer
    return vertexBuffer;
}
//grow or shrink a vertex buffer
static inline void VertexBufferResize(VertexBuffer *vertexBuffer, NSUInteger totalVertices, NSUInteger totalIndices)
{
    //check adjusted vertices count
    if(vertexBuffer->totalVertices != totalVertices && totalVertices > 0)
    {
        //set vertices count
        vertexBuffer->totalVertices = totalVertices;
        //check data is valid
        if(vertexBuffer->vertexData)
        {
            //allocate data
            [vertexBuffer->vertexData setLength:(sizeof(InterleavingVertexData) * totalVertices)];
        }
        else
        {
            //allocate data
            vertexBuffer->vertexData = [[NSMutableData alloc] initWithLength:(sizeof(InterleavingVertexData) * totalVertices)];
        }
    }
    //check adjusted indices count
    if(vertexBuffer->totalIndices != totalIndices && totalIndices > 0)
    {
        //set indices count
        vertexBuffer->totalIndices = totalIndices;
        //check data is valid
        if(vertexBuffer->indices)
        {
            //allocate data
            [vertexBuffer->indices setLength:(sizeof(VertexIndices) * totalIndices)];
        }
        else
        {
            //allocate data
            vertexBuffer->indices = [[NSMutableData alloc] initWithLength:(sizeof(VertexIndices) * totalIndices)];
        }
    }
}
//release vertex buffer data
static inline void VertexBufferRelease(VertexBuffer *vertexBuffer)
{
    //set vertices and indices count
    vertexBuffer->totalVertices = 0;
    vertexBuffer->totalIndices = 0;
    //check vertices are valid
    if(vertexBuffer->vertexData)
    {
        //clean up
        [vertexBuffer->vertexData release];
        vertexBuffer->vertexData = nil;
    }
    //check indices are valid
    if(vertexBuffer->indices)
    {
        //clean up
        [vertexBuffer->indices release];
        vertexBuffer->indices = nil;
    }
}
Currently, the interleaving vertex data holds the position, normal, color and texture coordinates for each vertex. I was under the impression that there would be an equal number of vertices and indices, but in practice this obviously isn't the case, so the indices live in the VertexBuffer struct rather than in InterleavingVertexData.
Question Updated:
I've updated the code above after managing to wrangle it into a working state. Hopefully it will come in useful to someone in the future.
Now that I've managed to set everything up, I'm having trouble getting the expected results when rendering the content bound to the VBO. Here's the code I've got so far for loading my data into OpenGL:
//generate buffers
glGenBuffers(2, buffers);
//bind vertices buffer
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, (sizeof(InterleavingVertexData) * vertexBuffer.totalVertices), self.vertexData, GL_STATIC_DRAW);
//bind indices buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, (sizeof(VertexIndices) * vertexBuffer.totalIndices), self.vertexIndices, GL_STATIC_DRAW);
//reset buffers
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
And the code for rendering everything:
//enable required attributes
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glEnableVertexAttribArray(GLKVertexAttribColor);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
//bind buffers
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
//set shape attributes
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, vertices));
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, normal));
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, color));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, texture));
//draw shape
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices, GL_UNSIGNED_INT, (void *)0);
//reset buffers
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
//disable atttributes
glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
glDisableVertexAttribArray(GLKVertexAttribColor);
glDisableVertexAttribArray(GLKVertexAttribNormal);
glDisableVertexAttribArray(GLKVertexAttribPosition);
Whilst my iPhone hasn't yet exploded with awesome graphics of unicorns shooting rainbows from their eyes, I haven't been able to render a simple shape in its entirety without tearing my hair out.
From the rendering it looks as though only a third of each shape is being drawn, perhaps half depending on the viewing angle. It seems the culprit is the count parameter passed to glDrawElements, as fiddling with this gives differing results, but I've read the documentation and checked the value over and over again, and it does indeed expect the total number of indices (which is what I'm passing currently).
As I mentioned in my original question, I'm quite confused by VBOs currently, or rather, confused by the implementation more than the concept. If anyone would be so kind as to cast an eye over my implementation, that would be super awesome, as I'm sure I've made a rookie error somewhere along the way, but you know how it is when you stare at something for hours on end with no progress.
Thanks for reading!
I think I see your problem.
You've got a struct, VertexIndices which contains three indices, or the indices for one triangle. When you bind your IBO (Index Buffer Object, the buffer object containing your indices), you do this:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, (sizeof(VertexIndices) * vertexBuffer.totalIndices), self.vertexIndices, GL_STATIC_DRAW);
Which is fine. The size parameter of glBufferData is in bytes, so you're multiplying the size of one VertexIndices struct (three GLuints) by the number of those structs you have. Great.
But then when you actually call glDrawElements, you do this:
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices, GL_UNSIGNED_INT, (void *)0);
However, the vertexBuffer.totalIndices is equal to the number of VertexIndices structs you've got, which is equal to the total number of indices / 3 (or total number of triangles). So you need to do one of the following:
Easy fix yet stupid: glDrawElements(..., vertexBuffer.totalIndices * 3, ...);
Proper yet more work: vertexBuffer.totalIndices should contain the actual total number of indices that you've got, not the total number of triangles you're rendering.
You need to do one of these because right now totalIndices contains the total number of VertexIndices structs you've got, and each one holds 3 indices. The right thing to do here is either rename totalIndices to totalTriangles, or keep track of the actual total number of indices somewhere.
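As a quick sketch of the first option, using the variables from the question (the * 3 accounts for the three indices held by each VertexIndices struct):
//draw shape: totalIndices currently counts VertexIndices structs (triangles), so multiply by 3
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices * 3, GL_UNSIGNED_INT, (void *)0);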
Hi, I am relatively new to programming on iOS and using Objective-C. Recently I have come across an issue I cannot seem to solve: I am writing an OBJ model loader to use within my iOS programming. For this I use two arrays, as below:
static CGFloat modelVertices[360*9]={};
static CGFloat modelColours[360*12]={};
As you can see, the length is currently hard-coded to 360 (the number of faces in a particular model). Is there a way to allocate this dynamically, from a value calculated after reading the OBJ file, as below?
int numOfVertices = //whatever this is read from file;
static CGFloat modelColours[numOfVertices*12]={};
I have tried using NSMutableArray but found it difficult to work with here, because when it comes to actually drawing the mesh I need to use this code:
- (void)render
{
    // load arrays into the engine
    glVertexPointer(vertexStride, GL_FLOAT, 0, vertexes);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(colorStride, GL_FLOAT, 0, colors);
    glEnableClientState(GL_COLOR_ARRAY);
    // render
    glDrawArrays(renderStyle, 0, vertexCount);
}
As you can see the command glVertexPointer requires the values as a CGFloat array:
glVertexPointer (GLint size, GLenum type, GLsizei stride, const GLvoid *pointer);
You could use a C-style malloc to dynamically allocate space for the array.
int numOfVertices = ...; // whatever is read from the file
CGFloat *modelColours = (CGFloat *)malloc(sizeof(CGFloat) * numOfVertices * 12);
When you declare a static variable, its size and initial value must be known at compile time. What you can do is declare the variable as a pointer instead of an array, then use malloc or calloc to allocate space for the array and store the result in your variable.
static CGFloat *modelColours = NULL;

int numOfVertices = ...; // whatever is read from the file
if(modelColours == NULL) {
    modelColours = (CGFloat *)calloc(sizeof(CGFloat), numOfVertices * 12);
}
I used calloc instead of malloc here because a static array would be zero-filled by default, so this keeps the behaviour consistent with the original code.
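As a usage sketch (colorStride comes from the render method above; parsedVertexCount is a hypothetical value you'd compute while reading the OBJ file), the dynamically allocated pointer is passed to OpenGL exactly like the old static array was:
int numOfVertices = parsedVertexCount;   // hypothetical: computed while parsing the OBJ file
if(modelColours == NULL) {
    modelColours = (CGFloat *)calloc(numOfVertices * 12, sizeof(CGFloat));
}
// ... fill modelColours while reading the file ...
glColorPointer(colorStride, GL_FLOAT, 0, modelColours);   // a pointer works here just like the array did
// (GL_FLOAT assumes CGFloat is a 32-bit float, which it is on 32-bit iOS.)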
I've noticed in Apple's sample code that they often provide a value of 0 in the bytesPerRow parameter of CGBitmapContextCreate. For example, this comes out of the Reflection sample project.
CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,
8, 0, colorSpace, kCGImageAlphaNone);
That seemed odd to me, since I've always gone the route of multiplying the image width by the number of bytes per pixel. I tried swapping a zero into my own code and tested it out. Sure enough, it still works.
size_t bitsPerComponent = 8;
size_t bytesPerPixel = 4;
size_t bytesPerRow = reflectionWidth * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
reflectionWidth,
reflectionHeight,
bitsPerComponent,
0, // bytesPerRow ??
colorSpace,
kCGImageAlphaPremultipliedLast);
According to the docs, bytesPerRow should be "The number of bytes of memory to use per row of the bitmap."
So whats the deal? When can I supply a zero and when must I calculate the exact value? Are there any performance implications of doing it one way or the other?
My understanding is that if you pass in zero, CoreGraphics calculates the bytes-per-row for you from the width, bits per component, and pixel format. You might want additional padding at the end of each row of bytes (if your device required it, or some other constraint applied); in that case you could pass a value larger than width * (bytes per pixel). I would imagine this is rarely needed in modern iOS/macOS development, except for some unusual edge-case optimizations.
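If you're curious what stride CoreGraphics actually picked, you can ask the context afterwards with CGBitmapContextGetBytesPerRow. A minimal sketch (width and height are whatever dimensions you need):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                             8,   // bitsPerComponent
                                             0,   // bytesPerRow: let CoreGraphics decide
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
size_t actualBytesPerRow = CGBitmapContextGetBytesPerRow(context);
// actualBytesPerRow will be at least width * 4 for this format,
// and may be larger if CoreGraphics adds row padding for alignment.
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);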