OpenGL GL_DEPTH_TEST not working - objective-c

I just ported a .obj loader to Objective-C and so far it works; I can get my vertices, normals, and so on.
Every normal is good and points in the right direction, and all my faces are in CCW winding, but I have some issues with the depth test.
float rotX = 0;
float rotY = 0;
objModel* o = [[objModel alloc] initWithPath:@"/model.obj"];
glClearColor(0,0,0,0);
glEnable(GL_DEPTH_TEST);
glFrontFace(GL_CCW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslated(0, -1, 0);
glRotatef(90,0,0,1);
glRotatef(90,0,1,0);
glRotatef(rotX,0,0,-1);
glRotatef(rotY,0,1,0);
[o drawObjWithArrays];
glFlush();
I have two different ways of drawing my object: one uses glBegin()/glEnd(), the other uses vertex and normal arrays with a call to glDrawArrays(). Both result in the same problem: faces that should be hidden by faces in front of them are displayed, because the depth test isn't working. The faces are drawn in the order they come in the .obj file.
You'll find an image here : http://img524.imageshack.us/img524/994/image2jgq.png
I'm quite new to OpenGL and Objective-C, so I guess my problem comes from a setting I forgot. Here it is:
-(id) initWithFrame:(NSRect) frame {
    NSLog(@"INIT GL VIEW\n");
    GLuint attributes[] = {
        NSOpenGLPFANoRecovery,
        NSOpenGLPFAWindow,
        NSOpenGLPFAAccelerated,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFAColorSize, 24,
        NSOpenGLPFAAlphaSize, 8,
        NSOpenGLPFADepthSize, 24,
        NSOpenGLPFAStencilSize, 8,
        NSOpenGLPFAAccumSize, 0,
        0
    };
    NSOpenGLPixelFormat* fmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:(NSOpenGLPixelFormatAttribute*) attributes];
    if (!fmt)
        NSLog(@"No OpenGL pixel format");
    GLfloat mat_ambient[] = {0.0, 0.0, 1.0, 1.0};
    GLfloat mat_flash[] = {0.0, 0.0, 1.0, 1.0};
    GLfloat mat_flash_shiny[] = {50.0};
    GLfloat light_position[] = {100.0, -200.0, -200.0, 0.0};
    GLfloat ambi[] = {0.1, 0.1, 0.1, 0.1};
    GLfloat lightZeroColor[] = {0.9, 0.9, 0.9, 0.1};
    /* set the light and material */
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glLightfv(GL_LIGHT0, GL_AMBIENT, ambi);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, lightZeroColor);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_flash_shiny);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_flash);
    glMaterialfv(GL_FRONT, GL_AMBIENT, mat_ambient);
    return self = [super initWithFrame:frame pixelFormat:[fmt autorelease]];
}
Can anyone help me? I've tried every glDepthFunc, glCullFace, and glFrontFace combination possible; nothing works...
Thanks ^^

What do you get when you add this to your draw method?
int depth;
glGetIntegerv(GL_DEPTH_BITS, &depth);
NSLog(@"%i bits depth", depth);
A few more things to try:
make sure initWithFrame is being called
if you're using an NSOpenGLView subclass from IB, set the depth buffer in IB, and move all your OpenGL initialization into - (void)prepareOpenGL (see the sketch below)
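A minimal sketch of that move, assuming a custom NSOpenGLView subclass (prepareOpenGL runs once the GL context actually exists, unlike initWithFrame:):
- (void)prepareOpenGL {
    [super prepareOpenGL];
    // Safe to touch GL state here: the context is current
    glClearColor(0, 0, 0, 0);
    glClearDepth(1.0);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glEnable(GL_CULL_FACE);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
}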

In case someone wonders how to set this without IB:
NSOpenGLPixelFormatAttribute attrs[] = {
// NSOpenGLPFADoubleBuffer,
NSOpenGLPFADepthSize, 32,
0
};
NSOpenGLPixelFormat *format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
NSOpenGLView *view = [[NSOpenGLView alloc] initWithFrame:frame pixelFormat:format];

Make sure you call glDepthFunc() to set the depth buffer comparison function. Most applications use GL_LEQUAL or GL_LESS for the depth function. Also make sure you call glClearDepth() to set the value that the depth buffer gets cleared to; you should probably use a parameter of 1.0 for this to clear to maximum depth.
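For example, a minimal sketch of that setup, to run once the context is current (GL_LEQUAL is one choice; GL_LESS works as well):
glClearDepth(1.0);       // clear to the far plane
glDepthFunc(GL_LEQUAL);  // keep fragments at or nearer than the stored depth
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);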

I'm not sure what the appropriate AGL calls look like, but you should make sure you actually allocate bit planes for the depth buffer.
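For reference, an untested sketch of what that might look like with AGL (attribute names from the AGL headers; treat this as an assumption, not verified code):
GLint attribs[] = { AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 24, AGL_NONE };
AGLPixelFormat pf = aglChoosePixelFormat(NULL, 0, attribs);  // request a 24-bit depth buffer
AGLContext ctx = aglCreateContext(pf, NULL);
aglDestroyPixelFormat(pf);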

Related

I need help optimizing BGR888 blitting to NSView

This is the best I've come up with for blitting a 24-bit BGR image out to an NSView.
I did trim a significant amount of CPU time by ensuring that the host NSWindow has the same colorSpace.
I think there are 4 or 5 pixel copies going on here:
in the vImage conversion (required)
calling CGDataProviderCreateWithData
calling CGImageCreate
creating the NSBitmapImageRep bitmap
in the final blit with drawInRect (required)
Anyone want to chime in on improving it?
Any help would be much appreciated.
{
    // one-time setup code
    CGColorSpaceRef useColorSpace = nil;
    int w = 1920;
    int h = 1080;
    [theWindow setColorSpace: [NSColorSpace genericRGBColorSpace]];
    // set up vImage buffers (not listed here)
    // srcBuffer is my 24-bit BGR image (malloc-ed to be w*h*3)
    // dstBuffer is for the resulting 32-bit RGBA image (malloc-ed to be w*h*4)
    ...
    // this is called @ 30-60fps
    if (!useColorSpace)
        useColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    vImage_Error err = vImageConvert_BGR888toRGBA8888(srcBuffer, NULL, 0xff, dstBuffer, NO, 0);
    CGDataProviderRef newProvider = CGDataProviderCreateWithData(NULL, dstBuffer->data, w*h*4, myReleaseProvider);
    CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, newProvider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(newProvider);
    // store myImageRGBA in an array of frames (using NSObject wrappers) for later access (setNeedsDisplay:)
    ...
}
- (void)drawRect:(NSRect)dirtyRect
{
    // this is called @ 30-60fps
    CGImageRef storedImage = ...; // retrieve from array
    NSBitmapImageRep *repImg = [[NSBitmapImageRep alloc] initWithCGImage:storedImage];
    CGRect myFrame = CGRectMake(0, 0, CGImageGetWidth(storedImage), CGImageGetHeight(storedImage));
    [repImg drawInRect:myFrame fromRect:myFrame operation:NSCompositeCopy fraction:1.0 respectFlipped:TRUE hints:nil];
    // free image from array (not listed here)
}
// this is called when the CGDataProvider is ready to release its data
void myReleaseProvider(void *info, const void *data, size_t size)
{
    if (data) {
        free((void *)data);
        data = nil;
    }
}
Use CGColorSpaceCreateDeviceRGB instead of genericRGB to avoid colorspace conversion inside CG. Use kCGImageAlphaNoneSkipLast instead of kCGImageAlphaLast since we know alpha is opaque to allow for a copy instead of a blend.
After you make those changes, it would be useful to run an Instruments time profile on it to show where the time is going.
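Concretely, a sketch of those two changes using the names from the question:
// Device RGB avoids a colorspace conversion inside CG
useColorSpace = CGColorSpaceCreateDeviceRGB();
// NoneSkipLast marks the alpha byte as opaque padding, so CG can copy instead of blend
CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace,
    kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast,
    newProvider, NULL, false, kCGRenderingIntentDefault);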

Dynamically allocate an array of structs for OpenGL in Objective-C

I'm a beginner in OpenGL, but I can already draw a simple triangle, rectangle, etc.
My problem is:
I have this structure and a static array of that structure:
typedef struct {
    GLKVector3 Position;
} Vertex;

const Vertex Vertices[] = {
    {{0.0, 0.0, 0.0}},
    {{0.5, 0.0, 0.0}},
    {{0.5, 0.5, 0.0}},
    {{0.0, 0.5, 0.0}},
    {{0.0, 0.0, 0.0}}
};
...some code
but I need to create the array of vertices dynamically... :(
Example:
typedef struct {
    GLKVector3 Position;
} Vertex;

instance variable - iVertices of type Vertex

- (void) viewDidLoad {
    int numOfVertices = 0;
    Vertex vertices[] = {{0.0, 0.0, 0.0}};
    [self addVertex:vertices atIndex:numOfVertices];
    numOfVertices++;
    Vertex vertices[] = {{0.5, 0.0, 0.0}};
    [self addVertex:vertices atIndex:numOfVertices];
    numOfVertices++;
    Vertex vertices[] = {{0.5, 0.5, 0.0}};
    [self addVertex:vertices atIndex:numOfVertices];
}

- (void) addVertex:(Vertex)vertex atIndex:(int)num {
    iVertices[num] = vertex;
}
...and somewhere
glBufferData(GL_ARRAY_BUFFER,
sizeof(iVertices),
iVertices,
GL_STATIC_DRAW);
and this is not allowed in Objective-C, or I don't know how to do it :(
Neither malloc nor calloc helps me...
Thanks a lot!
Your main problem here is that you can't just take sizeof of an array that is an instance variable: the instance variable is a pointer, so sizeof returns the pointer size (8 on a 64-bit system), not the size of the data. Instead, save the count of the array as another instance variable (or use numOfVertices) and multiply it by sizeof(Vertex). Something like glBufferData(GL_ARRAY_BUFFER, numOfVertices * sizeof(Vertex), iVertices, GL_STATIC_DRAW); should work for your case.
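To make the question's example compile, here is a minimal sketch of the dynamic version. It assumes iVertices (Vertex *) and iVertexCount (int) are instance variables, and the fixed capacity of 16 is a placeholder; real code would grow the buffer as needed:

// Instance variables (in the @interface or class extension):
// Vertex *iVertices;
// int iVertexCount;

- (void)viewDidLoad {
    [super viewDidLoad];
    iVertices = malloc(16 * sizeof(Vertex)); // heap-allocated; capacity is arbitrary here
    iVertexCount = 0;
    [self addVertex:(Vertex){ GLKVector3Make(0.0, 0.0, 0.0) }];
    [self addVertex:(Vertex){ GLKVector3Make(0.5, 0.0, 0.0) }];
    [self addVertex:(Vertex){ GLKVector3Make(0.5, 0.5, 0.0) }];
}

- (void)addVertex:(Vertex)vertex {
    iVertices[iVertexCount++] = vertex;
}

// ...and when uploading, use the count rather than sizeof:
glBufferData(GL_ARRAY_BUFFER, iVertexCount * sizeof(Vertex), iVertices, GL_STATIC_DRAW);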

Drawing a filled square with Objective-C / Cocos2D

I'm desperately trying to draw a filled square with Cocos2D, and I can't manage to find an example of how to do it.
Here is my draw method. I succeeded in drawing a square, but I can't manage to fill it!
I've read that I need to use an OpenGL method called glDrawArrays with the GL_TRIANGLE_FAN parameter in order to draw a filled square, and that's what I tried.
-(void) draw
{
    // Disable textures - we want to draw with plain colors
    ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position | kCCVertexAttribFlag_Color );
    float l_fRedComponent = 0;
    float l_fGreenComponent = 0;
    float l_fBlueComponent = 0;
    float l_fAlphaComponent = 0;
    [mpColor getRed:&l_fRedComponent green:&l_fGreenComponent blue:&l_fBlueComponent alpha:&l_fAlphaComponent];
    ccDrawColor4F(l_fRedComponent, l_fGreenComponent, l_fBlueComponent, l_fAlphaComponent);
    glLineWidth(10);
    CGPoint l_bottomLeft, l_bottomRight, l_topLeft, l_topRight;
    l_bottomLeft.x = miPosX - miWidth / 2.0f;
    l_bottomLeft.y = miPosY - miHeight / 2.0f;
    l_bottomRight.x = miPosX + miWidth / 2.0f;
    l_bottomRight.y = miPosY - miHeight / 2.0f;
    l_topRight.x = miPosX + miWidth / 2.0f;
    l_topRight.y = miPosY + miHeight / 2.0f;
    l_topLeft.x = miPosX - miWidth / 2.0f;
    l_topLeft.y = miPosY + miHeight / 2.0f;
    CGPoint vertices[] = { l_bottomLeft, l_bottomRight, l_topRight, l_topLeft, l_bottomLeft };
    int l_arraySize = sizeof(vertices) / sizeof(CGPoint);
    // My old way of doing this; it draws a square, but not filled.
    //ccDrawPoly( vertices, l_arraySize, NO);
    // Deprecated method :(
    //glVertexPointer(2, GL_FLOAT, 0, vertices);
    // I've found something related to this method to replace the deprecated one, but can't understand it!
    glVertexAttribPointer(kCCVertexAttrib_Position, 3, GL_FLOAT, GL_FALSE, 0, vertices);
    glDrawArrays(GL_TRIANGLE_FAN, 0, l_arraySize);
}
I've found some examples with the old version of Cocos2D (1.0), but since it's been upgraded to version 2.0 "lately", all the examples I find give me compilation errors!
Could anyone enlighten me here, please?
I didn't know today was "Reinvent the Wheel" day. :)
ccDrawSolidRect(CGPoint origin, CGPoint destination, ccColor4F color);
If you were to go all crazy and wanted to draw filled polygons, there's also:
ccDrawSolidPoly(const CGPoint *poli, NSUInteger numberOfPoints, ccColor4F color);
The "solid" methods are new in Cocos2D 2.x.
You can simply create a CCLayerColor instance with the needed content size and use it as a filled square (see the sketch below). Otherwise you have to triangulate your polygon (two triangles in the case of a square) and draw it using OpenGL.
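A minimal sketch of the CCLayerColor approach, assuming the same position/size ivars as the question and an opaque blue fill:

CCLayerColor *square = [CCLayerColor layerWithColor:ccc4(0, 0, 255, 255)
                                              width:miWidth
                                             height:miHeight];
square.position = ccp(miPosX - miWidth / 2.0f, miPosY - miHeight / 2.0f);
[self addChild:square];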
---EDIT
I didn't test this code, I found it with Google, but it seems to work fine:
http://www.deluge.co/?q=cocos-2d-custom-filled-polygon

glGrab for screen capture on Mac OS 10.7.3 with XCode 4.3.2

I am trying to integrate the glGrab code for screen capture on Mac OS under the configuration mentioned above, and I am currently stuck at an all-blue screen being rendered inside my window. I believe there is some issue with how the image texture is created, but I can't tell what. I am just a couple of weeks into OpenGL, so please go easy on me if I missed something obvious.
I am using the glGrab code as-is, except for the CGLSetFullScreen method (and not even CGLSetFullScreenOnDisplay), because these methods are now deprecated. So that one line of code has been commented out for the time being.
I have been researching this topic for some time now and found another thread on Stack Overflow which wasn't quite the complete answer, but it helped a lot nonetheless: Convert UIImage to CVImageBufferRef
A direct reference to the glGrab code is http://code.google.com/p/captureme/source/browse/trunk/glGrab.c
The answer to my question above follows. So no more OpenGL or glGrab; use what's best optimized for Mac OS X. This doesn't include the code for capturing the mouse pointer, but I am sure that if you have landed on this page you're smart enough to figure that out by yourself. Or if someone reading this knows the solution, it's your chance to help the fraternity :) Also, this code returns a CVPixelBufferRef; you may choose to send back the CGImageRef, or even the byte stream as it is, just tweak it to your liking:
void swizzleBitmap(void *data, int rowBytes, int height) {
    int top, bottom;
    void *buffer;
    void *topP;
    void *bottomP;
    void *base;

    top = 0;
    bottom = height - 1;
    base = data;
    buffer = malloc(rowBytes);

    while (top < bottom) {
        topP = (void *)((top * rowBytes) + (intptr_t)base);
        bottomP = (void *)((bottom * rowBytes) + (intptr_t)base);
        bcopy(topP, buffer, rowBytes);
        bcopy(bottomP, topP, rowBytes);
        bcopy(buffer, bottomP, rowBytes);
        ++top;
        --bottom;
    }
    free(buffer);
}
CVImageBufferRef grabViaOpenGL() {
    int bytewidth;
    CGImageRef image = CGDisplayCreateImage(kCGDirectMainDisplay); // Main screenshot capture call
    CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image)); // Get screenshot bounds

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:NO], kCVPixelBufferCGImageCompatibilityKey,
        [NSNumber numberWithBool:NO], kCVPixelBufferCGBitmapContextCompatibilityKey,
        nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
                                          frameSize.height, kCVPixelFormatType_32ARGB,
                                          (CFDictionaryRef)options, &pxbuffer);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
                                                 frameSize.height, 8, 4 * frameSize.width,
                                                 rgbColorSpace, kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);

    bytewidth = frameSize.width * 4;  // Assume 4 bytes/pixel for now
    bytewidth = (bytewidth + 3) & ~3; // Align to 4 bytes
    swizzleBitmap(pxdata, bytewidth, frameSize.height); // Solution for ARGB madness

    CGColorSpaceRelease(rgbColorSpace);
    CGImageRelease(image);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
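A short usage sketch: the caller owns the returned pixel buffer and must release it when done (the variable name here is a placeholder):

CVPixelBufferRef frame = grabViaOpenGL();
// ... read or encode the pixels ...
CVPixelBufferRelease(frame);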

Objective-C: save image pixel RGB values to an array

I'm trying some experimental image stuff on the iPad, and I'm trying to store every pixel's color data in one array to improve the performance of reading each pixel's color data.
Right now I have a timer that calls my drawRect as often as possible, and in my drawRect function I have this:
-(void)drawRect:(CGRect)rect
{
    UIGraphicsBeginImageContext(self.frame.size);
    [currentImage.image drawInRect:CGRectMake(0, 0, 768, 1004)];
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 0.3);
    r_x = r_x + 1;
    if (r_x == 768) {
        r_x = 1;
        r_y = r_y + 1;
    }
    if (r_y == 1004) {
        NSLog(@"color = %@", mijnArray_kleur);
    }
    CGPoint point2_1 = CGPointMake(r_x, r_y);
    GetColor *mycolor = [GetColor alloc];
    UIColor *st = [mycolor getPixelColorAtLocation:point2_1];
    [mijnArray_kleur addObject:st];
    [mycolor release];
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [st CGColor]);
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(r_x, r_y, 1, 1));
}
and getPixelColorAtLocation: is a custom method (on a GetColor helper class) that returns the UIDeviceRGBColorSpace values of a pixel.
With this it takes me about 4 hours (yes, hours :p) to complete one image. Is there anything faster, or any improvement I could make?
Thanks!
Thys
[Copied from comment for clarity] Not that I know Objective-C at all, but it seems to me that your function iterates over 768 * 1004 values and thus draws that many rectangles, one per frame. Guessing about 60 frames/second, this would take about 3 h 40 min. Am I wrong here?
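If the goal is just to get every pixel's RGB values into memory, a much faster approach is to draw the image once into a bitmap context and read the raw buffer directly, instead of drawing one 1x1 rect per pixel per frame. A minimal sketch, assuming a UIImage named image (error handling omitted):

CGImageRef cgImage = image.CGImage;
size_t w = CGImageGetWidth(cgImage);
size_t h = CGImageGetHeight(cgImage);
unsigned char *pixels = malloc(w * h * 4); // RGBA, 4 bytes per pixel
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, w, h, 8, w * 4, cs,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cgImage); // render the whole image once
// pixels[(y * w + x) * 4] is R, +1 is G, +2 is B for the pixel at (x, y)
CGContextRelease(ctx);
CGColorSpaceRelease(cs);
// ... read the values you need, then free(pixels);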