OpenGL depth test problem - objective-c

I've got a problem with OpenGL on Mac, and I think the problem is the depth test.
So, to my problem: Rather than explaining, I made two screenshots:
My scene from far: http://c0848462.cdn.cloudfiles.rackspacecloud.com/dd2267e27ad7d0206526b208cf2ea6910bcd00b4fa.jpg
And from near: http://c0848462.cdn.cloudfiles.rackspacecloud.com/dd2267e27a561b5f02344dca57508dddce21d2315f.jpg
If I do not draw the green floor, everything looks (kinda) fine. But still, like this it looks just awful.
Here are the three code blocks I use to set up OpenGL:
+ (NSOpenGLPixelFormat *)defaultPixelFormat
{
    NSOpenGLPixelFormatAttribute attributes[] = {
        NSOpenGLPFAWindow,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFADepthSize, (NSOpenGLPixelFormatAttribute)16,
        (NSOpenGLPixelFormatAttribute)nil
    };
    return [[[NSOpenGLPixelFormat alloc] initWithAttributes:attributes] autorelease];
}
- (void)prepareOpenGL
{
    NSLog(@"Preparing OpenGL");
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
    glEnable(GL_TEXTURE_2D);
    glClearDepth(1);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
}
- (void)reshape
{
    NSLog(@"Reshaping view");
    glViewport(0, 0, (GLsizei)[self bounds].size.width, (GLsizei)[self bounds].size.height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, [self bounds].size.width / [self bounds].size.height, 0.1f /* nearest render distance */, 5000.0 /* render distance */);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

gluPerspective( 45.0, [self bounds].size.width / [self bounds].size.height, 0.1f /*Nearest render distance*/, 5000.0 /*Render distance*/);
That's way too small of a near clip plane. Depth buffer precision is distributed roughly in proportion to 1/z, so the closer your near clip value is to 0, the less precision you get on values that are farther away. Push your near clip back to at least 1.0, if not farther. In general, you should push it back as far as you can live with.
Oh, and you should be using a 24-bit depth buffer, not 16-bit.
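For reference, a minimal sketch of those two changes applied to the code above (everything else unchanged):

+ (NSOpenGLPixelFormat *)defaultPixelFormat
{
    NSOpenGLPixelFormatAttribute attributes[] = {
        NSOpenGLPFAWindow,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFADepthSize, (NSOpenGLPixelFormatAttribute)24, // 24-bit depth buffer
        (NSOpenGLPixelFormatAttribute)nil
    };
    return [[[NSOpenGLPixelFormat alloc] initWithAttributes:attributes] autorelease];
}

// Near plane pushed back from 0.1 to 1.0; the far:near ratio drops from 50000:1 to 5000:1.
gluPerspective(45.0, [self bounds].size.width / [self bounds].size.height, 1.0, 5000.0);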

Related

How to override draw method on PDFAnnotation (iOS PDFKit)

I followed another Stack Overflow post that explains how I could override the draw method of a PDFAnnotation so I could draw a picture instead of a traditional PDFAnnotation.
But sadly I was not able to achieve that, and the annotation drawn on top of my PDF is still a regular one.
This is the code that I used:
@implementation PDFImageAnnotation {
    UIImage *_picture;
    CGRect _bounds;
}

- (instancetype)initWithPicture:(nonnull UIImage *)picture bounds:(CGRect)bounds {
    self = [super initWithBounds:bounds
                         forType:PDFAnnotationSubtypeWidget
                  withProperties:nil];
    if (self) {
        _picture = picture;
        _bounds = bounds;
    }
    return self;
}
- (void)drawWithBox:(PDFDisplayBox)box
          inContext:(CGContextRef)context {
    [super drawWithBox:box inContext:context];
    [_picture drawInRect:_bounds];
    CGContextRestoreGState(context);
    UIGraphicsPushContext(context);
}

@end
Does someone know how I could override the draw method so I could draw a custom annotation?
Thank you!
PS: I also tried to follow the tutorial on the Apple dev site.
UPDATE:
Now I'm able to draw pictures using CGContextDrawImage, but I'm not able to flip the coordinates back in place. When I do that, my pictures are not drawn; it seems they are placed outside of the page, but I'm not sure.
This is my new code:
- (void)drawWithBox:(PDFDisplayBox)box
          inContext:(CGContextRef)context {
    [super drawWithBox:box inContext:context];
    UIGraphicsPushContext(context);
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, 0.0, _pdfView.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, _bounds, _picture.CGImage);
    CGContextRestoreGState(context);
    UIGraphicsPopContext();
}
"I also tried to follow the tutorial on the Apple dev site."
Which one?
Custom Graphics
Adding Custom Graphics to a PDF
Both include UIGraphicsPushContext(context) & CGContextSaveGState(context) calls, but your code doesn't. Do not blindly copy & paste examples; try to understand them. Read up on what these two calls do.
Fixed code:
- (void)drawWithBox:(PDFDisplayBox)box
          inContext:(CGContextRef)context {
    [super drawWithBox:box inContext:context];
    UIGraphicsPushContext(context);
    CGContextSaveGState(context);
    [_picture drawInRect:_bounds];
    CGContextRestoreGState(context);
    UIGraphicsPopContext();
}
The image was drawn with CGRectMake(20, 20, 100, 100). It's upside down, because PDFPage coordinates are flipped (0, 0 = bottom/left). Leaving it as an exercise for OP.
Rotation
Your rotation code is wrong:
CGContextTranslateCTM(context, 0.0, _pdfView.bounds.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, _bounds, _picture.CGImage);
It's based on the _pdfView bounds, but it should be based on the image bounds (_bounds). Here's the corrected version:
- (void)drawWithBox:(PDFDisplayBox)box
          inContext:(CGContextRef)context {
    [super drawWithBox:box inContext:context];
    UIGraphicsPushContext(context);
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, _bounds.origin.x, _bounds.origin.y + _bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    [_picture drawInRect:CGRectMake(0, 0, _bounds.size.width, _bounds.size.height)];
    CGContextRestoreGState(context);
    UIGraphicsPopContext();
}
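For completeness, a hypothetical usage sketch; the document variable and image name below are placeholders, not from the question:

UIImage *stamp = [UIImage imageNamed:@"stamp"]; // placeholder image name
PDFPage *page = [document pageAtIndex:0];       // `document` is an existing PDFDocument
PDFImageAnnotation *annotation =
    [[PDFImageAnnotation alloc] initWithPicture:stamp
                                         bounds:CGRectMake(20, 20, 100, 100)];
[page addAnnotation:annotation];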

MKOverlayRenderer gets cut off when rendering MKOverlay but fixed by zooming out

I'm new to iOS development and I'm struggling with porting some code from iOS 6 involving the use of MKOverlay.
When the overlay's radius or coordinate changes, the renderer should update the display accordingly in real time.
This part works, but if I drag the overlay too far, it reaches some boundary and the rendering gets cut off. I can't find any documentation or help on this behavior.
In the CircleOverlayRenderer class:
- (id)initWithOverlay:(id<MKOverlay>)overlay
{
    self = [super initWithOverlay:overlay];
    if (self) {
        CircleZone *bOverlay = (CircleZone *)overlay;
        [RACObserve(bOverlay, coordinate) subscribeNext:^(id x) {
            [self setNeedsDisplay];
        }];
        [RACObserve(bOverlay, radius) subscribeNext:^(id x) {
            [self setNeedsDisplay];
        }];
    }
    return self;
}
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    CGRect rect = [self rectForMapRect:[self.overlay boundingMapRect]];
    CGContextSaveGState(context);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextSetFillColorSpace(context, colorSpace);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextSetFillColor(context, color); // `color` is presumably an ivar holding the fill components
    CGContextSetAllowsAntialiasing(context, YES);
    // outline
    {
        CGContextSetAlpha(context, 0.8);
        CGContextFillEllipseInRect(context, rect);
    }
    // red
    {
        CGContextSetAlpha(context, 0.5);
        CGRect ellipseRect = CGRectInset(rect, 0.01 * rect.size.width / 2, 0.01 * rect.size.height / 2);
        CGContextFillEllipseInRect(context, ellipseRect);
    }
    CGContextRestoreGState(context);
}
In the CircleOverlay class:
- (MKMapRect)boundingMapRect
{
    MKMapPoint center = MKMapPointForCoordinate(self.coordinate);
    double mapPointsPerMeter = MKMapPointsPerMeterAtLatitude(self.coordinate.latitude);
    double mapPointsRadius = _radius * mapPointsPerMeter;
    return MKMapRectMake(center.x - mapPointsRadius, center.y - mapPointsRadius,
                         mapPointsRadius * 2.0, mapPointsRadius * 2.0);
}
Here are some screenshots of the problem I'm seeing:
Problem when dragging overlay too much:
Problem when changing the radius:
The problem does go away if I keep zooming the map out. After the map tiles refresh, the overlay no longer gets cut off...
If anyone has had a similar problem, please help me; it's driving me crazy!
Looking at the radius example, it makes me suspect the boundingMapRect, given how it's cropping. Looking at the boundingMapRect implementation, the reliance upon MKMapPointsPerMeterAtLatitude (especially when you're looking at a large region) is worrying. That function is useful if you are, for example, trying to figure out where a coordinate 10 meters from some other coordinate falls, but when looking at really large spans, it doesn't always work out well.
I might, instead, suggest something that gets the MKCoordinateRegion of where the circle is, and then converts that to an MKMapRect. A simplistic implementation might look like:
- (MKMapRect)boundingMapRect {
    MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(self.coordinate, _radius * 2, _radius * 2);
    CLLocationCoordinate2D upperLeftCoordinate = CLLocationCoordinate2DMake(region.center.latitude - region.span.latitudeDelta / 2, region.center.longitude - region.span.longitudeDelta / 2);
    CLLocationCoordinate2D lowerRightCoordinate = CLLocationCoordinate2DMake(region.center.latitude + region.span.latitudeDelta / 2, region.center.longitude + region.span.longitudeDelta / 2);
    MKMapPoint upperLeft = MKMapPointForCoordinate(upperLeftCoordinate);
    MKMapPoint lowerRight = MKMapPointForCoordinate(lowerRightCoordinate);
    return MKMapRectMake(MIN(upperLeft.x, lowerRight.x),
                         MIN(upperLeft.y, lowerRight.y),
                         ABS(upperLeft.x - lowerRight.x),
                         ABS(upperLeft.y - lowerRight.y));
}
You'll have to tweak this to make sure it gracefully handles crossing the 180th meridian and circles that encompass the north pole, but it illustrates the basic idea: get the MKCoordinateRegion for the circle and then convert that to an MKMapRect.
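As a blunt safety net for those edge cases (an assumption on my part, not a full solution), you could clamp the computed rect to the world rect before returning it:

// Clamp to MKMapRectWorld so a circle spanning the antimeridian or a pole
// can't produce a rect outside the valid map space. Note this truncates
// the overlay rather than wrapping it.
MKMapRect rect = MKMapRectMake(MIN(upperLeft.x, lowerRight.x),
                               MIN(upperLeft.y, lowerRight.y),
                               ABS(upperLeft.x - lowerRight.x),
                               ABS(upperLeft.y - lowerRight.y));
return MKMapRectIntersection(rect, MKMapRectWorld);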

How to correctly render a texture orthogonally in OpenGL?

I'm trying to render a 2D texture with an orthographic projection.
Let me know what's wrong.
width and height are 128; the view is 256 px wide and tall, so I expect the texture to be scaled 2x.
But all I get is this:
Code:
@interface ModNesOpenGLView : NSOpenGLView {
    @public
    char *pixels;
    int width;
    int height;
    int zoom;
}

- (void)drawRect:(NSRect)bounds;
- (void)free;

@end

@implementation ModNesOpenGLView
- (void)awakeFromNib {
    self->pixels = malloc(self->width * self->height * 3);
    memset((void *)self->pixels, 0, self->width * self->height * 3);
    for (int y = 0; y < self->height; ++y)
    {
        for (int x = 0; x < self->width; ++x)
        {
            char r = 0, g = 0, b = 0;
            switch (y % 3) {
                case 0: r = 0xFF; break;
                case 1: g = 0xFF; break;
                case 2: b = 0xFF; break;
            }
            [self setPixel_x:x y:y r:r g:g b:b];
        }
    }
}
- (void)setPixel_x:(int)x y:(int)y r:(char)r g:(char)g b:(char)b
{
    self->pixels[(y * self->width + x) * 3] = r;
    self->pixels[(y * self->width + x) * 3 + 1] = g;
    self->pixels[(y * self->width + x) * 3 + 2] = b;
}
- (void)drawRect:(NSRect)bounds
{
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, self->width, self->height, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)self->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // GL_LINEAR
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glEnable(GL_TEXTURE_2D);
    // glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, self->width, self->height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)self->pixels);
    glBegin(GL_QUADS);
    glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
    glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
    glEnd();
    glFlush();
}
- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
}
I realize I was missing the part about initializing the projection matrix and setting up the orthographic projection.
I added it:
- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
    glClearColor(0, 0, 0, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glViewport(0, 0, self->width, self->height);
}
And then I get this:
I'm confused.
Here's where I got the code from: code example
Your problem is with coordinate systems and their ranges. Looking at the coordinates you use for drawing:
glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
The OpenGL coordinate system has a range of [-1.0, 1.0] in both the x- and y-directions if you don't apply a transformation. This means that (0.0, 0.0), which is the bottom-left corner of the quad you are drawing, is in the center of the screen. The quad then extends to the right and top; it is actually much bigger than the window, but it obviously gets clipped.
This explains the original version and the resulting picture you posted: you end up with the top-right quadrant filled, showing a very small fraction of your texture (about one texel).
Then in the updated code, you add this:
glViewport(0, 0, self->width, self->height);
The viewport determines the part of the window you draw to. Since you say that width and height are 128, and the window size is 256x256, this call specifies that you only want to draw into the bottom-left quadrant of your window.
Since everything else is unchanged, you then still draw the top-right quadrant of your drawing area. So you end up filling the top-right quadrant of the bottom-left quadrant of the window, which is exactly what you have in the second image.
To fix this, the simplest approach is to leave the viewport at its default (remove the glViewport() call) and use coordinates in the range [-1.0, 1.0] in both directions:
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
Another option is that you set up a transformation that changes the coordinate range to the values you are using. In legacy OpenGL, which you are using, something like this should work:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);
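Putting that second option together, a reshape-style method might look like this (a sketch, assuming the quad keeps the pixel coordinates used above and that the view's bounds track the window size):

- (void)reshape
{
    NSRect bounds = [self bounds];
    // Viewport covers the whole drawable area (256x256 here), not just the texture size.
    glViewport(0, 0, (GLsizei)bounds.size.width, (GLsizei)bounds.size.height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Map pixel coordinates (0..width, 0..height) onto the full viewport,
    // so the 128x128 quad fills the 256x256 window, i.e. a 2x scale.
    glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}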

Ribbon Graphs (charts) with Core Graphics

I am generating a classic line graph using Core Graphics, which renders and works very well.
There are several lines stacked one after another using layer.zPosition.
- (void)drawRect:(CGRect)rect {
    float colorChange = (0.1 * [self tag]);
    theFillColor = [UIColor colorWithRed:(colorChange) green:(colorChange * 0.50) blue:colorChange alpha:0.75f].CGColor;
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGFloat white[4] = {1.0f, 1.0f, 1.0f, 1.0f};
    CGContextSetFillColorWithColor(c, theFillColor);
    CGContextSetStrokeColor(c, white);
    CGContextSetLineWidth(c, 2.0f);
    CGContextBeginPath(c);
    //
    CGContextMoveToPoint(c, 0.0f, 200 - [[array objectAtIndex:0] floatValue]);
    CGContextAddLineToPoint(c, 0.0f, 200 - [[array objectAtIndex:0] floatValue]);
    //
    distancePerPoint = (rect.size.width / [array count]);
    float lastPointX = 750.0;
    for (int i = 0; i < [array count]; i++)
    {
        CGContextAddLineToPoint(c, (i * distancePerPoint), 200 - [[array objectAtIndex:i] floatValue]);
        lastPointX = (i * distancePerPoint);
    }
    //
    CGContextAddLineToPoint(c, lastPointX, 200.0);
    CGContextAddLineToPoint(c, 0, 200);
    CGContextClosePath(c);
    //
    //CGContextFillPath(c);
    CGContextDrawPath(c, kCGPathFillStroke);
    //CGContextDrawPath(c, kCGPathStroke);
}
(The above code generates the following result:)
(I can post the code I am using for the 3D effect if needed, but the way I do it is generically via
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;)
Question:
How can I transform my line graph to have depth?
I would like the line(s) to have "depth" (thus making them a ribbon), as later I would like to present them using a rotation and perspective transform (as stated above).
You can't easily do this with Core Graphics or Core Animation, because CALayers are "flat": they work like origami, where you can make 3D structures by connecting rectangles in 3D space, but you can't have arbitrary polygonal 3D shapes.
Actually, that's not strictly true. You could look at using CAShapeLayers to do your drawing and then manipulate them in 3D, but I think this is generally going to be very hard work to calculate where to position each shape and to get the edges to line up correctly.
Really, the way to make this kind of 3D structure is to use OpenGL directly.
If you're not too familiar with low-level OpenGL programming, you might want to check out Galaxy Engine or Cocos3D.
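To illustrate the OpenGL route, here's a hypothetical sketch of extruding a line graph into a ribbon: each data point contributes two vertices, one at the front and one pushed back along z, and the pairs form a triangle strip. All names (count, step, values, depth) are illustrative, not from the question:

// Fixed-pipeline sketch: build a ribbon from `count` data points.
// values[i] is the y value of point i; x advances by `step`; the ribbon
// extends `depth` units along -z.
GLfloat *ribbon = malloc(count * 2 * 3 * sizeof(GLfloat));
for (int i = 0; i < count; i++) {
    GLfloat x = i * step, y = values[i];
    ribbon[i * 6 + 0] = x; ribbon[i * 6 + 1] = y; ribbon[i * 6 + 2] = 0.0f;   // front edge
    ribbon[i * 6 + 3] = x; ribbon[i * 6 + 4] = y; ribbon[i * 6 + 5] = -depth; // back edge
}
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, ribbon);
glDrawArrays(GL_TRIANGLE_STRIP, 0, count * 2);
glDisableClientState(GL_VERTEX_ARRAY);
free(ribbon);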

Unusual Lighting Effects - Random Polygons Coloured

I am working on creating an object loader for use with iOS. I have managed to load the vertices, normals and face data from an OBJ file, and then place this data into arrays for reconstructing the object. But I have come across an issue with the lighting; at the bottom is a video from the simulation of my program, with the light in the following position:
CGFloat position[] = { 0.0f, -1.0f, 0.0f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, position);
This is specified both in the render method (each frame) and in the setup-view method, which is called once at setup.
Various other lighting details are set up here; these are called once during setup:
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
CGFloat ambientLight[] = { 0.2f, 0.2f, 0.2f, 1.0f };
CGFloat diffuseLight[] = { 1.0f, 0.0f, 0.0, 1.0f };
glLightfv(GL_LIGHT0, GL_AMBIENT, ambientLight);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight);
CGFloat position[] = { 0.0f, -1.0f, 0.0f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, position);
glEnable(GL_COLOR_MATERIAL);
glEnable(GL_NORMALIZE);
The video of the issue can be found here:
http://youtu.be/dXm4wqzvO5c
Thanks,
Paul
[EDIT]
For further info: the normals are supplied by the following code; they are currently in one large normals array of XYZ XYZ XYZ etc...
// FACE SHADING
glColorPointer(4, GL_FLOAT, 0, colors);
glEnableClientState(GL_COLOR_ARRAY);
glNormalPointer(GL_FLOAT, 3, normals);
glEnableClientState(GL_NORMAL_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, 3*numOfFaces);
glDisableClientState(GL_COLOR_ARRAY);
I now feel incredibly stupid... All part of being a student programmer, I guess. I will leave an answer to this so that if anyone else gets this problem they can solve it too! The mistake was simply down to a typo:
glNormalPointer(GL_FLOAT, 3, normals);
Should have read
glNormalPointer(GL_FLOAT, 0, normals);
The second argument is the STRIDE, the byte offset between consecutive normals, which is only non-zero if the array interleaves other values, e.g. vertex coords / normals / texture coords. As mine are in single, tightly packed arrays, the stride between the values should be 0. Passing 3 made OpenGL step only 3 bytes between normals, garbling every value after the first.
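To make the stride concrete (the interleaved layout in the comment is hypothetical, purely for illustration):

// Tightly packed normals (X Y Z, X Y Z, ...): stride 0 means each normal
// immediately follows the previous one.
glNormalPointer(GL_FLOAT, 0, normals);
// If positions and normals were interleaved per vertex (Px Py Pz Nx Ny Nz, ...),
// the stride would be the byte size of one whole record, starting at the first normal:
// glNormalPointer(GL_FLOAT, 6 * sizeof(GLfloat), normals + 3);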