OpenGL picking not working in NSOpenGLView - objective-c

I am trying to implement picking in an NSOpenGLView, but it does not work; here is the code.
I render only the objects that I need to pick, with no lights, and I render the scene the same way as in the normal render pass.
- (void)drawSeleccion
{
    NSSize size = [self bounds].size;
    GLuint selectBuf[16 * 4] = {0};
    GLint hits;

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glColor4f(1.0, 1.0, 1.0, 1.0);
    glSelectBuffer(16 * 4, selectBuf);
    glRenderMode(GL_SELECT);

    /// *** Start ***
    glInitNames();
    glPushName(0);

    // Viewport.
    glViewport(0, 0, size.width, size.height);

    // Projection matrix.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float dist = 534;
    aspectRatio = size.width / size.height;
    nearDist = MAX(10, dist - 360.0);
    farDist = dist + 360.0;
    GLKMatrix4 m4 = GLKMatrix4MakePerspective(zoom, aspectRatio, nearDist, farDist);
    glMultMatrixf(m4.m);

    // Modelview matrix.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Look-at.
    GLKMatrix4 glm = GLKMatrix4MakeLookAt(0, dist, 0, 0, 0, 0, 0, 0, 1);
    glMultMatrixf(glm.m);

    // Rotate the view.
    glRotated(rotate_x, 1, 0, 0);
    glRotated(rotate_z, 0, 0, 1);
    glTranslated(translate_x - frente * 0.5, fondo * -0.5, translate_z - alto * 0.5);

    // Render the model...
    glPushMatrix();
    for (int volNo = 0; volNo < [self.modeloOptimo.arr_volumenes count]; volNo++) {
        VolumenOptimo *volOp = self.modeloOptimo.arr_volumenes[volNo];
        glLoadName(volNo);
        volOp->verProblemas = false;
        [volOp drawVolumen];
    }
    glPopName();
    glPopMatrix();

    // Flush.
    glFlush();

    hits = glRenderMode(GL_RENDER);
    processHits(hits, selectBuf);
} // End of drawSeleccion.
hits is always 0, and selectBuf stays empty.
Any ideas? Thanks.
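(processHits is not shown above. For reference, here is a minimal sketch of how a GL_SELECT selection buffer is typically parsed, assuming the standard hit-record layout of name count, min depth, max depth, then the names; this is an illustrative assumption, not the original implementation.)
// Hypothetical parser: each hit record is { nameCount, zMin, zMax, name_0, ..., name_n-1 }
void processHits(GLint hits, GLuint buffer[])
{
    GLuint *ptr = buffer;
    for (GLint i = 0; i < hits; i++) {
        GLuint nameCount = *ptr++;
        GLuint zMin = *ptr++;
        GLuint zMax = *ptr++;
        (void)zMin; (void)zMax; // depths not used in this sketch
        for (GLuint j = 0; j < nameCount; j++) {
            GLuint name = *ptr++;
            NSLog(@"hit %d, name %u", (int)i, name);
        }
    }
}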

Related

CGGlyphs wrong position

I have been trying to draw a single glyph with Core Text, but the x position of the letter is a little bit off. The red rectangle shows the correct position.
CGContextRef main = [[NSGraphicsContext currentContext] graphicsPort];
CGGlyph glyphs[1];   // assumed declarations, not shown in the original
CGPoint points[1];
CGContextSetAllowsAntialiasing(main, false);
CGContextSetFont(main, font);
CGContextSetFontSize(main, 200);
CGContextSetTextPosition(main, 0, 0);
glyphs[0] = CGFontGetGlyphWithGlyphName(font, CFSTR("E"));
points[0] = CGPointMake(100, 100);
CGContextSetRGBFillColor(main, 0, 1, 0, 1);
CGContextShowGlyphsAtPositions(main, glyphs, points, 1);
CGRect *r = malloc(sizeof(CGRect) * 1);
CGFontGetGlyphBBoxes(font, glyphs, 1, r);
float t = roundf(r[0].size.width / CGFontGetUnitsPerEm(font) * 200);
float t2 = roundf(r[0].size.height / CGFontGetUnitsPerEm(font) * 200);
CGRect r2 = CGRectMake(points[0].x, points[0].y - 1, t, t2 + 2);
CGContextSetRGBStrokeColor(main, 1, 0, 0, 1);
CGContextStrokeRect(main, r2);
Here is a screenshot:
You're assuming the bounding box's origin is at zero. It isn't. You need to add in its offset. Something like (following your patterns):
float cornerX = roundf(r[0].origin.x/CGFontGetUnitsPerEm(font)*200);
float cornerY = roundf(r[0].origin.y/CGFontGetUnitsPerEm(font)*200);
CGRect r2 = CGRectMake(points[0].x+cornerX, points[0].y-1+cornerY, t, t2+2);
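Putting the origin offset together with the earlier width/height scaling, the corrected rectangle computation could look roughly like this (a sketch only, untested, with variable names following the question):
float scale = 200.0f / CGFontGetUnitsPerEm(font);      // points per font unit at size 200
float t       = roundf(r[0].size.width  * scale);
float t2      = roundf(r[0].size.height * scale);
float cornerX = roundf(r[0].origin.x    * scale);       // bounding-box offset from the glyph origin
float cornerY = roundf(r[0].origin.y    * scale);
CGRect r2 = CGRectMake(points[0].x + cornerX, points[0].y - 1 + cornerY, t, t2 + 2);
CGContextSetRGBStrokeColor(main, 1, 0, 0, 1);
CGContextStrokeRect(main, r2);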

How to correctly render a texture orthogonally in OpenGL?

I'm trying to render a 2D texture with an orthographic projection.
Let me know what's wrong.
width and height are 128, and the view is 256 px wide and tall, so I expect the texture to be scaled 2x.
But all I get is this:
Code:
@interface ModNesOpenGLView : NSOpenGLView {
    @public
    char *pixels;
    int width;
    int height;
    int zoom;
}

- (void) drawRect: (NSRect) bounds;
- (void) free;

@end

@implementation ModNesOpenGLView

-(void) awakeFromNib {
    self->pixels = malloc( self->width * self->height * 3 );
    memset( (void *)self->pixels, 0, self->width * self->height * 3 );
    for( int y=0; y<self->height; ++y )
    {
        for( int x=0; x<self->width; ++x )
        {
            char r=0, g=0, b=0;
            switch( y%3 ) {
                case 0: r=0xFF; break;
                case 1: g=0xFF; break;
                case 2: b=0xFF; break;
            }
            [self setPixel_x:x y:y r:r g:g b:b];
        }
    }
}

-(void) setPixel_x:(int)x y:(int)y r:(char)r g:(char)g b:(char)b
{
    self->pixels[ ( y * self->width + x ) * 3 ]     = r;
    self->pixels[ ( y * self->width + x ) * 3 + 1 ] = g;
    self->pixels[ ( y * self->width + x ) * 3 + 2 ] = b;
}

-(void) drawRect: (NSRect) bounds
{
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glTexImage2D( GL_TEXTURE_2D, 0, 3, self->width, self->height, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) self->pixels );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // GL_LINEAR
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glEnable(GL_TEXTURE_2D);
    // glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, self->width, self->height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) self->pixels );
    glBegin( GL_QUADS );
    glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
    glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
    glEnd();
    glFlush();
}

- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
}
I realize I'm missing the part about initializing the projection matrix and setting up the orthographic projection.
I added it:
- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];

    glClearColor(0, 0, 0, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glViewport(0, 0, self->width, self->height);
}
And then I get this:
I'm confused.
Here's where I got the code from: code example
Your problem is with coordinate systems and their ranges. Looking at the coordinates you use for drawing:
glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
The OpenGL coordinate system has a range of [-1.0, 1.0] in both x- and y-direction if you don't apply a transformation. This means that (0.0, 0.0), which is the bottom-left corner of the quad you are drawing, is in the center of the screen. It then extends to the right and top. The size of the quad is actually much bigger than the window, but it obviously gets clipped.
This explains the original version and resulting picture you posted. You end up with the top-right quadrant being filled, with a very small fraction of your texture (about one texel).
Then in the updated code, you add this:
glViewport(0, 0, self->width, self->height);
The viewport determines the part of the window you draw to. Since you say that width and height are 128, and the window size is 256x256, this call specifies that you only want to draw into the bottom-left quadrant of your window.
Since everything else is unchanged, you then still draw the top-right quadrant of your drawing area. So you end up filling the top-right quadrant of the bottom-left quadrant of the window, which is exactly what you have in the second image.
To fix this, the simplest approach is to not set the viewport to a non-default value (remove the glViewport() call), and use coordinates in the range [-1.0, 1.0] in both directions:
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
Another option is that you set up a transformation that changes the coordinate range to the values you are using. In legacy OpenGL, which you are using, something like this should work:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);
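For example, with the glOrtho() approach the quad can stay in pixel coordinates; a rough, untested sketch of how the drawRect: from the question could look with that projection applied:
- (void) drawRect: (NSRect) bounds
{
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);

    // Map pixel coordinates [0, width] x [0, height] to clip space.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, self->width, self->height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)self->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glEnable(GL_TEXTURE_2D);

    // Quad in pixel coordinates; no glViewport() override, so it fills the window.
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(self->width, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(self->width, self->height);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, self->height);
    glEnd();
    glFlush();
}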

glDrawArrays distorts output to CCRenderTexture

If the CCRenderTexture is not the full window size, the glDrawArrays output is smaller and at a strange angle. In my test code, the diagonal line should run from corner to corner at a 45 degree angle. How can I draw this smooth line correctly? I'm new to cocos2d and any help is much appreciated.
//compute vertex points for smooth line triangle strip
CGPoint start = CGPointMake(0., 0.);
CGPoint end = CGPointMake(200., 200.);
float lineWidth = 10.0;
float deltaX = end.x - start.x;
float deltaY = end.y - start.y;
float length = sqrtf(deltaX*deltaX+deltaY*deltaY);
if (length < 0.25) return; //line too small to show on display
float offsetX = -lineWidth*deltaY/length;
float offsetY = lineWidth*deltaX/length;
GLfloat lineVertices[12]; //6 vertices x,y values
lineVertices[0] = start.x + offsetX;
lineVertices[1] = start.y + offsetY;
lineVertices[2] = end.x + offsetX;
lineVertices[3] = end.y + offsetY;
lineVertices[4] = start.x;
lineVertices[5] = start.y;
lineVertices[6] = end.x;
lineVertices[7] = end.y;
lineVertices[8] = start.x - offsetX;
lineVertices[9] = start.y - offsetY;
lineVertices[10] = end.x - offsetX;
lineVertices[11] = end.y - offsetY;
ccColor4F colorVertices[6];
ccColor4F color1 = {1., 0., 0., 0.};
ccColor4F color2 = {1., 0., 0., 1.};
colorVertices[0] = color1;
colorVertices[1] = color1;
colorVertices[2] = color2;
colorVertices[3] = color2;
colorVertices[4] = color1;
colorVertices[5] = color1;
CCRenderTexture *rtx = [CCRenderTexture renderTextureWithWidth:200 height:200];
[rtx beginWithClear:1. g:1. b:1. a:1.];
[shaderProgram_ use];
ccGLEnableVertexAttribs(kCCVertexAttribFlag_Position | kCCVertexAttribFlag_Color);
glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, lineVertices);
glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, 0, colorVertices);
glViewport(0,0, screenWidth, screenHeight); //dimensions of main screen
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
[rtx end];
[rtx saveToFile:@"lineDrawTest" format:kCCImageFormatPNG];
I added a glViewport call to set the view to the original full screen size. glDrawArrays now draws the smooth line with the right size and angle.
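A slightly safer variant (a sketch, not from the original answer) is to save whatever viewport was active and restore it afterwards, so later cocos2d rendering is unaffected:
GLint prevViewport[4];
glGetIntegerv(GL_VIEWPORT, prevViewport);        // remember the viewport set by CCRenderTexture

glViewport(0, 0, screenWidth, screenHeight);     // draw with the full-screen dimensions
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);

glViewport(prevViewport[0], prevViewport[1], prevViewport[2], prevViewport[3]);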

NSOpenGLView without Interface Builder

I've written a class TestingView, which is a subclass of NSOpenGLView, and added it to an NSWindow through Interface Builder:
@implementation TestingView

- (void)prepareOpenGL
{
    [super prepareOpenGL];
    glGenRenderbuffers( NumRenderbuffers, renderbuffer );
    glBindRenderbuffer( GL_RENDERBUFFER, renderbuffer[Color] );
    glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA, 1024, 768 );
    glBindRenderbuffer( GL_RENDERBUFFER, renderbuffer[Depth] );
    glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 768 );
    glGenFramebuffers( 1, &framebuffer );
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebuffer );
    glFramebufferRenderbuffer( GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_RENDERBUFFER, renderbuffer[Color] );
    glFramebufferRenderbuffer( GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_RENDERBUFFER, renderbuffer[Depth] );
    glEnable( GL_DEPTH_TEST );
}

- (void)show
{
    glBindFramebuffer( GL_READ_FRAMEBUFFER, framebuffer );
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0 );
    glViewport( 0, 0, self.window.frame.size.width, self.window.frame.size.height );
    glClearColor( 0.0, 0.0, 0.0, 0.0 );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glBlitFramebuffer( 0, 0, 1024, 768, 0, 0, 1024, 768, GL_COLOR_BUFFER_BIT, GL_NEAREST );
    glSwapAPPLE();
}

- (void)draw
{
    /* Prepare to render into the renderbuffer */
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebuffer );
    glViewport( 0, 0, 1024, 768 );

    /* Render into renderbuffer */
    glClearColor( 1.0, 1.0, 0.0, 1.0 );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
}

@end
and it works fine. However, in my second project I cannot use Interface Builder, so I do it like this:
TestingView *testingView = [[TestingView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768)];
[self.view addSubview:testingView];
and it doesn't work; the NSOpenGLContext doesn't initialize properly. I also tried:
GLuint pixAttribs[] =
{
    NSOpenGLPFAWindow,        // choose among pixel formats capable of rendering to windows
    NSOpenGLPFAAccelerated,   // require a hardware-accelerated pixel format
    NSOpenGLPFADoubleBuffer,  // require a double-buffered pixel format
    NSOpenGLPFAColorSize, 24, // require 24 bits for color channels
    NSOpenGLPFAAlphaSize, 8,  // require an 8-bit alpha channel
    NSOpenGLPFADepthSize, 24, // require a 24-bit depth buffer
    NSOpenGLPFAMinimumPolicy, // select a pixel format that meets or exceeds these requirements
    0
};
NSOpenGLPixelFormat *pixFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:pixAttribs];
NSOpenGLContext *context = [[NSOpenGLContext alloc] initWithFormat:pixFormat shareContext:nil];
TestingView *testingView = [[TestingView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768) pixelFormat:pixFormat];
testingView.openGLContext = context;
and OpenGL still doesn't work. How can I add an NSOpenGLView as a subview without using Interface Builder?
I smashed my head a few times on that one. I found out that my pixel format was getting in the way. Here's what I'd do:
// Assuming you have a self.window initialized somewhere
TestingView *testingView = [[TestingView alloc] initWithFrame:self.window.frame pixelFormat:nil];
[self.window setContentView:testingView];
You can check out my full implementation (no Interface Builder at all) here: https://gitorious.org/glengine/glengine
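For completeness, a minimal fully programmatic setup might look something like this (a sketch only; it assumes you create the window yourself, e.g. in applicationDidFinishLaunching:, and relies on the default pixel format as in the snippet above):
NSRect frame = NSMakeRect(0, 0, 1024, 768);
NSWindow *window = [[NSWindow alloc] initWithContentRect:frame
                                               styleMask:(NSTitledWindowMask | NSClosableWindowMask)
                                                 backing:NSBackingStoreBuffered
                                                   defer:NO];
TestingView *testingView = [[TestingView alloc] initWithFrame:frame pixelFormat:nil];
[window setContentView:testingView];   // the view gets its context when attached to the window
[window makeKeyAndOrderFront:nil];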

Drawing to a bitmap context

I am trying to draw to a bitmap context, but I'm coming up empty. I believe I'm creating things properly, because I can initialize the context, draw a few things into it, then create an image from it and draw that image. What I cannot do is trigger further drawing on the context after initialization so that more items appear. I'm not sure if I'm missing some common practice that implies I can only draw to it in certain places, or whether I have to do something else. Here is what I do, below.
I copied the helper function provided by Apple, with one modification to how the color space is obtained because it wasn't compiling (this is for iPad; I don't know if that matters):
CGContextRef MyCreateBitmapContext (int pixelsWide, int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4);                  // 1
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    colorSpace = CGColorSpaceCreateDeviceRGB();            // 2 (was CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB))
    bitmapData = malloc( bitmapByteCount );                // 3
    if (bitmapData == NULL)
    {
        fprintf (stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate (bitmapData,           // 4
                                     pixelsWide,
                                     pixelsHigh,
                                     8, // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free (bitmapData);                                 // 5
        fprintf (stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease( colorSpace );                     // 6
    return context;                                        // 7
}
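One detail worth noting (not part of the original question): the buffer allocated in step 3 is never freed once you are done with the context. When tearing the context down, one way to clean up, roughly, is:
void *bitmapData = CGBitmapContextGetData(mContext); // retrieve the backing buffer
CGContextRelease(mContext);                          // release the context first
if (bitmapData) free(bitmapData);                    // then free the malloc'd buffer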
I initialize it in my init method below with a few sample draws just to be sure it looks right:
mContext = MyCreateBitmapContext (rect.size.width, rect.size.height);
// sample fills
CGContextSetRGBFillColor (mContext, 1, 0, 0, 1);
CGContextFillRect (mContext, CGRectMake (0, 0, 200, 100 ));
CGContextSetRGBFillColor (mContext, 0, 0, 1, .5);
CGContextFillRect (mContext, CGRectMake (0, 0, 100, 200 ));
CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(mContext, 5.0);
CGContextAddEllipseInRect(mContext, CGRectMake(0, 0, 60.0, 60.0));
CGContextStrokePath(mContext);
In my drawRect method, I create an image from the context in order to render it. Maybe I should keep this image as a member variable and update it every time I draw something new, instead of creating the image every frame? (Some advice on this would be nice.)
// draw bitmap context
CGImageRef myImage = CGBitmapContextCreateImage (mContext);
CGContextDrawImage(context, rect, myImage);
CGImageRelease(myImage);
Then as a test I try drawing a circle when I touch, but nothing happens, and the touch is definitely triggering:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location;
    for (UITouch *touch in touches)
    {
        location = [touch locationInView: [touch view]];
    }
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);
}
Help?
[self setNeedsDisplay]; — that was it!
So it was because drawRect was never being called after init, since the view didn't know it needed to refresh. My understanding is that I should just call setNeedsDisplay any time I draw into the context, and that seems to work. :)
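In other words, the touch handler just needs to tell the view to redraw after drawing into the bitmap context; roughly, following the question's code:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = CGPointZero;
    for (UITouch *touch in touches)
    {
        location = [touch locationInView: [touch view]];
    }
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);

    [self setNeedsDisplay]; // mark the view dirty so drawRect: runs again and shows the new circle
}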