NSOpenGLView without Interface Builder - objective-c

I've written a class TestingView, which is a subclass of NSOpenGLView, and added it to an NSWindow through Interface Builder:
@implementation TestingView

// Assumed ivar/enum declarations from the class interface (not shown in the question):
//   enum { Color, Depth, NumRenderbuffers };
//   GLuint renderbuffer[NumRenderbuffers], framebuffer;

- (void)prepareOpenGL
{
    [super prepareOpenGL];

    glGenRenderbuffers( NumRenderbuffers, renderbuffer );
    glBindRenderbuffer( GL_RENDERBUFFER, renderbuffer[Color] );
    glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA, 1024, 768 );
    glBindRenderbuffer( GL_RENDERBUFFER, renderbuffer[Depth] );
    glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 768 );

    glGenFramebuffers( 1, &framebuffer );
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebuffer );
    glFramebufferRenderbuffer( GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_RENDERBUFFER, renderbuffer[Color] );
    glFramebufferRenderbuffer( GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_RENDERBUFFER, renderbuffer[Depth] );
    glEnable( GL_DEPTH_TEST );
}

- (void)show
{
    glBindFramebuffer( GL_READ_FRAMEBUFFER, framebuffer );
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0 );
    glViewport( 0, 0, self.window.frame.size.width, self.window.frame.size.height );
    glClearColor( 0.0, 0.0, 0.0, 0.0 );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glBlitFramebuffer( 0, 0, 1024, 768, 0, 0, 1024, 768, GL_COLOR_BUFFER_BIT, GL_NEAREST );
    glSwapAPPLE();
}

- (void)draw
{
    /* Prepare to render into the renderbuffer */
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebuffer );
    glViewport( 0, 0, 1024, 768 );

    /* Render into renderbuffer */
    glClearColor( 1.0, 1.0, 0.0, 1.0 );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
}

@end
and it works fine. However, in my second project I cannot use Interface Builder, so I do it like this:
TestingView *testingView = [[TestingView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768)];
[self.view addSubview:testingView];
and it doesn't work; the NSOpenGLContext doesn't initialize properly. I also tried:
GLuint pixAttribs[] =
{
    NSOpenGLPFAWindow,        // choose among pixel formats capable of rendering to windows
    NSOpenGLPFAAccelerated,   // require a hardware-accelerated pixel format
    NSOpenGLPFADoubleBuffer,  // require a double-buffered pixel format
    NSOpenGLPFAColorSize, 24, // require 24 bits for color channels
    NSOpenGLPFAAlphaSize, 8,  // require an 8-bit alpha channel
    NSOpenGLPFADepthSize, 24, // require a 24-bit depth buffer
    NSOpenGLPFAMinimumPolicy, // select a pixel format that meets or exceeds these requirements
    0
};
NSOpenGLPixelFormat *pixFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:pixAttribs];
NSOpenGLContext *context = [[NSOpenGLContext alloc] initWithFormat:pixFormat shareContext:nil];
TestingView *testingView = [[TestingView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768) pixelFormat:pixFormat];
testingView.openGLContext = context;
and OpenGL still doesn't work. How can I add an NSOpenGLView as a subview without using Interface Builder?

I smashed my head a few times on that one. I found out that my pixel format was in the way. Here's what I'd do:
// Assuming you have a self.window initialized somewhere
TestingView *testingView = [[TestingView alloc] initWithFrame:self.window.frame pixelFormat:nil];
[self.window setContentView:testingView];
You can check out my full implementation (no Interface Builder at all) here: https://gitorious.org/glengine/glengine
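For completeness, a fuller programmatic setup might look like the sketch below. It assumes an app delegate with a window property; TestingView is the subclass from the question, and everything outside setContentView: is illustrative scaffolding, not part of the original answer.
// Sketch: creating the window and the view entirely in code.
// Assumes this runs in the app delegate; self.window is a property here.
- (void)applicationDidFinishLaunching:(NSNotification *)notification
{
    NSRect frame = NSMakeRect(0, 0, 1024, 768);
    self.window = [[NSWindow alloc] initWithContentRect:frame
                                              styleMask:(NSTitledWindowMask |
                                                         NSClosableWindowMask |
                                                         NSResizableWindowMask)
                                                backing:NSBackingStoreBuffered
                                                  defer:NO];
    // nil pixel format: let the view fall back to its default format,
    // which is what fixed the mismatch described in the answer above.
    TestingView *testingView = [[TestingView alloc] initWithFrame:frame
                                                      pixelFormat:nil];
    [self.window setContentView:testingView];
    [self.window makeKeyAndOrderFront:nil];
}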

Related

OpenGL picking does not work in NSOpenGLView

I am trying to implement picking in an NSOpenGLView, but it doesn't work. This is the code.
I render only the objects that I need to pick, with no lights, and I render the scene the same way as in the normal render.
- (void)drawSeleccion
{
    // aspectRatio, nearDist, farDist, zoom, rotate_*, translate_*,
    // frente, fondo and alto are ivars of the view (not shown here).
    NSSize size = [self bounds].size;
    GLuint selectBuf[16 * 4] = {0};
    GLint hits;

    glClearColor( 0.0, 0.0, 0.0, 0.0 );
    glColor4f( 1.0, 1.0, 1.0, 1.0 );

    glSelectBuffer( 16 * 4, selectBuf );
    glRenderMode( GL_SELECT );

    /// *** Start ***
    glInitNames();
    glPushName( 0 );

    // Viewport.
    glViewport( 0, 0, size.width, size.height );

    // Projection matrix.
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    float dist = 534;
    aspectRatio = size.width / size.height;
    nearDist = MAX( 10, dist - 360.0 );
    farDist = dist + 360.0;
    GLKMatrix4 m4 = GLKMatrix4MakePerspective( zoom, aspectRatio, nearDist, farDist );
    glMultMatrixf( m4.m );

    // Modelview.
    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();

    // Look-at.
    GLKMatrix4 glm = GLKMatrix4MakeLookAt( 0, dist, 0, 0, 0, 0, 0, 0, 1 );
    glMultMatrixf( glm.m );

    // Rotate the viewport.
    glRotated( rotate_x, 1, 0, 0 );
    glRotated( rotate_z, 0, 0, 1 );
    glTranslated( translate_x - frente * 0.5, fondo * -0.5, translate_z - alto * 0.5 );

    // Render the model....
    glPushMatrix();
    for (int volNo = 0; volNo < [self.modeloOptimo.arr_volumenes count]; volNo++) {
        VolumenOptimo *volOp = self.modeloOptimo.arr_volumenes[volNo];
        glLoadName( volNo );
        volOp->verProblemas = false;
        [volOp drawVolumen];
    }
    glPopName();
    glPopMatrix();

    // Flush the view.
    glFlush();
    hits = glRenderMode( GL_RENDER );
    processHits( hits, selectBuf );
} // End of drawSeleccion.
hits is always 0, and selectBuf is empty.
Any ideas? Thanks.
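(For reference, since processHits isn't shown in the question: the hit records that glRenderMode(GL_RENDER) returns have a fixed layout, so a typical parser looks roughly like the sketch below. The function name and logging are illustrative, not the asker's code.)
// Sketch of a typical selection-buffer parser. Each hit record is:
// name-stack depth, min depth, max depth, then the names themselves.
void processHits (GLint hits, GLuint *buffer)
{
    GLuint *ptr = buffer;
    for (GLint i = 0; i < hits; i++) {
        GLuint nameCount = *ptr++;
        GLuint zMin = *ptr++;  // depths scaled to the full GLuint range
        GLuint zMax = *ptr++;
        for (GLuint j = 0; j < nameCount; j++) {
            GLuint name = *ptr++;  // a value set earlier with glLoadName()
            NSLog(@"hit %d: name %u (z %u..%u)", i, name, zMin, zMax);
        }
    }
}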

How to correctly render a texture orthogonally in OpenGL?

I'm trying to render a 2D texture in an orthogonal projection.
Let me know what's wrong.
width and height are 128, and the view is 256 px wide and tall, so I expect the texture to be scaled 2x.
But all I get is this (screenshot: only the top-right quadrant of the view is filled, showing a tiny fraction of the texture):
Code:
@interface ModNesOpenGLView : NSOpenGLView {
@public
    char *pixels;
    int width;
    int height;
    int zoom;
}
- (void) drawRect: (NSRect) bounds;
- (void) free;
@end
@implementation ModNesOpenGLView

- (void)awakeFromNib
{
    self->pixels = malloc( self->width * self->height * 3 );
    memset( (void *)self->pixels, 0, self->width * self->height * 3 );
    for( int y = 0; y < self->height; ++y )
    {
        for( int x = 0; x < self->width; ++x )
        {
            char r = 0, g = 0, b = 0;
            switch( y % 3 ) {
                case 0: r = 0xFF; break;
                case 1: g = 0xFF; break;
                case 2: b = 0xFF; break;
            }
            [self setPixel_x:x y:y r:r g:g b:b];
        }
    }
}

- (void)setPixel_x:(int)x y:(int)y r:(char)r g:(char)g b:(char)b
{
    self->pixels[ ( y * self->width + x ) * 3 ] = r;
    self->pixels[ ( y * self->width + x ) * 3 + 1 ] = g;
    self->pixels[ ( y * self->width + x ) * 3 + 2 ] = b;
}

- (void)drawRect:(NSRect)bounds
{
    glClearColor( 0, 0, 0, 0 );
    glClear( GL_COLOR_BUFFER_BIT );

    glTexImage2D( GL_TEXTURE_2D, 0, 3, self->width, self->height, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)self->pixels );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST ); // GL_LINEAR
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP );
    glEnable( GL_TEXTURE_2D );
    // glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, self->width, self->height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)self->pixels );

    glBegin( GL_QUADS );
    glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
    glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
    glEnd();
    glFlush();
}

- (void)prepareOpenGL
{
    // Synchronize buffer swaps with the vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
}
I realize I'm missing the part about initializing the projection matrix and the orthogonal projection, so I added it:
- (void)prepareOpenGL
{
    // Synchronize buffer swaps with the vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];

    glClearColor( 0, 0, 0, 0 );
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glMatrixMode( GL_MODELVIEW );
    glViewport( 0, 0, self->width, self->height );
}
And then I get this (screenshot: the drawing now fills only the top-right quadrant of the bottom-left quadrant of the view):
I'm confused.
Here's where I got the code from: code example
Your problem is with coordinate systems and their ranges. Looking at the coordinates you use for drawing:
glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
The OpenGL coordinate system has a range of [-1.0, 1.0] in both x- and y-direction if you don't apply a transformation. This means that (0.0, 0.0), which is the bottom-left corner of the quad you are drawing, is in the center of the screen. It then extends to the right and top. The size of the quad is actually much bigger than the window, but it obviously gets clipped.
This explains the original version and resulting picture you posted. You end up with the top-right quadrant being filled, with a very small fraction of your texture (about one texel).
Then in the updated code, you add this:
glViewport(0, 0, self->width, self->height);
The viewport determines the part of the window you draw to. Since you say that width and height are 128, and the window size is 256x256, this call specifies that you only want to draw into the bottom-left quadrant of your window.
Since everything else is unchanged, you then still draw the top-right quadrant of your drawing area. So you end up filling the top-right quadrant of the bottom-left quadrant of the window, which is exactly what you have in the second image.
To fix this, the simplest approach is to not set the viewport to a non-default value (remove the glViewport() call), and use coordinates in the range [-1.0, 1.0] in both directions:
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
Another option is that you set up a transformation that changes the coordinate range to the values you are using. In legacy OpenGL, which you are using, something like this should work:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);
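With that projection in place, the pixel-unit vertex coordinates from the original drawRect map onto the view as intended. A minimal sketch of how it might fit together (assuming width and height hold the texture size, as in the question; the texture setup is unchanged and omitted here):
- (void)drawRect:(NSRect)bounds
{
    // ... texture upload and glEnable(GL_TEXTURE_2D) as in the original ...

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Map x to [0, width] and y to [0, height] so the vertices below can
    // stay in pixel units; the viewport keeps its default (the full view),
    // so a 128x128 quad in a 256x256 view is scaled up 2x as intended.
    glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_QUADS);
    glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
    glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
    glEnd();
    glFlush();
}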

Transparency in CGContext

I'm trying to set a transparent background for a CGContext but keep getting:
CGBitmapContextCreateImage: invalid context 0x0
Here's what I've got. If I switch kCGImageAlphaLast to kCGImageAlphaNoneSkipFirst it works, but the alpha channel is completely ignored. I'm very new to this color & context stuff - any ideas?
- (BOOL)initContext:(CGSize)size {
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);

    cacheBitmap = malloc( bitmapByteCount );
    if (cacheBitmap == NULL) {
        return NO;
    }

    cacheContext = CGBitmapContextCreate (NULL, size.width, size.height, 8, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaLast);
    CGContextSetRGBFillColor(cacheContext, 1.0, 1.0, 1.0f, 0.5);
    CGContextFillRect(cacheContext, self.bounds);
    return YES;
}
CGBitmapContext supports only certain possible pixel formats. You probably want kCGImageAlphaPremultipliedLast.
(Here's an explanation of premultiplied alpha.)
Also note:
There is no need to malloc cacheBitmap. Since you are passing in NULL as the first argument to CGBitmapContextCreate, the bitmap context will do its own allocation.
Depending on your code, self.bounds may not be the correct rectangle to fill. It would be safer to use CGRectMake(0.f, 0.f, size.width, size.height).
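Putting those notes together, a corrected version of the method might look like this (a sketch, keeping the original cacheContext ivar):
- (BOOL)initContext:(CGSize)size
{
    size_t bitmapBytesPerRow = (size_t)size.width * 4;

    // Pass NULL so the context allocates (and owns) its own backing store.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(NULL, size.width, size.height, 8,
                                         bitmapBytesPerRow, colorSpace,
                                         kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);  // the context retains what it needs
    if (cacheContext == NULL) {
        return NO;
    }

    // Fill the whole context rather than self.bounds, which may differ.
    CGContextSetRGBFillColor(cacheContext, 1.0f, 1.0f, 1.0f, 0.5f);
    CGContextFillRect(cacheContext, CGRectMake(0.f, 0.f, size.width, size.height));
    return YES;
}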

What is the name for UIControlState to get the value when subclassing UIButton?

I am subclassing UIButton, but I need to know the state the button is in, so I can draw it in the right color for up or down:
- (void)drawRect:(CGRect)rect {
    if (state == UIControlStateNormal) {
        // draw rect RED
    }
    if (state == UIControlEventTouchDown) {
        // draw rect BLUE
    }
}
Did you even try looking at the docs?
Go to the UIButton Reference
Check the properties
Nothing obvious
See if UIButton has a superclass
Go to the UIControl Reference (UIButton's superclass)
Check the properties
Oh look, there is a state property
Update
The accepted answer is slightly incorrect and could lead to an annoyingly difficult bug to track down.
The header for UIControl declares state as
@property(nonatomic, readonly) UIControlState state; // could be more than one state (e.g. disabled|selected). synthesized from other flags.
Now, looking up how UIControlState is defined, we see:
enum {
    UIControlStateNormal      = 0,
    UIControlStateHighlighted = 1 << 0,      // used when UIControl isHighlighted is set
    UIControlStateDisabled    = 1 << 1,
    UIControlStateSelected    = 1 << 2,      // flag usable by app (see below)
    UIControlStateApplication = 0x00FF0000,  // additional flags available for application use
    UIControlStateReserved    = 0xFF000000   // flags reserved for internal framework use
};
typedef NSUInteger UIControlState;
Therefore, since you are dealing with a bit mask, you should check the state appropriately. Note that UIControlStateNormal is defined as 0, so a bitwise AND against it always yields 0; compare against normal with ==, and test the other states with a mask, e.g.
if (self.state & UIControlStateHighlighted) { ... }
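Applied to the drawRect: from the question, that might look like this (a sketch):
- (void)drawRect:(CGRect)rect
{
    if (self.state == UIControlStateNormal) {     // normal is 0: compare with ==
        // draw rect RED
    }
    if (self.state & UIControlStateHighlighted) { // set while a touch is down
        // draw rect BLUE
    }
}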
Update 2
You could do this by drawing into an image and then setting the image as the background, e.g.
- (void)clicked:(UIButton *)button
{
    UIGraphicsBeginImageContext(button.frame.size);
    // Draw gradient
    UIImage *gradient = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [button setBackgroundImage:gradient forState:UIControlStateNormal];
}
Since state is a property, you can use self.state to access it.
Update:
See the update in the previous answer.
To redraw the rect, I did this...
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    myState = 1;
    [self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    myState = 0;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    // Drawing code
    NSLog(@"drawRect state: %lu", (unsigned long)self.state);
    NSLog(@"rect %@", NSStringFromCGRect(self.frame));
    CGContextRef c = UIGraphicsGetCurrentContext();
    if (myState == 0) {
        ///////// plot border /////////////
        CGContextBeginPath(c);
        CGContextSetLineWidth(c, 2.0);
        CGContextMoveToPoint(c, 1, 1);
        CGContextAddLineToPoint(c, 1, 20);
        CGContextAddLineToPoint(c, 299, 20);
        CGContextAddLineToPoint(c, 299, 1);
        CGContextClosePath(c);
        CGContextSetRGBFillColor(c, 0.0, 0.0, 1.0, 1.0);
        // Fill and stroke in one call; CGContextFillPath alone would
        // consume the path, leaving nothing for CGContextStrokePath.
        CGContextDrawPath(c, kCGPathFillStroke);
    } else {
        CGContextBeginPath(c);
        CGContextSetLineWidth(c, 2.0);
        CGContextMoveToPoint(c, 1, 20);
        CGContextAddLineToPoint(c, 1, 40);
        CGContextAddLineToPoint(c, 299, 40);
        CGContextAddLineToPoint(c, 299, 20);
        CGContextClosePath(c);
        CGContextSetRGBFillColor(c, 1.0, 0.0, 1.0, 1.0);
        CGContextDrawPath(c, kCGPathFillStroke);
    }
}

Drawing to a bitmap context

I am trying to draw to a bitmap context but coming up empty. I believe I'm creating things properly, because I can initialize the context, draw a few things, then create an image from it and draw that image. What I cannot do is trigger further drawing on the context after initialization. I'm not sure if I'm missing some common practice that implies I can only draw to it in certain places, or whether I have to do something else. Here is what I do, below.
I copied the helper function provided by Apple, with one modification to how the color space is obtained, because it wasn't compiling (this is for iPad; I don't know if that matters):
CGContextRef MyCreateBitmapContext (int pixelsWide, int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4);  // 1
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB(); //CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB); // 2
    bitmapData = malloc( bitmapByteCount );  // 3
    if (bitmapData == NULL)
    {
        fprintf (stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate (bitmapData,  // 4
                                     pixelsWide,
                                     pixelsHigh,
                                     8, // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free (bitmapData);  // 5
        fprintf (stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease( colorSpace );  // 6
    return context;  // 7
}
I initialize it in my init method below with a few sample draws just to be sure it looks right:
mContext = MyCreateBitmapContext (rect.size.width, rect.size.height);
// sample fills
CGContextSetRGBFillColor (mContext, 1, 0, 0, 1);
CGContextFillRect (mContext, CGRectMake (0, 0, 200, 100 ));
CGContextSetRGBFillColor (mContext, 0, 0, 1, .5);
CGContextFillRect (mContext, CGRectMake (0, 0, 100, 200 ));
CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(mContext, 5.0);
CGContextAddEllipseInRect(mContext, CGRectMake(0, 0, 60.0, 60.0));
CGContextStrokePath(mContext);
In my drawRect method, I create an image from the context and render it. Maybe I should keep this image as a member variable, update it every time I draw something new, and not create the image every frame? (Some advice on this would be nice.)
// draw bitmap context
CGImageRef myImage = CGBitmapContextCreateImage (mContext);
CGContextDrawImage(context, rect, myImage);
CGImageRelease(myImage);
Then, as a test, I try drawing a circle when I touch, but nothing happens, even though the touch is definitely triggering:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location;
    for (UITouch *touch in touches)
    {
        location = [touch locationInView:[touch view]];
    }
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);
}
Help?
[self setNeedsDisplay];
That was it: drawRect was never being called after init, because the view didn't know it needed to refresh. My understanding is that I should just call setNeedsDisplay any time I draw, and that seems to work. :)
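In other words, the touch handler just needs to invalidate the view after drawing into the bitmap context. A sketch of the fix applied to the handler above (the single-touch lookup via anyObject is a simplification):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = [[touches anyObject] locationInView:self];
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);
    // Tell UIKit the view is stale so drawRect: runs and blits the bitmap.
    [self setNeedsDisplay];
}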