How to correctly render a texture orthogonally in OpenGL? - objective-c

I'm trying to render a 2D texture with an orthographic projection, and I can't figure out what's wrong.
width and height are 128, and the view is 256 px wide and tall, so I expect the texture to be drawn scaled 2x.
But all I get is this:
Code:
@interface ModNesOpenGLView : NSOpenGLView {
@public
    char *pixels;
    int width;
    int height;
    int zoom;
}
- (void) drawRect: (NSRect) bounds;
- (void) free;
@end

@implementation ModNesOpenGLView

-(void) awakeFromNib {
    self->pixels = malloc( self->width * self->height * 3 );
    memset( (void *)self->pixels, 0, self->width * self->height * 3 );
    for( int y=0; y<self->height; ++y )
    {
        for( int x=0; x<self->width; ++x )
        {
            char r=0,g=0,b=0;
            switch( y%3 ) {
                case 0: r=0xFF; break;
                case 1: g=0xFF; break;
                case 2: b=0xFF; break;
            }
            [self setPixel_x:x y:y r:r g:g b:b];
        }
    }
}

-(void) setPixel_x:(int)x y:(int)y r:(char)r g:(char)g b:(char)b
{
    self->pixels[ ( y * self->width + x ) * 3 ]     = r;
    self->pixels[ ( y * self->width + x ) * 3 + 1 ] = g;
    self->pixels[ ( y * self->width + x ) * 3 + 2 ] = b;
}

-(void) drawRect: (NSRect) bounds
{
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glTexImage2D( GL_TEXTURE_2D, 0, 3, self->width, self->height, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) self->pixels );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // GL_LINEAR
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glEnable(GL_TEXTURE_2D);
    // glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, self->width, self->height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) self->pixels );
    glBegin( GL_QUADS );
    glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
    glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
    glEnd();
    glFlush();
}

- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
}
I realized I was missing the part about initializing the projection matrix and setting up the orthographic projection.
I added it:
- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];

    glClearColor(0, 0, 0, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glViewport(0, 0, self->width, self->height);
}
And then I get this:
I'm confused.
Here's where I got the code from: code example

Your problem is with coordinate systems and their ranges. Looking at the coordinates you use for drawing:
glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
The OpenGL coordinate system has a range of [-1.0, 1.0] in both x- and y-direction if you don't apply a transformation. This means that (0.0, 0.0), which is the bottom-left corner of the quad you are drawing, is in the center of the screen. It then extends to the right and top. The size of the quad is actually much bigger than the window, but it obviously gets clipped.
This explains the original version and resulting picture you posted. You end up with the top-right quadrant being filled, with a very small fraction of your texture (about one texel).
Then in the updated code, you add this:
glViewport(0, 0, self->width, self->height);
The viewport determines the part of the window you draw to. Since you say that width and height are 128, and the window size is 256x256, this call specifies that you only want to draw into the bottom-left quadrant of your window.
Since everything else is unchanged, you then still draw the top-right quadrant of your drawing area. So you end up filling the top-right quadrant of the bottom-left quadrant of the window, which is exactly what you have in the second image.
To fix this, the simplest approach is to not set the viewport to a non-default value (remove the glViewport() call), and use coordinates in the range [-1.0, 1.0] in both directions:
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
Another option is to set up a transformation that changes the coordinate range to the values you are using. In the legacy OpenGL you are using, something like this should work:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);
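Putting the pieces together, here is a minimal sketch (untested, assuming the 256x256 view and 128x128 texture from the question) of the projection setup: the viewport covers the whole view and the projection is in texture-pixel units, so the quad drawn from 0..width/0..height fills the view and appears scaled 2x:

- (void)drawRect:(NSRect)bounds
{
    // Full-window viewport: draw into the entire 256x256 view.
    NSSize viewSize = [self bounds].size;
    glViewport(0, 0, (GLsizei)viewSize.width, (GLsizei)viewSize.height);

    // Orthographic projection in texture-pixel units (0..128 maps to the full view).
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // ... texture upload and the GL_QUADS drawing exactly as in the question ...
}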

Related

OpenGL picking not working in NSOpenGLView

I am trying to implement picking in an NSOpenGLView, but it doesn't work. This is the code.
I render only the objects that I need to pick, with no lights, and I render the scene the same way as in the normal render.
- (void)drawSeleccion
{
    NSSize size = [self bounds].size;
    GLuint selectBuf[16 * 4] = {0};
    GLint hits;

    glClearColor (0.0, 0.0, 0.0, 0.0);
    glColor4f(1.0, 1.0, 1.0, 1.0);

    glSelectBuffer (16 * 4, selectBuf);
    glRenderMode(GL_SELECT);

    /// *** Start ***
    glInitNames();
    glPushName(0);

    // view port.
    glViewport(0, 0, size.width, size.height);

    // projection matrix.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float dist = 534;
    aspectRatio = size.width / size.height;
    nearDist = MAX(10, dist - 360.0);
    farDist = dist + 360.0;
    GLKMatrix4 m4 = GLKMatrix4MakePerspective(zoom, aspectRatio, nearDist, farDist);
    glMultMatrixf(m4.m);

    // Model view.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Look-at.
    GLKMatrix4 glm = GLKMatrix4MakeLookAt(0, dist, 0, 0, 0, 0, 0, 0, 1);
    glMultMatrixf(glm.m);

    // rotate viewPort.
    glRotated(rotate_x, 1, 0, 0);
    glRotated(rotate_z, 0, 0, 1);
    glTranslated(translate_x - frente * 0.5, fondo * -0.5, translate_z - alto * 0.5);

    /// render model....
    glPushMatrix();
    for (int volNo = 0; volNo < [self.modeloOptimo.arr_volumenes count]; volNo++) {
        VolumenOptimo *volOp = self.modeloOptimo.arr_volumenes[volNo];
        glLoadName(volNo);
        volOp->verProblemas = false;
        [volOp drawVolumen];
    }
    glPopName();
    glPopMatrix();

    // Flush view.
    glFlush();
    hits = glRenderMode(GL_RENDER);
    processHits (hits, selectBuf);
} // End of drawSeleccion.
hits is always 0, and selectBuf stays empty.
Any ideas? Thanks.

3D Opaque Polygons with OpenGL and weighted OIT

I'm having trouble getting opaque polygons to not be transparent. I'm using a formula from this site:
Weighted Order Independent Transparency
Here's my code:
int programShader = 0;

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self changeFrustumX:0 y:0 w:512 h:512];

// Opaque stuff goes here?

// Make everything transparent
glEnable(GL_BLEND);
for (int i = 0; i < 2; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, framebufferID[i]);
    if (i == 0)
    {
        // The transparency colors
        glClearColor(0.0, 0.0, 0.0, 1.0);
        glBlendFunc(GL_ONE, GL_ONE);
        glClear(GL_COLOR_BUFFER_BIT);
        programShader = colorPassShader;
    }
    else if (i == 1)
    {
        // The transparency mask
        glClearColor(1.0, 1.0, 1.0, 1.0);
        glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
        glClear(GL_COLOR_BUFFER_BIT);
        programShader = maskPassShader;
    }
    glUseProgram(programShader);

    // Yellow is supposed to be opaque
    [self setProgram:programShader
           modelView:modelViewArray2
          projection:frustumArray
            vertices:yellowVertices
              colors:yellowColors
            textures:NULL];

    // Blue not opaque
    [self setProgram:programShader
           modelView:modelViewArray2
          projection:frustumArray
            vertices:blueVertices
              colors:blueColors
            textures:NULL];

    // Red not opaque
    [self setProgram:programShader
           modelView:modelViewArray2
          projection:frustumArray
            vertices:redVertices
              colors:redColors
            textures:NULL];
}

// get back to the default framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Transparent objects rendering
glClearColor(0.75, 0.75, 0.75, 1.0);
// Original blend
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA);
glClear(GL_COLOR_BUFFER_BIT);
[self changeOrthoX:0 y:0 w:512 h:512];

glUseProgram(combineShader);
glActiveTexture(GL_TEXTURE1);
// Display the color from the framebuffer
glBindTexture(GL_TEXTURE_2D, renderTextureID[0]);
// Colors
glUniform1i(glGetUniformLocation(combineShader, "sAccumulation"), 1);
glActiveTexture(GL_TEXTURE2);
// Display the color from the framebuffer
glBindTexture(GL_TEXTURE_2D, renderTextureID[1]);
// Mask
glUniform1i(glGetUniformLocation(combineShader, "sReveal"), 2);

[self setProgram:combineShader
       modelView:modelViewArray3
      projection:orthogonalArray
        vertices:combineVertices
          colors:NULL
        textures:combineTextures];

// Opaque objects rendering
glDisable(GL_BLEND);
I couldn't get multiple gl_FragData[n] outputs working, and from what I understand OpenGL ES doesn't support more than one anyway. The yellow polygon should be totally opaque, but in the pictures it's not. How do I make it opaque and everything else transparent? Also, how do I create polygons that are half transparent and half opaque?
Here is a picture that I generated:

CGContextAddEllipse - overlapping gets clipped - Quartz

I'd like to draw a glass out of a few elements:
- a top ellipse
- a bottom ellipse
- the lines in between
Then it should be filled with a gradient. The elements work, but where the middle of the glass touches the top or bottom ellipse, the area gets clipped.
- (void)drawRect:(CGRect)rect
{
    CGPoint c = self.center;

    // Drawing code
    CGContextRef cx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(cx, 1.0);
    [[UIColor whiteColor] setStroke];

    // DrawTheShapeOfTheGlass
    CGContextBeginPath(cx);

    // Top and Bottom Ellipse
    CGContextAddEllipseInRect(cx, CGRectMake(0, 0, 100, 20));
    CGContextAddEllipseInRect(cx, CGRectMake(10, 90, 80, 20));

    // Define the points for the Area inbetween
    CGPoint points[] = { {0.0,10.0}, {10.0,100.0}, {90.0,100.0}, {100.0,10.0} };
    CGContextAddLines(cx, points, 4);
    CGContextClosePath(cx);

    // Clip, so that only the clipped area will be filled with the gradient
    CGContextClip(cx);

    // CreateAndDraw the Gradient
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat colorComponents[] = { 1.0f, 1.0f, 1.0f, 0.0f,
                                  0.0f, 0.0f, 1.0f, 1.0f };
    CGFloat locations[] = { 0.0, 1.0 };
    CGGradientRef myGradient = CGGradientCreateWithColorComponents(rgbColorSpace, colorComponents, locations, 2);
    CGPoint s = CGPointMake(0, 0);
    CGPoint e = CGPointMake(100, 100);
    CGContextDrawLinearGradient(cx, myGradient, s, e, kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation);
    CGColorSpaceRelease(rgbColorSpace);
    CGGradientRelease(myGradient);
}
Here is how it looks:
Is there any way to "fill" the whole ellipse? I played around with blend modes, but it didn't help.
Thanks
Try replacing the points[] initialization code with the following...
CGPoint points[] = {{0.0,10.0},{100.0,10.0},{90.0,100.0},{10.0,100.0}};
CoreGraphics uses the non-zero winding count rule to determine how to fill a path. Since ellipses are drawn clockwise and your trapezoid was drawn counter clockwise, the overlapping regions were not filled. Changing the drawing order of the trapezoid to clockwise will result in an object that is completely filled.
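For reference, a sketch (not tested here) of the question's path construction with the reordered points dropped in, so both subpaths wind the same way:

// Both subpaths now wind in the same (clockwise) direction, so the winding
// count in the overlapping regions is non-zero and the whole glass is filled.
CGContextBeginPath(cx);
CGContextAddEllipseInRect(cx, CGRectMake(0, 0, 100, 20));     // top ellipse
CGContextAddEllipseInRect(cx, CGRectMake(10, 90, 80, 20));    // bottom ellipse
CGPoint points[] = { {0.0,10.0}, {100.0,10.0}, {90.0,100.0}, {10.0,100.0} };  // clockwise trapezoid
CGContextAddLines(cx, points, 4);
CGContextClosePath(cx);
CGContextClip(cx);   // the gradient drawn afterwards now fills the full silhouette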

Objective-C Draw a Circle with a Ring shape

I wrote this class that draws an animated progress indicator with a circle (it draws a circular sector based on a float progress value):
@implementation MXMProgressView

@synthesize progress;

- (id)initWithDefaultSize {
    int circleOffset = 45.0f;
    self = [super initWithFrame:CGRectMake(0.0f,
                                           0.0f,
                                           135.0f + circleOffset,
                                           135.0f + circleOffset)];
    self.backgroundColor = [UIColor clearColor];
    return self;
}

- (void)drawRect:(CGRect)rect {
    CGRect allRect = self.bounds;
    CGRect circleRect = CGRectMake(allRect.origin.x + 2, allRect.origin.y + 2,
                                   allRect.size.width - 4, allRect.size.height - 4);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // background image
    //UIImage *image = [UIImage imageNamed:@"loader_disc_hover.png"];
    //[image drawInRect:circleRect];

    // Orange: E27006
    CGContextSetRGBFillColor(context,
                             ((CGFloat)0xE2/(CGFloat)0xFF),
                             ((CGFloat)0x70/(CGFloat)0xFF),
                             ((CGFloat)0x06/(CGFloat)0xFF),
                             0.01f); // fill
    //CGContextSetLineWidth(context, 2.0);
    CGContextFillEllipseInRect(context, circleRect);
    //CGContextStrokeEllipseInRect(context, circleRect);

    // Draw progress
    float x = (allRect.size.width / 2);
    float y = (allRect.size.height / 2);

    // Orange: E27006
    CGContextSetRGBFillColor(context,
                             ((CGFloat)0xE2/(CGFloat)0xFF),
                             ((CGFloat)0x70/(CGFloat)0xFF),
                             ((CGFloat)0x06/(CGFloat)0xFF),
                             1.0f); // progress
    CGContextMoveToPoint(context, x, y);
    CGContextAddArc(context, x, y, (allRect.size.width - 4) / 2, -M_PI_2, (self.progress * 2 * M_PI) - M_PI_2, 0);
    CGContextClosePath(context);
    CGContextFillPath(context);
}

@end
Now what I want to do is draw a ring shape with the same progress animation, instead of filling the full circle: a circular sector again, but one that doesn't start from the center of the circle.
I tried CGContextAddEllipseInRect together with CGContextEOFillPath(context), with no success.
I think you'll need to construct a more complex path, something like:
// Move to start point of outer arc (which might not be required)
CGContextMoveToPoint(context, x+outerRadius*cos(startAngle), y+outerRadius*sin(startAngle));
// Add outer arc to path (counterclockwise)
CGContextAddArc(context, x, y, outerRadius, startAngle, endAngle, 0);
// move *inward* to start point of inner arc
CGContextMoveToPoint(context, x+innerRadius*cos(endAngle), y+innerRadius*sin(endAngle));
// Add inner arc to path (clockwise)
CGContextAddArc(context, x, y, innerRadius, endAngle, StartAngle, 1);
// Close the path from end of inner arc to start of outer arc
CGContextClosePath(context);
Note: I haven't tried the above code myself
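To tie it to the original drawRect:, a sketch (also untested, and the ring thickness value is made up) of how that path could replace the pie-slice path, reusing the same start angle and progress-based end angle:

float startAngle  = -M_PI_2;
float endAngle    = (self.progress * 2 * M_PI) - M_PI_2;
float outerRadius = (allRect.size.width - 4) / 2;
float innerRadius = outerRadius - 10.0f;   // 10.0f = assumed ring thickness

CGContextMoveToPoint(context, x + outerRadius*cos(startAngle), y + outerRadius*sin(startAngle));
CGContextAddArc(context, x, y, outerRadius, startAngle, endAngle, 0);      // outer edge
CGContextAddLineToPoint(context, x + innerRadius*cos(endAngle), y + innerRadius*sin(endAngle));
CGContextAddArc(context, x, y, innerRadius, endAngle, startAngle, 1);      // inner edge, back
CGContextClosePath(context);
CGContextFillPath(context);                                                // fills only the ring sector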
Cheap and nasty solution:
Draw a solid circle that is smaller than the original circle by the thickness of the ring you want to draw.
Draw this circle on top of the original circle; all you will see animating is the ring.
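A sketch of that masking approach inside the question's drawRect: (it assumes the view sits on a known solid background; the white mask color here is a placeholder):

// After filling the progress sector, cover its center with a smaller circle
// drawn in the background color, leaving only a ring of the sector visible.
CGFloat ringThickness = 10.0f;   // assumed value
CGRect innerRect = CGRectInset(circleRect, ringThickness, ringThickness);
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);   // placeholder background color
CGContextFillEllipseInRect(context, innerRect);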

Drawing to a bitmap context

I am trying to draw to a bitmap context, but I'm coming up empty. I believe I'm creating things properly, because I can initialize the context, draw a few things, then create an image from it and draw that image. What I cannot do is trigger further drawing on the context after that initialization. I'm not sure if I'm missing some common practice that means I can only draw to it in certain places, or whether I have to do something else. Here is what I do, below.
I copied the helper function provided by Apple, with one modification to how the color space is obtained because it wasn't compiling (this is for iPad; I don't know if that matters):
CGContextRef MyCreateBitmapContext (int pixelsWide, int pixelsHigh)
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void *          bitmapData;
    int             bitmapByteCount;
    int             bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4);// 1
    bitmapByteCount   = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB(); //CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);// 2
    bitmapData = malloc( bitmapByteCount );// 3
    if (bitmapData == NULL)
    {
        fprintf (stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate (bitmapData,// 4
                                     pixelsWide,
                                     pixelsHigh,
                                     8,      // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free (bitmapData);// 5
        fprintf (stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease( colorSpace );// 6

    return context;// 7
}
I initialize it in my init method below with a few sample draws just to be sure it looks right:
mContext = MyCreateBitmapContext (rect.size.width, rect.size.height);
// sample fills
CGContextSetRGBFillColor (mContext, 1, 0, 0, 1);
CGContextFillRect (mContext, CGRectMake (0, 0, 200, 100 ));
CGContextSetRGBFillColor (mContext, 0, 0, 1, .5);
CGContextFillRect (mContext, CGRectMake (0, 0, 100, 200 ));
CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(mContext, 5.0);
CGContextAddEllipseInRect(mContext, CGRectMake(0, 0, 60.0, 60.0));
CGContextStrokePath(mContext);
In my drawRect method, I create an image from the bitmap context to render it. Maybe I should keep this image as a member variable and update it every time I draw something new, instead of creating the image every frame? (Some advice on this would be nice.)
// draw bitmap context
CGImageRef myImage = CGBitmapContextCreateImage (mContext);
CGContextDrawImage(context, rect, myImage);
CGImageRelease(myImage);
Then as a test I try drawing a circle when I touch, but nothing happens, and the touch is definitely triggering:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location;
    for (UITouch* touch in touches)
    {
        location = [touch locationInView: [touch view]];
    }
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);
}
Help?
[self setNeedsDisplay];
That was it: drawRect was never being called after init because the view didn't know it needed to refresh. My understanding is that I should call setNeedsDisplay any time I draw into the bitmap context, and that seems to work. :)
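To make that concrete, a minimal sketch of the touch handler with the fix applied (under the question's setup, where mContext is the offscreen bitmap context and drawRect: composites it into the view):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = [[touches anyObject] locationInView:self];

    // Draw into the offscreen bitmap context, as before.
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);

    // Then invalidate the view so drawRect: runs and blits the updated bitmap.
    [self setNeedsDisplay];
}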