3D Opaque Polygons with OpenGL and weighted OIT - opengl-es-2.0

I'm having trouble getting opaque polygons to not be transparent. I'm using a formula from this site:
Weighted Order Independent Transparency
Here's my code:
int programShader = 0;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self changeFrustumX:0 y:0 w:512 h:512];
// Opaque stuff goes here?
// Make everything transparent
glEnable(GL_BLEND);
for (int i = 0; i < 2; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, framebufferID[i]);
    if (i == 0)
    {
        // The transparency colors (accumulation pass)
        glClearColor(0.0, 0.0, 0.0, 1.0);
        glBlendFunc(GL_ONE, GL_ONE);
        glClear(GL_COLOR_BUFFER_BIT);
        programShader = colorPassShader;
    }
    else if (i == 1)
    {
        // The transparency mask (revealage pass)
        glClearColor(1.0, 1.0, 1.0, 1.0);
        glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
        glClear(GL_COLOR_BUFFER_BIT);
        programShader = maskPassShader;
    }
    glUseProgram(programShader);
    // Yellow is supposed to be opaque
    [self setProgram:programShader
           modelView:modelViewArray2
          projection:frustumArray
            vertices:yellowVertices
              colors:yellowColors
            textures:NULL];
    // Blue not opaque
    [self setProgram:programShader
           modelView:modelViewArray2
          projection:frustumArray
            vertices:blueVertices
              colors:blueColors
            textures:NULL];
    // Red not opaque
    [self setProgram:programShader
           modelView:modelViewArray2
          projection:frustumArray
            vertices:redVertices
              colors:redColors
            textures:NULL];
}
// Get back to the default framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Transparent objects rendering
glClearColor(0.75, 0.75, 0.75, 1.0);
// Original blend
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA);
glClear(GL_COLOR_BUFFER_BIT);
[self changeOrthoX:0 y:0 w:512 h:512];
glUseProgram(combineShader);
glActiveTexture(GL_TEXTURE1);
// The accumulated colors from the first framebuffer
glBindTexture(GL_TEXTURE_2D, renderTextureID[0]);
glUniform1i(glGetUniformLocation(combineShader, "sAccumulation"), 1);
glActiveTexture(GL_TEXTURE2);
// The revealage mask from the second framebuffer
glBindTexture(GL_TEXTURE_2D, renderTextureID[1]);
glUniform1i(glGetUniformLocation(combineShader, "sReveal"), 2);
[self setProgram:combineShader
       modelView:modelViewArray3
      projection:orthogonalArray
        vertices:combineVertices
          colors:NULL
        textures:combineTextures];
// Opaque objects rendering
glDisable(GL_BLEND);
I couldn't get multiple gl_FragData[n] outputs working, and from what I understand OpenGL ES 2.0 doesn't support more than one color attachment anyway. The yellow polygon should be totally opaque, but in the picture it isn't. How do I make it opaque while keeping everything else transparent? And how do I create polygons that are half transparent, half opaque?
Here is a picture that I generated:

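For reference, the combine pass the code above feeds can be expressed as a small fragment shader. This is only a sketch of the resolve step from the weighted-OIT paper, written as GLSL ES 1.00 embedded in an Objective-C string; it assumes the sAccumulation/sReveal uniforms bound above and a hypothetical vTexCoord varying supplied by the vertex shader:

// Sketch of a weighted-OIT resolve shader. sAccumulation holds the sum of
// weighted premultiplied colors (pass 0, GL_ONE/GL_ONE); sReveal holds the
// product of (1 - alpha) (pass 1, GL_ZERO/GL_ONE_MINUS_SRC_ALPHA).
static NSString *const kCombineFragmentShader =
    @"precision mediump float;\n"
     "varying vec2 vTexCoord;\n"
     "uniform sampler2D sAccumulation;\n"
     "uniform sampler2D sReveal;\n"
     "void main()\n"
     "{\n"
     "    vec4 accum   = texture2D(sAccumulation, vTexCoord);\n"
     "    float reveal = texture2D(sReveal, vTexCoord).r;\n"
     "    // Normalized average color; reveal drives the final\n"
     "    // GL_ONE_MINUS_SRC_ALPHA / GL_SRC_ALPHA blend used above.\n"
     "    gl_FragColor = vec4(accum.rgb / max(accum.a, 1e-4), reveal);\n"
     "}\n";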
Related

How to correctly render a texture orthogonally in OpenGL?

I'm trying to render a 2D texture in an orthographic projection; let me know what's wrong.
width and height are 128, and the view is 256 px wide and tall, so I expect the texture to be drawn scaled 2x.
But all I get is this:
Code:
@interface ModNesOpenGLView : NSOpenGLView {
@public
    char *pixels;
    int width;
    int height;
    int zoom;
}
- (void) drawRect: (NSRect) bounds;
- (void) free;
@end

@implementation ModNesOpenGLView

-(void) awakeFromNib {
    self->pixels = malloc( self->width * self->height * 3 );
    memset( (void *)self->pixels, 0, self->width * self->height * 3 );
    for( int y=0; y<self->height; ++y )
    {
        for( int x=0; x<self->width; ++x )
        {
            char r=0,g=0,b=0;
            switch( y%3 ) {
                case 0: r=0xFF; break;
                case 1: g=0xFF; break;
                case 2: b=0xFF; break;
            }
            [self setPixel_x:x y:y r:r g:g b:b];
        }
    }
}

-(void) setPixel_x:(int)x y:(int)y r:(char)r g:(char)g b:(char)b
{
    self->pixels[ ( y * self->width + x ) * 3 ]     = r;
    self->pixels[ ( y * self->width + x ) * 3 + 1 ] = g;
    self->pixels[ ( y * self->width + x ) * 3 + 2 ] = b;
}

-(void) drawRect: (NSRect) bounds
{
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glTexImage2D( GL_TEXTURE_2D, 0, 3, self->width, self->height, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) self->pixels );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // GL_LINEAR
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glEnable(GL_TEXTURE_2D);
    // glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, self->width, self->height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) self->pixels );
    glBegin( GL_QUADS );
    glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
    glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
    glEnd();
    glFlush();
}

- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
}
I realize I was missing the part about initializing the projection matrix and setting up the orthographic projection, so I added it:
- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
    glClearColor(0, 0, 0, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glViewport(0, 0, self->width, self->height);
}
And then I get this:
I'm confused.
Here's where I got the code from: code example
Your problem is with coordinate systems and their ranges. Looking at the coordinates you use for drawing:
glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
The OpenGL coordinate system has a range of [-1.0, 1.0] in both x- and y-direction if you don't apply a transformation. This means that (0.0, 0.0), which is the bottom-left corner of the quad you are drawing, is in the center of the screen. It then extends to the right and top. The size of the quad is actually much bigger than the window, but it obviously gets clipped.
This explains the original version and resulting picture you posted. You end up with the top-right quadrant being filled, with a very small fraction of your texture (about one texel).
Then in the updated code, you add this:
glViewport(0, 0, self->width, self->height);
The viewport determines the part of the window you draw to. Since you say that width and height are 128, and the window size is 256x256, this call specifies that you only want to draw into the bottom-left quadrant of your window.
Since everything else is unchanged, you then still draw the top-right quadrant of your drawing area. So you end up filling the top-right quadrant of the bottom-left quadrant of the window, which is exactly what you have in the second image.
To fix this, the simplest approach is to not set the viewport to a non-default value (remove the glViewport() call), and use coordinates in the range [-1.0, 1.0] in both directions:
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
Another option is to set up a transformation that maps the coordinate range to the values you are using. In legacy OpenGL, which you are using, something like this should work:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);

Transparency in CGContext

I'm trying to set a transparent background for a CGContext but keep getting:
CGBitmapContextCreateImage: invalid context 0x0
Here's what I've got. If I switch kCGImageAlphaLast to kCGImageAlphaNoneSkipFirst it works, but then the alpha channel is completely ignored. I'm very new to this color and context stuff; any ideas?
-(BOOL) initContext:(CGSize)size {
    int bitmapByteCount;
    int bitmapBytesPerRow;
    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    cacheBitmap = malloc( bitmapByteCount );
    if (cacheBitmap == NULL) {
        return NO;
    }
    cacheContext = CGBitmapContextCreate (NULL, size.width, size.height, 8, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaLast);
    CGContextSetRGBFillColor(cacheContext, 1.0, 1.0, 1.0f, 0.5);
    CGContextFillRect(cacheContext, self.bounds);
    return YES;
}
CGBitmapContext supports only certain possible pixel formats. You probably want kCGImageAlphaPremultipliedLast.
(Here's an explanation of premultiplied alpha.)
Also note:
- There is no need to malloc cacheBitmap. Since you are passing NULL as the first argument to CGBitmapContextCreate, the bitmap context will do its own allocation.
- Depending on your code, self.bounds may not be the correct rectangle to fill. It would be safer to use CGRectMake(0.f, 0.f, size.width, size.height).
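Putting those notes together, a minimal corrected sketch (keeping the cacheContext name from the question; not the poster's exact code) might look like:

-(BOOL) initContext:(CGSize)size {
    // Pass NULL so CGBitmapContextCreate allocates and owns the pixel buffer.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(NULL, size.width, size.height,
                                         8,              // bits per component
                                         size.width * 4, // bytes per row
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context retains what it needs
    if (cacheContext == NULL) {
        return NO;
    }
    // Fill the whole bitmap rather than self.bounds, which may differ from size.
    CGContextSetRGBFillColor(cacheContext, 1.0, 1.0, 1.0, 0.5);
    CGContextFillRect(cacheContext, CGRectMake(0, 0, size.width, size.height));
    return YES;
}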

CGContextAddEllipse - overlapping gets clipped - Quartz

I'd like to draw a glass from a few elements:
- a top ellipse
- a bottom ellipse
- the lines in between
The shape should then be filled with a gradient. The elements work, but where the middle of the glass touches the top or bottom ellipse, the area gets clipped.
- (void)drawRect:(CGRect)rect
{
    CGPoint c = self.center;
    // Drawing code
    CGContextRef cx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(cx, 1.0);
    [[UIColor whiteColor] setStroke];
    // Draw the shape of the glass
    CGContextBeginPath(cx);
    // Top and bottom ellipse
    CGContextAddEllipseInRect(cx, CGRectMake(0, 0, 100, 20));
    CGContextAddEllipseInRect(cx, CGRectMake(10, 90, 80, 20));
    // Define the points for the area in between
    CGPoint points[] = { {0.0,10.0},{10.0,100.0},{90.0,100.0},{100.0,10.0} };
    CGContextAddLines(cx, points, 4);
    CGContextClosePath(cx);
    // Clip, so that only the clipped area will be filled with the gradient
    CGContextClip(cx);
    // Create and draw the gradient
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat colors[] = { 1.0f, 1.0f, 1.0f, 0.0f,
                         0.0f, 0.0f, 1.0f, 1.0f };
    CGFloat locations[] = { 0.0, 1.0 };
    CGGradientRef myGradient = CGGradientCreateWithColorComponents(rgbColorSpace, colors, locations, 2);
    CGPoint s = CGPointMake(0, 0);
    CGPoint e = CGPointMake(100, 100);
    CGContextDrawLinearGradient(cx, myGradient, s, e, kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation);
    CGColorSpaceRelease(rgbColorSpace);
    CGGradientRelease(myGradient);
}
Here's how it looks:
Is there any way to fill the whole ellipse? I played around with blend modes but it didn't help.
Thanks
Try replacing the points[] initialization code with the following...
CGPoint points[] = {{0.0,10.0},{100.0,10.0},{90.0,100.0},{10.0,100.0}};
Core Graphics uses the non-zero winding rule to determine how to fill a path. Since the ellipses are drawn clockwise and your trapezoid was drawn counterclockwise, the winding counts cancel out where the shapes overlap, so those regions were not filled. Drawing the trapezoid clockwise as well results in an object that is completely filled.
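To make that concrete, here is just the path-building portion of your drawRect: with the reordered points in place (a sketch; only the points array differs from the question):

// Both ellipses wind clockwise, so the trapezoid must too; otherwise the
// signed crossing counts cancel to zero where the shapes overlap and the
// non-zero winding rule leaves those regions unfilled.
CGContextBeginPath(cx);
CGContextAddEllipseInRect(cx, CGRectMake(0, 0, 100, 20));   // clockwise
CGContextAddEllipseInRect(cx, CGRectMake(10, 90, 80, 20));  // clockwise
CGPoint points[] = { {0.0,10.0},{100.0,10.0},{90.0,100.0},{10.0,100.0} }; // now clockwise
CGContextAddLines(cx, points, 4);
CGContextClosePath(cx);
CGContextClip(cx);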

High-Resolution Content for paint app Using OpenGL ES on iPad device

I am working on a paint app [taking reference from the GLPaint sample] for iPhone and iPad. In this app I am filling colors in paint images by drawing lines onscreen based on where the user touches. The app works properly on iPhone. On iPad, lines on the paint view are correct without zooming [no pixel distortion], but after zooming the lines on the paint view have distorted pixels, i.e. the OpenGL ES content is not high resolution.
I am using the following code to initialize the paint view:
-(id)initWithCoder:(NSCoder*)coder {
    CGImageRef brushImage;
    CGContextRef brushContext;
    GLubyte *brushData;
    size_t width, height;
    CGFloat components[3];
    if ((self = [super initWithCoder:coder])) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = NO;
        eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
            kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
        context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        if (!context || ![EAGLContext setCurrentContext:context]) {
            return nil;
        }
        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
        {
            brushImage = [UIImage imageNamed:@"circle 64.png"].CGImage;
        }
        else {
            brushImage = [UIImage imageNamed:@"flower 128.png"].CGImage;
        }
        // Get the width and height of the image
        width = CGImageGetWidth(brushImage);
        height = CGImageGetHeight(brushImage);
        if (brushImage) {
            // Allocate memory needed for the bitmap context
            brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
            // Use the bitmap creation function provided by the Core Graphics framework.
            brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
            // After you create the context, you can draw the image to the context.
            CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
            // You don't need the context at this point, so release it to avoid a memory leak.
            CGContextRelease(brushContext);
            // Use OpenGL ES to generate a name for the texture.
            glGenTextures(1, &brushTexture);
            // Bind the texture name.
            glBindTexture(GL_TEXTURE_2D, brushTexture);
            // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            // Specify a 2D texture image, providing a pointer to the image data in memory
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
            // Release the image data; it's no longer needed
            free(brushData);
        }
        CGFloat scale;
        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
        {
            NSLog(@"iPad");
            self.contentScaleFactor = 1.0;
            scale = self.contentScaleFactor;
        }
        else {
            // NSLog(@"iPhone");
            self.contentScaleFactor = 2.0;
        }
        //scale = 2.000000;
        // Setup OpenGL states
        glMatrixMode(GL_PROJECTION);
        CGRect frame = self.bounds;
        NSLog(@"Scale %f", scale);
        glOrthof(0, (frame.size.width) * scale, 0, (frame.size.height) * scale, -1, 1);
        glViewport(0, 0, (frame.size.width) * scale, (frame.size.height) * scale);
        glMatrixMode(GL_MODELVIEW);
        glDisable(GL_DITHER);
        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnable(GL_BLEND);
        // Set a blending function appropriate for premultiplied alpha pixel data
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_POINT_SPRITE_OES);
        glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
        glPointSize(width / kBrushScale);
        // Make sure to start with a cleared buffer
        needsErase = YES;
        // Define a starting color
        HSL2RGB((CGFloat) 0.0 / (CGFloat)kPaletteSize, kSaturation, kLuminosity, &components[0], &components[1], &components[2]);
        [self setBrushColorWithRed:245.0f green:245.0f blue:0.0f];
        boolEraser = NO;
    }
    return self;
}
To create the framebuffer:
-(BOOL)createFramebuffer {
    // Generate IDs for a framebuffer object and a color renderbuffer
    glGenFramebuffersOES(1, &viewFramebuffer);
    glGenRenderbuffersOES(1, &viewRenderbuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    // This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer),
    // allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    // For this sample, we also need a depth buffer, so we'll create and attach one via another renderbuffer.
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }
    return YES;
}
Lines are drawn using the following code:
-(void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end {
    static GLfloat* vertexBuffer = NULL;
    static NSUInteger vertexMax = 64;
    NSUInteger vertexCount = 0, count, i;
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    // Convert locations from points to pixels
    CGFloat scale = self.contentScaleFactor;
    NSLog(@"Scale %f", scale);
    start.x *= scale;
    start.y *= scale;
    end.x *= scale;
    end.y *= scale;
    float dx = end.x - start.x;
    float dy = end.y - start.y;
    float dist = (sqrtf(dx * dx + dy * dy) / kBrushPixelStep);
    // Allocate vertex array buffer
    if (vertexBuffer == NULL)
        vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
    count = MAX(ceilf(dist), 1);
    for (i = 0; i < count; ++i) {
        if (vertexCount == vertexMax) {
            vertexMax = 2 * vertexMax;
            vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
        }
        vertexBuffer[2 * vertexCount + 0] = start.x + (dx) * ((GLfloat)i / (GLfloat)count);
        vertexBuffer[2 * vertexCount + 1] = start.y + (dy) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }
    // Render the vertex array
    glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
    glDrawArrays(GL_POINTS, 0, vertexCount);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
On the iPad the content of the paint view is proper, high-resolution content for the normal view, but after zooming I am not getting high-resolution content and the pixels of the lines look distorted.
I have tried changing contentScaleFactor as well as the scale parameter in the code above to see the difference, but nothing worked as expected. iPad supports a contentScaleFactor of 1.0 & 1.5; when I set contentScaleFactor = 2 the paint view cannot display lines and shows weird dotted lines instead.
Is there any way to make the OpenGL ES contents high resolution?
The short answer is YES, you can have "high resolution" content.
But you will have to clearly understand the issue before solving it. This is the long answer:
The brushes you use have a specific size (64 or 128). As soon as your virtual paper (the area in which you draw) displays its pixels larger than 1 screen pixel, you will start to see the "distortion". For example, in your favorite picture viewer, if you open one of your brushes and zoom in, the picture will also be distorted. You cannot avoid that unless you use vector brushes (which is beyond the scope of this answer and is far more complicated).
The quickest way would be to use more detailed brushes, but it is a fudge: if you zoom enough, the texture will look distorted as well.
You can also add a magnification filter using glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);. You used MIN in your sample; adding this one will smooth the textures when magnified.
I am not sure what you mean by high resolution. OpenGL is a vector library with a bitmap-backed rendering system. The backing store will have the size in pixels (multiplied by the content scale factor) of the layer you use to create the renderbuffer in:
- (BOOL)renderbufferStorage:(NSUInteger)target fromDrawable:(id<EAGLDrawable>)drawable
Once it is created there is no way to change the resolution, nor would it generally make sense to do so; one renderbuffer pixel per screen pixel makes the most sense.
It is hard to know exactly what problem you are trying to solve without knowing what zooming you are talking about. I assume you have set up a CAEAGLLayer in a UIScrollView and are seeing pixel artifacts. This is inevitable; how else could it work?
If you want your lines to be smooth, you need to implement them using triangle strip meshes with alpha blending at the edges, which will provide antialiasing. Instead of zooming the layer itself, you would simply "zoom" the contents by scaling the vertices, while keeping the CAEAGLLayer the same size. This would eliminate pixelation and give purdy alpha-blended edges.
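A minimal sketch of the vertex-scaling idea, assuming a hypothetical zoomScale value mirrored from the UIScrollView (none of this is from the question's code):

// Hypothetical: scale brush vertices by the current zoom instead of letting
// the scroll view magnify the CAEAGLLayer; the renderbuffer stays 1:1 with
// screen pixels, so the zoom itself introduces no pixelation.
GLfloat zoomScale = self.zoomScale; // assumed property tracking the scroll view's zoom
for (NSUInteger j = 0; j < vertexCount; ++j) {
    vertexBuffer[2 * j + 0] *= zoomScale;
    vertexBuffer[2 * j + 1] *= zoomScale;
}
// Note: with the point-sprite brush above you would also need to grow
// glPointSize by zoomScale so the brush stamps scale with the content.
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);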

Drawing to a bitmap context

I am trying to draw to a bitmap context but coming up empty. I believe I'm creating things properly because I can initialize the context, draw a few things, then create an image from it and draw that image. What I cannot do is, after initialization, trigger further drawing on the context that draws more items on it. I'm not sure if I'm missing some common practice that implies I can only draw it at certain places or that I have to do something else. Here is what I do, below.
I copied the helper function provided by Apple, with one modification to how the color space is obtained, because it wasn't compiling (this is for iPad; I don't know if that matters):
CGContextRef MyCreateBitmapContext (int pixelsWide, int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void * bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    bitmapBytesPerRow = (pixelsWide * 4);// 1
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    colorSpace = CGColorSpaceCreateDeviceRGB(); //CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);// 2
    bitmapData = malloc( bitmapByteCount );// 3
    if (bitmapData == NULL)
    {
        fprintf (stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate (bitmapData,// 4
                                     pixelsWide,
                                     pixelsHigh,
                                     8, // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free (bitmapData);// 5
        fprintf (stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease( colorSpace );// 6
    return context;// 7
}
I initialize it in my init method below with a few sample draws just to be sure it looks right:
mContext = MyCreateBitmapContext (rect.size.width, rect.size.height);
// sample fills
CGContextSetRGBFillColor (mContext, 1, 0, 0, 1);
CGContextFillRect (mContext, CGRectMake (0, 0, 200, 100 ));
CGContextSetRGBFillColor (mContext, 0, 0, 1, .5);
CGContextFillRect (mContext, CGRectMake (0, 0, 100, 200 ));
CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(mContext, 5.0);
CGContextAddEllipseInRect(mContext, CGRectMake(0, 0, 60.0, 60.0));
CGContextStrokePath(mContext);
In my drawRect method, I create an image from it to render it. Maybe I should create and keep this image as a member variable, updating it every time I draw something new, instead of creating the image every frame? (Some advice on this would be nice.)
// draw bitmap context
CGImageRef myImage = CGBitmapContextCreateImage (mContext);
CGContextDrawImage(context, rect, myImage);
CGImageRelease(myImage);
Then as a test I try drawing a circle when I touch, but nothing happens, and the touch is definitely triggering:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location;
    for (UITouch* touch in touches)
    {
        location = [touch locationInView: [touch view]];
    }
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);
}
Help?
[self setNeedsDisplay];
That was it: drawRect was never being called after init because the view didn't know it needed to refresh. My understanding is that I should just call setNeedsDisplay any time I draw, and that seems to work. :)
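So, in the touch handler above, the fix is just one extra line at the end (a sketch of the corrected handler, using the mContext member from the question):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = [[touches anyObject] locationInView:self];
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);
    // Without this, drawRect: (and thus the blit of mContext) never runs again.
    [self setNeedsDisplay];
}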