Draw UIImage (or JPEG) onto EAGLView - objective-c

I am making a PDF annotator, and when you switch pages it has to redraw all of the previously drawn OpenGL content (which was saved to a file in JSON format). The problem is that the more content there is, the longer the redraw takes. I have a UIImage saved to disk for each page, so I was hoping to speed this up by drawing that UIImage onto the EAGLContext in one big stroke.
I want to know how to take a UIImage (or JPEG/PNG file) and draw it directly onto the screen. It has to be done on the EAGLView because it needs to support the eraser, and the regular UIKit way wouldn't work with that.
I assume there's some way to set a brush as the whole image and just stamp the screen with it once. Any suggestions?

As a pedantic note, there is no standard class named EAGLView, but I assume you're referring to one of Apple's sample UIView subclasses that host OpenGL ES content.
The first step in doing this would be to load the UIImage into a texture. The following is some code that I've used for this in my image processing framework (newImageSource is the input UIImage):
CGSize pointSizeOfImage = [newImageSource size];
CGFloat scaleOfImage = [newImageSource scale];
pixelSizeOfImage = CGSizeMake(scaleOfImage * pointSizeOfImage.width, scaleOfImage * pointSizeOfImage.height);
CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
BOOL shouldRedrawUsingCoreGraphics = YES;
// For now, deal with images larger than the maximum texture size by resizing to be within that limit
CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
{
pixelSizeOfImage = scaledImageSizeToFitOnGPU;
pixelSizeToUseForTexture = pixelSizeOfImage;
shouldRedrawUsingCoreGraphics = YES;
}
if (self.shouldSmoothlyScaleOutput)
{
// In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
shouldRedrawUsingCoreGraphics = YES;
}
GLubyte *imageData = NULL;
CFDataRef dataFromImageDataProvider;
if (shouldRedrawUsingCoreGraphics)
{
// For resized image, redraw
imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 8, (int)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), [newImageSource CGImage]);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
}
else
{
// Access the raw image bytes directly
dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider([newImageSource CGImage]));
imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
}
glBindTexture(GL_TEXTURE_2D, outputTexture);
if (self.shouldSmoothlyScaleOutput)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
if (self.shouldSmoothlyScaleOutput)
{
glGenerateMipmap(GL_TEXTURE_2D);
}
if (shouldRedrawUsingCoreGraphics)
{
free(imageData);
}
else
{
CFRelease(dataFromImageDataProvider);
}
As you can see, this has some functions for resizing images that exceed the maximum texture size of the device (the class method in the above code merely queries the max texture size), as well as a boolean flag for whether or not to generate mipmaps for the texture for smoother downsampling. These can be removed if you don't care about those cases. This is also OpenGL ES 2.0 code, so there might be an OES suffix or two that you'd need to add to some of the functions above in order for them to work with 1.1.
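For reference, that helper boils down to something like the following (a sketch rather than the exact framework code; it assumes a current EAGLContext so the glGetIntegerv query is valid):
+ (CGSize)sizeThatFitsWithinATextureForSize:(CGSize)inputSize
{
    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

    if ((inputSize.width <= maxTextureSize) && (inputSize.height <= maxTextureSize))
    {
        return inputSize;
    }

    // Scale down proportionally so that the longer side fits within the GPU limit
    CGSize adjustedSize;
    if (inputSize.width > inputSize.height)
    {
        adjustedSize.width = (CGFloat)maxTextureSize;
        adjustedSize.height = ((CGFloat)maxTextureSize / inputSize.width) * inputSize.height;
    }
    else
    {
        adjustedSize.height = (CGFloat)maxTextureSize;
        adjustedSize.width = ((CGFloat)maxTextureSize / inputSize.height) * inputSize.width;
    }
    return adjustedSize;
}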
Once you have the UIImage in a texture, you can draw it to the screen by using a textured quad (two triangles that make up a rectangle, with appropriate texture coordinates for the corners). How you do this differs between OpenGL ES 1.1 and 2.0. For 2.0, you use a passthrough shader program that just reads the color from that location in the texture and draws it to the screen; for 1.1, you just set up the texture coordinates for your geometry and draw the two triangles.
I have some OpenGL ES 2.0 code for this in this answer.
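For the 1.1 path, a minimal sketch of the textured quad might look like this (assuming the texture loaded above is bound, and that your projection/modelview matrices map the -1..1 range onto the area you want covered):
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
static const GLfloat textureCoordinates[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, outputTexture);

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);

// Two triangles (a strip) covering the quad, sampled from the texture
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Depending on how the image was drawn into the bitmap context, you may need to flip the t coordinates to get the image right side up.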

Related

Draw text into CGBitmapContext

I have an app that renders into a UIView's CGContext in drawRect. I also export those renderings using a background renderer. It uses the same rendering logic to render (in faster than real time) into a CGBitmapContext (which I subsequently transform into an mp4 file).
I have noticed that the output video has a number of weird glitches, such as the image being rotated, weird duplications of the rendered images, random noise, and odd timing.
I'm looking for ways to debug this. For the timing issue, I thought I'd render a string that tells me which frame I'm currently viewing, only to find that rendering text into a CGContext is not very well documented. In fact, the documentation around much of Core Graphics is quite unforgiving to someone of my experience.
So specifically, I'd like to know how to render text into a context. If it's Core Text, must it interoperate somehow with the Core Graphics context? And in general, I'd appreciate any tips and advice on doing bitmap rendering and debugging the results.
According to another question:
How to convert Text to Image in Cocoa Objective-C
we can use CTLineDraw to draw the text into a CGBitmapContext.
Sample code:
NSString* string = @"terry.wang";
CGFloat fontSize = 10.0f;
// Create an attributed string with string and font information
CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica Light"), fontSize, nil);
NSDictionary* attributes = [NSDictionary dictionaryWithObjectsAndKeys:
(id)font, kCTFontAttributeName,
nil];
NSAttributedString* as = [[NSAttributedString alloc] initWithString:string attributes:attributes];
CFRelease(font);
// Figure out how big an image we need
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)as);
CGFloat ascent, descent, leading;
double fWidth = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
// On iOS 4.0 and Mac OS X v10.6 you can pass null for data
size_t width = (size_t)ceilf(fWidth);
size_t height = (size_t)ceilf(ascent + descent);
void* data = malloc(width*height*4);
// Create the context and fill it with white background
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width*4, space, bitmapInfo);
CGColorSpaceRelease(space);
CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0); // white background
CGContextFillRect(ctx, CGRectMake(0.0, 0.0, width, height));
// Draw the text
CGFloat x = 0.0;
CGFloat y = descent;
CGContextSetTextPosition(ctx, x, y);
CTLineDraw(line, ctx);
CFRelease(line);
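The snippet stops once the line has been drawn into the bitmap context. To actually get an image out of it (and avoid leaking the context and buffer), you could finish with something along these lines (a sketch):
// Turn the bitmap context into an image you can draw or inspect
CGImageRef textCGImage = CGBitmapContextCreateImage(ctx);
UIImage *textImage = [UIImage imageWithCGImage:textCGImage];

// Clean up
CGImageRelease(textCGImage);
CGContextRelease(ctx);
free(data);
[as release]; // under manual reference counting; omit under ARC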

Optimized alternative to CGContextDrawImage

I'm currently working a lot with CoreGraphics on OSX.
I've run Time Profiler over my code and found the biggest hang-up is in CGContextDrawImage. It's part of a loop that gets called many times per second.
I don't have any way of optimizing this code per se (since it's in the Apple libraries) - but I am wondering if there's a speedier alternative or way to improve the speed.
I'm using CGContextDrawImage after some blend-mode code such as CGContextSetBlendMode(context, kCGBlendModeDifference); so alternative implementations would need to be able to support blending.
Time profiler results:
3658.0ms   15.0%     0.0     CGContextDrawImage
3658.0ms   15.0%     0.0      ripc_DrawImage
3539.0ms   14.5%     0.0       ripc_AcquireImage
3539.0ms   14.5%     0.0        CGSImageDataLock
3539.0ms   14.5%     1.0         img_data_lock
3465.0ms   14.2%     0.0          img_interpolate_read
2308.0ms    9.4%     7.0           resample_band
1932.0ms    7.9%  1932.0            resample_byte_h_3cpp_vector
 369.0ms    1.5%   369.0            resample_byte_v_Ncpp_vector
1157.0ms    4.7%     2.0           img_decode_read
1150.0ms    4.7%     8.0            decode_data
 863.0ms    3.5%   863.0             decode_swap
 267.0ms    1.0%   267.0             decode_byte_8bpc_3
Update:
The actual source is something along the lines of the following:
/////////////////////////////////////////////////////////////////////////////////////////
- (CGImageRef)createBlendedImage:(CGImageRef)image
secondImage:(CGImageRef)secondImage
blendMode:(CGBlendMode)blendMode
{
// Get the image width and height
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
// Set the frame
CGRect frame = CGRectMake(0, 0, width, height);
// Create context with alpha channel
CGContextRef context = CGBitmapContextCreate(NULL,
width,
height,
CGImageGetBitsPerComponent(image),
CGImageGetBytesPerRow(image),
CGImageGetColorSpace(image),
kCGImageAlphaPremultipliedLast);
if (!context) {
return nil;
}
// Draw the image inside the context
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, frame, image);
// Set the blend mode and draw the second image
CGContextSetBlendMode(context, blendMode);
CGContextDrawImage(context, frame, secondImage);
// Get the masked image from the context
CGImageRef blendedImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
return blendedImage;
}
/////////////////////////////////////////////////////////////////////////////////////////
- (CGImageRef)createImageTick
{
// `self.image` and `self.previousImage` are two instance properties (CGImageRefs)
// Create blended image (stage one)
CGImageRef stageOne = [self createBlendedImage:self.image
secondImage:self.previousImage
blendMode:kCGBlendModeXOR];
// Create blended image (stage two) if stage one image is 50% red
CGImageRef stageTwo = nil;
if ([self isImageRed:stageOne]) {
stageTwo = [self createBlendedImage:self.image
secondImage:stageOne
blendMode:kCGBlendModeSourceAtop];
}
// Release intermediate image
CGImageRelease(stageOne);
return stageTwo;
}
@JeremyRoman et al: Thank you so much for your comments. I am drawing the same image a couple of times per loop, onto different contexts with different filters, and combining with new images. Does resampling include switching from RGB to RGBA? What could I try to speed up or eliminate resampling? – Chris Nolet
This is what Core Image is for. See the Core Image Programming Guide for details. CGContext is designed for rendering final images to the screen, which it sounds like is not your goal with every image you're creating.
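To make that suggestion concrete, here is a rough sketch of the same kind of blend done with Core Image instead of CGContextDrawImage (the filter name and keys are standard Core Image; ciContext stands in for a single CIContext you create once and reuse across iterations rather than per frame):
CIImage *background = [CIImage imageWithCGImage:image];
CIImage *foreground = [CIImage imageWithCGImage:secondImage];

CIFilter *blend = [CIFilter filterWithName:@"CIDifferenceBlendMode"];
[blend setValue:foreground forKey:kCIInputImageKey];
[blend setValue:background forKey:kCIInputBackgroundImageKey];

// Render the result back to a CGImage; reuse one CIContext for the whole loop
CGImageRef blended = [ciContext createCGImage:[blend outputImage]
                                     fromRect:[background extent]];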

High-Resolution Content for paint app Using OpenGL ES on iPad device

I am working on a paint app [based on the GLPaint sample app] for iPhone and iPad. In this app I am filling colors into paint images by drawing lines onscreen based on where the user touches. The app works properly on iPhone. On iPad, lines on the paint view are fine without zooming [no pixel distortion], but after zooming, lines on the paintView have distorted pixels, i.e. the OpenGL ES content is not high resolution.
I am using the following code to initialize the paint view:
-(id)initWithCoder:(NSCoder*)coder {
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData;
size_t width, height;
CGFloat components[3];
if ((self = [super initWithCoder:coder])) {
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = NO;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
if (!context || ![EAGLContext setCurrentContext:context]) {
return nil;
}
if(UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
brushImage = [UIImage imageNamed:#"circle 64.png"].CGImage;
}
else {
brushImage = [UIImage imageNamed:#"flower 128.png"].CGImage;
}
// Get the width and height of the image
width = CGImageGetWidth(brushImage) ;
height = CGImageGetHeight(brushImage) ;
if(brushImage) {
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
}
CGFloat scale;
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
NSLog(#"IPAd");
self.contentScaleFactor=1.0;
scale = self.contentScaleFactor;
}
else {
// NSLog(#"IPHone");
self.contentScaleFactor=2.0;
}
//scale = 2.000000;
// Setup OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
NSLog(#"Scale %f", scale);
glOrthof(0, (frame.size.width) * scale, 0, (frame.size.height) * scale, -1, 1);
glViewport(0, 0, (frame.size.width) * scale, (frame.size.height) * scale);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DITHER);
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
// Set a blending function appropriate for premultiplied alpha pixel data
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(width / kBrushScale);
// Make sure to start with a cleared buffer
needsErase = YES;
// Define a starting color
HSL2RGB((CGFloat) 0.0 / (CGFloat)kPaletteSize, kSaturation, kLuminosity, &components[0], &components[1], &components[2]);
[self setBrushColorWithRed:245.0f green:245.0f blue:0.0f];
boolEraser=NO;
}
return self;
}
To create the framebuffer:
-(BOOL)createFramebuffer {
// Generate IDs for a framebuffer object and a color renderbuffer
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
// allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
// For this sample, we also need a depth buffer, so we'll create and attach one via another renderbuffer.
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
{
NSLog(#"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
return NO;
}
return YES;
}
Lines are drawn using the following code:
-(void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end {
static GLfloat* vertexBuffer = NULL;
static NSUInteger vertexMax = 64;
NSUInteger vertexCount = 0,
count,
i;
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Convert locations from Points to Pixels
//CGFloat scale = self.contentScaleFactor;
CGFloat scale;
scale=self.contentScaleFactor;
NSLog(#"Scale %f",scale);
start.x *= scale;
start.y *= scale;
end.x *= scale;
end.y *= scale;
float dx = end.x - start.x;
float dy = end.y - start.y;
float dist = (sqrtf(dx * dx + dy * dy)/ kBrushPixelStep);
// Allocate vertex array buffer
if(vertexBuffer == NULL)
// vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
count = MAX(ceilf(dist), 1);
//NSLog(#"count %d",count);
for(i = 0; i < count; ++i) {
if (vertexCount == vertexMax) {
vertexMax = 2 * vertexMax;
vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
// NSLog(#"if loop");
}
vertexBuffer[2 * vertexCount + 0] = start.x + (dx) * ((GLfloat)i / (GLfloat)count);
vertexBuffer[2 * vertexCount + 1] = start.y + (dy) * ((GLfloat)i / (GLfloat)count);
vertexCount += 1;
}
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
On the iPad, the content of the paint view is fine [high resolution] in the normal view, but after zooming I am not getting high-resolution content; the pixels of the lines look distorted.
I have tried changing contentScaleFactor as well as the scale parameter in the above code to see the difference, but nothing worked as expected. The iPad supports a contentScaleFactor of 1.0 and 1.5; when I set contentScaleFactor = 2 the paint view cannot display the lines, it shows weird dotted lines.
Is there any way to make the OpenGL ES content high resolution?
The short answer is YES, you can have "high resolution" content.
But you will have to clearly understand the issue before solving it. This is the long answer:
The brushes you use have a specific size (64 or 128). As soon as your virtual paper (the area in which you draw) displays its pixels larger than 1 screen pixel, you will start to see the "distortion". For example, in your favorite picture viewer, if you open one of your brushes and zoom in, the picture will also be distorted. You cannot avoid that, unless you use vector brushes (which are not in the scope of this answer and are far more complicated).
The quickest fix would be to use more detailed brushes, but it is a fudge: if you zoom in enough, the texture will look distorted as well.
You can also add a magnification filter using glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);. You used MIN in your sample; adding this one will smooth the textures.
I am not sure what you mean by high resolution. OpenGL is a vector library with a bitmap-backed rendering system. The backing store will have the size in pixels (multiplied by the content scale factor) of the layer you are using to create the renderbuffer in:
- (BOOL)renderbufferStorage:(NSUInteger)target fromDrawable:(id<EAGLDrawable>)drawable
Once it is created there is no way to change the resolution, nor would it generally make sense to do so; one renderbuffer pixel per screen pixel makes the most sense.
It is hard to know exactly what problem you are trying to solve without knowing what zooming you are talking about. I assume you have set up a CAEAGLLayer in a UIScrollView and you are seeing pixel artifacts. This is inevitable; how else could it work?
If you want your lines to be smooth, you need to implement them using triangle strip meshes with alpha blending at the edges, which will provide antialiasing. Instead of zooming the layer itself, you would simply "zoom" the contents by scaling the vertices while keeping the CAEAGLLayer the same size. This would eliminate pixelation and give nicely alpha-blended edges.
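In ES 1.1 terms, that kind of content zoom is just a modelview transform applied before drawing, rather than a change to the layer or renderbuffer. A sketch (zoomScale and the pan offsets are assumed to come from your pinch/scroll handling):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Zoom the drawing, not the layer: the renderbuffer stays 1:1 with screen pixels
glTranslatef(panOffsetX, panOffsetY, 0.0f);
glScalef(zoomScale, zoomScale, 1.0f);
// Note: glPointSize is not affected by the modelview matrix, so if you draw
// point sprites (as GLPaint does) multiply your point size by zoomScale as well.
// ... then draw your triangle strips / points as usual ...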

Simple way of using irregular shaped buttons

I've finally got my main app release (Tap Play MMO - check it out ;-) ) and I'm now working on expanding it.
To do this I need to have a circle that has four separate buttons in it; these buttons will essentially be quarters. I've come to the conclusion that the circular image will need to be constructed of four images, one for each quarter, but due to the necessity of rectangular image shapes I'm going to end up with some overlap, although the overlap will be transparent.
What's the best way of getting this to work? I need something really simple; I've looked at this
http://iphonedevelopment.blogspot.com/2010/03/irregularly-shaped-uibuttons.html
before, but have not yet succeeded in getting it to work. Is anyone able to offer some advice?
In case it makes any difference, I'll be deploying to an iOS 3.x framework (it will be 4.2 down the line, when 4.2 comes out for iPad).
Skip the buttons and simply respond to touches in your view that contains the circle.
Create a CGPath for each area where you want to capture touches; when your UIView receives a touch, check for membership inside the paths.
[Edited answer to show skeleton implementation details -- TomH]
Here's how I would approach the problem: (I haven't tested this code and the syntax may not be quite right, but this is the general idea)
1) Using PS or your favorite image creation application, create one png of the quarter circles. Add it to your Xcode project.
2) Add a UIView to the UI. Set the UIView's layer's contents to the png.
self.myView = [[UIView alloc] initWithFrame:CGRectMake(10.0, 10.0, 100.0, 100.0)];
[myView.layer setContents:(id)[UIImage imageNamed:@"my.png"].CGImage];
3) Create CGPaths that describe the region in the UIView that you are interested in.
self.quadrantOnePath = CGPathCreateMutable();
CGPathMoveToPoint(self.quadrantOnePath, NULL, 50.0, 50.0);
CGPathAddLineToPoint(self.quadrantOnePath, NULL, 100.0, 50.0);
CGPathAddArc(self.quadrantOnePath, NULL, 50.0, 50.0, 50.0, 0.0, M_PI_2, 1);
CGPathCloseSubpath(self.quadrantOnePath);
// create paths for the other 3 circle quadrants too!
4) Add a UIGestureRecognizer and listen/observe for taps in the view
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[tapRecognizer setNumberOfTapsRequired:2]; // default is 1
[self.myView addGestureRecognizer:tapRecognizer]; // attach the recognizer to the view
5) When tapRecognizer invokes its target selector
- (void)handleGesture:(UIGestureRecognizer *) recognizer {
CGPoint touchPoint = [recognizer locationOfTouch:0 inView:self.myView];
bool processTouch = CGPathContainsPoint(self.quadrantOnePath, NULL, touchPoint, true);
if(processTouch) {
// call your method to process the touch
}
}
Don't forget to release everything when appropriate -- use CGPathRelease to release paths.
Another thought: If the graphic that you are using to represent your circle quadrants is simply a filled color (i.e. no fancy graphics, layer effects, etc.), you could also use the paths you created in the UIView's drawRect method to draw the quadrants too. This would address one of the failings of the approach above: there isn't a tight integration between the graphic and the paths used to check for the touches. That is, if you swap out the graphic for something different, change the size of the graphic, etc., your paths used to check for touches will be out of sync. Potentially a high maintenance piece of code.
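For example, drawRect: could fill each quadrant from the very same paths used for hit testing (a sketch; quadrantOnePath is the path created above):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextAddPath(ctx, self.quadrantOnePath);
    CGContextFillPath(ctx);

    // ... repeat for the other three quadrant paths, each with its own fill color ...
}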
I can't see why overlapping is needed.
Just create 4 buttons and give each one a slice of your image.
Edit after comment:
See this great project. One example is exactly what you want to do.
It works by incorporating the alpha value of a pixel in the overridden
pointInside:withEvent: method, plus a category on UIImage that adds this method:
- (UIColor *)colorAtPixel:(CGPoint)point {
// Cancel if point is outside image coordinates
if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
return nil;
}
// Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
// Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
NSInteger pointX = trunc(point.x);
NSInteger pointY = trunc(point.y);
CGImageRef cgImage = self.CGImage;
NSUInteger width = self.size.width;
NSUInteger height = self.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel * 1;
NSUInteger bitsPerComponent = 8;
unsigned char pixelData[4] = { 0, 0, 0, 0 };
CGContextRef context = CGBitmapContextCreate(pixelData,
1,
1,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);
// Draw the pixel we are interested in onto the bitmap context
CGContextTranslateCTM(context, -pointX, pointY-(CGFloat)height);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
CGContextRelease(context);
// Convert color values [0..255] to floats [0.0..1.0]
CGFloat red = (CGFloat)pixelData[0] / 255.0f;
CGFloat green = (CGFloat)pixelData[1] / 255.0f;
CGFloat blue = (CGFloat)pixelData[2] / 255.0f;
CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
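With that category in place, the override itself is short. A sketch (buttonImage is an assumed property holding the view's or button's UIImage):
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    if (![super pointInside:point withEvent:event]) {
        return NO;
    }
    // Only count the touch as a hit where the image is not (nearly) transparent
    UIColor *pixelColor = [self.buttonImage colorAtPixel:point];
    CGFloat red = 0.0f, green = 0.0f, blue = 0.0f, alpha = 0.0f;
    [pixelColor getRed:&red green:&green blue:&blue alpha:&alpha];
    return alpha >= 0.1f;
}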
Here's an awesome project that solves the problem of irregular shaped buttons so easily:
http://christinemorris.com/2011/06/ios-irregular-shaped-buttons/

Flipping OpenGL texture

When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them?
glScalef(1.0f, -1.0f, 1.0f);
mapping the y coordinates of the textures in reverse
vertically flipping the image files manually (in Photoshop)
flipping them programmatically after loading them (I don't know how)
This is the method I'm using to load png textures, in my Utilities.m file (Objective-C):
+ (TextureImageRef)loadPngTexture:(NSString *)name {
CFURLRef textureURL = CFBundleCopyResourceURL(
CFBundleGetMainBundle(),
(CFStringRef)name,
CFSTR("png"),
CFSTR("Textures"));
NSAssert(textureURL, @"Texture name invalid");
CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
NSAssert(imageSource, @"Invalid Image Path.");
NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
CFRelease(textureURL);
CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
NSAssert(image, @"Image not created.");
CFRelease(imageSource);
GLuint width = CGImageGetWidth(image);
GLuint height = CGImageGetHeight(image);
void *data = malloc(width * height * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSAssert(colorSpace, @"Colorspace not created.");
CGContextRef context = CGBitmapContextCreate(
data,
width,
height,
8,
width * 4,
colorSpace,
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
NSAssert(context, @"Context not created.");
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGImageRelease(image);
CGContextRelease(context);
return TextureImageCreate(width, height, data);
}
Where TextureImage is a struct that has a height, width and void *data.
Right now I'm just playing around with OpenGL, but later I want to try making a simple 2d game. I'm using Cocoa for all the windowing and Objective-C as the language.
Also, another thing I was wondering about: if I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Thanks.
Any of these:
Flip the texture during the texture load,
OR flip the model's texture coordinates during model load,
OR set the texture matrix to flip y (glMatrixMode(GL_TEXTURE)) during render, as sketched below.
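The texture matrix variant, for instance, is just a couple of lines in fixed-function GL (a sketch):
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
// t' = 1 - t: flips the vertical texture coordinate for everything drawn afterwards
glScalef(1.0f, -1.0f, 1.0f);
glTranslatef(0.0f, -1.0f, 0.0f);
glMatrixMode(GL_MODELVIEW);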
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Depends on how you are going to render text.
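If you do go with a top-left origin, the usual trick for 2D drawing is simply to flip the ortho projection so y grows downward (a sketch for fixed-function GL, with width and height being your drawable size in pixels):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// (0,0) is now the top-left corner, (width, height) the bottom-right
glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();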
Jordan Lewis pointed out that CGContextDrawImage draws the image upside down when passed UIImage.CGImage. There I found a quick and easy solution: before calling CGContextDrawImage,
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
Does the job perfectly well.