Clipping UIView to a circle using CGContextClip - objective-c

I've looked through various similar questions and still can't seem to get my code to work. I have a UIView with an image drawn to it ([image drawInRect:bounds]), but I'm missing something in my context clipping:
// Get context & bounds, and calculate centre & radius
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect bounds = [self bounds];
CGPoint centre;
centre.x = bounds.origin.x + 0.5 * bounds.size.width;
centre.y = bounds.origin.y + 0.5 * bounds.size.height;
CGFloat radius = centre.x;
// Draw image
UIImage *backImage = [UIImage imageNamed:@"backimage.png"];
[backImage drawInRect:bounds];
// Create clipping path and clip context
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddArc(path, NULL, centre.x, centre.y, radius, 0, 2 * M_PI, 0);
CGContextAddPath(ctx, path);
CGContextClip(ctx);
Any ideas of where I went wrong? Thanks for reading.

radius=centre.x seems to be wrong. The radius should be half the width or height:
CGFloat radius = 0.5*bounds.size.width;
You could also use the convenience function
CGPathRef path = CGPathCreateWithEllipseInRect(bounds, NULL);
UPDATE: It turned out that the actual problem was that the clipping path was modified after drawing the image.
The clipping path applies to all subsequent drawing operations, so it must be set before the image is drawn.
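Putting both fixes together, here is a minimal sketch of the corrected code, assuming it lives in the view's drawRect: (the image name is taken from the question):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect bounds = [self bounds];
    CGPoint centre = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
    CGFloat radius = 0.5 * MIN(bounds.size.width, bounds.size.height);
    // Clip first ...
    CGContextAddArc(ctx, centre.x, centre.y, radius, 0, 2 * M_PI, 0);
    CGContextClip(ctx);
    // ... then draw; everything outside the circle is now masked off
    UIImage *backImage = [UIImage imageNamed:@"backimage.png"];
    [backImage drawInRect:bounds];
}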

Related

How to draw curve with glow in CGContext - something like SKShapeNode with glowWidth

I am rewriting some of my graphics drawing code from SKShapeNodes to CGContext/CALayers. I am trying to draw a curve with glow in CGContext. This is what I used to have in SpriteKit:
CGPathRef path = …(some path)
SKShapeNode *node = [SKShapeNode node];
node.path = path;
node.glowWidth = 60;
After adding it to the scene with a dark-grey background, the result was as follows:
Is it possible to draw a line with such a glow using CGContext, but without using CIFilters? Normally I will be drawing over a non-blank context background, so I'd prefer not to apply CIFilters after the line has been drawn.
I have already tried the "shadow" solution, but the results are far from perfect:
UIGraphicsBeginImageContextWithOptions(frame.size, NO, 1);
CGContextRef context = UIGraphicsGetCurrentContext();
CGFloat glowWidth = 60.0;
CGContextSetShadowWithColor(context, CGSizeMake(0.0, 0.0), glowWidth, [UIColor whiteColor].CGColor);
CGContextBeginPath(context);
CGContextAddPath(context, path);
CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextStrokePath(context);
Result (the shadow is hardly visible):
Please let me know if you have some ideas.
You can create an SKEffectNode, which allows you to apply Core Image filters to all of its children. In other words, you can create an SKEffectNode, then add a flower sprite as a child, and the SKEffectNode will apply the Core Image effect to the flower sprite.
For more detailed information, please see the SKEffectNode Class Reference.
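If you want to stay entirely within Core Graphics (no SpriteKit, no Core Image), one common approximation (my own suggestion, not part of the answer above) is to build the halo from several strokes of the same path: wide and faint first, then narrower and more opaque on top. Using frame and path from the question's snippet:
UIGraphicsBeginImageContextWithOptions(frame.size, NO, 1);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetLineJoin(context, kCGLineJoinRound);
// Five halo passes: line width shrinks from 60 to 12 while alpha rises
for (int pass = 0; pass < 5; pass++) {
    CGFloat width = 60.0f * (5 - pass) / 5.0f;
    CGFloat alpha = 0.08f * (pass + 1);
    CGContextSetLineWidth(context, width);
    CGContextSetStrokeColorWithColor(context, [UIColor colorWithWhite:1.0f alpha:alpha].CGColor);
    CGContextAddPath(context, path); // stroking clears the current path, so re-add it each pass
    CGContextStrokePath(context);
}
// Finish with a solid core line
CGContextSetLineWidth(context, 2.0f);
CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextAddPath(context, path);
CGContextStrokePath(context);
The pass count, widths, and alphas are knobs to tune; since normal blending is used throughout, this also works over a non-blank background.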

Drawings in drawRect not being displayed correctly

I want to implement freeform drawing in my app. First, I tried the code inside drawLayer:inContext: and it gave me the result I wanted.
Drawing in CALayer:
But when I decided to implement the code inside drawRect:, this happened:
Even if I draw inside the white space, the drawing is rendered outside, as shown above. The code I used is exactly the same; I copy-pasted it from drawLayer:inContext: to drawRect: and didn't change a thing, so why is this happening?
The Code:
CGContextSaveGState(ctx);
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 1.0);
CGContextSetRGBStrokeColor(ctx, 1, 0, 0, 1);
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, prevPoint.x, prevPoint.y);
for (NSValue *r in drawnPoints) {
    CGPoint pt = [r CGPointValue];
    CGContextAddLineToPoint(ctx, pt.x, pt.y);
}
CGContextStrokePath(ctx);
CGContextRestoreGState(ctx);
I see you are using the app in full-screen mode, where the view is centered and does not take up the full width of the screen.
It may be that the CALayer has a transform applied to it that translates the drawing from the left side of the screen to the center. This may not be the case with drawRect:. Try setting the CGContext's transform matrix:
CGContextSaveGState(ctx);
CGFloat xOffset = CGRectGetMidX(screenFrame) - CGRectGetMidX(viewFrame);
CGContextTranslateCTM(ctx, xOffset, 0.0f);
// rest of drawing code
// ...
CGContextRestoreGState(ctx);
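Note that screenFrame and viewFrame are not defined in the snippet above; one way they might be obtained (a sketch, assuming the code runs inside the view's own drawRect:):
CGRect screenFrame = [UIScreen mainScreen].bounds;
// The view's frame expressed in window coordinates
CGRect viewFrame = [self convertRect:self.bounds toView:nil];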

Simple way of using irregular shaped buttons

I've finally got my main app released (Tap Play MMO, check it out ;-) ) and I'm now working on expanding it.
To do this I need to have a circle that has four separate buttons in it; these buttons will essentially be quarters. I've come to the conclusion that the circular image will need to be constructed of four images, one for each quarter, but due to the necessity of rectangular image shapes I'm going to end up with some overlap, although the overlap will be transparent.
What's the best way of getting this to work? I need something really simple; I've looked at this
http://iphonedevelopment.blogspot.com/2010/03/irregularly-shaped-uibuttons.html
before, but haven't yet succeeded in getting it to work. Anyone able to offer some advice?
In case it makes any difference, I'll be deploying to an iOS 3.x framework (4.2 down the line, when 4.2 comes out for iPad).
Skip the buttons and simply respond to touches in your view that contains the circle.
Create a CGPath for each area where you want to capture touches; when your UIView receives a touch, check for membership inside the paths.
[Edited answer to show skeleton implementation details -- TomH]
Here's how I would approach the problem: (I haven't tested this code and the syntax may not be quite right, but this is the general idea)
1) Using PS or your favorite image creation application, create one PNG of the quarter circles. Add it to your Xcode project.
2) Add a UIView to the UI. Set the UIView's layer's contents to the png.
self.myView = [[UIView alloc] initWithFrame:CGRectMake(10.0, 10.0, 100.0, 100.0)];
[myView.layer setContents:(id)[UIImage imageNamed:@"my.png"].CGImage];
3) Create CGPaths that describe the region in the UIView that you are interested in.
self.quadrantOnePath = CGPathCreateMutable();
CGPathMoveToPoint(self.quadrantOnePath, NULL, 50.0, 50.0);
CGPathAddLineToPoint(self.quadrantOnePath, NULL, 100.0, 50.0);
CGPathAddArc(self.quadrantOnePath, NULL, 50.0, 50.0, 50.0, 0.0, M_PI_2, 1);
CGPathCloseSubpath(self.quadrantOnePath);
// create paths for the other 3 circle quadrants too!
4) Add a UIGestureRecognizer and listen/observe for taps in the view
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[tapRecognizer setNumberOfTapsRequired:2]; // default is 1
[self.myView addGestureRecognizer:tapRecognizer]; // don't forget to attach it to the view
5) When tapRecognizer invokes its target selector
- (void)handleGesture:(UIGestureRecognizer *)recognizer {
    CGPoint touchPoint = [recognizer locationOfTouch:0 inView:self.myView];
    bool processTouch = CGPathContainsPoint(self.quadrantOnePath, NULL, touchPoint, true);
    if (processTouch) {
        // call your method to process the touch
    }
}
Don't forget to release everything when appropriate -- use CGPathRelease to release paths.
Another thought: if the graphic that you are using to represent your circle quadrants is simply a filled color (i.e. no fancy graphics, layer effects, etc.), you could also use the paths you created to draw the quadrants in the UIView's drawRect: method. This would address one of the failings of the approach above: there isn't a tight integration between the graphic and the paths used to check for the touches. That is, if you swap out the graphic for something different, change the size of the graphic, etc., the paths used to check for touches will be out of sync; potentially a high-maintenance piece of code.
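A sketch of that drawing idea, reusing the quadrant paths from step 3 (the fill color is just a placeholder):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Fill quadrant one; repeat with the other three paths and colors
    CGContextAddPath(ctx, self.quadrantOnePath);
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillPath(ctx);
}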
I can't see why overlapping is needed.
Just create 4 buttons and give each one a slice of your image.
Edit after comment:
See this great project. One example is exactly what you want to do.
It works by checking the alpha value of the touched pixel in an overridden
pointInside:withEvent:, using a category on UIImage that adds this method:
- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
        return nil;
    }
    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = self.CGImage;
    NSUInteger width = self.size.width;
    NSUInteger height = self.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY - (CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);
    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
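For context, the overridden pointInside:withEvent: might then look something like this (a sketch; buttonImage is a hypothetical property holding the button's image):
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    UIColor *pixelColor = [self.buttonImage colorAtPixel:point]; // buttonImage is assumed
    if (pixelColor == nil) return NO;                    // point was outside the image
    return CGColorGetAlpha(pixelColor.CGColor) >= 0.1f;  // ignore nearly transparent pixels
}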
Here's an awesome project that solves the problem of irregular shaped buttons so easily:
http://christinemorris.com/2011/06/ios-irregular-shaped-buttons/

Flipping OpenGL texture

When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them?
glScalef(1.0f, -1.0f, 1.0f);
mapping the y coordinates of the textures in reverse
vertically flipping the image files manually (in Photoshop)
flipping them programmatically after loading them (I don't know how)
This is the method I'm using to load png textures, in my Utilities.m file (Objective-C):
+ (TextureImageRef)loadPngTexture:(NSString *)name {
    CFURLRef textureURL = CFBundleCopyResourceURL(
        CFBundleGetMainBundle(),
        (CFStringRef)name,
        CFSTR("png"),
        CFSTR("Textures"));
    NSAssert(textureURL, @"Texture name invalid");
    CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
    NSAssert(imageSource, @"Invalid Image Path.");
    NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
    CFRelease(textureURL);
    CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
    NSAssert(image, @"Image not created.");
    CFRelease(imageSource);
    GLuint width = CGImageGetWidth(image);
    GLuint height = CGImageGetHeight(image);
    void *data = malloc(width * height * 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSAssert(colorSpace, @"Colorspace not created.");
    CGContextRef context = CGBitmapContextCreate(
        data,
        width,
        height,
        8,
        width * 4,
        colorSpace,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    NSAssert(context, @"Context not created.");
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGImageRelease(image);
    CGContextRelease(context);
    return TextureImageCreate(width, height, data);
}
Where TextureImage is a struct that has a height, width and void *data.
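For reference, that struct and constructor could be as simple as this (a sketch; the actual definitions are not shown in the question):
typedef struct TextureImage {
    GLuint width;
    GLuint height;
    void *data;   // RGBA pixel data, width * height * 4 bytes, owned by the struct
} TextureImage;
typedef TextureImage *TextureImageRef;

TextureImageRef TextureImageCreate(GLuint width, GLuint height, void *data) {
    TextureImageRef texture = malloc(sizeof(TextureImage));
    texture->width = width;
    texture->height = height;
    texture->data = data;
    return texture;
}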
Right now I'm just playing around with OpenGL, but later I want to try making a simple 2d game. I'm using Cocoa for all the windowing and Objective-C as the language.
Also, another thing I was wondering about: if I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Thanks.
Any of those:
Flip texture during the texture load,
OR flip model texture coordinates during model load
OR set texture matrix to flip y (glMatrixMode(GL_TEXTURE)) during render.
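For instance, the texture-matrix option might look like this in legacy fixed-function OpenGL (a sketch; it assumes nothing else touches the texture matrix):
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.0f, 1.0f, 0.0f); // together with the scale below, maps t to 1 - t
glScalef(1.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);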
Also, another thing I was wondering about: if I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Depends on how you are going to render text.
Jordan Lewis pointed out that CGContextDrawImage draws an image upside down when passed a UIImage's CGImage. There I found a quick and easy solution: before calling CGContextDrawImage,
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
Does the job perfectly well.
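In the loadPngTexture method above, that would slot in right before the existing draw call, along these lines:
// ... after CGBitmapContextCreate succeeds:
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f); // flip y so rows land in OpenGL's bottom-up order
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);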

Get size of UIView after applying CGAffineTransform

I was surprised not to find an answer to this question; maybe it's something very simple that I somehow overlooked:
How do I get the real size of a UIView after I apply a CGAffineTransform to it?
E.g.
my UIView has size 300 x 200; I apply a scaling transform of, say, factor 2 both horizontally and vertically, so the UIView now takes up 600 x 400 on the screen, but its bounds and its layer's bounds still return a size of 300 x 200 ... so where do I find the real size of the UIView?
P.S. I forgot to mention that I also want to rotate the UIView. If I apply only scaling, CGSizeApplyAffineTransform works great, but when there's also rotation, it does not work properly.
Edit: drawnonward pointed me in the right direction; I just refined the code a bit so that it compiles, and here it is:
UIView *view = (your view being transformed);
CGAffineTransform trans = (view.transform or create a new transformation);
CGRect rect = [view bounds];
CGMutablePathRef path = CGPathCreateMutable();
rect.origin = CGPointZero;
CGPathAddRect(path, &trans, rect);
rect = CGPathGetBoundingBox(path);
CGPathRelease(path);
Now rect.size contains the dimensions of the view with the transformation applied.
Thanks again to drawnonward.
I use this in Objective-C:
CGRect transformedBounds = CGRectApplyAffineTransform(view.bounds, view.transform);
or in Swift 4:
let transformedBounds = view.bounds.applying(view.transform)
[myView frame] returns the frame of the view as seen by the parent, for layout and relative sizes. [myView bounds] returns the bounds of the view as seen by itself, for drawing. If you have transforms applied to multiple views, you can use convertRect: to or from a view.
Edit:
Maybe something like this.
CGRect rect = [view bounds];
CGMutablePathRef path = CGPathCreateMutable();
rect.origin = CGPointZero;
CGAffineTransform trans = [view transform];
CGPathAddRect(path, &trans, rect);
rect = CGPathGetBoundingBox(path);
CGPathRelease(path);
Then use [view center] to find the position in the superview.
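And for the multi-view case mentioned above, convertRect: can be used like this (a sketch; innerView is a placeholder name):
// Bounding box of innerView in window coordinates, accounting for the
// transforms of innerView and all of its ancestors
CGRect boxInWindow = [innerView convertRect:[innerView bounds] toView:nil];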
Old question, but I bumped into it here after searching for a solution and tons of attempts. It was simple:
view.layer.frame has all transformations applied, and you'll get the size easily from view.layer.frame.size.
-- below here is not an answer to this question --
And for my problem, I was trying to calculate the new center value after changing the layer.anchorPoint of my rotated view so that it doesn't move. I finally did it like this:
CGPoint topLeft = [self.superview convertPoint:CGPointMake(0, 0) fromView:self];
self.layer.anchorPoint = CGPointMake(0, 0);
self.center = topLeft;
And for the reverse:
CGPoint center = [self.superview convertPoint:CGPointMake(self.bounds.size.width / 2, self.bounds.size.height / 2) fromView:self];
self.layer.anchorPoint = CGPointMake(.5, .5);
self.center = center;
That's it.
Use CGSizeApplyAffineTransform(size, transform) and it will return a transformed size. There are similar CGPoint and CGRect functions as well.
Simpler: a view with (bounds) size s to which transform tr is applied has resulting size:
CGSizeMake(s.width*hypotf(tr.a, tr.b), s.height*hypotf(tr.c, tr.d))
(The transform maps the unit x vector to (a, b) and the unit y vector to (c, d); the length of each image vector is the scale factor along that axis.)
However, if the view's superview or any ancestor view has a non-identity transform applied, this size makes little sense in absolute terms.
If you want the absolute size of a view in window coordinates after any arbitrary transform has been applied to that view or its superviews, you should first compute the absolute transform matrix by composing all the view transform up to the root window, and then apply the above formula to the result.
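A sketch of that composition (assuming the chain of superviews ends at the window; view is a placeholder name):
CGAffineTransform t = CGAffineTransformIdentity;
for (UIView *v = view; v != nil; v = v.superview) {
    // Concat applies the accumulated transform first, then the ancestor's
    t = CGAffineTransformConcat(t, v.transform);
}
CGSize s = view.bounds.size;
CGSize absoluteSize = CGSizeMake(s.width * hypotf(t.a, t.b),
                                 s.height * hypotf(t.c, t.d));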
But if you apply a rotating transform, you don't get the right size from CGPathGetBoundingBox.
If you applied the CGAffineTransform to the view's layer, then the adjusted CGRect region after scale and/or translation transforms is simply view.layer.frame.