I have a really annoying issue trying to draw into a bitmap CGContext. I have a couple of images to draw into the full size of the bitmap. One can come in at any UIImageOrientation and I've written the code to rotate that correctly, but I'm struggling with the second part, which is drawing another view at an arbitrary rotation about its centre.
The other view consists of an image that may be drawn partly outside of its bounds. What I am having a problem with is drawing it at a rotated angle, as though it were a UIView with an affine transform applied to it. For example, imagine a UIView at {100, 300} of size {20, 20} with an affine transform rotating it by 45 degrees: it would be rotated about {110, 310}.
What I have tried is this:
- (void)drawOtherViewInContext:(CGContextRef)context atRect:(CGRect)rect withRotation:(CGFloat)rotation contextSize:(CGSize)contextSize {
    CGRect thisFrame = <SOLVED_FEATURE_FRAME_RELATIVE_TO_RECT_SIZE>;
    thisFrame.origin.y = contextSize.height - thisFrame.origin.y - thisFrame.size.height;

    CGRect rotatedRect = CGRectApplyAffineTransform(CGRectMake(0.0f, 0.0f, rect.size.width, rect.size.height),
                                                    CGAffineTransformMakeRotation(-rotation));

    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
    transform = CGAffineTransformTranslate(transform,
                                           +(rotatedRect.size.width / 2.0f),
                                           +(rotatedRect.size.height / 2.0f));
    transform = CGAffineTransformRotate(transform, -rotation);
    transform = CGAffineTransformTranslate(transform,
                                           -(rect.size.width / 2.0f),
                                           -(rect.size.height / 2.0f));

    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, thisFrame, theCGImageToDraw);
    CGContextConcatCTM(context, CGAffineTransformInvert(transform));
}
So what I am doing there, I think, is this:
Translate to the bottom left of rect, which is where this view is meant to be drawn.
Translate by half the rotated size in x and y.
Rotate by the required angle.
Translate back by half the original size in x and y.
I thought this would be what I wanted because the first step translates the coordinate system so that thisFrame is drawn correctly relative to where we're being told to draw (the rect method parameter). Then it's a pretty normal rotation about the centre of a rectangle.
The problem is that when rotated by, say, 45 degrees, the image is drawn slightly out of place. It's almost correct, but just not quite. At 0, 90, 180 or 270 degrees the position is pretty much spot on, maybe a few pixels out, but at 45, 135, 225 or 315 degrees the position is too far up and to the right.
Can anyone see what I'm doing wrong here?
Update:
Silly me, it's bigger because I was passing in the wrong rect! Edited to get rid of references to it being the wrong size. It's still not quite in the right place though.
OK, I have fixed it. The first issue was that I was passing in the wrong rect, because I was grabbing the frame from a UIView which had an affine transform applied to it, and as we all know the frame in that case is undefined. (More likely it's the CGRect that comes from CGRectApplyAffineTransform(bounds, transform), but anyway, I fixed that one.)
Then the main problem of drawing offset was fixed by changing my transform to this:
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
transform = CGAffineTransformTranslate(transform,
                                       +(rect.size.width / 2.0f),
                                       +(rect.size.height / 2.0f));
transform = CGAffineTransformRotate(transform, -rotation);
transform = CGAffineTransformTranslate(transform,
                                       -(rect.size.width / 2.0f),
                                       -(rect.size.height / 2.0f));
That's what I had originally thought I should be doing, but for some reason I changed it to use the rotated CGRect.
Related
This is a code snippet for creating a thumbnail-sized image (from an original large image) and placing it appropriately on top of a table view cell. As I was studying the code I got stuck at the part where the thumbnail is given a position by setting its abscissa and ordinate. In the method -(void)setThumbnailDataFromImage:(UIImage *)image they set the dimensions and coordinates for the project thumbnail—
-(void)setThumbnailDataFromImage:(UIImage *)image{
CGSize origImageSize= [image size];
// the rectangle of the thumbnail
CGRect newRect= CGRectMake(0, 0, 40, 40);
// figure out a scaling ratio to make sure we maintain the same aspect ratio
float ratio= MAX(newRect.size.width/origImageSize.width, newRect.size.height/origImageSize.height);
// Create a transparent bitmap context with a scaling factor equal to that of the screen
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
// create a path that is a rounded rectangle
UIBezierPath *path= [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
// make all the subsequent drawing to clip to this rounded rectangle
[path addClip];
// center the image in the thumbnail rectangle
CGRect projectRect;
projectRect.size.width=ratio * origImageSize.width;
projectRect.size.height= ratio * origImageSize.height;
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
// draw the image on it
[image drawInRect:projectRect];
// get the image from the image context, keep it as our thumbnail
UIImage *smallImage= UIGraphicsGetImageFromCurrentImageContext();
[self setThumbnail:smallImage];
// get the PNG representation of the image and set it as our archivable data
NSData *data= UIImagePNGRepresentation(smallImage);
[self setThumbnailData:data];
// Cleanup image context resources, we're done
UIGraphicsEndImageContext();
}
I understand the width and height computation, where we multiply origImageSize by the scaling factor/ratio.
But then we use the following to give the thumbnail a position—
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
This I fail to understand; I can't wrap my head around it.
Is this part of the centering process? I mean, are we using a mathematical relation here to position the thumbnail, or is it some arbitrary calculation that could have been anything? Am I missing some fundamental principle behind these two lines of code?
Those two lines are standard code for centering something, although they aren’t quite written in the most general way. You normally want to use:
projectRect.origin.x = newRect.origin.x + newRect.size.width / 2.0 - projectRect.size.width / 2.0;
projectRect.origin.y = newRect.origin.y + newRect.size.height / 2.0 - projectRect.size.height / 2.0;
In your case the author knows the origin is 0,0, so they omitted the first term in each line.
To center one rectangle in another, you want their centers to line up along both axes. So you take, say, half the container's width (the center of the outer rectangle) and subtract half the inner rectangle's width (which takes you from that center to the inner rectangle's left side), and that gives you where the inner rectangle's left side (i.e. its x origin) should be when it is correctly centered.
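For a concrete feel for what those two lines do, here is a made-up worked example (the numbers are assumptions, not taken from the original question):

// A hypothetical 80x60 source image scaled to fill a 40x40 thumbnail and then centred.
CGSize origImageSize = CGSizeMake(80, 60);
CGRect newRect = CGRectMake(0, 0, 40, 40);
float ratio = MAX(newRect.size.width / origImageSize.width,
                  newRect.size.height / origImageSize.height);                // MAX(0.5, 0.667) = 0.667

CGRect projectRect;
projectRect.size.width  = ratio * origImageSize.width;                        // ~53.3
projectRect.size.height = ratio * origImageSize.height;                       // 40
projectRect.origin.x = (newRect.size.width  - projectRect.size.width)  / 2;   // (40 - 53.3) / 2 = -6.67
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;   // (40 - 40) / 2 = 0

// The image overflows about 6.67 points on each side horizontally (clipped by the
// rounded-rect path) and fits exactly vertically, so its centre lines up with the
// centre of newRect.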
The code below draws the shape shown below. One can notice that the line on the left side is thinner compared to the one on the right.
Another observation: the quad curve is not very sharp.
How can I make it look better?
- (void)drawRect:(CGRect)rect
{
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    [self drawBatteryEdges:contextRef withFinalBorderRect:rect];
}

- (void)drawBatteryEdges:(CGContextRef)contextRef withFinalBorderRect:(CGRect)batteryRect {
    CGFloat topOffset = 20.0f;
    CGFloat bottomOffset = 20.0f;
    CGFloat curveOffset = 4.0f;

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 0, topOffset);
    CGPathAddQuadCurveToPoint(path, NULL, batteryRect.size.width / 2.0, topOffset - curveOffset, batteryRect.size.width, topOffset);
    CGPathAddLineToPoint(path, NULL, batteryRect.size.width, batteryRect.size.height - bottomOffset);
    CGPathAddQuadCurveToPoint(path, NULL,
                              batteryRect.size.width / 2.0, CGPathGetCurrentPoint(path).y + curveOffset,
                              0, CGPathGetCurrentPoint(path).y);
    CGPathCloseSubpath(path);

    CGContextAddPath(contextRef, path);
    CGContextDrawPath(contextRef, kCGPathStroke);
    CGPathRelease(path); // release the path we created above
}
Wenderlich, Ray. "Core Graphics Tutorial: Lines, Rectangles, and Gradients." 15 Apr. 2013.
http://www.raywenderlich.com/32283/core-graphics-tutorial-lines-rectangles-and-gradients
Well, it turns out that when Core Graphics strokes a path, it draws the stroke on the middle of the exact edge of the path. In your case, the edge of the path is the rectangle you wish to fill. So when drawing a 1 pixel line along that edge, half of the line (1/2 pixel) will be on the inside of the rectangle, and the other half of the line (1/2 pixel) will be on the outside of the rectangle.
But of course, since there's no way to draw 1/2 a pixel, instead Core Graphics uses anti-aliasing to draw in both pixels, but just a lighter shade to give the appearance that it is only a single pixel drawn.
But you don’t want no anti-aliasing, you want just one pixel, darnit!
There are several ways to fix this:
You can use clipping to cut out the undesirable pixels
You can disable antialiasing and also modify the rectangle boundaries to make sure the stroke is where you want
You can modify the path to stroke so it takes the 1/2 pixel effect into consideration
I would suggest drawing your stroke on a half pixel, which would involve doing something like this:
CGRectMake(rect.origin.x + 0.5, rect.origin.y + 0.5, rect.size.width - 1, rect.size.height - 1);
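Applied to the drawBatteryEdges: method above, a minimal sketch (assuming a 1-point line width) would be to inset the rect by half the line width before building the path:

// Sketch: keep the 1-point stroke on half-pixel coordinates so it is not
// anti-aliased across two pixel columns.
CGFloat inset = 0.5; // half of the 1-point line width
CGRect strokeRect = CGRectInset(batteryRect, inset, inset);
CGContextSetLineWidth(contextRef, 1.0);
// Build the path using CGRectGetMinX(strokeRect) / CGRectGetMaxX(strokeRect)
// instead of 0 and batteryRect.size.width.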
After applying a transform, my UIButton's height changes and setFrame no longer works afterwards. Please help. My code is here:
NSLog(#"BEFORE_Frame_height = %f", nameBgBtn.frame.size.height);
NSLog(#"BEFORE_Bound_height = %f", nameBgBtn.bounds.size.height);
nameBgBtn.transform = CGAffineTransformMakeRotation(degreesToRadian(rndValue));
CGRect newFrame = CGRectMake(nameBgBtn.frame.origin.x,nameBgBtn.frame.origin.y, nameBgBtn.bounds.size.width, nameBgBtn.bounds.size.height);
[nameBgBtn setFrame: newFrame];
[nameBgBtn setBounds:newFrame];
NSLog(#"After_Frame_height = %f", nameBgBtn.frame.size.height);
NSLog(#"After_Bount_height = %f", nameBgBtn.bounds.size.height);
My logger:
2013-03-07 15:30:23.887 BEFORE_Frame_height = 46.000000
2013-03-07 15:30:23.888 BEFORE_Bound_height = 46.000000
2013-03-07 15:30:23.888 After_Frame_height = 49.887489
2013-03-07 15:30:23.888 After_Bound_height = 46.000000
There is a difference between frame and bounds, especially when you are changing the transform. In your code you are mixing both, and the result is not what you expect.
You apply some rotation by setting the transform.
You create a rectangle with the origin of the new frame and the size of the bounds. The bounds didn't change when you applied the transform.
You set this rect as the frame. The view does not move (same origin), but it gets scaled down, because you are changing its outer dimensions.
You set the same rect as the bounds. I'm not sure what happens if you set bounds.origin to a non-zero value, but the contents of the button may be translated. It also scales the button up, because bounds.size is set to the same value as before.
To be clear:
bounds = rect in inner coordinate system, usually origin of zero (except for scroll views) and with desired size.
frame = rect in the superview's (outer) coordinate system, with any origin; its size may differ from bounds.size when a transform is applied. The frame is calculated from center, bounds and transform.
transform = how bounds are transformed to make frame. Mapping of inner to outer coordinates.
If you have a button with size {50, 80} and you apply a 90° rotation, bounds.size will stay the same {50, 80} and the center will not change, but the frame reflects the new transformed size {80, 50}.
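A minimal sketch that shows this in code (the view name is made up):

// Sketch: log bounds and frame before and after a 90-degree rotation.
UIView *box = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 50, 80)];
NSLog(@"before: bounds %@  frame %@",
      NSStringFromCGRect(box.bounds), NSStringFromCGRect(box.frame));

box.transform = CGAffineTransformMakeRotation(M_PI_2);
NSLog(@"after:  bounds %@  frame %@",
      NSStringFromCGRect(box.bounds), NSStringFromCGRect(box.frame));

// bounds stays {0, 0, 50, 80}; the frame becomes roughly {-15, 15, 80, 50},
// the bounding box of the rotated view around the unchanged center {25, 40}.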
I hope it's clear now.
Update: Here is an image showing the difference between frame and bounds.
The dark square is the bounds, the light square is the frame. In the first image they have the same size. In the second image the view has a rotation transform applied.
I want to do a rounded rectangle outline on an NSImage and I figured that using NSBezierPath would be the best way. However, I ran into a problem: instead of drawing a nice curve, I get this:
For reasons I can't understand, NSBezierPath is drawing the rounded part with a darker color than the rest.
Here's the code I'm using (inside a drawRect: call on a custom view):
NSBezierPath* bp = [NSBezierPath bezierPathWithRoundedRect: self.bounds xRadius: 5 yRadius: 5];
[[[NSColor blackColor] colorWithAlphaComponent: 0.5] setStroke];
[bp stroke];
Any ideas?
Edit:
If I inset the path by 0.5 everything draws just fine. But why is it that I get this when I offset the path by 10 pixels (for example)?
If I understand correctly, it should draw a thin line as well...
Many rendering systems are derived from the PostScript drawing model. Core Graphics is one of these derivative systems. (Here are some others: PDF, SVG, the HTML Canvas 2D Context, Cairo.)
All of these systems have the idea of stroking a path with a line of some fixed width. When you stroke the path, the line straddles the path: half of the line's width is on one side of the path, and half of the line's width is on the other side. Here's a diagram that may make this clearer:
Now, what happens when you stroke a path that lies along the boundary of your view? Half of the stroke falls outside of your view's bounds and is clipped away - not drawn. You only see the half of the stroke that falls inside the view's bounds.
When you use a rounded corner, that corner pulls away from the view's boundary, toward its center, so more of the stroke around the corner falls inside the view's boundary. So the stroke appears to get thicker around the rounded corner, like this:
To fix this, you need to inset your path by half the line width, so that the entire stroke falls inside your view's bounds along the entire path. The default line width is 1.0, so:
NSBezierPath* bp = [NSBezierPath bezierPathWithRoundedRect:
    NSInsetRect(self.bounds, 0.5, 0.5) xRadius:5 yRadius:5];
On iOS, just reduce the radius of the circle (by half the line width) to prevent the stroke from being clipped.
UIBezierPath *roundPath = [UIBezierPath bezierPath];
[roundPath addArcWithCenter:CGPointMake(self.frame.size.width / 2, self.frame.size.height / 2)
                     radius:(self.frame.size.width / 2 - 0.5)
                 startAngle:M_PI_2
                   endAngle:M_PI * 3 / 2.f
                  clockwise:YES];
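To actually see that arc you would still need to set a stroke colour and stroke the path; a minimal sketch:

// Sketch: stroke the arc built above with a 1-point line.
roundPath.lineWidth = 1.0;
[[UIColor blackColor] setStroke];
[roundPath stroke];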
I am using CGAffineTransformMake to flip a UIImageView vertically. It works fine, but it does not seem to save the new flipped position of the UIImageView, because when I try to flip it a second time (executing the line of code below) it just does not work.
shape.transform = CGAffineTransformMake(1, 0, 0, 1, 0, 0);
help please.
Thanks in advance.
Kedar
Transforms are not automatically additive/accumulative as you would expect. Assigning a transform just transforms the target once.
Each transform is highly specific. If you apply a rotation transform that rotates a view +45 degrees, you will see it rotate only once. Applying the same transform again does not rotate the view an additional +45 degrees. All subsequent applications of the same transform produce no visible effect, because the view is already rotated +45 degrees and that is all that transform will ever do.
To make transforms accumulative you have to apply the new transform to the existing transform instead of just replacing it. So, as mentioned previously, for each subsequent rotation you use:
shape.transform = CGAffineTransformRotate(shape.transform, M_PI);
This concatenates the new transform with the existing transform. If you add a +45 degree rotation in this manner, the view will rotate an additional +45 degrees each time it is applied.
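For the vertical flip in the original question, the same idea might look like this (a sketch, assuming the view is called shape as above):

// Sketch: each assignment builds on the existing transform, so the flips accumulate.
shape.transform = CGAffineTransformScale(shape.transform, 1.0, -1.0); // flip vertically
shape.transform = CGAffineTransformScale(shape.transform, 1.0, -1.0); // flip back again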
I had the same problem and found a solution! I wanted to rotate the UIImageView, because I will have an animation. To save the rotated image I use this method:
void CGContextConcatCTM(CGContextRef c, CGAffineTransform transform)
The transform parameter is the transform of your UIImageView, so anything you have done to the image view will be applied to the image in the same way. I have written a category method on UIImage:
- (UIImage *)imageRotateByTransform:(CGAffineTransform)transform {
    // calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.size.width, self.size.height)];
    rotatedViewBox.transform = transform;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // Move the origin to the middle of the image so we will rotate and scale around the center.
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);

    // Rotate the image context using the transform
    CGContextConcatCTM(bitmap, transform);

    // Now, draw the rotated/scaled image into the context
    // (flip the y-axis because CGContextDrawImage uses a lower-left origin)
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
Hope this will help you.
If you just want to reverse the effects of a previous transformation, you may like to look into setting the shape.transform property to the value CGAffineTransformIdentity.
When you set a view's transform property you are replacing any existing transform it has, not adding to it. So if you assign a transform which causes a rotation, it will forget about any flip you had previously configured.
If you want to add an additional rotation or scaling operation to a view which you have previously transformed you should investigate the functions which allow you to specify an existing transform.
I.e. instead of using
shape.transform = CGAffineTransformMakeRotation(M_PI);
which replaces the existing transform with the specified rotation, you could use
shape.transform = CGAffineTransformRotate(shape.transform, M_PI);
this applies the rotation to the existing transform (what ever that may be) and then assigns it to the view. Take a look at Apple's documentation for CGAffineTransformRotate, it may clarify things a little.
BTW, the documentation says: "If you don’t plan to reuse an affine transform, you may want to use CGContextScaleCTM, CGContextRotateCTM, CGContextTranslateCTM, or CGContextConcatCTM."
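In other words, when you are only transforming drawing inside a context (as in the earlier bitmap examples), a sketch using those CTM functions directly might look like this (context and center are placeholder names):

// Sketch: rotate subsequent drawing 45 degrees about an arbitrary point using
// the CTM functions, instead of composing a CGAffineTransform first.
CGContextSaveGState(context);
CGContextTranslateCTM(context, center.x, center.y);
CGContextRotateCTM(context, M_PI_4);
CGContextTranslateCTM(context, -center.x, -center.y);
// ...draw here; everything is rotated about center...
CGContextRestoreGState(context);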