How to find the coordinates of a UIImage after using CGAffineTransformMakeRotation - core-animation

I have a UIImage and I rotated it to the desired angle using CGAffineTransformMakeRotation. Now I want to find the new coordinates of the image, that is, its new x position, y position, width, and height, starting from the image view's center.

Since you are asking for x, y, width, and height rather than the four corners, I'm assuming that you want the bounding box of the rotated image. You could calculate that using a CGPath.
// Your rotation transform
CGAffineTransform rotation = CGAffineTransformMakeRotation(angle);
// Create a path from the transformed frame
CGPathRef rotatedImageRectPath =
    CGPathCreateWithRect(imageView.frame, // rect to get the path from
                         &rotation);      // transform to apply to the rect
// Get the bounding box from the path
CGRect boundingBox = CGPathGetBoundingBox(rotatedImageRectPath);
// Core Foundation objects are not managed by ARC, so release the path when done
CGPathRelease(rotatedImageRectPath);
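As a side note, if all you need is the axis-aligned bounding box, CGRectApplyAffineTransform should give the same result without building a path; a minimal sketch, assuming the same angle and imageView as above:
// CGRectApplyAffineTransform returns the smallest rect that contains
// the four transformed corners of the input rect, i.e. the bounding box.
CGAffineTransform rotation = CGAffineTransformMakeRotation(angle);
CGRect boundingBox = CGRectApplyAffineTransform(imageView.frame, rotation);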

Related

Coordinate computation of the image thumbnail

This is a code snippet for creating a thumbnail-sized image (from an original large image) and placing it appropriately on top of a table view cell. As I was studying the code I got stuck at the part where the thumbnail is given a position by setting its abscissa and ordinate. In the method -(void)setThumbnailDataFromImage:(UIImage *)image they set the dimensions and coordinates for the project thumbnail:
-(void)setThumbnailDataFromImage:(UIImage *)image{
    CGSize origImageSize = [image size];
    // The rectangle of the thumbnail
    CGRect newRect = CGRectMake(0, 0, 40, 40);
    // Figure out a scaling ratio to make sure we maintain the same aspect ratio
    float ratio = MAX(newRect.size.width / origImageSize.width,
                      newRect.size.height / origImageSize.height);
    // Create a transparent bitmap context with a scaling factor equal to that of the screen
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    // Create a path that is a rounded rectangle
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
    // Make all subsequent drawing clip to this rounded rectangle
    [path addClip];
    // Center the image in the thumbnail rectangle
    CGRect projectRect;
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
    // Draw the image into it
    [image drawInRect:projectRect];
    // Get the image from the image context, keep it as our thumbnail
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    [self setThumbnail:smallImage];
    // Get the PNG representation of the image and set it as our archivable data
    NSData *data = UIImagePNGRepresentation(smallImage);
    [self setThumbnailData:data];
    // Clean up image context resources; we're done
    UIGraphicsEndImageContext();
}
I understand the width and height computation, where we multiply origImageSize by the scaling ratio.
But then we use the following to give the thumbnail a position:
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
This I fail to understand; I cannot wrap my head around it.
Is this part of the centering process? I mean, are we using a mathematical relation here to position the thumbnail, or is it some arbitrary calculation that could have been anything? Am I missing some fundamental idea behind these two lines of code?
Those two lines are standard code for centering something, although they aren’t quite written in the most general way. You normally want to use:
projectRect.origin.x = newRect.origin.x + newRect.size.width / 2.0 - projectRect.size.width / 2.0;
projectRect.origin.y = newRect.origin.y + newRect.size.height / 2.0 - projectRect.size.height / 2.0;
In your case the author knows the origin is 0,0, so they omitted the first term in each line.
To center a rectangle in another rectangle you want their centers to line up on each axis. So you take, say, half the container's width (the center of the outer rectangle) and subtract half the inner rectangle's width (which takes you from that center to the inner rectangle's left side); that gives you where the inner rectangle's left side, i.e. its x origin, should be when it is correctly centered.
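As a quick worked example (the 30x20 inner size is made up just for illustration):
// Hypothetical numbers: center a 30x20 rect inside the 40x40 thumbnail rect.
CGRect newRect = CGRectMake(0, 0, 40, 40);
CGRect projectRect = CGRectMake(0, 0, 30, 20);
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;   // (40 - 30) / 2 = 5
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2; // (40 - 20) / 2 = 10
// projectRect is now {{5, 10}, {30, 20}}: both rects share the center point (20, 20).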

Transform UIButton changes height

After applying a transform the UIButton's height changes, and setFrame does not work after that. Help me. My code is here:
NSLog(@"BEFORE_Frame_height = %f", nameBgBtn.frame.size.height);
NSLog(@"BEFORE_Bound_height = %f", nameBgBtn.bounds.size.height);
nameBgBtn.transform = CGAffineTransformMakeRotation(degreesToRadian(rndValue));
CGRect newFrame = CGRectMake(nameBgBtn.frame.origin.x, nameBgBtn.frame.origin.y,
                             nameBgBtn.bounds.size.width, nameBgBtn.bounds.size.height);
[nameBgBtn setFrame:newFrame];
[nameBgBtn setBounds:newFrame];
NSLog(@"After_Frame_height = %f", nameBgBtn.frame.size.height);
NSLog(@"After_Bound_height = %f", nameBgBtn.bounds.size.height);
My log output:
2013-03-07 15:30:23.887 BEFORE_Frame_height = 46.000000
2013-03-07 15:30:23.888 BEFORE_Bound_height = 46.000000
2013-03-07 15:30:23.888 After_Frame_height = 49.887489
2013-03-07 15:30:23.888 After_Bound_height = 46.000000
There is a difference between frame and bounds, especially when you are changing the transform. In your code you are mixing both, and the result is not what you expect.
You apply some rotation by setting the transform.
You create a rectangle with the origin of the new frame and the size of the bounds. The bounds did not change when you set the transform.
You set this rect as the frame. The view does not move (same origin), but it gets scaled down, because you are changing its outer dimensions.
You set the same rect as the bounds. I'm not sure what happens if you set bounds.origin to a non-zero value, but the contents of the button may be translated. It also scales the button back up, because bounds.size is set to the same value as before.
To be clear:
bounds = the rect in the view's own (inner) coordinate system, usually with a zero origin (except for scroll views) and the desired size.
frame = the rect in the superview's (outer) coordinate system, with any origin; its size only matches bounds.size when the transform is the identity. The frame is calculated from center, bounds, and transform.
transform = how the bounds are transformed to produce the frame; a mapping of inner to outer coordinates.
If you have a button of size {50, 80} and you apply a 90° rotation, bounds.size stays {50, 80} and the center does not change, but the frame reflects the new transformed size {80, 50}.
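A minimal sketch of that example, assuming an arbitrary button created in code just for logging:
// Hypothetical 50x80 button used only to illustrate frame vs. bounds.
UIButton *button = [[UIButton alloc] initWithFrame:CGRectMake(10, 10, 50, 80)];
button.transform = CGAffineTransformMakeRotation(M_PI_2);  // rotate 90 degrees
NSLog(@"bounds: %@", NSStringFromCGRect(button.bounds));   // size stays {50, 80}
NSLog(@"frame:  %@", NSStringFromCGRect(button.frame));    // size becomes {80, 50}
NSLog(@"center: %@", NSStringFromCGPoint(button.center));  // unchanged: {35, 50}
// To resize a transformed view, change bounds.size (or center), not the frame;
// the frame is derived and is unreliable while a non-identity transform is set.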
I hope it's clear now.
Update: Here is an image showing the difference between frame and bounds.
The dark square is the bounds, the light square is the frame. In the first image they have the same size. In the second image the view has a rotation transform applied.

Draw rounded linear gradient (or extended radial gradient) with CoreGraphics

I want to do some custom drawing with Core Graphics. I need a linear gradient on my view, but this view is a rounded rectangle, so I want my gradient to be rounded at the corners as well. You can see what I want to achieve in the image below:
Is this possible to implement in Core Graphics or in some other easy, programmatic way?
Thank you.
I don't think there is an API for that, but you can get the same effect if you first draw a radial gradient into a (2N+1)x(2N+1) bitmap context and then convert the image from the context to a resizable image with the left and right caps set to N, leaving a one-point-wide column in the middle to be stretched.
Pseudocode:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(2 * N + 1, 2 * N + 1), NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
// <draw the radial gradient into 'context'>
UIImage *gradientBase = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *gradientImage = [gradientBase resizableImageWithCapInsets:UIEdgeInsetsMake(0, N, 0, N)];
In case you want the image to scale vertically as well, you just have to set the caps to UIEdgeInsetsMake(N,N,N,N).
I just want to add more sample code for this technique, as some things weren't obvious to me at first. Maybe it will be useful for somebody.
So, let's say we have our custom view class, and in its drawRect: method we put this:
// Define the rect in which to draw
CGRect drawRect = self.bounds;
CGFloat gradientSize = drawRect.size.height; // The size of the original radial gradient
CGPoint center = CGPointMake(0.5f * gradientSize, 0.5f * gradientSize); // Center of the gradient
// Create the gradient in the device gray colorspace ({gray, alpha} per color stop)
CGFloat colors[4] = {0.f, 1.f, 1.f, 0.2f}; // From opaque black to mostly transparent white
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGGradientRef gradient = CGGradientCreateWithColorComponents(graySpace, colors, NULL, 2);
CGColorSpaceRelease(graySpace);
// Start an image context and draw the gradient into it
UIGraphicsBeginImageContextWithOptions(CGSizeMake(gradientSize, gradientSize), NO, 1.f);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawRadialGradient(context, gradient, center, 0.f, center, center.x, 0); // Draw the gradient
CGGradientRelease(gradient);
UIImage *gradientImage = UIGraphicsGetImageFromCurrentImageContext(); // Retrieve the image from the context
UIGraphicsEndImageContext(); // End the image context
// Leave a 2-point-wide column in the center which will be stretched to fill the whole width
gradientImage = [gradientImage resizableImageWithCapInsets:UIEdgeInsetsMake(0.f, center.x - 1.f, 0.f, center.x - 1.f)];
// Draw the image into the view's bounds
[gradientImage drawInRect:drawRect];
That's all. Also, if you're never going to change the gradient while the app is running, you would want to put everything except the last line into awakeFromNib and then, in drawRect:, just draw gradientImage into the view's bounds. Don't forget to retain gradientImage (or store it in a strong property) in that case.
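A minimal sketch of that caching approach, assuming ARC and a hypothetical helper buildGradientImage that wraps the drawing code above:
// In the class extension (hypothetical property name):
// @property (nonatomic, strong) UIImage *cachedGradientImage;

- (void)awakeFromNib {
    [super awakeFromNib];
    // Build the resizable gradient image once.
    self.cachedGradientImage = [self buildGradientImage]; // hypothetical helper wrapping the sample above
}

- (void)drawRect:(CGRect)rect {
    // Only the cheap drawing stays in drawRect:.
    [self.cachedGradientImage drawInRect:self.bounds];
}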

CGContextDrawRadialGradient not rendering alpha in PDF?

I have the following drawing code, which renders a circle with full color at the center fading to zero alpha at the edges. When drawing to the screen, it looks perfect. However, when I draw the same thing into a PDF context (CGPDFContextCreate), the whole circle comes out opaque. If I draw any other regular path into the PDF, alpha renders fine; it's just the gradient that doesn't work. Is this a bug, or am I missing something?
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
size_t num_locations = 2;
CGFloat locations[2] = { 1.0, 0.0 };
CGColorRef color = [[UIColor redColor] CGColor];
const CGFloat *k = CGColorGetComponents(color);
// Red with alpha 0.0 at the edge (location 1.0), red with alpha 1.0 at the center (location 0.0)
CGFloat components[8] = { k[0], k[1], k[2], 0.0, k[0], k[1], k[2], 1.0 };
CGGradientRef myGradient = CGGradientCreateWithColorComponents(myColorspace, components, locations, num_locations);
CGPoint c = CGPointMake(160, 160);
CGContextDrawRadialGradient(pdfContext, myGradient, c, 0, c, 60, 0);
Official response from Apple tech support:
Quartz ignores the alpha value of colors in gradients (or shadings) when capturing a gradient (or shading) to a PDF document and instead treats all colors as if they are completely opaque. In addition, Quartz ignores the global alpha in the context when it records gradients (or shadings) into a PDF document. One possible work-around is to capture a shading as bits using a bitmap context and use the resulting bits to create a CGImage that you draw through the clipping area. This produces pre-rendered gradients (or shadings) but does capture the alpha content into a PDF document. You should not perform this pre-rendering for gradients (or shadings) that don't contain alpha.
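A minimal sketch of that workaround, assuming the same myGradient and pdfContext as in the question, with the bitmap size and placement chosen arbitrarily for illustration:
// Pre-render the gradient (with alpha) into an offscreen bitmap context.
CGSize size = CGSizeMake(120, 120); // assumed size covering the 60 pt gradient radius
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef bitmapContext = UIGraphicsGetCurrentContext();
CGPoint center = CGPointMake(size.width / 2, size.height / 2);
CGContextDrawRadialGradient(bitmapContext, myGradient, center, 0, center, 60, 0);
UIImage *gradientImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Draw the pre-rendered bits into the PDF context instead of the live gradient;
// image alpha is preserved in the PDF, unlike gradient alpha.
CGRect target = CGRectMake(100, 100, size.width, size.height); // assumed placement on the page
CGContextDrawImage(pdfContext, target, gradientImage.CGImage);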

CGContext drawing rotated around arbitrary point

I have a really annoying issue trying to draw into a bitmap CGContext. I have a couple of images to draw into the full size of the output image. One can come in at any UIImageOrientation and I've written code to rotate that correctly, but I'm struggling with the second bit, which is drawing another view at an arbitrary rotation about its centre.
The other view comprises an image that may be drawn outside of its bounds. What I'm having trouble with is drawing it at a rotated angle as though it were a UIView with an affine transform applied to it. For example, imagine a UIView at {100, 300} of size {20, 20} with an affine transform rotating it by 45 degrees; it would be rotated about {110, 310}.
What I have tried is this:
- (void)drawOtherViewInContext:(CGContextRef)context atRect:(CGRect)rect withRotation:(CGFloat)rotation contextSize:(CGSize)contextSize {
    CGRect thisFrame = <SOLVED_FEATURE_FRAME_RELATIVE_TO_RECT_SIZE>;
    thisFrame.origin.y = contextSize.height - thisFrame.origin.y - thisFrame.size.height;
    CGRect rotatedRect = CGRectApplyAffineTransform(CGRectMake(0.0f, 0.0f, rect.size.width, rect.size.height),
                                                    CGAffineTransformMakeRotation(-rotation));
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
    transform = CGAffineTransformTranslate(transform,
                                           +(rotatedRect.size.width / 2.0f),
                                           +(rotatedRect.size.height / 2.0f));
    transform = CGAffineTransformRotate(transform, -rotation);
    transform = CGAffineTransformTranslate(transform,
                                           -(rect.size.width / 2.0f),
                                           -(rect.size.height / 2.0f));
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, thisFrame, theCGImageToDraw);
    CGContextConcatCTM(context, CGAffineTransformInvert(transform));
}
So what I am doing there, I think, is this:
Translate to the bottom left of rect which is where this view is meant to be drawn.
Translate by half the rotated size in x and y.
Rotate by the required angle.
Translate back half the original size in x and y.
I thought this would do what I wanted because the first step translates the coordinate system so that thisFrame is drawn correctly relative to where we're being told to draw (the rect method parameter). Then it's a pretty normal rotation about the centre of a rectangle.
The problem is that when rotated by, say, 45 degrees, the image is drawn slightly out of place. It's almost correct, but not quite. At 0, 90, 180, or 270 degrees the position is pretty much spot on, maybe a few pixels out, but at 45, 135, 225, or 315 degrees the position is too far up and to the right.
Can anyone see what I'm doing wrong here?
Update:
Silly me, it's bigger because I was passing in the wrong rect! Edited to get rid of references to it being the wrong size. It's still not quite in the right place though.
OK, I have fixed it. The first issue was that I was passing in the wrong rect: I was grabbing the frame from a UIView which had an affine transform applied to it, and as we all know the frame is undefined in that case. (More likely it's the CGRect that comes from CGRectApplyAffineTransform(bounds, transform), but anyway, I fixed that one.)
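For reference, a small sketch of what that effective frame works out to for a hypothetical someView (this matches UIKit's behaviour when the layer's anchorPoint is the default centre):
// Approximate the frame of a transformed view by transforming its bounds
// about its center, rather than reading the (undefined) frame property.
CGRect bounds = someView.bounds; // e.g. {0, 0, 20, 20}
CGRect boundsAtOrigin = CGRectOffset(bounds, -CGRectGetMidX(bounds), -CGRectGetMidY(bounds));
CGRect transformed = CGRectApplyAffineTransform(boundsAtOrigin, someView.transform);
CGRect effectiveFrame = CGRectOffset(transformed, someView.center.x, someView.center.y);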
Then the main problem, the drawing being offset, was fixed by changing my transform to this:
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
transform = CGAffineTransformTranslate(transform,
                                       +(rect.size.width / 2.0f),
                                       +(rect.size.height / 2.0f));
transform = CGAffineTransformRotate(transform, -rotation);
transform = CGAffineTransformTranslate(transform,
                                       -(rect.size.width / 2.0f),
                                       -(rect.size.height / 2.0f));
That's what I had originally thought I should be doing, but for some reason I changed it to use the rotated CGRect.
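In other words, the working recipe is the standard translate to the centre, rotate, translate back pattern; a generic sketch under the same flipped-context assumptions as the code above (the helper name is made up):
// Build a transform that rotates drawing by 'rotation' about the centre of 'rect',
// where 'rect' uses flipped (UIKit-style) coordinates in a context that is
// 'contextSize.height' points tall.
static CGAffineTransform rotationAboutRectCenter(CGRect rect, CGFloat rotation, CGSize contextSize) {
    CGAffineTransform t = CGAffineTransformIdentity;
    // Move to the rect's bottom-left corner in the context's coordinate space.
    t = CGAffineTransformTranslate(t, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
    // Rotate about the rect's centre: translate to the centre, rotate, translate back.
    t = CGAffineTransformTranslate(t, rect.size.width / 2.0f, rect.size.height / 2.0f);
    t = CGAffineTransformRotate(t, -rotation);
    t = CGAffineTransformTranslate(t, -rect.size.width / 2.0f, -rect.size.height / 2.0f);
    return t;
}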