Transform on UIButton changes height - Objective-C

After applying a transform, the UIButton's height changes, and setFrame does not work after that. Please help. My code is here:
NSLog(#"BEFORE_Frame_height = %f", nameBgBtn.frame.size.height);
NSLog(#"BEFORE_Bound_height = %f", nameBgBtn.bounds.size.height);
nameBgBtn.transform = CGAffineTransformMakeRotation(degreesToRadian(rndValue));
CGRect newFrame = CGRectMake(nameBgBtn.frame.origin.x,nameBgBtn.frame.origin.y, nameBgBtn.bounds.size.width, nameBgBtn.bounds.size.height);
[nameBgBtn setFrame: newFrame];
[nameBgBtn setBounds:newFrame];
NSLog(#"After_Frame_height = %f", nameBgBtn.frame.size.height);
NSLog(#"After_Bount_height = %f", nameBgBtn.bounds.size.height);
My log output:
2013-03-07 15:30:23.887 BEFORE_Frame_height = 46.000000
2013-03-07 15:30:23.888 BEFORE_Bound_height = 46.000000
2013-03-07 15:30:23.888 After_Frame_height = 49.887489
2013-03-07 15:30:23.888 After_Bound_height = 46.000000

There is a difference between frame and bounds, especially when you are changing the transform. In your code you are mixing both, and the result is not what you expect. Step by step:
1. You apply a rotation by setting transform.
2. You create a rectangle with the origin of the new frame and the size of bounds. The bounds did not change when you set the transform.
3. You set this rect as the frame. The view does not move (same origin), but it gets scaled down, because you are changing its outer dimensions.
4. You set the same rect as the bounds. I'm not sure what happens when you set bounds.origin to a non-zero value, but the contents of the button may be translated. It also scales the button back up, because bounds.size is set to the same value as before.
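If the goal is to keep the button at its original size and position after the rotation, here is a minimal sketch of the safer approach, leaving frame alone entirely (desiredSize and desiredCenter are hypothetical stand-ins for whatever values you actually want):

nameBgBtn.transform = CGAffineTransformMakeRotation(degreesToRadian(rndValue));
// After a non-identity transform, do not read or write frame at all.
// The size goes through bounds (keep the origin at zero)...
CGRect newBounds = nameBgBtn.bounds;
newBounds.size = desiredSize;     // hypothetical target size, e.g. the original one
nameBgBtn.bounds = newBounds;
// ...and the position goes through center.
nameBgBtn.center = desiredCenter; // hypothetical target position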
To be clear:
bounds = the rect in the inner coordinate system, usually with an origin of zero (except for scroll views) and with the desired size.
frame = the rect in the superview's (outer) coordinate system, with any origin; its size equals bounds.size only while the transform is identity. The frame is calculated from center, bounds, and transform.
transform = how bounds are transformed to make frame; the mapping of inner to outer coordinates.
If you have a button with size {50, 80} and you apply a 90° rotation, bounds.size stays {50, 80} and center does not change, but the frame reflects the new transformed size {80, 50}.
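To see those numbers directly, a minimal sketch (myButton stands in for any view whose bounds.size is {50, 80}):

myButton.transform = CGAffineTransformMakeRotation(M_PI_2);  // 90° in radians
NSLog(@"bounds: %@", NSStringFromCGRect(myButton.bounds));   // size stays {50, 80}
NSLog(@"frame:  %@", NSStringFromCGRect(myButton.frame));    // size is now {80, 50}
NSLog(@"center: %@", NSStringFromCGPoint(myButton.center));  // unchanged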
I hope it's clear now.
Update: here is an image showing the difference between frame and bounds.
The dark square is bounds, the light square is frame. In the first image they have the same size; in the second image, the view has a rotation transform applied.

Related

UIView's bounds size vs frame size

Can a frame's size be different from the bounds size of a UIView?
Whenever I set either of them, I notice that both change and they are always in sync. Is there an edge case where this is not true?
Yes; for example, a transformed (e.g. rotated) view has a different (and useless) frame size.
The frame is purely a convenience, and you could live entirely without it if you had to; the bounds size and center, together, accurately and always describe the view's position and size.
Yes. Please refer to the simple difference between frame and bounds below:
The frame of a view is the rectangle, expressed as a location (x,y) and size (width,height), relative to the superview it is contained within.

The bounds of a view is the rectangle, expressed as a location (x,y) and size (width,height), relative to its own coordinate system.
bounds "describes the view’s location and size in its own coordinate system".
frame "defines the origin and dimensions of the view in the coordinate system of its superview".
So the two should differ for any view that uses a different coordinate system than its parent. The key giveaway is:
However, if the transform property contains a non-identity transform, the value of the frame property is undefined and should not be modified. In that case, you can reposition the view using the center property and adjust the size using the bounds property instead.
So that's an example Apple gives you of when frame is defined not to have a predictable relationship to bounds: whenever you've set a non-identity transform.
(source for all quotes was the UIView documentation)
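As a concrete, untransformed illustration of the two coordinate systems, a minimal sketch (the views and numbers here are arbitrary assumptions):

UIView *parent = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
UIView *child = [[UIView alloc] initWithFrame:CGRectMake(10.0, 20.0, 100.0, 50.0)];
[parent addSubview:child];
NSLog(@"frame:  %@", NSStringFromCGRect(child.frame));  // {{10, 20}, {100, 50}} in the superview's coordinates
NSLog(@"bounds: %@", NSStringFromCGRect(child.bounds)); // {{0, 0}, {100, 50}} in its own coordinates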
They are different.
Assume I have a label:
label.frame = CGRect(x: 0, y: 0, width: 200, height: 20)
Its current frame & bounds (print(label.frame, label.bounds)) are as follows:
(0.0, 0.0, 200.0, 20.0) (0.0, 0.0, 200.0, 20.0)
Note they are currently the same. It is shown in x-position, y-position, width, height (in that order).
Now I will apply a scale Y of 2 to the label like so:
label.transform = CGAffineTransform(scaleX: 1, y: 2)
Its new frame & bounds are as follows:
(0.0, -10.0, 200.0, 40.0) (0.0, 0.0, 200.0, 20.0)
Notice how its own bounds are still the same, while the frame has changed (the height went from 20 to 40, and the y-position shifted up by 10 to compensate for the 20-point increase, so the label remains centred).
This corresponds to what the other answers and the documentation say. Neither is useless; use each according to your needs.
7 years late to the party but hope this still helps others.

Coordinate computation of the image thumbnail

This is a code snippet for creating a thumbnail-sized image (from an original large image) and placing it appropriately on top of a table view cell. As I was studying the code I got stuck at the part where the thumbnail is given a position by setting its abscissa and ordinate. In the method -(void)setThumbnailDataFromImage:(UIImage *)image they set the dimensions and coordinates of the project thumbnail:
-(void)setThumbnailDataFromImage:(UIImage *)image {
    CGSize origImageSize = [image size];
    // the rectangle of the thumbnail
    CGRect newRect = CGRectMake(0, 0, 40, 40);
    // figure out a scaling ratio to make sure we maintain the same aspect ratio
    float ratio = MAX(newRect.size.width / origImageSize.width, newRect.size.height / origImageSize.height);
    // create a transparent bitmap context with a scaling factor equal to that of the screen
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    // create a path that is a rounded rectangle
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
    // make all subsequent drawing clip to this rounded rectangle
    [path addClip];
    // center the image in the thumbnail rectangle
    CGRect projectRect;
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
    // draw the image into it
    [image drawInRect:projectRect];
    // get the image from the image context, keep it as our thumbnail
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    [self setThumbnail:smallImage];
    // get the PNG representation of the image and set it as our archivable data
    NSData *data = UIImagePNGRepresentation(smallImage);
    [self setThumbnailData:data];
    // clean up image context resources; we're done
    UIGraphicsEndImageContext();
}
I got the width and height computation, wherein we multiply origImageSize by the scaling factor (ratio).
But then we use the following to give the thumbnail a position:
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
This I fail to understand; I cannot wrap my head around it.
Is this part of the centering process? I mean, are we using a mathematical relation here to position the thumbnail, or is it some arbitrary calculation that could have been anything? Am I missing some fundamental idea behind these two lines of code?
Those two lines are standard code for centering something, although they aren’t quite written in the most general way. You normally want to use:
projectRect.origin.x = newRect.origin.x + newRect.size.width / 2.0 - projectRect.size.width / 2.0;
projectRect.origin.y = newRect.origin.y + newRect.size.height / 2.0 - projectRect.size.height / 2.0;
In your case the author knows the origin is 0,0, so they omitted the first term in each line.
To center a rectangle in another rectangle, you want their centers to line up on both axes. So you take, say, half the container's width (the center of the outer rectangle) and subtract half the inner rectangle's width (which takes you to the left side of the inner rectangle), and that gives you where the inner rectangle's left side, i.e. its x origin, should be when it is correctly centered.
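Plugging numbers in may help (assuming, hypothetically, a 500×300 original image): ratio = MAX(40/500, 40/300) ≈ 0.133, so projectRect.size comes out as roughly {66.7, 40}. Then projectRect.origin.x = (40 - 66.7) / 2 ≈ -13.3 and projectRect.origin.y = (40 - 40) / 2 = 0; the overly wide image is centred by hanging about 13.3 points off each side of the 40-point thumbnail, and the rounded-rectangle clip trims the overhang. So it is exactly the centering relation, not an arbitrary calculation.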

How to find the coordinates of UIImage after using CGAffineTransformMakeRotation on an object

I have a UIImage and I rotated the image using CGAffineTransformMakeRotation to the desired angle. Now I want to find out the new coordinates of the image, that is, the new x position, y position, width, and height of the image, using the imageView.center method.
Since you are asking for x, y, width, and height, and not the four corners, I'm assuming that you want the bounding box of the rotated image. You can calculate that using a CGPath.
// Your rotation transform
CGAffineTransform rotation = CGAffineTransformMakeRotation(angle);

// Create a path from the transformed frame
CGPathRef rotatedImageRectPath =
    CGPathCreateWithRect(imageView.frame, // rect to get the path from
                         &rotation);      // transform to apply to the rect

// Get the bounding box from the path
CGRect boundingBox = CGPathGetBoundingBox(rotatedImageRectPath);

// CGPath objects are Core Foundation objects, not managed by ARC, so release the path
CGPathRelease(rotatedImageRectPath);
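From there, the x, y, width, and height the question asks about fall straight out of the box (a hedged usage sketch):

CGFloat newX = CGRectGetMinX(boundingBox);
CGFloat newY = CGRectGetMinY(boundingBox);
CGFloat newWidth = CGRectGetWidth(boundingBox);
CGFloat newHeight = CGRectGetHeight(boundingBox);
NSLog(@"rotated image: x = %f, y = %f, w = %f, h = %f", newX, newY, newWidth, newHeight);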

CGContext drawing rotated around arbitrary point

I have a really annoying issue trying to draw into a bitmap CGContext. I have a couple of images to draw into the full size of the bitmap. One can come in at any UIImageOrientation, and I've written the code to rotate that one correctly, but I'm struggling with the second bit, which is drawing another view at an arbitrary rotation about its centre.
The other view comprises an image that may be drawn outside of its bounds. What I am having a problem with is drawing these at a rotated angle as though it were a UIView with an affine transform applied to it. E.g. imagine a UIView at {100, 300} of size {20, 20} with an affine transform rotating it by 45 degrees. It would be rotated about {110, 310}.
What I have tried is this:
- (void)drawOtherViewInContext:(CGContextRef)context atRect:(CGRect)rect withRotation:(CGFloat)rotation contextSize:(CGSize)contextSize {
    CGRect thisFrame = <SOLVED_FEATURE_FRAME_RELATIVE_TO_RECT_SIZE>;
    thisFrame.origin.y = contextSize.height - thisFrame.origin.y - thisFrame.size.height;

    CGRect rotatedRect = CGRectApplyAffineTransform(CGRectMake(0.0f, 0.0f, rect.size.width, rect.size.height),
                                                    CGAffineTransformMakeRotation(-rotation));

    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
    transform = CGAffineTransformTranslate(transform,
                                           +(rotatedRect.size.width / 2.0f),
                                           +(rotatedRect.size.height / 2.0f));
    transform = CGAffineTransformRotate(transform, -rotation);
    transform = CGAffineTransformTranslate(transform,
                                           -(rect.size.width / 2.0f),
                                           -(rect.size.height / 2.0f));

    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, thisFrame, theCGImageToDraw);
    CGContextConcatCTM(context, CGAffineTransformInvert(transform));
}
So what I am doing there, I think, is this:
1. Translate to the bottom left of rect, which is where this view is meant to be drawn.
2. Translate by half the rotated size in x and y.
3. Rotate by the required angle.
4. Translate back by half the original size in x and y.
I thought that this would be what I wanted to do because the first step translates the coordinate system to be such that thisFrame is drawn correctly relative to where we're being told to draw (by the rect method parameter). Then it's a pretty normal rotate about the centre of a rectangle.
The problem is that when rotated by, say, 45 degrees, the image is drawn slightly out of place. It's almost correct, but not quite. At 0, 90, 180 or 270 degrees the position is pretty much spot on, maybe a few pixels out, but at 45, 135, 225 or 315 degrees the position is too far up and to the right.
Can anyone see what I'm doing wrong here?
Update:
Silly me, it's bigger because I was passing in the wrong rect! Edited to get rid of references to it being the wrong size. It's still not quite in the right place though.
OK, I have fixed it. The first problem was that I was passing in the wrong rect: I was grabbing the frame from a UIView which had an affine transform applied to it, and as we all know the frame is undefined in that case (more likely it's the CGRect that comes from CGRectApplyAffineTransform(bounds, transform)). Anyway, I fixed that one.
Then the main problem, the drawing being offset, was fixed by changing my transform to this:
CGAffineTransform transform = CGAffineTransformIdentity;
// Move to the bottom-left corner of rect in the flipped context coordinates.
transform = CGAffineTransformTranslate(transform, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
// Move to the centre of rect, using the ORIGINAL (unrotated) size...
transform = CGAffineTransformTranslate(transform,
                                       +(rect.size.width / 2.0f),
                                       +(rect.size.height / 2.0f));
// ...rotate about that centre...
transform = CGAffineTransformRotate(transform, -rotation);
// ...then move back by half the original size.
transform = CGAffineTransformTranslate(transform,
                                       -(rect.size.width / 2.0f),
                                       -(rect.size.height / 2.0f));
That's what I had originally thought I should be doing, but for some reason I changed it to use the rotated CGRect.
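The working sequence is the textbook construction for rotating about an arbitrary point p: translate by p, rotate, then translate back by -p. A hedged sketch of the general helper (the function name is mine, not from the original code):

static CGAffineTransform rotationAboutPoint(CGPoint p, CGFloat angle) {
    // T(p), then R(angle), then T(-p): move the origin to p, rotate, move it back.
    CGAffineTransform t = CGAffineTransformMakeTranslation(p.x, p.y);
    t = CGAffineTransformRotate(t, angle);
    return CGAffineTransformTranslate(t, -p.x, -p.y);
}

The fixed code above is this helper with p at the centre of rect, expressed in the flipped coordinates of the bitmap context.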

Behaviour of CGAffineTransformMakeScale with CGRect and with UIView

I have a view with a frame defined as (0,0,320,480).
I apply a transform to this view:
self.myView.transform = CGAffineTransformMakeScale(factor, factor);
The view scales, preserving a central position on the screen, and its frame after my change is, for example, (34, -8, 251, 376); as you can see, X and Y are now different from 0.
If I use the same transform on a CGRect of (0, 0, 320, 480):
CGAffineTransform t = CGAffineTransformMakeScale(factor,factor);
CGRect rect2 = CGRectApplyAffineTransform(rect,t);
rect2 preserves 0 for X and Y, and I obtain as a result something like (0, 0, 251, 376).
Why don't X and Y change for rect2 as they do in the UIView example?
It's true that you're not technically supposed to look at the frame property of a UIView after a transformation, but that is not really pertinent to the question you're asking.
When you apply a CGAffineTransform to a UIView, the transformation takes the anchorPoint property of the UIView's backing CALayer into consideration. From the CALayer docs on anchorPoint:
Defaults to (0.5, 0.5), the center of the bounds rectangle.
This means that when you apply that scale transform, it uses the center of the view as the anchor point, so the view scales around that location. I'm guessing if you were to set the anchor point to (0, 0), it would behave like CGRect does.
CGRect, on the other hand, is a simple C struct, and doesn't have a backing layer or an anchor point. Thus the difference in behavior.
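To test that guess, a minimal sketch (assuming the same myView and factor as in the question; changing the anchor point also shifts where the layer sits, so the position is compensated):

// Grab the original top-left corner while the transform is still identity.
CGPoint topLeft = self.myView.frame.origin;
self.myView.layer.anchorPoint = CGPointZero; // scale about the top-left corner instead of the centre
self.myView.layer.position = topLeft;        // compensate so the view does not jump
self.myView.transform = CGAffineTransformMakeScale(factor, factor);
// The frame origin should now stay at (0, 0), matching the plain CGRect behaviour.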
The UIView reference page says specifically:
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
So don't look at a view's frame after setting its transform.