Get size of UIView after applying CGAffineTransform - objective-c

I was surprised not to find an answer to this question; maybe it's something very simple that I somehow overlooked:
How do I get the real size of a UIView after I apply a CGAffineTransform to it?
E.g.
my UIView has size 300 x 200. I apply a scaling transform, let's say factor 2 both horizontally and vertically, so the UIView now takes 600 x 400 on the screen, but its bounds and its layer's bounds still return a size of 300 x 200. Where do I find the real size of the UIView?
P.S. I forgot to mention that I also want to rotate the UIView. If I apply only scaling, CGSizeApplyAffineTransform works great, but when there's also rotation it does not work properly.
Edit: drawnonward pointed me in the right direction; I just refined the code a bit so it compiles. Here it is:
UIView *view = /* your view being transformed */;
CGAffineTransform trans = /* view.transform, or create a new transform */;
CGRect rect = [view bounds];
rect.origin = CGPointZero;
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, &trans, rect);
rect = CGPathGetBoundingBox(path);
CGPathRelease(path);
Now rect.size contains the dimensions of the view with the transformation applied
Thanks again to drawnonward

I use this in Objective-C:
CGRect transformedBounds = CGRectApplyAffineTransform(view.bounds, view.transform);
or in Swift 4:
let transformedBounds = view.bounds.applying(view.transform)
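For example (a quick sanity check of mine, with assumed values): for the 300 x 200 view from the question, scaled by 2 and rotated 45 degrees, this returns the bounding box of the transformed rect, which is exactly what CGSizeApplyAffineTransform alone cannot give you.
view.transform = CGAffineTransformRotate(CGAffineTransformMakeScale(2.0, 2.0), M_PI_4);
CGRect transformed = CGRectApplyAffineTransform(view.bounds, view.transform);
NSLog(@"transformed size: %@", NSStringFromCGSize(transformed.size)); // roughly {707.1, 707.1}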

[myView frame] returns the frame of the view as seen by its parent, for layout and relative sizes. [myView bounds] returns the bounds of the view as seen by itself, for drawing. If you have transforms applied to multiple views, you can use convertRect:toView: or convertRect:fromView: to convert between them.
Edit:
Maybe something like this.
CGRect rect = [view bounds];
rect.origin = CGPointZero;
CGAffineTransform transform = [view transform];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, &transform, rect);
rect = CGPathGetBoundingBox(path);
CGPathRelease(path);
Then use [view center] to find the position in the superview.

Old question, but I bumped into it here after searching for a solution and after tons of attempts. It turned out to be simple:
view.layer.frame has all transformations applied, so you can easily get the size from view.layer.frame.size.
-- below here is not an answer to this question --
As for my own problem, I was trying to calculate the new center value after changing the layer.anchorPoint of my rotated view so that it doesn't move. I finally did it like this:
CGPoint topLeft = [self.superview convertPoint:CGPointMake(0, 0) fromView:self];
self.layer.anchorPoint = CGPointMake(0, 0);
self.center = topLeft;
And for the reverse:
CGPoint center = [self.superview convertPoint:CGPointMake(self.bounds.size.width / 2, self.bounds.size.height / 2) fromView:self];
self.layer.anchorPoint = CGPointMake(.5, .5);
self.center = center;
finally.

Use CGSizeApplyAffineTransform(size, transform) and it will return a transformed size. There are similar CGPoint and CGRect functions as well.

Simpler: A view with (bounds) size s to which transform tr is applied has resulting size:
CGSizeMake(s.width*hypotf(tr.a, tr.b), s.height*hypotf(tr.c, tr.d))
However, if the view's superview or any ancestor view has a non-identity transform applied, this size makes little sense in absolute terms.
If you want the absolute size of a view in window coordinates after any arbitrary transform has been applied to that view or its superviews, you should first compute the absolute transform matrix by composing all the view transforms up to the root window, and then apply the above formula to the result.
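A minimal sketch of that idea (my own helper, not part of the original answer; the function name is made up and it only handles plain 2D view transforms, ignoring 3D layer transforms):
static CGSize AbsoluteTransformedSize(UIView *view)
{
    // Compose the transforms of the view and all of its ancestors.
    CGAffineTransform t = CGAffineTransformIdentity;
    for (UIView *v = view; v != nil; v = v.superview) {
        t = CGAffineTransformConcat(t, v.transform);
    }
    // Apply the formula above to the composed transform.
    CGSize s = view.bounds.size;
    return CGSizeMake(s.width * hypotf(t.a, t.b), s.height * hypotf(t.c, t.d));
}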

But if you apply a rotating transform, you don't get the right size from CGPathGetBoundingBox.

If you applied the CGAffineTransform to the view's layer, then the adjusted CGRect region after scale and/or translation transforms is simply view.layer.frame.

Related

How to Combine SKTextures

I'm looking for a way to combine an SKTexture on top of another SKTexture, similar in appearance to how a textured SKSpriteNode would look when adding another textured SKSpriteNode as a child. I need it to be a single SKTexture as an end product, please.
The short of it is that you can't do it. An SKSpriteNode can only ever take one SKTexture. The only way to do this within the SpriteKit framework is to add children on top of the parent node.
Another way is to use a series of images and combine them before adding the final product as a texture to your sprite node.
CGSize mergedSize = CGSizeMake(maxWidth, maxHeight); // <- use the largest width and height for your images if they are different sizes
UIGraphicsBeginImageContextWithOptions(mergedSize, NO, 0.0f);
[textureImage1 drawInRect:CGRectMake(0, 0, maxWidth, textureImage1.size.height)];
[textureImage2 drawInRect:CGRectMake(0, 0, 75, textureImage2.size.height)];
// ... add any additional images ...
UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.texture = [SKTexture textureWithImage:mergedImage];
Make sure you have your SKSpriteNode size property set to (maxWidth, maxHeight) so nothing gets clipped.
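For example (a one-line addition of mine, assuming the code above lives in an SKSpriteNode subclass, as the use of self.texture suggests):
self.size = mergedSize; // matches (maxWidth, maxHeight) so the merged texture isn't clipped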

How do I avoid interpolation artifacts when drawing NSImage into a different size rect?

My end goal is to fill an arbitrarily sized rectangle with an NSImage. I want to:
Fill the entire rectangle
Preserve the aspect ratio of the image
Show as much as possible of the image while maintaining 1) and 2)
When not all the image can be shown, crop toward the center.
This demonstrates what I'm trying to do. The original image of the boat at the top is drawn into various sized rectangles below.
Okay, so far so good. I added a category to NSImage to do this.
@implementation NSImage (Fill)
/**
* Crops source to best fit the destination
*
* destRect is the rect in which we want to draw the image
* sourceRect is the rect of the image
*/
-(NSRect)scaleAspectFillRect:(NSRect)destRect fromRect:(NSRect)sourceRect
{
NSSize sourceSize = sourceRect.size;
NSSize destSize = destRect.size;
CGFloat sourceAspect = sourceSize.width / sourceSize.height;
CGFloat destAspect = destSize.width / destSize.height;
NSRect cropRect = NSZeroRect;
if (sourceAspect > destAspect) { // source is proportionally wider than dest
cropRect.size.height = sourceSize.height;
cropRect.size.width = cropRect.size.height * destAspect;
cropRect.origin.x = (sourceSize.width - cropRect.size.width) / 2;
} else { // dest is proportionally wider than source (or they are equal)
cropRect.size.width = sourceSize.width;
cropRect.size.height = cropRect.size.width / destAspect;
cropRect.origin.y = (sourceSize.height - cropRect.size.height) / 2;
}
return cropRect;
}
- (void)drawScaledAspectFilledInRect:(NSRect)rect
{
NSRect imageRect = NSMakeRect(0, 0, [self size].width, [self size].height);
NSRect sourceRect = [self scaleAspectFillRect:rect fromRect:imageRect];
[[NSGraphicsContext currentContext]
setImageInterpolation:NSImageInterpolationHigh];
[self drawInRect:rect
fromRect:sourceRect
operation:NSCompositeSourceOver
fraction:1.0 respectFlipped:YES hints:nil];
}
@end
When I want to draw the image into a certain rectangle I call:
[myImage drawScaledAspectFilledInRect:onScreenRect];
Works really well except for one problem. At certain sizes the image looks quite blurry:
My first thought was that I need to draw on integral pixels, so I used NSIntegralRect() before drawing. No luck.
As I thought about it I figured that it's probably a result of the interpolation. To draw from the larger image to the smaller draw rect NSImage has to interpolate. The blurry images are likely just a case where the values don't map very well and we end up with some undesirable artifacts that can't be avoided.
So, the question is this: How do I choose an optimal rect that avoids those artifacts? I can adjust either the draw rect or the crop rect before drawing to avoid this, but I don't know how or when to adjust them.
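One thing worth trying (my own suggestion, not an accepted answer): NSIntegralRect rounds to whole points, but on a Retina backing store the relevant unit is the device pixel, so snapping the destination rect to the backing store before drawing may help. backingAlignedRect:options: is an NSView method; containerView is an assumed reference to the view being drawn into.
NSRect alignedRect = [containerView backingAlignedRect:onScreenRect options:NSAlignAllEdgesNearest];
[myImage drawScaledAspectFilledInRect:alignedRect];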

ConvertRect accounting for UIScrollView zoom and contentOffset

I've been trying to get the converted CGRect of a UIView within a UIScrollView. It works fine if I'm not zoomed, but once I zoom, the new CGRect shifts. Here is the code that's gotten me close:
CGFloat zoomScale = (scrollView.zoomScale);
CGRect newRect = [self.view convertRect:widgetView.frame fromView:scrollView];
CGPoint newPoint = [self.view convertPoint:widgetView.center fromView:scrollView];
// Increase the size of the CGRect by multiplying by the zoomScale
CGSize newSize = CGSizeMake(newRect.size.width * zoomScale, newRect.size.height * zoomScale);
// Subtract the offset of the UIScrollView for proper positioning
CGPoint newCenter = CGPointMake(newPoint.x - scrollView.contentOffset.x, newPoint.y - scrollView.contentOffset.y);
// Create rect with the proper width/height (x and y set by center)
newRect = CGRectMake(0, 0, newSize.width, newSize.height);
[self.view addSubview:widgetView];
widgetView.frame = newRect;
widgetView.center = newCenter;
I'm fairly certain that my issue lies in the zoomScale - I should probably be modifying the x and y coordinates based on the zoomScale value. Everything I have tried thus far has been unsuccessful, though.
I received the following answer from user Brian2012 on the iOS dev forums:
What I did:
Created a UIScrollView that covers the view controller's main view.
Put a desktop view (a standard UIView) in the scroll view. The origin of the desktop is at 0,0 and the size is bigger than the scroll view so I could scroll around without having to zoom first.
Put some widget views (UIImageView) into the desktop view at various locations.
Set the contentSize of the scroll view to the size of the desktop view.
Implemented viewForZoomingInScrollView to return the desktop view as the view to scroll.
Put NSLogs in scrollViewDidZoom to print out the frame of the desktop view and one of the widget views.
What I found out:
The widget frame never changes from the initial value that I set. So for example, if a widget started at position 108, 108 with a size of 64x64, then the frame is always reported as 108,108,64,64 regardless of zooming or scrolling.
The desktop frame origin never changes. I put the origin of the desktop at 0,0 in the scroll view, and the origin is always reported as 0,0 regardless of zooming or scrolling.
The only thing that changes is the desktop view's frame size, and the size is just the original size multiplied by the scroll view's zoomScale.
Conclusion:
To figure out the location of a widget relative to the coordinate system of the view controller's main view, you need to do the math yourself. The convertRect method doesn't do anything useful in this case. Here's some code to try
- (CGRect)computePositionForWidget:(UIView *)widgetView fromView:(UIScrollView *)scrollView
{
CGRect frame;
float scale;
scale = scrollView.zoomScale;
// compute the widget size based on the zoom scale
frame.size.width = widgetView.frame.size.width * scale;
frame.size.height = widgetView.frame.size.height * scale;
// compute the widget position based on the zoom scale and contentOffset
frame.origin.x = widgetView.frame.origin.x * scale - scrollView.contentOffset.x + scrollView.frame.origin.x;
frame.origin.y = widgetView.frame.origin.y * scale - scrollView.contentOffset.y + scrollView.frame.origin.y;
// return the widget coordinates in the coordinate system of the view that contains the scroll view
return( frame );
}
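Hypothetical usage of that helper (overlayView is an assumed name): the returned rect is in the coordinate system of the view that contains the scroll view, so it can be used to position a sibling view on top of the zoomed widget.
CGRect onScreenFrame = [self computePositionForWidget:widgetView fromView:scrollView];
overlayView.frame = onScreenFrame;
[self.view addSubview:overlayView];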
I also had a similar kind of issue. I fixed it with a different trick: I store the zoom scale in a temporary variable, set the scroll view's zoom scale to the minimum (1.0), calculate my frame using convertRect:, and then restore the original zoom scale:
CGFloat actualZoomScal = self.baseVideoView.zoomScale;
CGPoint actualOffset = self.baseVideoView.contentOffset;
self.baseVideoView.zoomScale = 1.0;
CGRect iFrame = [[RGLayout layout] rectToPixels:[self recordingIndicatorFrame]];
self.recordIndicator = [[RiscoRecordingIndicatorView alloc] initWithFrame:[self convertRect:iFrame fromView:self.previewView]];
[self addSubview:self.recordIndicator] ;
[self bringSubviewToFront:self.recordIndicator] ;
self.baseVideoView.zoomScale = actualZoomScal;
self.baseVideoView.contentOffset = actualOffset;

How to rotate an object around an arbitrary point?

I want to rotate a UILabel around an arbitrary point in a circular manner, not in a straight line. This is my code. The final position is perfect, but the label travels in a straight line between the initial and end points.
- (void)rotateText:(UILabel *)label duration:(NSTimeInterval)duration degrees:(CGFloat)degrees {
/* Setup the animation */
[UILabel beginAnimations:nil context:NULL];
[UILabel setAnimationDuration:duration];
CGPoint rotationPoint = CGPointMake(160, 236);
CGPoint transportPoint = CGPointMake(rotationPoint.x - label.center.x, rotationPoint.y - label.center.y);
CGAffineTransform t1 = CGAffineTransformTranslate(label.transform, transportPoint.x, -transportPoint.y);
CGAffineTransform t2 = CGAffineTransformRotate(label.transform,DEGREES_TO_RADIANS(degrees));
CGAffineTransform t3 = CGAffineTransformTranslate(label.transform, -transportPoint.x, +transportPoint.y);
CGAffineTransform t4 = CGAffineTransformConcat(CGAffineTransformConcat(t1, t2), t3);
label.transform = t4;
/* Commit the changes */
[UILabel commitAnimations];
}
You should set your own anchorPoint
It's very much overkill to use a keyframe animation for what really is a change of the anchor point.
The anchor point is the point from which all transforms are applied; the default anchor point is the center. By moving the anchor point to (0, 0) you can instead make the layer rotate around its top-left corner. By setting the anchor point to something where x or y is outside the range 0.0 - 1.0 you can have the layer rotate around a point that lies outside of its bounds.
Please read the section about Layer Geometry and Transforms in the Core Animation Programming Guide for more information. It goes through this in detail with images to help you understand.
EDIT: One thing to remember
The frame of your layer (which is also the frame of your view) is calculated from the position, bounds and anchor point. Changing the anchorPoint will therefore change where your view appears on screen. You can counter this by re-setting the frame after changing the anchor point (this will set the position for you). Otherwise you can set the position yourself to the point you are rotating around. The documentation (linked to above) also mentions this.
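A small sketch of that "counter the shift" step (my own illustration; view stands for whichever view you are adjusting):
CGRect oldFrame = view.frame;                   // remember where the view currently is
view.layer.anchorPoint = CGPointMake(0.0, 0.0); // move the anchor point to the top-left corner
view.frame = oldFrame;                          // re-setting the frame recomputes the layer's position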
Applied to your code
The point you called "transportPoint" should be updated to be the difference between the rotation point and the origin of the label's frame, divided by the label's width and height.
// Pseudocode for the correct anchor point
transportPoint = ( (rotationX - labelMinX)/labelWidth,
                   (rotationY - labelMinY)/labelHeight )
I also made the rotation point an argument to your method. The full updated code is below:
#define DEGREES_TO_RADIANS(angle) (angle/180.0*M_PI)
- (void)rotateText:(UILabel *)label
aroundPoint:(CGPoint)rotationPoint
duration:(NSTimeInterval)duration
degrees:(CGFloat)degrees {
/* Setup the animation */
[UILabel beginAnimations:nil context:NULL];
[UILabel setAnimationDuration:duration];
// The anchor point is expressed in the unit coordinate
// system ((0,0) to (1,1)) of the label. Therefore the
// x and y difference must be divided by the width and
// height of the label (divide x difference by width and
// y difference by height).
CGPoint transportPoint = CGPointMake((rotationPoint.x - CGRectGetMinX(label.frame))/CGRectGetWidth(label.bounds),
(rotationPoint.y - CGRectGetMinY(label.frame))/CGRectGetHeight(label.bounds));
[label.layer setAnchorPoint:transportPoint];
[label.layer setPosition:rotationPoint]; // change the position here to keep the frame
[label.layer setTransform:CATransform3DMakeRotation(DEGREES_TO_RADIANS(degrees), 0, 0, 1)];
/* Commit the changes */
[UILabel commitAnimations];
}
I decided to post my solution as an answer. It works fine except that it doesn't have the old solution's curve animation (UIViewAnimationCurveEaseOut), but I can sort that out.
#define DEGREES_TO_RADIANS(angle) (angle / 180.0 * M_PI)
- (void)rotateText:(UILabel *)label duration:(NSTimeInterval)duration degrees:(CGFloat)degrees {
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddArc(path,nil, 160, 236, 100, DEGREES_TO_RADIANS(0), DEGREES_TO_RADIANS(degrees), YES);
CAKeyframeAnimation *theAnimation;
// animation object for the key path
theAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
theAnimation.path=path;
CGPathRelease(path);
// set the animation properties
theAnimation.duration=duration;
theAnimation.removedOnCompletion = NO;
theAnimation.autoreverses = NO;
theAnimation.rotationMode = kCAAnimationRotateAutoReverse;
theAnimation.fillMode = kCAFillModeForwards;
[label.layer addAnimation:theAnimation forKey:@"position"];
}
CAKeyframeAnimation is the right tool for this job. Most UIKit animations are between start and end points; the middle points are not considered. CAKeyframeAnimation allows you to define those middle points to provide a non-linear animation. You will have to provide the appropriate Bezier path for your animation. You should look at this example and the ones provided in the Apple documentation to see how it works.
Translate, rotate around the center, translate back.
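A minimal sketch of that approach (my own code, variable names assumed; DEGREES_TO_RADIANS is the macro defined earlier in this thread): build a single transform that first translates the rotation point onto the label's center, then rotates, then translates back.
CGPoint pivot = CGPointMake(160, 236); // the point to rotate around, in the superview's coordinates
CGPoint offset = CGPointMake(pivot.x - label.center.x, pivot.y - label.center.y);
// Each newly appended operation is applied to points first, so the effective order is:
// translate by -offset (pivot moves onto the center), rotate, translate back by +offset.
CGAffineTransform t = CGAffineTransformMakeTranslation(offset.x, offset.y);
t = CGAffineTransformRotate(t, DEGREES_TO_RADIANS(degrees));
t = CGAffineTransformTranslate(t, -offset.x, -offset.y);
label.transform = t;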

UIView frame, bounds and center

I would like to know how to use these properties in the right manner.
As I understand, frame can be used from the container of the view I am creating.
It sets the view position relative to the container view. It also sets the size of that view.
Also center can be used from the container of the view I'm creating. This property changes the position of the view relative to its container.
Finally, bounds is relative to the view itself. It changes the drawable area for the view.
Can you give more info about the relationship between frame and bounds? What about the clipsToBounds and masksToBounds properties?
Since the question I asked has been viewed many times, I will provide a detailed answer to it. Feel free to modify it if you want to add more correct content.
First, a recap on the question: frame, bounds and center, and their relationships.
Frame A view's frame (CGRect) is the position of its rectangle in the superview's coordinate system. By default it starts at the top left.
Bounds A view's bounds (CGRect) expresses a view rectangle in its own coordinate system.
Center A center is a CGPoint expressed in terms of the superview's coordinate system and it determines the position of the exact center point of the view.
Taken from UIView + position, these are the relationships among the previous properties (they don't work in code since they are informal equations):
frame.origin = center - (bounds.size / 2.0)
center = frame.origin + (bounds.size / 2.0)
frame.size = bounds.size
NOTE: These relationships do not apply if views are rotated. For further info, I suggest you take a look at the following image, taken from The Kitchen Drawer, based on the Stanford CS193p course. Credit goes to @Rhubarb.
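A quick numeric check of those relationships (my own example, assuming no transform is applied):
UIView *v = [[UIView alloc] initWithFrame:CGRectMake(30.0f, 20.0f, 400.0f, 400.0f)];
NSLog(@"frame:  %@", NSStringFromCGRect(v.frame));   // {{30, 20}, {400, 400}}
NSLog(@"bounds: %@", NSStringFromCGRect(v.bounds));  // {{0, 0}, {400, 400}}
NSLog(@"center: %@", NSStringFromCGPoint(v.center)); // {230, 220} = frame.origin + bounds.size / 2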
Using the frame allows you to reposition and/or resize a view within its superview. It is usually used from the superview, for example when you create a specific subview. For example:
// view1 will be positioned at x = 30, y = 20 starting the top left corner of [self view]
// [self view] could be the view managed by a UIViewController
UIView* view1 = [[UIView alloc] initWithFrame:CGRectMake(30.0f, 20.0f, 400.0f, 400.0f)];
view1.backgroundColor = [UIColor redColor];
[[self view] addSubview:view1];
When you need coordinates for drawing inside a view, you usually refer to bounds. A typical example is drawing a subview inside a view as an inset of the first; drawing the subview requires knowing the bounds of its superview. For example:
UIView* view1 = [[UIView alloc] initWithFrame:CGRectMake(50.0f, 50.0f, 400.0f, 400.0f)];
view1.backgroundColor = [UIColor redColor];
UIView* view2 = [[UIView alloc] initWithFrame:CGRectInset(view1.bounds, 20.0f, 20.0f)];
view2.backgroundColor = [UIColor yellowColor];
[view1 addSubview:view2];
Different behaviours happen when you change the bounds of a view.
For example, if you change the bounds size, the frame changes (and vice versa). The change happens around the center of the view. Use the code below and see what happens:
NSLog(#"Old Frame %#", NSStringFromCGRect(view2.frame));
NSLog(#"Old Center %#", NSStringFromCGPoint(view2.center));
CGRect frame = view2.bounds;
frame.size.height += 20.0f;
frame.size.width += 20.0f;
view2.bounds = frame;
NSLog(#"New Frame %#", NSStringFromCGRect(view2.frame));
NSLog(#"New Center %#", NSStringFromCGPoint(view2.center));
Furthermore, if you change the bounds origin you change the origin of the view's internal coordinate system. By default the origin is at (0.0, 0.0) (the top left corner). For example, if you change the origin for view1, you can see (comment out the previous code if you want) that the top left corner of view2 now touches that of view1. The motivation is quite simple: you tell view1 that its top left corner is now at position (20.0, 20.0), but since view2's frame origin starts at (20.0, 20.0), they coincide.
CGRect frame = view1.bounds;
frame.origin.x += 20.0f;
frame.origin.y += 20.0f;
view1.bounds = frame;
The center, by contrast, represents the view's position within its superview and, by default, describes where the midpoint of the bounds lands in the superview's coordinate system.
Finally, bounds and center are independent of each other: together they allow you to derive the frame of a view (see the previous equations).
View1's case study
Here is what happens when using the following snippet.
UIView* view1 = [[UIView alloc] initWithFrame:CGRectMake(30.0f, 20.0f, 400.0f, 400.0f)];
view1.backgroundColor = [UIColor redColor];
[[self view] addSubview:view1];
NSLog(#"view1's frame is: %#", NSStringFromCGRect([view1 frame]));
NSLog(#"view1's bounds is: %#", NSStringFromCGRect([view1 bounds]));
NSLog(#"view1's center is: %#", NSStringFromCGPoint([view1 center]));
The relative image.
This, instead, is what happens if I change the [self view] bounds as follows.
// previous code here...
CGRect rect = [[self view] bounds];
rect.origin.x += 30.0f;
rect.origin.y += 20.0f;
[[self view] setBounds:rect];
The relative image.
Here you tell [self view] that its top left corner is now at position (30.0, 20.0), but since view1's frame origin starts at (30.0, 20.0), they coincide.
Additional references (to update with other references if you want)
UIView Geometry
UIView Frames and Bounds
About clipsToBounds (source Apple doc)
Setting this value to YES causes subviews to be clipped to the bounds of the receiver. If set to NO, subviews whose frames extend beyond the visible bounds of the receiver are not clipped. The default value is NO.
In other words, if a view's frame is (0, 0, 100, 100) and its subview is (90, 90, 30, 30), you will see only a part of that subview. The latter won't exceed the bounds of the parent view.
masksToBounds is equivalent to clipsToBounds, but instead of a UIView the property is applied to a CALayer. Under the hood, clipsToBounds calls masksToBounds. For further reference, take a look at How is the relation between UIView's clipsToBounds and CALayer's masksToBounds?.
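A minimal illustration of the equivalence (parentView is an assumed name; either line clips subviews to the parent's bounds):
parentView.clipsToBounds = YES;        // the UIView-level property...
parentView.layer.masksToBounds = YES;  // ...and the CALayer property it maps to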
This question already has a good answer, but I want to supplement it with some more pictures. My full answer is here.
To help me remember frame, I think of a picture frame on a wall. Just like a picture can be moved anywhere on the wall, the coordinate system of a view's frame is the superview. (wall=superview, frame=view)
To help me remember bounds, I think of the bounds of a basketball court. The basketball is somewhere within the court just like the coordinate system of the view's bounds is within the view itself. (court=view, basketball/players=content inside the view)
Like the frame, view.center is also in the coordinates of the superview.
Frame vs Bounds - Example 1
The yellow rectangle represents the view's frame. The green rectangle represents the view's bounds. The red dot in both images represents the origin of the frame or bounds within their coordinate systems.
Frame
origin = (0, 0)
width = 80
height = 130
Bounds
origin = (0, 0)
width = 80
height = 130
Example 2
Frame
origin = (40, 60) // That is, x=40 and y=60
width = 80
height = 130
Bounds
origin = (0, 0)
width = 80
height = 130
Example 3
Frame
origin = (20, 52) // These are just rough estimates.
width = 118
height = 187
Bounds
origin = (0, 0)
width = 80
height = 130
Example 4
This is the same as example 2, except this time the whole content of the view is shown as it would look if it weren't clipped to the bounds of the view.
Frame
origin = (40, 60)
width = 80
height = 130
Bounds
origin = (0, 0)
width = 80
height = 130
Example 5
Frame
origin = (40, 60)
width = 80
height = 130
Bounds
origin = (280, 70)
width = 80
height = 130
Again, see here for my answer with more details.
I found this image most helpful for understanding frame, bounds, etc.
Also please note that frame.size != bounds.size when the view is rotated.
I think that if you look at it from the point of view of CALayer, everything becomes clearer.
Frame is not really a distinct property of the view or layer at all; it is a virtual property, computed from the bounds, position (UIView's center), and transform.
So how the layer/view is laid out is really determined by these three properties (plus anchorPoint), and none of these three properties changes any of the others; for example, changing the transform does not change the bounds.
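A sketch of that derivation (my own illustration, assuming an identity transform; with a non-identity transform the frame becomes the bounding box of the transformed bounds, as discussed in the first question above):
CALayer *layer = someView.layer; // someView is an assumed reference
CGRect derivedFrame;
derivedFrame.size = layer.bounds.size;
derivedFrame.origin.x = layer.position.x - layer.anchorPoint.x * layer.bounds.size.width;
derivedFrame.origin.y = layer.position.y - layer.anchorPoint.y * layer.bounds.size.height;
// derivedFrame now equals someView.frame (and someView.layer.frame)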
There are very good answers with detailed explanations in this post. I would just like to point out that there is another explanation, with visual representations, of the meaning of frame, bounds, center, transform and bounds origin in the WWDC 2011 video Understanding UIKit Rendering, starting at 4:22 and running until 20:10.
After reading the above answers, here is my own interpretation.
Suppose you are browsing online: the web browser is your frame, which decides where and how big to show the webpage. The browser's scroll position is your bounds.origin, which decides which part of the webpage will be shown. bounds.origin is hard to understand; the best way to learn is to create a Single View Application, modify these parameters, and see how the subviews change.
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
UIView *view1 = [[UIView alloc] initWithFrame:CGRectMake(100.0f, 200.0f, 200.0f, 400.0f)];
[view1 setBackgroundColor:[UIColor redColor]];
UIView *view2 = [[UIView alloc] initWithFrame:CGRectInset(view1.bounds, 20.0f, 20.0f)];
[view2 setBackgroundColor:[UIColor yellowColor]];
[view1 addSubview:view2];
[[self view] addSubview:view1];
NSLog(#"Old view1 frame %#, bounds %#, center %#", NSStringFromCGRect(view1.frame), NSStringFromCGRect(view1.bounds), NSStringFromCGPoint(view1.center));
NSLog(#"Old view2 frame %#, bounds %#, center %#", NSStringFromCGRect(view2.frame), NSStringFromCGRect(view2.bounds), NSStringFromCGPoint(view2.center));
// Modify this part.
CGRect bounds = view1.bounds;
bounds.origin.x += 10.0f;
bounds.origin.y += 10.0f;
// in case you need width, height
//bounds.size.height += 20.0f;
//bounds.size.width += 20.0f;
view1.bounds = bounds;
NSLog(#"New view1 frame %#, bounds %#, center %#", NSStringFromCGRect(view1.frame), NSStringFromCGRect(view1.bounds), NSStringFromCGPoint(view1.center));
NSLog(#"New view2 frame %#, bounds %#, center %#", NSStringFromCGRect(view2.frame), NSStringFromCGRect(view2.bounds), NSStringFromCGPoint(view2.center));