How do I transfer the frame *and* transform from one UIView to another without distortion? - cocoa-touch

I have a UIView that may have scale and/or rotation transforms applied to it. My controller creates a new controller and passes the view to it. The new controller creates a new view and tries to place it in the same location and rotation as the passed view. It sets the location and size by converting the original view's frame:
CGRect frame = [self.view convertRect:fromView.frame fromView:fromView.superview];
ImageScrollView *isv = [[ImageScrollView alloc]initWithFrame:frame image:image];
This works great, with the scaled size and location copied perfectly. However, if there is a rotation transform applied to fromView, it does not transfer.
So I added this line:
isv.transform = fromView.transform;
That nicely transfers the rotation, but it also transfers the scale transform. The result is that the scale is effectively applied twice, so the resulting view is much too large.
So how do I go about transferring the location (origin), scale, and rotation from one view to another, without doubling the scale?
Edit
Here is a more complete code example, where the original UIImageView (fromView) is being used to size and position a UIScrollView (the ImageScrollView subclass):
CGRect frame = [self.view convertRect:fromView.frame fromView:fromView.superview];
frame.origin.y += pagingScrollView.frame.origin.y;
ImageScrollView *isv = [[ImageScrollView alloc]initWithFrame:frame image:image];
isv.layer.anchorPoint = fromView.layer.anchorPoint;
isv.transform = fromView.transform;
isv.bounds = fromView.bounds;
isv.center = [self.view convertPoint:fromView.center fromView:fromView.superview];
[self.view insertSubview:isv belowSubview:captionView];
And here is the entirety of the configuration in ImageScrollView:
- (id)initWithFrame:(CGRect)frame image:(UIImage *)image {
    if (self = [self initWithFrame:frame]) {
        CGRect rect = CGRectMake(0, 0, frame.size.width, frame.size.height);
        imageLoaded = YES;
        imageView = [[UIImageView alloc] initWithImage:image];
        imageView.frame = rect;
        imageView.contentMode = UIViewContentModeScaleAspectFill;
        imageView.clipsToBounds = YES;
        [self addSubview:imageView];
    }
    return self;
}
It looks as though the transform causes the imageView to scale up too large, as you can see in this ugly video.

Copy the first view's bounds, center, and transform to the second view.
Your code doesn't work because frame is a value that is derived from the bounds, center, and transform. The setter for frame tries to do the right thing by reversing the process, but it can't always work correctly when a non-identity transform is set.
The documentation is pretty clear on this point:
If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
...
If the transform property contains a non-identity transform, the value of the frame property is undefined and should not be modified. In that case, you can reposition the view using the center property and adjust the size using the bounds property instead.
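Applied to the code in the question, that looks something like this (a minimal sketch using the question's fromView, ImageScrollView, and captionView; the key point is to never read or write frame while a non-identity transform is involved):
ImageScrollView *isv = [[ImageScrollView alloc] initWithFrame:fromView.bounds image:image];
isv.bounds = fromView.bounds;   // size comes from bounds, not from a converted frame
isv.center = [self.view convertPoint:fromView.center fromView:fromView.superview];
isv.transform = fromView.transform;   // rotation and scale are applied exactly once
[self.view insertSubview:isv belowSubview:captionView];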

Let's say viewA is the first view, which has a frame and a transform applied, and you want to pass those values to viewB.
You need to get viewA's original (untransformed) frame and assign it to viewB before applying the transform. Otherwise viewB's frame will change a second time when you set the transform.
To get the original frame, temporarily set viewA.transform to CGAffineTransformIdentity.
Here is the code:
CGAffineTransform originalTransform = viewA.transform; // Remember the old transform
viewA.transform = CGAffineTransformIdentity; // Remove the transform so that you can get the original frame
viewB.frame = viewA.frame; // Assign viewA's untransformed frame to viewB
viewA.transform = originalTransform; // Restore the transform on viewA
viewB.transform = originalTransform; // At this step, the transform changes viewB's frame to match viewA's
After that, viewA and viewB will look identical in their superview.

Related

UIImageView autoresizingmask not working in certain cases

I am experimenting with a block-breaking iOS app to learn more about UI features. Currently, I am having issues trying to make it work for screen rotation.
I am able to get the blocks to re-arrange properly after screen rotation, but am having trouble getting the UIImageView for the paddle to re-arrange.
My code is split as follows: the VC initializes an object of the BlockerModel class. This object stores a CGRect property (the CGRect corresponding to the paddle's image view).
The VC then creates an image view initialized with the paddle image, sets the autoresizing mask on the image view (flexible external margins), sets its frame from the CGRect in the model object, and adds the image view as a subview of the main view handled by the VC.
The code is below.
When I rotate, I am seeing that the ImageView is not being automatically repositioned.
If I do all the image view and CGRect creation in the VC, then it works (code sample 2).
Is this expected behavior? If yes, why is autoresizing not kicking in if the CGRect is obtained from a property in another object?
Full Xcode project code is here (github link)
EDIT
Looks like things don't work if I store the imageView as a property. I was doing this to have quick access to it. Why doesn't it work if imageView is stored as a property?
Code where model is initialized
self.myModel = [[BlockerModel alloc] initWithScreenWidth:self.view.bounds.size.width andHeight:self.view.bounds.size.height];
Model initialization code
-(instancetype) initWithScreenWidth:(CGFloat)width andHeight:(CGFloat)height
{
    self = [super init];
    if (self)
    {
        self.screenWidth = width;
        self.screenHeight = height;
        UIImage* paddleImage = [UIImage imageNamed:@"paddle.png"];
        CGSize paddleSize = [paddleImage size];
        self.paddleRect = CGRectMake((self.screenWidth-paddleSize.width)/2, (1 - PADDLE_BOTTOM_OFFSET)*self.screenHeight, paddleSize.width, paddleSize.height);
    }
    return self;
}
Code in VC where imageView is initialized
self.paddleView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"paddle"]];
self.paddleView.backgroundColor = [UIColor clearColor];
self.paddleView.opaque = NO;
self.paddleView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
NSLog(#"Paddle rect is %#",NSStringFromCGRect(self.myModel.paddleRect));
[self.paddleView setFrame:self.myModel.paddleRect];
[self.view addSubview:self.paddleView];
If I instead use this code in the VC to initialize imageView things work
UIImage* paddleImage = [UIImage imageNamed:@"paddle.png"];
CGSize paddleSize = [paddleImage size];
CGRect paddleRect = CGRectMake((self.view.bounds.size.width-paddleSize.width)/2, (1 - PADDLE_BOTTOM_OFFSET)*self.view.bounds.size.height, paddleSize.width, paddleSize.height);
UIImageView *paddleView = [[UIImageView alloc] initWithImage:paddleImage];
paddleView.backgroundColor = [UIColor clearColor];
paddleView.opaque = NO;
paddleView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
[paddleView setFrame:paddleRect];
[self.view addSubview:paddleView];
Found the issue. I was using the model object to handle all my "game object location" logic. E.g. the VC would calculate the X-axis deltas from touch events and forward them to the model object. CADisplayLink events would also be forwarded so that the model could update the ball's location based on velocity and time since the last event, and then use the updated location to detect collisions. This split was used because the model class also had the methods to detect collisions with the sides, paddle, ball, etc.
The issue was that the model object was rewriting the paddleView's CGRect by adding the delta it received from the VC to the origin.x of the current paddleRect it had stored. That paddleRect did not take into account the automatic adjustment that auto-resizing makes to the CGRect after a rotation.
The fix was for the VC to set the model's paddleRect from the paddleView's frame before calling the model method that updates the game properties and detects collisions. This way the model only handles collision detection and updating ball movement and velocity, while the VC supplies the current paddleView location, which automatically accounts for the adjustment auto-resizing makes after a rotation.
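In code, the fix boils down to something like this (a sketch; updateGameState is a hypothetical name for the model's update/collision method):
// In the view controller, before asking the model to update the game:
self.myModel.paddleRect = self.paddleView.frame; // picks up auto-resizing's adjustment after rotation
[self.myModel updateGameState];                  // hypothetical: model updates ball position and checks collisions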
Source code in github link updated.

How to get object position and size in UIView objective c

I put a UIImageView in my scene from the Object library, gave it an image, and defined an outlet in the .h file. Now I want to check its coordinates, or center point, or frame X, Y, width, height.
I am using this:
CGRect newFrameSize = CGRectMake(recycleBin.frame.origin.x, recycleBin.frame.origin.y,
recycleBin.frame.size.width, recycleBin.frame.size.height);
or
CGRect newFrameSize = recycleBin.frame;
by using this
NSLog(#"%#", NSStringFromCGRect(newFrameSize));
gives the same result, which is:
2013-01-16 21:42:25.101 xyzapp[6474:c07] {{0, 0}, {0, 0}}
I want its actual position and size when the view controller loads, so that when the user taps the image view it fades out while zooming in towards the user and disappears, and when the user taps a reset button it fades back in and zooms back to its original form (the reverse of the previous animation).
Also, please give me a hint on how to perform this animation on a UIImageView, or on any button or label. Thanks.
Unfortunately, you can't check an item's actual frame as set in IB in -viewDidLoad. The earliest you can check it (that I've found) is by overriding -viewDidAppear:. But since -viewDidAppear: could be called multiple times throughout the life of the view, you need to make sure you're not saving the frame while it's in the modified state.
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    if (CGRectEqualToRect(savedFrame, CGRectZero)) {
        savedFrame = self.recycleBin.frame;
        NSLog(@"Frame: %@", NSStringFromCGRect(savedFrame));
    }
}
Where savedFrame is a member variable (or you could make it a property).
From the description of the animation you want, it sounds like adjusting the frame isn't the way to go about it. It sounds like you want the effect of the view stretching and fading out (and the reverse when being reset). If so, code like this might be more what you're looking for...
Fade out:
float animationDuration = 2.0f; // Duration of animation in seconds
float zoomScale = 3.0f; // How much to zoom in during the animation
[UIView animateWithDuration:animationDuration animations:^{
    CGAffineTransform transform = CGAffineTransformMakeScale(zoomScale, zoomScale);
    self.recycleBin.transform = transform;
    self.recycleBin.alpha = 0; // Make fully transparent
}];
And then, to reset the view:
float animationDuration = 2.0f; // Duration of animation in seconds
[UIView animateWithDuration:animationDuration animations:^{
    CGAffineTransform transform = CGAffineTransformMakeScale(1.0f, 1.0f);
    self.recycleBin.transform = transform;
    self.recycleBin.alpha = 1.0; // Make fully opaque
}];
You can play around with the numbers to see if you get the effects you desire. Most animations in iOS are actually extremely simple to do. This code would work for any UIView subclass.
It sounds as if your IBOutlet is not attached to your class.
Open up your view controller header file (if that is where your property declaration is) and look beside the declaration:
Notice how on the first IBOutlet, the circle (to the left of the line number) is filled in. This means that it is connected to your scene. However, the second one is not (the circle is not filled in).

Is a UIView animation dependent on the order of properties being set?

I have encountered something strange with UIView animations. The animation scales a subview
from a rect to fill its parent view:
//update views
CGRect startRect = ...; // A rect in parentView's coordinate space from which childView appears
UIView *parentView = ...;
UIView *childView = ...;
[parentView addSubview:childView];
//animation start state
childView.alpha = 0;
childView.center = (CGPointMake( CGRectGetMidX(startRect), CGRectGetMidY(startRect)));
//TODO: set childView's transform so that it is completely contained within startRect
childView.transform = CGAffineTransformMakeScale(.25, .25);
[UIView animateWithDuration:.25 animations:^{
    childView.transform = CGAffineTransformIdentity;
    childView.alpha = 1;
    childView.frame = parentView.bounds;
}];
The above code works as expected. However, if the animation block is reordered as follows, the animation goes haywire (it scales massively and the center point ends up off screen):
[UIView animateWithDuration:.25 animations:^{
    childView.frame = parentView.bounds; //This line was after .alpha
    childView.transform = CGAffineTransformIdentity;
    childView.alpha = 1;
}];
What's going on here? Why is the order that the properties are set significant to the animation?
The order of the properties is probably relevant because the frame is undefined when the transform is not the identity transform.
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
From the documentation for frame (on UIView).
This is true for getting the frame value but I believe that it also is true for setting the frame.
I'm almost certain that the changes in your animation block happen on the view's model layer before they are animated on its presentation layer.
Since you are coming from a state where the transform is a scale, setting the frame is undefined.
In your top example the transform is set to identity before setting the frame, so it works as expected, but in the second example you set the frame before restoring the transform.
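One way to sidestep the order dependence entirely is to avoid frame inside the animation block and animate bounds and center instead, which stay well-defined even with a non-identity transform (a sketch based on the code above):
[UIView animateWithDuration:.25 animations:^{
    childView.alpha = 1;
    childView.transform = CGAffineTransformIdentity;
    // bounds and center remain valid while the transform is non-identity, so the order no longer matters
    childView.bounds = parentView.bounds;
    childView.center = CGPointMake(CGRectGetMidX(parentView.bounds), CGRectGetMidY(parentView.bounds));
}];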

how to rotate image?

I had a hard time fixing this code.
UIGestureRecognizer *tapGest = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapped:)];
tapIt = (UITapGestureRecognizer *)tapGest;
[tapIt setNumberOfTapsRequired:2];
[tapIt setNumberOfTouchesRequired:1];
tapIt.delegate = self;
[pc1 addGestureRecognizer:tapIt];
I want to rotate the image to another angle, for example 0, 90, 180, or 360 degrees.
What do I have to add?
It is a tap recognizer, so the "tapped:" method will get called whenever you tap (double tap in your case) the specific view.
So inside the tapped: method you will change the rotation of the view by applying a rotation transform, like this:
- (void)tapped:(UIGestureRecognizer *)gesture {
    // Rotate the view that was tapped to 90 degrees (M_PI_2 radians)
    UIView *tappedView = [gesture view];
    [tappedView setTransform:CGAffineTransformMakeRotation(M_PI_2)];
}
To rotate the view to another degree you would change the angle argument for the transform.
If you wanted the view to add 90 degrees every time it was tapped you could do this by rotating the existing transform of the view and then setting it again, like this:
CGAffineTransform newTransform = CGAffineTransformRotate([tappedView transform], M_PI_2);
[tappedView setTransform:newTransform];
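For the angles asked about in the question, the handler could simply add a quarter turn on each double tap, animated (a sketch; it assumes the recognizer is attached to the image view as above):
- (void)tapped:(UIGestureRecognizer *)gesture {
    UIView *tappedView = [gesture view];
    [UIView animateWithDuration:0.3 animations:^{
        // Add 90 degrees (M_PI_2 radians) to whatever rotation is already applied
        tappedView.transform = CGAffineTransformRotate(tappedView.transform, M_PI_2);
    }];
}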
Please use the UIImageView class: set its image property appropriately, add the UIImageView to your UIViewController's view, and then you can rotate the image view using the following method:
_imageView.transform = CGAffineTransformMakeRotation(_angle);
(Please don't forget that _angle must be in radians.)
Writing the rest of the code (where and when you obtain the _angle parameter) is left to you.

Synchronised scrolling between two instances of NSScrollView

I have two instances of NSScrollView both presenting a view on the same content. The second scroll view however has a scaled down version of the document view presented in the first scroll view. Both width and height can be individually scaled and the original width - height constraints can be lost, but this is of no importance.
I have the synchronised scrolling working, even taking into account that the second scroll view needs to align its scrolling behaviour based on the scaling. There's one little snag I've been pulling my hair out over:
As both views happily scroll along, the smaller view needs to slowly catch up with the larger view, so that they both "arrive" at the end of their documents at the same time. Right now this is not happening, and the result is that the smaller view is at "end-of-document" before the larger view.
The code for synchronised scrolling is based on the example found in Apple's documentation titled "Synchronizing Scroll Views". I have adapted the synchronizedViewContentBoundsDidChange: to the following code:
- (void) synchronizedViewContentBoundsDidChange: (NSNotification *) notification {
    // get the changed content view from the notification
    NSClipView *changedContentView = [notification object];
    // get the origin of the NSClipView of the scroll view that
    // we're watching
    NSPoint changedBoundsOrigin = [changedContentView documentVisibleRect].origin;
    // get our current origin
    NSPoint curOffset = [[self contentView] bounds].origin;
    NSPoint newOffset = curOffset;
    // scrolling is synchronized in the horizontal plane
    // so only modify the x component of the offset
    // "scale" variable will correct for difference in size between views
    NSSize ownSize = [[self documentView] frame].size;
    NSSize otherSize = [[[self synchronizedScrollView] documentView] frame].size;
    float scale = otherSize.width / ownSize.width;
    newOffset.x = floor(changedBoundsOrigin.x / scale);
    // if our synced position is different from our current
    // position, reposition our content view
    if (!NSEqualPoints(curOffset, changedBoundsOrigin)) {
        // note that a scroll view watching this one will
        // get notified here
        [[self contentView] scrollToPoint:newOffset];
        // we have to tell the NSScrollView to update its
        // scrollers
        [self reflectScrolledClipView:[self contentView]];
    }
}
How would I need to change that code so that the required effect (both scroll bars arriving at an end of document) is achieved?
EDIT: Some clarification as it was confusing when I read it back myself: The smaller view needs to slow down when scrolling the first view reaches the end. This would probably mean re-evaluating that scaling factor... but how?
EDIT 2: I changed the method based on Alex's suggestion:
NSScroller *myScroll = [self horizontalScroller];
NSScroller *otherScroll = [[self synchronizedScrollView] horizontalScroller];
//[otherScroll setFloatValue: [myScroll floatValue]];
NSLog(@"My scroller value: %f", [myScroll floatValue]);
NSLog(@"Other scroller value: %f", [otherScroll floatValue]);
// Get the changed content view from the notification.
NSClipView *changedContentView = [notification object];
// Get the origin of the NSClipView of the scroll view that we're watching.
NSPoint changedBoundsOrigin = [changedContentView documentVisibleRect].origin;
// Get our current origin.
NSPoint curOffset = [[self contentView] bounds].origin;
NSPoint newOffset = curOffset;
// Scrolling is synchronized in the horizontal plane so only modify the x component of the offset.
NSSize ownSize = [[self documentView] frame].size;
newOffset.x = floor(ownSize.width * [otherScroll floatValue]);
// If our synced position is different from our current position, reposition our content view.
if (!NSEqualPoints(curOffset, changedBoundsOrigin)) {
    // Note that a scroll view watching this one will get notified here.
    [[self contentView] scrollToPoint: newOffset];
    // We have to tell the NSScrollView to update its scrollers.
    [self reflectScrolledClipView:[self contentView]];
}
Using this method the smaller view is "overtaken" by the larger view when both scrollers reach a value of 0.7, which is not good. The larger view then scrolls past its end of document.
I think you might be approaching this in the wrong way. I think you should be getting a percentage of how far each scroll view is scrolled in relation to itself and apply that to the other view. One example of how this could be done is with NSScroller's -floatValue:
NSScroller *myScroll = [self verticalScroller];
NSScroller *otherScroll = [otherScrollView verticalScroller];
[myScroll setFloatValue:otherScroll.floatValue];
I finally figured it out. The answer from Alex was a good hint but not the full solution, as just setting the float value of a scroller doesn't do anything by itself. That value needs to be translated into the specific coordinates to which the scroll view should scroll its contents.
However, due to the difference in size of the scrolled document views, you cannot simply use this value, as the scaled-down view will be overtaken by the "normal" view at some point. This causes the normal view to scroll past its end of document.
The second part of the solution was to make the normal-sized view hold off scrolling until the scaled-down view has scrolled its own width.
The code:
// Scrolling is synchronized in the horizontal plane so only modify the x component of the offset.
NSSize ownSize = [[self documentView] frame].size;
newOffset.x = MAX(floor(ownSize.width * [otherScroll floatValue] - [self frame].size.width),0);
The waiting is achieved by subtracting the width of the scroll view from the width times the value of the scroller. When the scaled down version is still traversing its first scroll view width of pixels, this calculation will result in a negative offset. Using MAX will prevent strange effects and the original view will quietly wait until the value turns positive and then start its own scrolling. This solution also works when the user resizes the app window.
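Putting the pieces together, the adapted synchronizedViewContentBoundsDidChange: for the full-size view ends up looking roughly like this (a sketch of the approach described above, not the exact project code):
- (void)synchronizedViewContentBoundsDidChange:(NSNotification *)notification {
    NSScroller *otherScroll = [[self synchronizedScrollView] horizontalScroller];
    NSClipView *changedContentView = [notification object];
    NSPoint changedBoundsOrigin = [changedContentView documentVisibleRect].origin;
    NSPoint curOffset = [[self contentView] bounds].origin;
    NSPoint newOffset = curOffset;
    // Map the other view's relative scroll position onto our own document,
    // waiting until the scaled-down view has scrolled one visible width.
    NSSize ownSize = [[self documentView] frame].size;
    newOffset.x = MAX(floor(ownSize.width * [otherScroll floatValue] - [self frame].size.width), 0);
    if (!NSEqualPoints(curOffset, changedBoundsOrigin)) {
        [[self contentView] scrollToPoint:newOffset];
        [self reflectScrolledClipView:[self contentView]];
    }
}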