I have one view with two subviews, one of them about 10 times bigger than the other. I have a gesture recognizer on the big one (which is on top).
I want to scale the big one with the pinch gesture around an anchor point between the fingers, and I want the little one to apply that same transform around the same global anchor point, but without changing its own anchor point.
I hope I'm explaining myself clearly. Here is the code:
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    // this changes the anchor point of "big" without moving it
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a*transform.a + transform.c*transform.c);
        // this transforms "big"
        [gestureRecognizer view].transform = transform;
        // anchor point location in the little view
        CGPoint pivote = [gestureRecognizer locationInView:little];
        CGAffineTransform transform_t = CGAffineTransformConcat(CGAffineTransformMakeTranslation(-pivote.x, -pivote.y), transform);
        transform_t = CGAffineTransformConcat(transform_t, CGAffineTransformMakeTranslation(pivote.x, pivote.y));
        little.transform = transform_t;
    }
    [gestureRecognizer setScale:1];
}
But this is not working: the little view keeps jumping around and going crazy.
EDIT: More info.
Ok, this is the diagram:
The red square is the big view, the dark one is the little one. The dotted square is the main view.
The line [self adjustAnchorPointForGestureRecognizer:gestureRecognizer]; changes the big view's anchor point to the center of the pinch gesture. That works.
As I scale the big view, the small view should scale the same amount and move so it stays centered in the big view as it is now. That is, it should scale around the same anchor point as the big view.
I would like to keep those transforms to the little view in a CGAffineTransform, if possible.
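For reference, the core of the trick is to compose the scale with translations to and from the pivot, measured from the view's center (the default layer anchor point). The working code below does exactly this inline; pulled out as a reusable helper it might look like this (a minimal, untested sketch; the helper name is mine):

- A sketch, assuming the pivot is given in the little view's own, untransformed coordinates:

static CGAffineTransform ScaleAboutPivot(UIView *view, CGPoint pivotInView, CGFloat scale)
{
    // offset of the pivot from the view's center, which is where transforms are applied
    CGPoint offset = CGPointMake(pivotInView.x - view.bounds.size.width  / 2.0,
                                 pivotInView.y - view.bounds.size.height / 2.0);
    CGAffineTransform t = view.transform;
    t = CGAffineTransformTranslate(t, offset.x, offset.y);    // move the pivot onto the anchor
    t = CGAffineTransformScale(t, scale, scale);              // scale about it
    t = CGAffineTransformTranslate(t, -offset.x, -offset.y);  // move back
    return t;
}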
OK, I finally found it. I don't know if it's the best solution, but it works.
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a*transform.a + transform.c*transform.c);
        if ((scale > 0.1) && (scale < 20)) {
            [gestureRecognizer view].transform = transform;
            CGPoint anchor = [gestureRecognizer locationInView:little];
            anchor = CGPointMake(anchor.x - little.bounds.size.width/2, anchor.y - little.bounds.size.height/2);
            CGAffineTransform affineMatrix = little.transform;
            affineMatrix = CGAffineTransformTranslate(affineMatrix, anchor.x, anchor.y);
            affineMatrix = CGAffineTransformScale(affineMatrix, [gestureRecognizer scale], [gestureRecognizer scale]);
            affineMatrix = CGAffineTransformTranslate(affineMatrix, -anchor.x, -anchor.y);
            little.transform = affineMatrix;
            [eaglView setTransform:little.transform];
            [gestureRecognizer setScale:1];
        }
    }
}
That eaglView line is the real reason why I needed a CGAffineTransform and couldn't just change the anchor point: I'm sending it to OpenGL to change the model-view transform matrix.
Now it works perfectly with all three transforms (rotate, scale, translate) applied at the same time from user input.
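For context, here is a purely hypothetical sketch of what an eaglView setTransform: could do if it simply loaded the affine matrix as the OpenGL ES 1.x model-view matrix. The eaglView in this post actually stores myRotation/myScale/myTranslate separately, so this only illustrates the general mapping from a 2D affine matrix to a column-major 4x4 matrix:

// Hypothetical sketch, not the poster's actual eaglView implementation.
#import <OpenGLES/ES1/gl.h>

- (void)setTransform:(CGAffineTransform)t
{
    // Expand the 2D affine matrix into a column-major 4x4 model-view matrix.
    GLfloat m[16] = {
        (GLfloat)t.a,  (GLfloat)t.b,  0, 0,   // column 0
        (GLfloat)t.c,  (GLfloat)t.d,  0, 0,   // column 1
        0,             0,             1, 0,   // column 2
        (GLfloat)t.tx, (GLfloat)t.ty, 0, 1    // column 3 (translation)
    };
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}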
EDIT
Just a little note: it seems that when I move the view too fast, the eaglView and the UIView get out of sync. So I don't apply the transforms to the UIViews directly; I apply them using the info coming out of the eaglView, like this:
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a*transform.a + transform.c*transform.c);
        if ((scale > 0.1) && (scale < 20)) {
            //[gestureRecognizer view].transform = transform;
            CGPoint anchor = [gestureRecognizer locationInView:little];
            anchor = CGPointMake(anchor.x - little.bounds.size.width/2, anchor.y - little.bounds.size.height/2);
            CGAffineTransform affineMatrix = little.transform;
            affineMatrix = CGAffineTransformTranslate(affineMatrix, anchor.x, anchor.y);
            affineMatrix = CGAffineTransformScale(affineMatrix, [gestureRecognizer scale], [gestureRecognizer scale]);
            affineMatrix = CGAffineTransformTranslate(affineMatrix, -anchor.x, -anchor.y);
            //little.transform = affineMatrix;
            [eaglView setTransform:affineMatrix];
            [gestureRecognizer setScale:1];
            // rebuild the view transforms from the eaglView's values so they stay in sync
            transform = CGAffineTransformMakeRotation(eaglView.myRotation*M_PI/180);
            transform = CGAffineTransformConcat(CGAffineTransformMakeScale(eaglView.myScale, eaglView.myScale), transform);
            transform = CGAffineTransformConcat(transform, CGAffineTransformMakeTranslation(eaglView.myTranslate.x, -eaglView.myTranslate.y));
            little.transform = transform;
            big.transform = transform;
        }
    }
}
To scale the smaller view using the center of the pinch as the anchor point, you'll need to calculate the new position by hand:
CGRect frame = little.frame; // returns the frame based on the current transform
frame.origin.x = (frame.origin.x - pivot.x) * gestureRecognizer.scale;
frame.origin.y = (frame.origin.y - pivot.y) * gestureRecognizer.scale;
frame.size.width = frame.size.width * gestureRecognizer.scale;
frame.size.height = frame.size.height * gestureRecognizer.scale;
Then, update the transform. Personally I would do this based on the view's real position rather than transforming the current transform - I find it easier to think about. So for example:
little.transform = CGAffineTransformIdentity; // Remove the current transform
CGRect orgFrame = little.frame;
CGFloat scale = frame.size.width / orgFrame.size.width;
CGAffineTransform t = CGAffineTransformMakeScale(scale, scale);
t = CGAffineTransformConcat(t, CGAffineTransformMakeTranslation(frame.origin.x - orgFrame.origin.x, frame.origin.y - orgFrame.origin.y));
little.transform = t;
Note that I've just typed in the code off the top of my head to give you an idea. It'll need testing and debugging!
Also, some of that code can be removed if you use the scale value based on the original pinch rather than resetting it each time and then transforming the transforms.
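A rough, untested sketch of that "don't reset the scale" approach: remember the view's transform when the pinch begins and reapply the gesture's cumulative scale on every change (initialTransform would be an ivar in real code; it's a static here only to keep the sketch self-contained):

- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    static CGAffineTransform initialTransform; // ivar in real code
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        initialTransform = gestureRecognizer.view.transform;
    } else if (gestureRecognizer.state == UIGestureRecognizerStateChanged) {
        CGFloat s = gestureRecognizer.scale; // cumulative since the gesture began
        gestureRecognizer.view.transform = CGAffineTransformScale(initialTransform, s, s);
        // no [gestureRecognizer setScale:1] needed with this approach
    }
}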
Tim
I need to be able to save the frame of a UIImageView after transformation. In the example below, the original frame is set when the image is added to the superview. The user then has the ability to rotate, scale and pan the image anywhere in the gray area (the superview).
I take these images and save their coordinates to an NSDictionary (which is not the problem). The problem is that if I get the frame after the rotation, the frame is completely off. I need to be able to store the new frame along with the transform in the dictionary, so that when the user comes back to this view and the images are loaded, the frames and saved transformations are just as they intended.
Panning
CGPoint translation = [gestureRecognizer translationInView:[object superview]];
if (CGRectContainsPoint(self.frame, CGPointMake([object center].x + translation.x, [object center].y + translation.y))) {
    [object setCenter:CGPointMake([object center].x + translation.x, [object center].y + translation.y)];
    [gestureRecognizer setTranslation:CGPointZero inView:[object superview]];
}
Rotating
self.transformRotation = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
[gestureRecognizer view].transform = self.transformRotation;
if ([gestureRecognizer rotation] != 0) {
    self.rotate = [gestureRecognizer rotation];
}
[gestureRecognizer setRotation:0];
Scaling
self.transformScale = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
[gestureRecognizer view].transform = self.transformScale;
if ([gestureRecognizer scale] != 1) {
    self.scale = [gestureRecognizer scale];
}
[gestureRecognizer setScale:1];
Using the center point of the view keeps the image closer to its original saved location the first time it is loaded. Each time it is saved after that, the position stays the same, because the transform did not change during that session.
- (CGPoint)centerOnCanvas {
    CGPoint originalCenter = self.center;
    return originalCenter;
}

- (CGRect)frameOnCanvas {
    return CGRectMake(self.preTransformedFrame.origin.x,
                      self.preTransformedFrame.origin.y,
                      self.preTransformedFrame.size.width,
                      self.preTransformedFrame.size.height);
}

- (CGRect)preTransformedFrame {
    CGAffineTransform currentTransform = self.transform;
    self.transform = CGAffineTransformIdentity;
    CGRect originalFrame = self.bounds;
    self.transform = currentTransform;
    return originalFrame;
}
UPDATE: Slightly off and a little larger than the original
According to Apple's documentation for UIView's transform property:
When the value of this property is anything other than the identity transform, the value in the frame property is undefined and should be ignored.
That's the reason why you can't get the frame after changing the transform.
As I understand it, after rotating, scaling or panning, you want to save the current state so you can restore it later. In my opinion, all you need to do is save the transform and center of the UIImageView each time they change. You don't need the frame in this case.
For example, _transformTarget is your UIImageView. To save its current state you can use the method below (instead of saving into an NSDictionary, I use NSUserDefaults; you can change it to an NSDictionary):
- (void)saveCurrentState {
    [[NSUserDefaults standardUserDefaults] setObject:NSStringFromCGAffineTransform(_transformTarget.transform) forKey:@"_transformTarget.transform"];
    [[NSUserDefaults standardUserDefaults] setObject:NSStringFromCGPoint(_transformTarget.center) forKey:@"_transformTarget.center"];
}
At the end of each gesture-handling method, save the current state by calling saveCurrentState.
- (void)handlePanGesture:(UIPanGestureRecognizer *)gesture {
    CGPoint translation = [gesture translationInView:self.view];
    CGPoint newCenter = CGPointMake(_transformTarget.center.x + translation.x, _transformTarget.center.y + translation.y);
    if (CGRectContainsPoint(self.view.frame, newCenter)) {
        _transformTarget.center = newCenter;
        [gesture setTranslation:CGPointZero inView:self.view];
        [self saveCurrentState]; // Save current state when the center is changed
    }
}
- (void)handleRotationGesture:(UIRotationGestureRecognizer *)gesture {
    _transformTarget.transform = CGAffineTransformRotate(_transformTarget.transform, gesture.rotation);
    gesture.rotation = 0;
    [self saveCurrentState]; // Save current state when the transform is changed
}

- (void)handlePinchGesture:(UIPinchGestureRecognizer *)gesture {
    _transformTarget.transform = CGAffineTransformScale(_transformTarget.transform, gesture.scale, gesture.scale);
    gesture.scale = 1;
    [self saveCurrentState]; // Save current state when the transform is changed
}
Now the information about the UIImageView is saved every time it changes. The next time the user comes back, get the center and transform from your dictionary and set them again.
- (void)restoreFromSavedState {
    NSString *transformString = [[NSUserDefaults standardUserDefaults] objectForKey:@"_transformTarget.transform"];
    CGAffineTransform transform = transformString ? CGAffineTransformFromString(transformString) : CGAffineTransformIdentity;
    NSString *centerString = [[NSUserDefaults standardUserDefaults] objectForKey:@"_transformTarget.center"];
    CGPoint center = centerString ? CGPointFromString(centerString) : self.view.center;

    _transformTarget.center = center;
    _transformTarget.transform = transform;
}
Result
For more detail, you can take a look at my sample repo
https://github.com/trungducc/stackoverflow/tree/restore-view-transform
I have created a shape layer animation using Bézier paths. I added a shape layer as a sublayer of the view and drew on it. Pan, pinch and rotation gestures are added to the view with the required delegate methods. The transform values from the gestures are applied to the layer, not to the view. I have found some issues when applying the gestures:
When applying the gestures, the shape layer doesn't rotate; instead the layer moves around randomly.
When pinching, the layer zooms in/out from its top-left corner.
Please help me fix these two issues. I want to rotate and pinch from the layer's center point.
The following code segments are used to handle the gestures:
- (void)handlePanGesture:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan || recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [recognizer translationInView:recognizer.view];
        CGAffineTransform transform = CGAffineTransformTranslate(recognizer.view.transform, translation.x, translation.y);
        _pathOfShape = CGPathCreateCopyByTransformingPath(_shapeLayer.path, &transform);
        _shapeLayer.path = [UIBezierPath bezierPathWithRect:CGPathGetBoundingBox(_pathOfShape)].CGPath;
        [recognizer setTranslation:CGPointZero inView:self];
    }
}

- (void)handlePinchGesture:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan || recognizer.state == UIGestureRecognizerStateChanged) {
        CGFloat scale = [recognizer scale];
        CGAffineTransform transform = CGAffineTransformScale(recognizer.view.transform, scale, scale);
        _pathOfShape = CGPathCreateCopyByTransformingPath(_shapeLayer.path, &transform);
        _shapeLayer.path = [UIBezierPath bezierPathWithRect:CGPathGetBoundingBox(_pathOfShape)].CGPath;
        [recognizer setScale:1.0];
    }
}

- (void)handleRotationGesture:(UIRotationGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan || recognizer.state == UIGestureRecognizerStateChanged) {
        CGFloat rotation = [recognizer rotation];
        CGAffineTransform transform = CGAffineTransformRotate(recognizer.view.transform, rotation);
        _pathOfShape = CGPathCreateCopyByTransformingPath(_shapeLayer.path, &transform);
        _shapeLayer.path = [UIBezierPath bezierPathWithRect:CGPathGetBoundingBox(_pathOfShape)].CGPath;
        [recognizer setRotation:0];
    }
}
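For reference, here is a hedged, untested sketch of one way to scale the path about the center of its bounding box rather than about the origin, which is one likely source of the zoom-from-top-left behaviour described above. It also keeps the transformed path itself instead of replacing it with its bounding rectangle:

- (void)handlePinchGesture:(UIPinchGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateBegan ||
        recognizer.state == UIGestureRecognizerStateChanged) {
        // Build a transform that scales about the center of the path's bounding box.
        CGRect box = CGPathGetBoundingBox(_shapeLayer.path);
        CGPoint center = CGPointMake(CGRectGetMidX(box), CGRectGetMidY(box));
        CGAffineTransform t = CGAffineTransformMakeTranslation(center.x, center.y);
        t = CGAffineTransformScale(t, recognizer.scale, recognizer.scale);
        t = CGAffineTransformTranslate(t, -center.x, -center.y);

        // Apply it to the path and keep the transformed path, not its bounding rect.
        CGPathRef scaled = CGPathCreateCopyByTransformingPath(_shapeLayer.path, &t);
        _shapeLayer.path = scaled;
        CGPathRelease(scaled);

        [recognizer setScale:1.0];
    }
}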
I have been trying to implement a UI feature which I've seen in a few apps which use cards to display information. My current view controller looks like this:
and users are able to drag the card along the x axis to the left and right. Dragging to the right side of the screen does nothing to the scale of the card (it simply changes position), but if the user swipes it to the left I want to slowly decrease its scale depending on its x coordinate (e.g. the scale is smallest when the card is furthest to the left, getting bigger from that point until the original size is reached). If the card is dragged far enough to the left it fades out, but if the user does not drag it far enough it scales back up and moves back into the middle. Code I've tried so far:
- (void)handlePanImage:(UIPanGestureRecognizer *)sender
{
    static CGPoint originalCenter;
    if (sender.state == UIGestureRecognizerStateBegan)
    {
        originalCenter = sender.view.center;
        sender.view.alpha = 0.8;
        [sender.view.superview bringSubviewToFront:sender.view];
    }
    else if (sender.state == UIGestureRecognizerStateChanged)
    {
        CGPoint translation = [sender translationInView:self.view];
        NSLog(@"%f x %f y", translation.x, translation.y);
        sender.view.center = CGPointMake(originalCenter.x + translation.x, yOfView);
        CGAffineTransform transform = sender.view.transform;
        i -= 0.001;
        transform = CGAffineTransformScale(transform, i, i);
        //transform = CGAffineTransformRotate(transform, self.rotationAngle);
        sender.view.transform = transform;
    }
    else if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled || sender.state == UIGestureRecognizerStateFailed)
    {
        if (sender.view.center.x > 0) {
            [UIView animateWithDuration:0.2 animations:^{
                CGRect rect = [sender.view frame];
                rect.origin.x = ([self.view frame].size.width/2) - _protoypeView.frame.size.width/2;
                rect.size.height = originalHeight;
                rect.size.width = originalWidth;
                [sender.view setFrame:rect];
                i = 1.0;
            }];
        }
        [UIView animateWithDuration:0.1 animations:^{
            sender.view.alpha = 1.0;
        }];
    }
}
This seems very buggy and doesn't work properly. I also tried to change the scale according to the translation:
else if (sender.state == UIGestureRecognizerStateChanged)
{
    CGPoint translation = [sender translationInView:self.view];
    NSLog(@"%f x %f y", translation.x, translation.y);
    sender.view.center = CGPointMake(originalCenter.x + translation.x, yOfView);
    CGAffineTransform transform = sender.view.transform;
    transform = CGAffineTransformScale(transform, translation.x/100, translation.x/100);
    //transform = CGAffineTransformRotate(transform, self.rotationAngle);
    sender.view.transform = transform;
}
but the scale either gets too big or too small. Any help would be greatly appreciated :)
In the last piece of code you tried, you're doing a couple of things wrong. The scale transform will be cumulative because you never set the translation of your pan gesture recognizer back to zero. Also, if you look at the math in your transform, you'll see that a small movement of, say, -1 point would scale the view to 0.01 times its original size. You obviously don't want that. You need to add that small negative number to 1, so that an initial -1 point move scales to 0.99. Setting the panner's translation back to zero also means changing the center calculation to use sender.view.center.x rather than originalCenter.x. You also need an if statement to check whether the center is left or right of its starting position, so you know whether you should apply the scaling transform. Something like this:
- (void)handlePan:(UIPanGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateBegan) {
        originalCenter = sender.view.center;
        sender.view.alpha = 0.8;
        [sender.view.superview bringSubviewToFront:sender.view];
    } else if (sender.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [sender translationInView:self.view];
        sender.view.center = CGPointMake(sender.view.center.x + translation.x, sender.view.center.y);
        if (sender.view.center.x < originalCenter.x) {
            CGAffineTransform transform = sender.view.transform;
            transform = CGAffineTransformScale(transform, 1 + translation.x/100.0, 1 + translation.x/100.0);
            sender.view.transform = transform;
        }
        [sender setTranslation:CGPointZero inView:self.view];
    }
}
This doesn't take care of animating the view back, or fading out the view, but it should get you most of the way there.
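As a hedged sketch of that last part, one way the gesture-ended branch might look when appended inside the -handlePan: above (the -100 point threshold is an assumption, and originalCenter is the same ivar used earlier, not something defined here):

if (sender.state == UIGestureRecognizerStateEnded ||
    sender.state == UIGestureRecognizerStateCancelled) {
    BOOL draggedFarLeft = sender.view.center.x < originalCenter.x - 100.0;
    [UIView animateWithDuration:0.2 animations:^{
        if (draggedFarLeft) {
            sender.view.alpha = 0.0;                           // fade the card out
        } else {
            sender.view.center = originalCenter;               // snap back to the middle
            sender.view.transform = CGAffineTransformIdentity; // restore the original size
            sender.view.alpha = 1.0;
        }
    }];
}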
I have a circle image that circles around the screen using a path animation, and I want to detect when the user touches the moving circle. However, even though the image is moving around in a continuous circle, its frame stays in the top-left corner and doesn't move. How can I update this so that I can detect a touch on the moving image? Here is the code...
Set up the animation in viewDidLoad:
//set up animation
CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
pathAnimation.calculationMode = kCAAnimationPaced;
pathAnimation.fillMode = kCAFillModeForwards;
pathAnimation.removedOnCompletion = NO;
pathAnimation.duration = 10.0;
pathAnimation.repeatCount = 1000;
CGMutablePathRef curvedPath = CGPathCreateMutable();
//path as a circle
CGRect bounds = CGRectMake(60,170,200,200);
CGPathAddEllipseInRect(curvedPath, NULL, bounds);
//tell animation to use this path
pathAnimation.path = curvedPath;
CGPathRelease(curvedPath);
//add subview
circleView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"ball.png"]];
[testView addSubview:circleView];
//animate
[circleView.layer addAnimation:pathAnimation forKey:@"moveTheSquare"];
Touches Method:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // detect touch
    UITouch *theTouch = [touches anyObject];
    // locate and assign the touch location
    CGPoint startPoint = [theTouch locationInView:self.view];
    CGFloat x = startPoint.x;
    CGFloat y = startPoint.y;
    // create touch point
    CGPoint touchPoint = CGPointMake(x, y);
    // check to see if the touch is in the rect
    if (CGRectContainsPoint(circleView.bounds, touchPoint)) {
        NSLog(@"yes");
    }
    // check the image view position
    NSLog(@"frame x - %f, y - %f", circleView.frame.origin.x, circleView.frame.origin.y);
    NSLog(@"center x - %f, y - %f", circleView.center.x, circleView.center.y);
    NSLog(@"bounds x - %f, y - %f", circleView.bounds.origin.x, circleView.bounds.origin.y);
}
The image view just seems to stay at the top-left corner. I can't seem to figure out how to recognise whether a touch has been made on the moving ball.
Any help would be appreciated,
Chris
You need to query the presentation layer of the view, not its frame. Only the presentation layer is updated during the course of an animation...
[myImageView.layer presentationLayer]
Access the properties of this layer (origin, size, etc.) and determine whether your touch point is within its bounds.
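For example, a minimal sketch (untested) of the touch handler rewritten against the presentation layer, assuming circleView was added to testView as in the question. The presentation layer's frame is expressed in its superlayer's coordinate space, so the touch is converted to testView's coordinates:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // convert the touch into the coordinate space of circleView's superview
    CGPoint point = [touch locationInView:testView];
    // the presentation layer reflects the in-flight animated position
    CALayer *presentation = [circleView.layer presentationLayer];
    if (presentation && CGRectContainsPoint(presentation.frame, point)) {
        NSLog(@"ball touched");
    }
}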
The following code is a method I created inside a UIViewController to pop up/down a "reader" overlay on top of the controller's own view. The intention is for the reader to begin transparent, at size zero, at a specific point. "Popup" is then animated as increasing in opacity and size while shifting towards a central position in the application frame. "Popdown" is subsequently animated as the reverse, shrinking back while moving toward a specified location and fading out.
The popup code works exactly as desired. However, the popdown version (i.e. the code executed when isPopup == NO) immediately changes the bounds rather than doing so gradually. Thus, from the start, the popdown animation shows a 1-pixel-square view moving towards its destination and fading out.
- (void)popupReader:(BOOL)isPopup from:(CGPoint)loc {
    CGFloat newAlpha = 0.0f;
    CGPoint newCenter = CGPointZero;
    CGRect newBounds = CGRectZero;
    CGRect appFrame = [UIScreen mainScreen].applicationFrame;
    CGSize readerSize = [self viewSize];

    if (isPopup) {
        newAlpha = 1.0f;
        newCenter = CGPointMake(appFrame.origin.x + appFrame.size.width/2,
                                appFrame.origin.y + appFrame.size.height/2);
        newBounds = CGRectMake(0, 0, readerSize.width, readerSize.height);
        [self.view setAlpha:0.0f];
        [self.view setCenter:loc];
        [self.view setBounds:CGRectMake(0, 0, 0, 0)];
    } else {
        newCenter = loc;
        newBounds = CGRectMake(0, 0, 1, 1);
    }

    const CGFloat animDur = 0.3f;
    [UIView transitionWithView:self.view
                      duration:animDur
                       options:UIViewAnimationOptionTransitionNone|UIViewAnimationOptionCurveEaseOut
                    animations:^{
                        self.view.alpha = newAlpha;
                        self.view.center = newCenter;
                        self.view.bounds = newBounds;
                    }
                    completion:nil];
}
I've already tried animating just the frame, rather than bounds and center, but the result was identical.
Does anyone know why this is happening, and how I can overcome this problem?
Many thanks for your time.
From the docs:
UIViewAnimationOptionTransitionNone -
An option for specifying that no transition should occur.
try one of:
UIViewAnimationOptionTransitionFlipFromLeft
UIViewAnimationOptionTransitionFlipFromRight
UIViewAnimationOptionTransitionCurlUp
UIViewAnimationOptionTransitionCurlDown
Edit:
Or if you're just looking to animate those values, try
[UIView animateWithDuration:(NSTimeInterval)duration
                 animations:(void (^)(void))animations
                 completion:(void (^)(BOOL finished))completion];
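Applied to the popupReader: method above, a sketch (untested) of the animation call might look like this; it animates the alpha, center and bounds over the same duration without implying any view transition:

[UIView animateWithDuration:animDur
                      delay:0
                    options:UIViewAnimationOptionCurveEaseOut
                 animations:^{
                     self.view.alpha  = newAlpha;
                     self.view.center = newCenter;
                     self.view.bounds = newBounds;
                 }
                 completion:nil];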