How to apply a view's gesture transform values to a shape layer - objective-c

I have created a shape layer animation using Bézier paths: I added a shape layer as a sublayer of a view and drew on it. Pan, pinch and rotation gestures are attached to the view with the required delegate methods, and the transform values from the gestures are applied to the layer, not to the view. I have found two issues when applying the gestures:
When the gestures are applied to the view, the shape layer doesn't rotate; instead the layer moves around randomly.
When pinching, the layer zooms in/out from its top-left corner.
Please help me fix these two issues. I want to rotate and pinch around the layer's center point.
The following code segments are used to handle the gestures:
-(void)handlePanGesture:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan || recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [recognizer translationInView:recognizer.view];
        CGAffineTransform transform = CGAffineTransformTranslate(recognizer.view.transform, translation.x, translation.y);
        _pathOfShape = CGPathCreateCopyByTransformingPath(_shapeLayer.path, &transform);
        _shapeLayer.path = [UIBezierPath bezierPathWithRect:CGPathGetBoundingBox(_pathOfShape)].CGPath;
        [recognizer setTranslation:CGPointZero inView:self];
    }
}

-(void)handlePinchGesture:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan || recognizer.state == UIGestureRecognizerStateChanged) {
        CGFloat scale = [recognizer scale];
        CGAffineTransform transform = CGAffineTransformScale(recognizer.view.transform, scale, scale);
        _pathOfShape = CGPathCreateCopyByTransformingPath(_shapeLayer.path, &transform);
        _shapeLayer.path = [UIBezierPath bezierPathWithRect:CGPathGetBoundingBox(_pathOfShape)].CGPath;
        [recognizer setScale:1.0];
    }
}

-(void)handleRotationGesture:(UIRotationGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan || recognizer.state == UIGestureRecognizerStateChanged) {
        CGFloat rotation = [recognizer rotation];
        CGAffineTransform transform = CGAffineTransformRotate(recognizer.view.transform, rotation);
        _pathOfShape = CGPathCreateCopyByTransformingPath(_shapeLayer.path, &transform);
        _shapeLayer.path = [UIBezierPath bezierPathWithRect:CGPathGetBoundingBox(_pathOfShape)].CGPath;
        [recognizer setRotation:0];
    }
}

Related

Scaling view depending on x coordinate while being dragged

I have been trying to implement a UI feature which I've seen in a few apps that use cards to display information. My current view controller looks like this:
and users are able to drag the card along the x axis to the left and right. Dragging to the right side of the screen does nothing to the scale of the card (it simply changes position), but if the user swipes it to the left I want to slowly decrease its scale depending on its x coordinate (e.g. the scale is smallest when the card is furthest to the left, growing from that point until the original size is reached). If the card is dragged far enough to the left it fades out, but if the user does not drag it far enough it grows back and moves back into the middle. Code I've tried so far:
- (void)handlePanImage:(UIPanGestureRecognizer *)sender
{
    static CGPoint originalCenter;
    if (sender.state == UIGestureRecognizerStateBegan)
    {
        originalCenter = sender.view.center;
        sender.view.alpha = 0.8;
        [sender.view.superview bringSubviewToFront:sender.view];
    }
    else if (sender.state == UIGestureRecognizerStateChanged)
    {
        CGPoint translation = [sender translationInView:self.view];
        NSLog(@"%f x %f y", translation.x, translation.y);
        sender.view.center = CGPointMake(originalCenter.x + translation.x, yOfView);
        CGAffineTransform transform = sender.view.transform;
        i -= 0.001;
        transform = CGAffineTransformScale(transform, i, i);
        //transform = CGAffineTransformRotate(transform, self.rotationAngle);
        sender.view.transform = transform;
    }
    else if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled || sender.state == UIGestureRecognizerStateFailed)
    {
        if (sender.view.center.x > 0) {
            [UIView animateWithDuration:0.2 animations:^{
                CGRect rect = [sender.view frame];
                rect.origin.x = ([self.view frame].size.width / 2) - _protoypeView.frame.size.width / 2;
                rect.size.height = originalHeight;
                rect.size.width = originalWidth;
                [sender.view setFrame:rect];
                i = 1.0;
            }];
        }
        [UIView animateWithDuration:0.1 animations:^{
            sender.view.alpha = 1.0;
        }];
    }
}
This seems very buggy and doesn't work properly. I also tried to change the scale according to the translation:
else if (sender.state == UIGestureRecognizerStateChanged)
{
    CGPoint translation = [sender translationInView:self.view];
    NSLog(@"%f x %f y", translation.x, translation.y);
    sender.view.center = CGPointMake(originalCenter.x + translation.x, yOfView);
    CGAffineTransform transform = sender.view.transform;
    transform = CGAffineTransformScale(transform, translation.x / 100, translation.x / 100);
    //transform = CGAffineTransformRotate(transform, self.rotationAngle);
    sender.view.transform = transform;
}
but the scale either gets too big or too small. Any help would be greatly appreciated :)
In the last piece of code you tried, you're doing a couple of things wrong. The scale transform is cumulative because you never set the pan gesture recognizer's translation back to zero. Also, if you look at the math in your transform, a small movement of, say, -1 point would scale the view to 0.01 times its original size, which you obviously don't want. You need to add that small negative number to 1, so that an initial -1 point move scales to 0.99. Setting the panner's translation back to zero also means the center calculation must use sender.view.center.x rather than originalCenter.x. Finally, you need an if statement to check whether the center is left or right of its starting position, so you know whether to apply the scaling transform. Something like this:
-(void)handlePan:(UIPanGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateBegan) {
        originalCenter = sender.view.center;
        sender.view.alpha = 0.8;
        [sender.view.superview bringSubviewToFront:sender.view];
    } else if (sender.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [sender translationInView:self.view];
        sender.view.center = CGPointMake(sender.view.center.x + translation.x, sender.view.center.y);
        if (sender.view.center.x < originalCenter.x) {
            CGAffineTransform transform = sender.view.transform;
            transform = CGAffineTransformScale(transform, 1 + translation.x / 100.0, 1 + translation.x / 100.0);
            sender.view.transform = transform;
        }
        [sender setTranslation:CGPointZero inView:self.view];
    }
}
This doesn't take care of animating the view back, or fading out the view, but it should get you most of the way there.

Keep UICollectionViewCell centered after Zooming with UIPinchGestureRecognizer

I'm having this issue: I change the scale and the translation of a UICollectionViewCell's view so that the pinch zooms in at the position of my fingers. But afterwards the position of the view changes, and the view goes off screen and outside its bounds. I want the gesture's view to return to its correct position.
This is what is happening:
I have tried assigning CGAffineTransformIdentity to the view when the center of the cell changes, but after the scale and the translation its center always changes, and so does its frame.
The code in UIGestureRecognizerStateEnded:
if ([gesture state] == UIGestureRecognizerStateEnded) {
    // Get the cell that is being zoomed
    CGPoint initialPinchPoint = [gesture locationInView:self.collectionView];
    NSIndexPath *pinchedCellPath = [self.collectionView indexPathForItemAtPoint:initialPinchPoint];
    SBQPhotosDetailCollectionViewCell *cell = (SBQPhotosDetailCollectionViewCell *)[self.collectionView cellForItemAtIndexPath:pinchedCellPath];
    // Check its frame against the pinch gesture's view
    if ((cell.frame.origin.x != [gesture view].frame.origin.x) || (cell.frame.origin.y != [gesture view].frame.origin.y)) {
        [gesture view].transform = CGAffineTransformIdentity;
    }
}
I would like to apply CGAffineTransformIdentity only if the gesture's view has changed its center or position, but not if the view merely got zoomed.
Thanks

ios drag and drop subview only working once

I have a UIView in which I want to be able to drag and drop subviews, and this is working fine once.
- (void)pan:(UIPanGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        CGPoint startPos = [recognizer locationInView:self];
        for (UIControl *sv in self.subviews) {
            if ([sv pointInside:startPos withEvent:nil]) {
                self.controlBeingDragged = sv;
            }
        }
    }
    if (((recognizer.state == UIGestureRecognizerStateChanged) ||
         (recognizer.state == UIGestureRecognizerStateEnded)) &&
        self.controlBeingDragged) {
        CGPoint translation = [recognizer translationInView:self];
        self.controlBeingDragged.center = CGPointMake(self.controlBeingDragged.center.x + translation.x, self.controlBeingDragged.center.y + translation.y);
        [recognizer setTranslation:CGPointZero inView:self];
        if (recognizer.state == UIGestureRecognizerStateEnded) {
            self.controlBeingDragged = nil;
        }
    }
}
However, if I try to drag the same UIControl again, the containing view no longer knows where it is. I can drag it again by starting at the original position, so clearly there is something I am not doing to inform the containing view that its subview has moved. But what?
I'm surprised it's even working once. The pointInside:withEvent: message requires a point in the receiver's coordinate system, but you're sending a point that's in the receiver's superview's coordinate system. You can use convertPoint:toView: for each subview to convert the point to the subview's coordinate system before sending the point in the pointInside:withEvent: message.
CGPoint subviewPoint = [self convertPoint:startPos toView:sv];
if ([sv pointInside:subviewPoint withEvent:nil]) {
    ...
You might also want to check whether the -[UIView hitTest:withEvent:] message can replace the entire subview-searching loop.

Same CGAffineTransform different anchor

I have 1 view with 2 subviews. One of them being 10 times bigger than the other one. I have a gesture recognizer for the big one (which is on top).
I want to be able to scale the big one with the pinch gesture, anchored at a point between the fingers. And I want the little one to undergo that same transform from the same global anchor position, but without changing its own anchor point.
I hope that makes sense. Here is the code:
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    // this changes the anchor point of "big" without moving it
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a * transform.a + transform.c * transform.c);
        // this transforms "big"
        [gestureRecognizer view].transform = transform;
        // anchor point location in the little view
        CGPoint pivote = [gestureRecognizer locationInView:little];
        CGAffineTransform transform_t = CGAffineTransformConcat(CGAffineTransformMakeTranslation(-pivote.x, -pivote.y), transform);
        transform_t = CGAffineTransformConcat(transform_t, CGAffineTransformMakeTranslation(pivote.x, pivote.y));
        little.transform = transform_t;
    }
    [gestureRecognizer setScale:1];
}
But this is not working; the little view keeps jumping around and going crazy.
EDIT: More info.
Ok, this is the diagram:
The red square is the big view, the dark one is the little one. The dotted square is the main view.
The line: [self adjustAnchorPointForGestureRecognizer:gestureRecognizer]; changes the big views anchor point to the center of the pinch gesture. That works.
As I scale the big view, the small view should scale the same amount and move so it's centered in the big view as it is now. That is, it should scale with the same anchor point as the big view.
I would like to keep those transforms to the little view in a CGAffineTransform, if possible.
OK, I finally found it. I don't know if it's the best solution, but it works.
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a * transform.a + transform.c * transform.c);
        if ((scale > 0.1) && (scale < 20)) {
            [gestureRecognizer view].transform = transform;
            CGPoint anchor = [gestureRecognizer locationInView:little];
            anchor = CGPointMake(anchor.x - little.bounds.size.width / 2, anchor.y - little.bounds.size.height / 2);
            CGAffineTransform affineMatrix = little.transform;
            affineMatrix = CGAffineTransformTranslate(affineMatrix, anchor.x, anchor.y);
            affineMatrix = CGAffineTransformScale(affineMatrix, [gestureRecognizer scale], [gestureRecognizer scale]);
            affineMatrix = CGAffineTransformTranslate(affineMatrix, -anchor.x, -anchor.y);
            little.transform = affineMatrix;
            [eaglView setTransform:little.transform];
            [gestureRecognizer setScale:1];
        }
    }
}
That eaglView line is the real reason I needed a CGAffineTransform and couldn't change the anchor point: I'm sending the transform to OpenGL to set the model-view matrix.
Now it works perfectly with all three transforms (rotate, scale, translate) at the same time, driven by user input.
EDIT
Just a little note: it seems that when I move the view too fast, the eaglView and the UIView get out of sync. So I don't apply the transforms to the UIViews directly; I apply them using the state read back from the eaglView, like this:
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a * transform.a + transform.c * transform.c);
        if ((scale > 0.1) && (scale < 20)) {
            //[gestureRecognizer view].transform = transform;
            CGPoint anchor = [gestureRecognizer locationInView:little];
            anchor = CGPointMake(anchor.x - little.bounds.size.width / 2, anchor.y - little.bounds.size.height / 2);
            CGAffineTransform affineMatrix = little.transform;
            affineMatrix = CGAffineTransformTranslate(affineMatrix, anchor.x, anchor.y);
            affineMatrix = CGAffineTransformScale(affineMatrix, [gestureRecognizer scale], [gestureRecognizer scale]);
            affineMatrix = CGAffineTransformTranslate(affineMatrix, -anchor.x, -anchor.y);
            //little.transform = affineMatrix;
            [eaglView setTransform:affineMatrix];
            [gestureRecognizer setScale:1];
            CGAffineTransform viewTransform = CGAffineTransformMakeRotation(eaglView.myRotation * M_PI / 180);
            viewTransform = CGAffineTransformConcat(CGAffineTransformMakeScale(eaglView.myScale, eaglView.myScale), viewTransform);
            viewTransform = CGAffineTransformConcat(viewTransform, CGAffineTransformMakeTranslation(eaglView.myTranslate.x, -eaglView.myTranslate.y));
            little.transform = viewTransform;
            big.transform = viewTransform;
        }
    }
}
To scale the smaller view using the center of the pinch as the anchor point, you'll need to calculate the new position by hand:
CGRect frame = little.frame; // Returns the frame based on the current transform
frame.origin.x = (frame.origin.x - pivot.x) * gestureRecognizer.scale;
frame.origin.y = (frame.origin.y - pivot.y) * gestureRecognizer.scale;
frame.size.width = frame.size.width * gestureRecognizer.scale;
frame.size.height = frame.size.height * gestureRecognizer.scale;
Then, update the transform. Personally I would do this based on the view's real position rather than transforming the current transform; I find it easier to think about. For example:
little.transform = CGAffineTransformIdentity; // Remove the current transform
CGRect orgFrame = little.frame;
CGFloat scale = frame.size.width / orgFrame.size.width;
CGAffineTransform t = CGAffineTransformMakeScale(scale, scale);
t = CGAffineTransformConcat(t, CGAffineTransformMakeTranslation(frame.origin.x - orgFrame.origin.x, frame.origin.y - orgFrame.origin.y));
little.transform = t;
Note that I've just typed the code off the top of my head to give you an idea. It'll need testing and debugging!
Also, some of that code can be removed if you use the scale value based on the original pinch rather than resetting it each time and then transforming the transforms.
Tim

detecting touch in UIImageView animation

I have a circle image that circles round the screen using a path animation, and I want to detect when the user touches the moving circle. However, even though the image is moving round in a continuous circle, its frame stays in the top-left corner and never moves. How can I update this so that I can detect a touch on the moving image? Here is the code...
Setting up the animation in viewDidLoad:
// set up the animation
CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
pathAnimation.calculationMode = kCAAnimationPaced;
pathAnimation.fillMode = kCAFillModeForwards;
pathAnimation.removedOnCompletion = NO;
pathAnimation.duration = 10.0;
pathAnimation.repeatCount = 1000;
CGMutablePathRef curvedPath = CGPathCreateMutable();
// path as a circle
CGRect bounds = CGRectMake(60, 170, 200, 200);
CGPathAddEllipseInRect(curvedPath, NULL, bounds);
// tell the animation to use this path
pathAnimation.path = curvedPath;
CGPathRelease(curvedPath);
// add the subview
circleView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"ball.png"]];
[testView addSubview:circleView];
// animate
[circleView.layer addAnimation:pathAnimation forKey:@"moveTheSquare"];
Touches Method:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // detect the touch
    UITouch *theTouch = [touches anyObject];
    // locate and assign the touch location
    CGPoint startPoint = [theTouch locationInView:self.view];
    CGFloat x = startPoint.x;
    CGFloat y = startPoint.y;
    // create the touch point
    CGPoint touchPoint = CGPointMake(x, y);
    // check to see if the touch is in the rect
    if (CGRectContainsPoint(circleView.bounds, touchPoint)) {
        NSLog(@"yes");
    }
    // check the image view's position
    NSLog(@"frame x - %f, y - %f", circleView.frame.origin.x, circleView.frame.origin.y);
    NSLog(@"center x - %f, y - %f", circleView.center.x, circleView.center.y);
    NSLog(@"bounds x - %f, y - %f", circleView.bounds.origin.x, circleView.bounds.origin.y);
}
The image view just seems to stay in the top-left corner, and I can't figure out how to recognise a touch on the moving ball.
Any help would be appreciated,
Chris
You need to query the presentation layer of the view, not its frame. Only the presentation layer is updated during the course of an animation:
[myImageView.layer presentationLayer]
Access the properties of this layer (origin, size, etc.) and determine whether your touch point is within its bounds.