CALayer Draggable Mask - objective-c

I have two images of the same scene, of the same size, that take up the entire screen. One is blurry, the other is in focus. The desired effect is that the user initially sees the blurry image, and as they drag their finger across the screen from left to right, the part of the image to the left of their finger comes into focus (for example, if they drag only halfway across, the left half of the scene shows the focused image while the right half is still blurry).
I'm doing this by subclassing UIImageView with the focused image as its image, then adding a CALayer of the blurry image with a mask applied, and then changing the location of the mask in touchesBegan/touchesMoved. The problem is that performance is very slow with the approach below, so I'm wondering what I'm doing wrong.
@interface DragMaskImageView : UIImageView
@end

@implementation DragMaskImageView {
    BOOL userIsTouchingMask;
    CALayer *maskingLayer;
    CALayer *topLayer;
    float horzDistanceOfTouchFromCenter;
}

- (void)awakeFromNib {
    topLayer = [CALayer layer];
    topLayer.contents = (id)[UIImage imageNamed:@"blurryImage.jpg"].CGImage;
    topLayer.frame = CGRectMake(0, 0, 480, 300);
    [self.layer addSublayer:topLayer];

    maskingLayer = [CALayer layer];
    maskingLayer.contents = (id)[UIImage imageNamed:@"maskImage.png"].CGImage;
    maskingLayer.anchorPoint = CGPointMake(0.0, 0.0);
    maskingLayer.bounds = CGRectMake(0, 0, 480, 300);
    [topLayer setMask:maskingLayer];
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint touchPoint = [[[event allTouches] anyObject] locationInView:self];
    if (touchPoint.x < maskingLayer.frame.origin.x) {
        NSLog(@"user is touching to the left of mask - disregard");
        userIsTouchingMask = NO;
    } else {
        NSLog(@"user is touching");
        horzDistanceOfTouchFromCenter = touchPoint.x - maskingLayer.frame.origin.x;
        userIsTouchingMask = YES;
    }
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (userIsTouchingMask) {
        CGPoint touchPoint = [[[event allTouches] anyObject] locationInView:self];
        float newMaskX = touchPoint.x - horzDistanceOfTouchFromCenter;
        if (newMaskX < 0) {
            newMaskX = 0;
        }
        if (newMaskX > 480) {
            newMaskX = 480;
        }
        maskingLayer.frame = CGRectMake(newMaskX, 0, 480, 300);
    }
}
I checked the related thread "core animation calayer mask animation performance", but setting shouldRasterize to YES on any layer doesn't seem to help the performance problem.

I ran into the same problem, and updating the frame of the mask always lags behind the touch. I found that actually re-creating the mask with the new frame ensures that the mask is always updated instantly.
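Roughly, that looks like the following. This is only a minimal sketch assuming the same 480x300 geometry and ivars as the question; the helper method name is illustrative:

- (void)moveMaskToX:(CGFloat)newMaskX {
    // Re-create the mask at the new position instead of moving the existing one
    CALayer *newMask = [CALayer layer];
    newMask.contents = (id)[UIImage imageNamed:@"maskImage.png"].CGImage; // consider caching this CGImage
    newMask.anchorPoint = CGPointMake(0.0, 0.0);
    newMask.frame = CGRectMake(newMaskX, 0, 480, 300);
    topLayer.mask = newMask;   // replaces the previous mask
    maskingLayer = newMask;    // keep the ivar pointing at the current mask
}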

Perhaps the problem is that the layer's implicit animation is causing it to appear to be sluggish. Turning off the implicit animation should fix the problem:
[CATransaction begin];
[CATransaction setDisableActions:YES];
maskingLayer.frame = CGRectMake(newMaskX, 0, 480, 300);
[CATransaction commit];
This is why @Rizwan's workaround works: it's a way of bypassing the implicit animation.

Related

How do I detect a touch event on a moving UIImageView?

While researching "How do I detect a touch event on a moving UIImageView?" I've come across several answers and tried to implement them in my app. Nothing I've come across seems to work. I'll explain what I'm trying to do and then post my code. Any thoughts, suggestions, comments or answers are appreciated!
My app has several cards floating across the screen from left to right. These cards are various colors, and the object of the game is to drag the cards down to their similarly colored corresponding containers. If the user doesn't touch and drag the cards fast enough, the cards will simply drift off the screen and points will be lost. The more cards placed in the correct containers, the better the score.
I've written code using Core Animation to have my cards float from left to right. This works. However, when attempting to touch a card and drag it toward its container, the touch isn't correctly detected on the card's UIImageView.
To test whether I'm properly implementing the code to move a card, I've also written some code that allows dragging of a non-moving card. In this case my touch is being detected and the card behaves accordingly.
Why can I only interact with stationary cards? After researching this quite a bit it seems that the code:
options:UIViewAnimationOptionAllowUserInteraction
is the key ingredient to getting touches on my moving UIImageViews detected. However, trying this doesn't seem to have any effect.
Another key thing I may be doing wrong is not properly using the correct presentation layer. I've added code like this to my project, and it also only works on non-moving objects:
UITouch *t = [touches anyObject];
UIView *myTouchedView = [t view];
CGPoint thePoint = [t locationInView:self.view];
if ([_card.layer.presentationLayer hitTest:thePoint])
{
    NSLog(@"You touched a Card!");
}
else {
    NSLog(@"background touched");
}
After trying these types of things I'm getting stuck. Here is my code to understand this a bit more completely:
#import "RBViewController.h"
#import <QuartzCore/QuartzCore.h>
#interface RBViewController ()
#property (nonatomic, strong) UIImageView *card;
#end
#implementation RBViewController
- (void)viewDidLoad
{
srand(time (NULL)); // will be used for random colors, drift speeds, and locations of cards
[super viewDidLoad];
[self setOutFirstCardSet]; // this sends out 4 floating cards across the screen
// the following creates a non-moving image that I can move.
_card = [[UIImageView alloc] initWithFrame:CGRectMake(400,400,100,100)];
_card.image = [UIImage imageNamed:#"goodguyPINK.png"];
_card.userInteractionEnabled = YES;
[self.view addSubview:_card];
}
The following method sends out cards from a random location on the left side of the screen and uses Core Animation to drift each card across the screen. Notice that the color of the card and the speed of the drift are randomly generated as well.
-(void) setOutFirstCardSet
{
    for (int i = 1; i < 5; i++) // sends out 4 shapes
    {
        CGRect cardFramei;
        int startingLocation = rand() % 325;
        CGRect cardOrigini = CGRectMake(-100, startingLocation + 37, 92, 87);
        cardFramei.size = CGSizeMake(92, 87);
        CGPoint origini;
        origini.y = startingLocation + 37;
        origini.x = 1200;
        cardFramei.origin = origini;
        _card.userInteractionEnabled = YES;
        _card = [[UIImageView alloc] initWithFrame:cardOrigini];
        int randomColor = rand() % 7;
        if (randomColor == 0)
        {
            _card.image = [UIImage imageNamed:@"goodguy.png"];
        }
        else if (randomColor == 1)
        {
            _card.image = [UIImage imageNamed:@"goodguyPINK.png"];
        }
        else if (randomColor == 2)
        {
            _card.image = [UIImage imageNamed:@"goodGuyPURPLE.png"];
        }
        else if (randomColor == 3)
        {
            _card.image = [UIImage imageNamed:@"goodGuyORANGE.png"];
        }
        else if (randomColor == 4)
        {
            _card.image = [UIImage imageNamed:@"goodGuyLightPINK.png"];
        }
        else if (randomColor == 5)
        {
            _card.image = [UIImage imageNamed:@"goodGuyBLUE.png"];
        }
        else if (randomColor == 6)
        {
            _card.image = [UIImage imageNamed:@"goodGuyGREEN.png"];
        }
        _card.userInteractionEnabled = YES; // this is also written in my viewDidLoad method
        [[_card.layer presentationLayer] hitTest:origini]; // not really sure what this does
        [self.view addSubview:_card];
        int randomSpeed = rand() % 20;
        int randomDelay = rand() % 2;
        [UIView animateWithDuration:randomSpeed + 10
                              delay:randomDelay + 4
                            options:UIViewAnimationOptionAllowUserInteraction // here is the method that I thought would allow me to interact with the moving cards. Not sure why I can't
                         animations:^{
                             _card.frame = cardFramei;
                         }
                         completion:NULL];
    }
}
Notice that the following method is where I put the CALayer and hit test information. I'm not sure if I'm doing this correctly.
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    UIView *myTouchedView = [t view];
    CGPoint thePoint = [t locationInView:self.view];
    thePoint = [self.view.layer convertPoint:thePoint toLayer:self.view.layer.superlayer];
    CALayer *theLayer = [self.view.layer hitTest:thePoint];
    if ([_card.layer.presentationLayer hitTest:thePoint])
    {
        NSLog(@"You touched a Shape!"); // This only logs when I touch a non-moving shape
    }
    else {
        NSLog(@"background touched"); // this logs when I touch the background or a moving shape
    }
    if (myTouchedView == _card)
    {
        NSLog(@"Touched a card");
        _boolHasCard = YES;
    }
    else
    {
        NSLog(@"Didn't touch a card");
        _boolHasCard = NO;
    }
}
I want the following method to work on moving shapes, but it only works on non-moving shapes. Many answers say to have the touch ask which class the card belongs to. As of now all my cards are created in the same class (the view controller class). When I tried making the cards their own class, I had trouble getting that view to appear over my main background controller. Must the cards be their own class for this to work, or can it work without that?
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch view] == self.card)
    {
        CGPoint location = [touch locationInView:self.view];
        self.card.center = location;
    }
}
This next method resets the movement of a card if the user starts moving it and then lifts up on it.
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (_boolHasCard == YES)
    {
        [UIView animateWithDuration:3
                              delay:0
                            options:UIViewAnimationOptionAllowUserInteraction
                         animations:^{
                             CGRect newCardOrigin = CGRectMake(1200, _card.center.y - 92/2, 92, 87);
                             _card.frame = newCardOrigin;
                         }
                         completion:NULL];
    }
}

@end
The short answer is, you can't.
Core Animation does not actually move the objects along the animation path; it moves the presentation layer of the object's layer.
The moment the animation begins, the system considers the object to already be at its destination.
There is no way around this if you want to use Core Animation.
You have a couple of choices.
You can set up a CADisplayLink on your view controller and roll your own animation, moving the center of your views by a small amount on each call to the display link. This might lead to poor performance and jerky animation if you're animating a lot of objects, however.
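A minimal sketch of the display-link approach, assuming a single image view drifting horizontally (the ivar and method names here are illustrative, not from the question):

- (void)startDrifting {
    // Drive the animation ourselves so the view's real frame stays accurate for hit testing
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(step:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)step:(CADisplayLink *)link {
    // Move the actual view a little each frame
    CGPoint center = self.card.center;
    center.x += 2.0; // points per frame; tune or scale by link.duration
    self.card.center = center;
    if (center.x > self.view.bounds.size.width + 50.0) {
        [link invalidate]; // card has drifted off screen
    }
}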
You can add a gesture recognizer to the parent view that contains all your animations, and then use layer hit testing on the parent view's presentation layer to figure out which animating layer got tapped, then fetch that layer's delegate, which will be the view you are animating. I have a project on GitHub that shows how to do this second technique. It only detects taps on a single moving view, but it will show you the basics: Core Animation demo project on github.
(up-votes always appreciated if you find this post helpful)
It looks to me like your problem is really just an incomplete understanding of how to convert a point between coordinate spaces. This code works exactly as expected:
- (void)viewDidLoad
{
    [super viewDidLoad];
    CGPoint endPoint = CGPointMake([[self view] bounds].size.width,
                                   [[self view] bounds].size.height);
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
    animation.fromValue = [NSValue valueWithCGPoint:[[_imageView layer] position]];
    animation.toValue = [NSValue valueWithCGPoint:endPoint];
    animation.duration = 30.0f;
    [[_imageView layer] addAnimation:animation forKey:@"position"];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint thePoint = [t locationInView:self.view];
    thePoint = [[_imageView layer] convertPoint:thePoint
                                        toLayer:[[self view] layer]];
    if ([[_imageView layer].presentationLayer hitTest:thePoint])
    {
        NSLog(@"You touched a Shape!");
    }
    else {
        NSLog(@"background touched");
    }
}
Notice in particular the line:
thePoint = [[_imageView layer] convertPoint:thePoint
toLayer:[[self view] layer]];
When I tap on the image view while it's animating, I get "You touched a Shape!" in the console window, and I get "background touched" when I tap around it. That's what you want, right?
Here's a sample project on Github
UPDATE
To help with your follow-up question in the comments, I've written the touchesBegan code a little differently. Imagine that you've added all of your image views to an array (cleverly named imageViews) when you create them. You would alter your code to look something like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint thePoint = [t locationInView:self.view];
    for (UIImageView *imageView in [self imageViews]) {
        // Convert the touch point fresh for each image view, rather than
        // overwriting the original point on every pass through the loop
        CGPoint convertedPoint = [[imageView layer] convertPoint:thePoint
                                                         toLayer:[[self view] layer]];
        if ([[imageView layer].presentationLayer hitTest:convertedPoint]) {
            NSLog(@"Found it!!");
            break; // No need to keep iterating, we've found it
        } else {
            NSLog(@"Not this one!");
        }
    }
}
I'm not sure how expensive this is, so you may have to profile it, but it should do what you're expecting.

How to add images while moving finger across UIView?

I am having issues adding an image across a view while moving finger across the screen.
Currently it is adding the image multiple times, but it squeezes them too close together and doesn't really follow my touch.
EDIT:
What I want:
After taking or choosing an image, the user can then select another image from a list. I want the user to touch and move their finger across the view, and the selected image should appear, without overlapping, at each location they drag their finger through.
This works:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    currentTouch = [touch locationInView:self.view];
    CGRect myImageRect = CGRectMake(currentTouch.x, currentTouch.y, 80.0f, 80.0f);
    myImage = [[UIImageView alloc] initWithFrame:myImageRect];
    [myImage setImage:[UIImage imageNamed:@"dot.png"]];
    [self.view addSubview:myImage];
    [myImage release];
}
New Question: How do I add spaces in this so that the image isn't so snug when drawing it on the view?
You might want to explain your question a bit more; what exactly are you trying to achieve? If you don't want the images to overlap, you can try this:
UITouch *touch = [touches anyObject];
touchPoint = [touch locationInView:imageView];
prev_touchPoint = [touch previousLocationInView:imageView];
if (ABS(touchPoint.x - prev_touchPoint.x) > 80
    || ABS(touchPoint.y - prev_touchPoint.y) > 80) {
    _aImageView = [[UIImageView alloc] initWithImage:aImage];
    _aImageView.multipleTouchEnabled = YES;
    _aImageView.userInteractionEnabled = YES;
    [_aImageView setFrame:CGRectMake(touchPoint.x, touchPoint.y, 80.0, 80.0)];
    [imageView addSubview:_aImageView];
    [_aImageView release];
}
I'm sorry, I'm at work and can't post much code. The squeezing happens because you haven't checked the touch point's distance from the last touch point. You can check whether a point lies inside a rect with bool CGRectContainsPoint(CGRect rect, CGPoint point). In other words, remember a touch point in touchesBegan:, only add a new image view in touchesMoved: when the new touch is farther away than the image's width or height, and then update the remembered point. You could also move the image-adding code into its own method and call it using - (void)performSelectorInBackground:(SEL)aSelector withObject:(id)arg.
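Here is a rough sketch of what that answer describes, assuming an 80-point image and an ivar that remembers where the last image was placed (the ivar name is illustrative; release is used because the question's code is pre-ARC):

// Ivar assumed in the view controller: CGPoint lastPlacedPoint;

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    lastPlacedPoint = [[touches anyObject] locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint current = [[touches anyObject] locationInView:self.view];
    // Only place a new image once the finger has moved at least one image-width away
    if (ABS(current.x - lastPlacedPoint.x) > 80.0 ||
        ABS(current.y - lastPlacedPoint.y) > 80.0) {
        UIImageView *dot = [[UIImageView alloc] initWithFrame:
                            CGRectMake(current.x, current.y, 80.0f, 80.0f)];
        dot.image = [UIImage imageNamed:@"dot.png"];
        [self.view addSubview:dot];
        [dot release]; // pre-ARC, as in the question
        lastPlacedPoint = current; // remember where we last placed an image
    }
}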
You can also use a UISwipeGestureRecognizer, as below, instead of the touchesMoved: method when swiping across the screen. In viewDidLoad:
UISwipeGestureRecognizer *swipeup = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(swipedone:)];
swipeup.direction = UISwipeGestureRecognizerDirectionUp;
swipeup.numberOfTouchesRequired = 1;
[self.view addGestureRecognizer:swipeup];
method definition:
-(IBAction)swipedone:(UISwipeGestureRecognizer *)recognizer
{
    NSLog(@"swiped");
    UIImageView *_aImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"1.png"]];
    _aImageView.frame = CGRectMake(10, 10, 100, 100);
    _aImageView.multipleTouchEnabled = YES;
    _aImageView.userInteractionEnabled = YES;
    CGPoint point = [recognizer locationInView:recognizer.view];
    [_aImageView setFrame:CGRectMake(point.x, point.y, 80.0, 80.0)];
    [self.view addSubview:_aImageView];
    [_aImageView release];
}
Currently I am using this code for a swipe up. I think it will work fine; give it a try.

If statement to choose between multiple CALayer

I have a Core Animation image on boxLayer and I'm duplicating it, changing the action of, and shifting the position of, the second layer (boxLayer2) so that someone can choose between the two.
I want the user to be able to tap the boxLayer image so that boxLayer moves while the boxLayer2 image does nothing (I didn't include my animation code beyond receiving the touch), and vice versa.
I cannot get an if statement to work. I've tried multiple variations, such as self.layer == boxLayer or CALayer == boxLayer; the sublayers property is an array, so that's out. Any help or explanation would be greatly appreciated, as I know I'm missing something.
Thanks!
UIView *BounceView is declared in the VC.
In BounceView I have 2 CALayers declared: boxLayer & boxLayer2.
BounceView.m
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        [self setBackgroundColor:[UIColor clearColor]];
        // Create the new layer object
        boxLayer = [[CALayer alloc] init];
        boxLayer2 = [[CALayer alloc] init];
        // Give it a size
        [boxLayer setBounds:CGRectMake(0.0, 0.0, 185.0, 85.0)];
        [boxLayer2 setBounds:CGRectMake(0.0, 0.0, 185.0, 85.0)];
        // Give it a location
        [boxLayer setPosition:CGPointMake(150.0, 140.0)];
        [boxLayer2 setPosition:CGPointMake(150.0, 540.0)];
        // Create a UIImage
        UIImage *layerImage = [UIImage imageNamed:@"error-label.png"];
        UIImage *layerImage2 = [UIImage imageNamed:@"error-label.png"];
        // Get the underlying CGImage
        CGImageRef image = [layerImage CGImage];
        CGImageRef image2 = [layerImage2 CGImage];
        // Put the CGImage on the layer
        [boxLayer setContents:(__bridge id)image];
        [boxLayer2 setContents:(__bridge id)image2];
        // Let the image resize (without changing the aspect ratio)
        // to fill the contentRect
        [boxLayer setContentsGravity:kCAGravityResizeAspect];
        [boxLayer2 setContentsGravity:kCAGravityResizeAspect];
        // Make it a sublayer of the view's layer
        [[self layer] addSublayer:boxLayer];
        [[self layer] addSublayer:boxLayer2];
    }
    return self;
}
- (void)touchesBegan:(NSSet *)touches
           withEvent:(UIEvent *)event
{
    if (CAlayer == boxLayer)
    {
        // do something
    }
    else
    {
        // do something else
    }
}
It looks to me like you are trying to determine which layer the user tapped inside touchesBegan:, and that this is your problem.
How to find out what layer was tapped
CALayer has an instance method - (CALayer *)hitTest:(CGPoint)thePoint that
Returns the farthest descendant of the receiver in the layer hierarchy (including itself) that contains a specified point.
So to find out what layer you tapped you should do something like
- (void)touchesBegan:(NSSet *)touches
           withEvent:(UIEvent *)event {
    UITouch *anyTouch = [[event allTouches] anyObject];
    CGPoint pointInView = [anyTouch locationInView:self];
    // Now you can test which layer was tapped ...
    if ([boxLayer hitTest:pointInView]) {
        // do something with boxLayer
    }
    // the rest of your code
}
This works because hitTest will return nil if the point is outside the layer's bounds.
David Rönnqvist's post tells you how to use hitTest on the layer to figure out which layer was touched. That should work. I would code that method slightly differently, though. I would have my view's layer include boxLayer and boxLayer2 as sub-layers, and then send the hitTest method to the parent layer. It would then return the layer that contains the touch.
It would be much simpler, though, if you use separate views, each with a layer that contains your content. Then you can use gesture recognizers on each view, and higher level Cocoa Touch code rather than CA code to manage taps. Cleaner and easier to maintain.
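For example, something along these lines. This is only a sketch of the view-based approach: the frame values and the boxTapped: selector are illustrative, and it assumes ARC as in the question's __bridge code:

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        UIImageView *boxView = [[UIImageView alloc] initWithImage:
                                [UIImage imageNamed:@"error-label.png"]];
        boxView.frame = CGRectMake(57.0, 97.0, 185.0, 85.0); // roughly where boxLayer was
        boxView.userInteractionEnabled = YES;                // image views default to NO
        [boxView addGestureRecognizer:[[UITapGestureRecognizer alloc]
            initWithTarget:self action:@selector(boxTapped:)]];
        [self addSubview:boxView];
        // ...create and add a second image view the same way in place of boxLayer2...
    }
    return self;
}

- (void)boxTapped:(UITapGestureRecognizer *)recognizer
{
    // recognizer.view is the image view that was tapped
    NSLog(@"tapped %@", recognizer.view);
}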

How to add movable UIView as subview with dynamic size at runtime in objective C?

I need to add a UIView as a subview of a UIView in which a UIImageView is displayed. What I need to implement is that when the image is loaded into my UIImageView, I also show a UIView, call it newView. In the beginning, I show it with a static size, say 190x170, and an alpha of 0.5. Once I tap my finger on that view, it should be able to move over the whole UIImageView. Once it has been moved into place, I take those coordinates and crop my image to the same points. I am done with the moving part, using the code below.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint touchLocation = [touch locationInView:self.view];
    if (CGRectContainsPoint(newView.frame, touchLocation))
    {
        dragging = YES;
        oldX = touchLocation.x;
        oldY = touchLocation.y;
    }
    if (dragging)
    {
        newView.layer.borderColor = [UIColor clearColor].CGColor;
        newView.layer.borderWidth = 1.0f;
        CGRect frame = newView.frame;
        frame.origin.x = newView.frame.origin.x + touchLocation.x - oldX;
        x = frame.origin.x;
        NSLog(@"x value : %d", x);
        frame.origin.y = newView.frame.origin.y + touchLocation.y - oldY;
        y = frame.origin.y;
        NSLog(@"y value : %d", y);
        newView.frame = frame;
    }
    oldX = touchLocation.x;
    oldY = touchLocation.y;
}
What I need to implement is resizing the UIView when the user pinches with two fingers, as in a UIScrollView. I tried using a UIScrollView but it is not giving me a good effect. I also tried
NSUInteger resizingMask = NSViewHeightSizable | NSViewWidthSizable;
[self.view setAutoresizesSubviews:YES];
[newView setAutoresizingMask:resizingMask];
but in vain; nothing is happening.
Can anybody show me a path for this? Thanks in advance.
To provide the zoom in/zoom out feature you can use a UIPinchGestureRecognizer. If you are using iOS 5 then it will be pretty easy for you.
Here is a very good tutorial for that; hope it helps:
http://www.raywenderlich.com/6567/uigesturerecognizer-tutorial-in-ios-5-pinches-pans-and-more
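As a rough sketch of wiring that up for the newView from the question (assuming iOS 5+ and that newView is an ivar; handlePinch: is an illustrative name):

- (void)viewDidLoad
{
    [super viewDidLoad];
    // ... existing setup that creates newView and adds it as a subview ...
    UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePinch:)];
    newView.userInteractionEnabled = YES;
    [newView addGestureRecognizer:pinch];
}

- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer
{
    // Scale by the incremental pinch amount, then reset so the scale stays relative
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform,
                                                       recognizer.scale,
                                                       recognizer.scale);
    recognizer.scale = 1.0;
}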

Detecting touch in UIImageView animation

I have a circle image that circles around the screen using a path animation, and I want to detect when the user touches the moving circle. However, even though the image is moving around in a continuous circle, its frame stays in the top left-hand corner and doesn't move. How can I update this so that I can detect a touch on the moving image? Here is the code...
Set Up Animation in ViewDidLoad:
//set up animation
CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
pathAnimation.calculationMode = kCAAnimationPaced;
pathAnimation.fillMode = kCAFillModeForwards;
pathAnimation.removedOnCompletion = NO;
pathAnimation.duration = 10.0;
pathAnimation.repeatCount = 1000;
CGMutablePathRef curvedPath = CGPathCreateMutable();
//path as a circle
CGRect bounds = CGRectMake(60, 170, 200, 200);
CGPathAddEllipseInRect(curvedPath, NULL, bounds);
//tell animation to use this path
pathAnimation.path = curvedPath;
CGPathRelease(curvedPath);
//add subview
circleView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"ball.png"]];
[testView addSubview:circleView];
//animate
[circleView.layer addAnimation:pathAnimation forKey:@"moveTheSquare"];
Touches Method:
- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    //detect touch
    UITouch *theTouch = [touches anyObject];
    //locate and assign touch location
    CGPoint startPoint = [theTouch locationInView:self.view];
    CGFloat x = startPoint.x;
    CGFloat y = startPoint.y;
    //create touch point
    CGPoint touchPoint = CGPointMake(x, y);
    //check to see if the touch is in the rect
    if (CGRectContainsPoint(circleView.bounds, touchPoint)) {
        NSLog(@"yes");
    }
    //check image view position
    NSLog(@"frame x - %f, y - %f", circleView.frame.origin.x, circleView.frame.origin.y);
    NSLog(@"center x - %f, y - %f", circleView.center.x, circleView.center.y);
    NSLog(@"bounds x - %f, y - %f", circleView.bounds.origin.x, circleView.bounds.origin.y);
}
The image view just seems to stay at the top left-hand corner. I can't seem to figure out how to recognize whether a touch has been made on the moving ball.
Any help would be appreciated,
Chris
You need to query the presentation layer of the view, not its frame. Only the presentation layer is updated during the course of an animation...
[myImageView.layer presentationLayer]
Access the properties of this layer (origin, size, etc.) and determine whether your touch point is within its bounds.
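In the context of the code above, that might look something like this (a sketch assuming circleView and testView are the ivars from the question):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // The point needs to be in the coordinate space of circleView's superview (testView)
    CGPoint touchPoint = [[touches anyObject] locationInView:testView];
    // The presentation layer reflects where the ball actually is mid-animation
    if ([[circleView.layer presentationLayer] hitTest:touchPoint]) {
        NSLog(@"yes - touched the moving ball");
    } else {
        NSLog(@"missed the ball");
    }
}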