I'm trying to implement a scrollable background in my current GameScene. This is supposed to be done via gesture recognizers, which I'm already using for taps and for moving other scene objects.
Unlike pretty much every other result returned by my Google searches, I don't want an infinitely scrolling background. It just needs to move with your finger, and stay where it's been moved.
The Problem:
I can move the background SKSpriteNode in my scene, but as soon as I try to move it again it snaps back to the center, so the scrolling is effectively useless. It keeps resetting itself.
Here's what I've got so far for moving my Sprites:
-(void)selectTouchedNode:(CGPoint)location
{
    SKSpriteNode *node = (SKSpriteNode *)[self nodeAtPoint:location];
    if ([self.selectedNode isEqual:node]){
        if (![self.selectedNode isEqual:self.background]){
            self.selectedNode = NULL;
        }
    }
    if ([node isKindOfClass:[SKLabelNode class]]){
        self.selectedNode = node.parent;
    } else {
        self.selectedNode = node;
    }
    NSLog(@"Node Selected: %@ | Position: %f, %f", node.name, node.position.x, node.position.y);
}
- (void)respondToPan:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        // Get the touch location in the view
        CGPoint touchLocation = [recognizer locationInView:recognizer.view];
        // Convert that touch location to scene coordinates
        touchLocation = [self convertPointFromView:touchLocation];
        // Select the node at that location
        [self selectTouchedNode:touchLocation];
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {
        // Get the translation reported by the recognizer
        CGPoint translation = [recognizer translationInView:recognizer.view];
        // Flip the y-axis: UIKit's y grows downward, SpriteKit's grows upward
        translation = CGPointMake(translation.x, -translation.y);
        // Translate the currently selected node
        [self translateMotion:recognizer Translation:translation];
        // Reset the recognizer's translation to zero
        [recognizer setTranslation:CGPointZero inView:recognizer.view];
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        // Fetch the current location in the view
        CGPoint touchLocation = [recognizer locationInView:recognizer.view];
        // Convert to a location in the scene
        CGPoint correctLocation = [self convertPointFromView:touchLocation];
        // If the selected node is the background node
        if ([self.selectedNode isEqual:self.background]) {
            NSLog(@"Scrolling the background: Node is: %@", self.selectedNode.name);
            // Set up a scroll duration
            float scrollDuration = 0.2;
            // Get the new position, constrained by backgroundPanPos:
            CGPoint newPos = [self backgroundPanPos:correctLocation];
            NSLog(@"New Position: %f, %f", newPos.x, newPos.y);
            // Remove all actions from the background
            [_selectedNode removeAllActions];
            // Move the background to the new position with the defined duration
            SKAction *moveTo = [SKAction moveTo:newPos duration:scrollDuration];
            // Set the timing mode for a smoother transition
            [moveTo setTimingMode:SKActionTimingEaseOut];
            // Run the action
            [_selectedNode runAction:moveTo];
        } else {
            // Otherwise, just put the node where the touch occurred
            self.selectedNode.position = correctLocation;
        }
    }
}
// At this point I'm running out of ideas.
- (void)translateMotion:(UIGestureRecognizer *)recognizer Translation:(CGPoint)translation {
    // Fetch the location being touched
    CGPoint touchLocation = [recognizer locationInView:recognizer.view];
    // Convert it to a location in the scene
    CGPoint location = [self convertPointFromView:touchLocation];
    // Set the selected node to that location (note: the translation parameter is unused here)
    self.selectedNode.position = location;
}
- (CGPoint)backgroundPanPos:(CGPoint)newPos {
    // Create a new point based on the touched location
    CGPoint correctedPos = newPos;
    return correctedPos;
}
What do I know so far?
I've tried printing the positions before the scrolling starts, when it ends, and when it is initiated again.
The results show that the background does change position, and when you try to move it again it starts from those new coordinates; the view has just repositioned itself over the centre of the sprite.
Supporting Illustration:
I'm not sure I 100% understand the situation you are describing, but I believe that it might be related to the anchor point of your background sprite node.
In your method:
- (void)translateMotion:(UIGestureRecognizer *)recognizer Translation:(CGPoint)translation
you have the line:
self.selectedNode.position = location;
Since your background sprite's anchor point is set to its center by default, any time you begin a new touch it will snap the background sprite's center to the location of your finger.
In other words, the background sprite's position sets the coordinates of its anchor point (which by default is the center of the sprite), so any time you set a new position you are moving the center to that position.
The solution in this case would be to shift the anchor point of the background sprite to be directly under the touch each time, so you are changing the position of the background relative to the point of the background the touch started on.
It's a little hard to explain, so here's some sample code to show you:
1) Create a new project in Xcode using the Sprite Kit Game template
2) Replace the contents of GameScene.m with the following:
#import "GameScene.h"
#interface GameScene ()
#property (nonatomic, strong) SKSpriteNode *sprite;
#end
#implementation GameScene
-(void)didMoveToView:(SKView *)view {
/* Setup your scene here */
self.sprite = [SKSpriteNode spriteNodeWithImageNamed:#"Spaceship"];
self.sprite.position = CGPointMake(CGRectGetMidX(self.frame),
CGRectGetMidY(self.frame));
[self addChild:self.sprite];
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    /* Called when a touch begins */
    UITouch *touch = [touches anyObject];
    CGPoint nodelocation = [touch locationInNode:self.sprite];
    //[self adjustAnchorPointForSprite:self.sprite toLocation:nodelocation];
    CGPoint location = [touch locationInNode:self];
    self.sprite.position = location;
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInNode:self];
    self.sprite.position = location;
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
}

-(void)adjustAnchorPointForSprite:(SKSpriteNode *)sprite toLocation:(CGPoint)location {
    // Remember the sprite's current position
    CGPoint originalPosition = sprite.position;
    // Convert the coordinates of the passed-in location to be relative to the bottom left of the sprite (instead of its current anchor point)
    CGPoint adjustedNodeLocation = CGPointMake(sprite.anchorPoint.x * sprite.size.width + location.x,
                                               sprite.anchorPoint.y * sprite.size.height + location.y);
    // Move the anchor point of the sprite to match the passed-in location
    sprite.anchorPoint = CGPointMake(adjustedNodeLocation.x / sprite.size.width,
                                     adjustedNodeLocation.y / sprite.size.height);
    // Undo any change of position caused by moving the anchor point
    sprite.position = CGPointMake(sprite.position.x - (sprite.position.x - originalPosition.x),
                                  sprite.position.y - (sprite.position.y - originalPosition.y));
}

-(void)update:(CFTimeInterval)currentTime {
    /* Called before each frame is rendered */
}

@end
3) Run the project in the sim or on a device and click / touch the top tip of the space ship and start dragging.
4) Notice that the spaceship snaps its center point to where your touch is. If you drag and release it to a new location, and then touch the screen again, it snaps its center back to your finger.
5) Now uncomment the line:
[self adjustAnchorPointForSprite:self.sprite toLocation:nodelocation];
and run the project again.
6) See how you can now drag the ship where you want it, and when you touch it later, it stays in place and follows your finger from the point you touched it. This is because the anchor point is now being adjusted to the point under the touch each time a new touch begins.
Hopefully this gives you a solution you can use in your game as well.
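To tie this back to your gesture-based setup, here is a rough, untested sketch of calling the same anchor-point adjustment when the pan begins. It assumes the selectedNode and background properties from your question, and that adjustAnchorPointForSprite:toLocation: from above has been added to your scene:

- (void)respondToPan:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        CGPoint touchLocation = [self convertPointFromView:[recognizer locationInView:recognizer.view]];
        [self selectTouchedNode:touchLocation];
        // If the background was selected, move its anchor point under the touch
        // before any position updates happen, so it no longer snaps to its center.
        if ([self.selectedNode isEqual:self.background]) {
            CGPoint locationInBackground = [self convertPoint:touchLocation toNode:self.background];
            [self adjustAnchorPointForSprite:(SKSpriteNode *)self.background
                                  toLocation:locationInBackground];
        }
    }
    // ... handle UIGestureRecognizerStateChanged / StateEnded as before ...
}

The StateChanged and StateEnded branches can stay as they are; only the Began state needs the anchor-point adjustment.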
Related
When researching "How do I detect a touch event on a moving UIImageView?" I've come across several answers, and I tried to implement them in my app. Nothing I've come across seems to work. I'll explain what I'm trying to do and then post my code. Any thoughts, suggestions, comments or answers are appreciated!
My app has several cards floating across the screen from left to right. These cards are various colors, and the object of the game is to drag the cards down to their similarly colored corresponding container. If the user doesn't touch and drag the cards fast enough, the cards will simply drift off the screen and points will be lost. The more cards contained in the correct containers, the better the score.
I've written code using Core Animation to have my cards float from left to right. This works. However, when attempting to touch a card and drag it toward its container, it isn't correctly detecting that I'm touching the UIImageView of the card.
To test whether I'm properly implementing the code to move a card, I've also written some code that allows movement of a non-moving card. In this case my touch is being detected and acting accordingly.
Why can I only interact with stationary cards? After researching this quite a bit it seems that the code:
options:UIViewAnimationOptionAllowUserInteraction
is the key ingredient to get my moving UIImageViews to be detected. However, when I tried it, it didn't seem to have any effect.
Another key thing that I may be doing wrong is not properly using the presentation layer. I've added code like this to my project, and it also only works on non-moving objects:
UITouch *t = [touches anyObject];
UIView *myTouchedView = [t view];
CGPoint thePoint = [t locationInView:self.view];
if ([_card.layer.presentationLayer hitTest:thePoint])
{
    NSLog(@"You touched a Card!");
}
else {
    NSLog(@"background touched");
}
After trying these types of things I'm getting stuck. Here is my code to understand this a bit more completely:
#import "RBViewController.h"
#import <QuartzCore/QuartzCore.h>
#interface RBViewController ()
#property (nonatomic, strong) UIImageView *card;
#end
#implementation RBViewController
- (void)viewDidLoad
{
srand(time (NULL)); // will be used for random colors, drift speeds, and locations of cards
[super viewDidLoad];
[self setOutFirstCardSet]; // this sends out 4 floating cards across the screen
// the following creates a non-moving image that I can move.
_card = [[UIImageView alloc] initWithFrame:CGRectMake(400,400,100,100)];
_card.image = [UIImage imageNamed:#"goodguyPINK.png"];
_card.userInteractionEnabled = YES;
[self.view addSubview:_card];
}
The following method sends out cards from a random location on the left side of the screen and uses Core Animation to drift the card across the screen. Note that the color of the card and the speed of the drift are randomly generated as well.
-(void) setOutFirstCardSet
{
    for (int i = 1; i < 5; i++) // sends out 4 shapes
    {
        CGRect cardFramei;
        int startingLocation = rand() % 325;
        CGRect cardOrigini = CGRectMake(-100, startingLocation + 37, 92, 87);
        cardFramei.size = CGSizeMake(92, 87);
        CGPoint origini;
        origini.y = startingLocation + 37;
        origini.x = 1200;
        cardFramei.origin = origini;
        _card.userInteractionEnabled = YES;
        _card = [[UIImageView alloc] initWithFrame:cardOrigini];
        int randomColor = rand() % 7;
        if (randomColor == 0)
        {
            _card.image = [UIImage imageNamed:@"goodguy.png"];
        }
        else if (randomColor == 1)
        {
            _card.image = [UIImage imageNamed:@"goodguyPINK.png"];
        }
        else if (randomColor == 2)
        {
            _card.image = [UIImage imageNamed:@"goodGuyPURPLE.png"];
        }
        else if (randomColor == 3)
        {
            _card.image = [UIImage imageNamed:@"goodGuyORANGE.png"];
        }
        else if (randomColor == 4)
        {
            _card.image = [UIImage imageNamed:@"goodGuyLightPINK.png"];
        }
        else if (randomColor == 5)
        {
            _card.image = [UIImage imageNamed:@"goodGuyBLUE.png"];
        }
        else if (randomColor == 6)
        {
            _card.image = [UIImage imageNamed:@"goodGuyGREEN.png"];
        }
        _card.userInteractionEnabled = YES; // this is also written in my viewDidLoad method
        [[_card.layer presentationLayer] hitTest:origini]; // not really sure what this does
        [self.view addSubview:_card];
        int randomSpeed = rand() % 20;
        int randomDelay = rand() % 2;
        [UIView animateWithDuration:randomSpeed + 10
                              delay:randomDelay + 4
                            options:UIViewAnimationOptionAllowUserInteraction // here is the option that I thought would allow me to interact with the moving cards. Not sure why I can't.
                         animations:^{
                             _card.frame = cardFramei;
                         }
                         completion:NULL];
    }
}
Notice that the following method is where I put the CALayer and hit-test logic. I'm not sure if I'm doing this correctly.
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    UIView *myTouchedView = [t view];
    CGPoint thePoint = [t locationInView:self.view];
    thePoint = [self.view.layer convertPoint:thePoint toLayer:self.view.layer.superlayer];
    CALayer *theLayer = [self.view.layer hitTest:thePoint];
    if ([_card.layer.presentationLayer hitTest:thePoint])
    {
        NSLog(@"You touched a Shape!"); // This only logs when I touch a non-moving shape
    }
    else {
        NSLog(@"background touched"); // this logs when I touch the background or a moving shape
    }
    if (myTouchedView == _card)
    {
        NSLog(@"Touched a card");
        _boolHasCard = YES;
    }
    else
    {
        NSLog(@"Didn't touch a card");
        _boolHasCard = NO;
    }
}
I want the following method to work on moving shapes, but it only works on non-moving shapes. Many answers say to have the touch ask which class the touched view belongs to. As of now all my cards come from the same class (the view controller class). When I tried to make the cards their own class, I had trouble getting that view to appear on my main background controller. Must the various cards come from different classes for this to work, or can I make it work without needing to do so?
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch view] == self.card)
    {
        CGPoint location = [touch locationInView:self.view];
        self.card.center = location;
    }
}
This next method resets the movement of a card if the user starts moving it and then lifts up on it.
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (_boolHasCard == YES)
    {
        [UIView animateWithDuration:3
                              delay:0
                            options:UIViewAnimationOptionAllowUserInteraction
                         animations:^{
                             CGRect newCardOrigin = CGRectMake(1200, _card.center.y - 92/2, 92, 87);
                             _card.frame = newCardOrigin;
                         }
                         completion:NULL];
    }
}
@end
The short answer is, you can't.
Core Animation does not actually move the objects along the animation path; it moves the presentation layer of each object's layer.
The moment the animation begins, the system considers the object to already be at its destination.
There is no way around this if you want to use Core Animation.
You have a couple of choices.
You can set up a CADisplayLink on your view controller and roll your own animation, where you move the center of your views by a small amount on each call to the display link. This might lead to poor performance and jerky animation if you're animating a lot of objects however.
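For illustration, here is a rough sketch of that first approach. The cards array, the per-frame drift amount, and the method names are assumptions for this example, not part of the original code:

// Hypothetical sketch: drive the drift yourself with a CADisplayLink so the
// views' real frames move every frame and ordinary hit testing just works.
- (void)startDriftLoop {
    // Keep a reference to the link in a property if you need to invalidate it later.
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(stepAnimation:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)stepAnimation:(CADisplayLink *)link {
    for (UIImageView *card in self.cards) {   // self.cards is an assumed array of card views
        CGPoint center = card.center;
        center.x += 2.0;                      // per-frame drift; tune per card
        card.center = center;                 // the model frame moves, so touches hit it
    }
}

Because the real frames move every frame, touchesBegan: and hit testing behave normally without any presentation-layer tricks.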
You can add a gesture recognizer to the parent view that contains all your animations, and then use layer hit testing on the parent view's presentation layer to figure out which animating layer got tapped, then fetch that layer's delegate, which will be the view you are animating. I have a project on github that shows how to do this second technique. It only detects taps on a single moving view, but it will show you the basics: Core Animation demo project on github.
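A minimal sketch of the second approach, assuming the card image views are kept in a cards array and the cards are direct subviews of self.view (the handler name and property are invented for this example):

// Hypothetical tap handler attached to the container view.
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    // Point in the container view's coordinate space, which is also the
    // coordinate space of each card layer's superlayer.
    CGPoint point = [recognizer locationInView:self.view];
    for (UIImageView *card in self.cards) {   // self.cards is assumed
        // hitTest: takes a point in the superlayer's coordinates; the presentation
        // layer reflects where the card is actually drawn mid-animation.
        if ([card.layer.presentationLayer hitTest:point]) {
            NSLog(@"Tapped a moving card: %@", card);
            break;
        }
    }
}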
It looks to me that your problem is really with just an incomplete understanding of how to convert a point between coordinate spaces. This code works exactly as expected:
- (void)viewDidLoad
{
    [super viewDidLoad];
    CGPoint endPoint = CGPointMake([[self view] bounds].size.width,
                                   [[self view] bounds].size.height);
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
    animation.fromValue = [NSValue valueWithCGPoint:[[_imageView layer] position]];
    animation.toValue = [NSValue valueWithCGPoint:endPoint];
    animation.duration = 30.0f;
    [[_imageView layer] addAnimation:animation forKey:@"position"];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint thePoint = [t locationInView:self.view];
    thePoint = [[_imageView layer] convertPoint:thePoint
                                        toLayer:[[self view] layer]];
    if ([[_imageView layer].presentationLayer hitTest:thePoint])
    {
        NSLog(@"You touched a Shape!");
    }
    else {
        NSLog(@"background touched");
    }
}
Notice the line in particular:
thePoint = [[_imageView layer] convertPoint:thePoint
toLayer:[[self view] layer]];
When I tap on the image view while it's animating, I get "You touched a Shape!" in the console window, and I get "background touched" when I tap around it. That's what you're wanting, right?
Here's a sample project on Github
UPDATE
To help with your follow-up question in the comments, I've written the touchesBegan code a little differently. Imagine that you've added all of your image views to an array (cleverly named imageViews) when you created them. You would alter your code to look something like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint thePoint = [t locationInView:self.view];
    for (UIImageView *imageView in [self imageViews]) {
        // Convert into a local variable so we don't clobber thePoint between iterations
        CGPoint convertedPoint = [[imageView layer] convertPoint:thePoint
                                                         toLayer:[[self view] layer]];
        if ([[imageView layer].presentationLayer hitTest:convertedPoint]) {
            NSLog(@"Found it!!");
            break; // No need to keep iterating, we've found it
        } else {
            NSLog(@"Not this one!");
        }
    }
}
I'm not sure how expensive this is, so you may have to profile it, but it should do what you're expecting.
I have been trying to update two UIImageView elements almost simultaneously. While one should have its position changed (image01), the other should have its image changed (image02).
The problem here is that when I change image02's image file, image01's position data is reset to the center (which is the element's origin).
I am completely stuck here. Do any of you have any ideas on what is causing this?
Here is the code:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view];
    // Setting crosshair new position
    [self setCrossHairPosition:point];
    // Getting camera preview, and discovering pixel color at touch location
    // I also set colorName here
    [self captureNow:touch];
    // Getting file and inserting it in the UIImageView
    fileName = [NSString stringWithFormat:@"%@.png", colorName];
    _colorADDPreview.image = [UIImage imageNamed:fileName];
}
- (void)setCrossHairPosition:(CGPoint)point {
    CGRect crossHairRect = _crossHair.frame;
    crossHairRect.origin.x = point.x - (crossHairRect.size.width / 2);
    crossHairRect.origin.y = point.y - (crossHairRect.size.height / 2);
    _crossHair.frame = crossHairRect;
}
I realised that the problem was in the redrawing of the screen.
So instead of having the element inserted through Interface Builder, I inserted it programmatically and kept track of its position in a variable =]
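A minimal sketch of that idea follows; the class extension, property names, and image name are invented for this example and not taken from the original code:

// Hypothetical: create the crosshair in code instead of a storyboard/xib,
// and remember its last position so layout or redraw passes can't reset it.
@interface MyViewController ()   // hypothetical class name
@property (nonatomic, strong) UIImageView *crossHair;
@property (nonatomic, assign) CGPoint lastCrossHairPoint; // remembered position
@end

@implementation MyViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.crossHair = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"crosshair.png"]]; // image name assumed
    self.lastCrossHairPoint = self.view.center;
    self.crossHair.center = self.lastCrossHairPoint;
    [self.view addSubview:self.crossHair];
}

- (void)setCrossHairPosition:(CGPoint)point {
    self.lastCrossHairPoint = point;   // remember where the user put it
    self.crossHair.center = point;     // moving by center avoids the frame math
}

@end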
I've got two UIViews on my window: one to hold player scores (a sidebar), and a main play area. They both fit on the UIWindow, and neither scrolls. The user can drag UIButtons around on the main play area – but at present, they can drop them onto the sidebar. Once they do, they can't drag them again to bring them back, presumably because you're then tapping on the second view, which doesn't contain the button in question.
I'd like to prevent anything inside the main view being moved onto the sidebar view. I've managed this, but I need the drag to be released if the player's finger moves off that view. With the code below, the button keeps moving with the finger, but just won't go past the X coordinate of the view. How can I go about this? Dragging is enabled using this call:
[firstButton addTarget:self action:@selector(wasDragged:withEvent:) forControlEvents:UIControlEventTouchDragInside];
To this method:
- (void)wasDragged:(UIButton *)button withEvent:(UIEvent *)event
{
    if (button == firstButton) {
        UITouch *touch = [[event touchesForView:button] anyObject];
        CGPoint previousLocation = [touch previousLocationInView:button];
        CGPoint location = [touch locationInView:button];
        CGFloat delta_x = location.x - previousLocation.x;
        CGFloat delta_y = location.y - previousLocation.y;
        if ((button.center.x + delta_x) < 352)
        {
            button.center = CGPointMake(button.center.x + delta_x, button.center.y + delta_y);
        } else {
            button.center = CGPointMake(345, button.center.y + delta_y);
        }
    }
}
Implement the
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
touch-handling method, then check the location of the UITouch. If the location is outside the bounds you want to allow (the first view), don't move the button any further. You can also kill the touch sequence at the point the user drags outside that view, using a BOOL ivar:
// In the .h file
BOOL touchedOutside;

// In the .m file
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    touchedOutside = NO;
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (!touchedOutside) {
        UITouch *touch = [[event allTouches] anyObject];
        CGPoint location = [touch locationInView:firstView];
        if (location.x < UPPER_XLIMIT && location.x > LOWER_XLIMIT) {
            if (location.y < UPPER_YLIMIT && location.y > LOWER_YLIMIT) {
                // Moved within acceptable bounds
                button.center = location;
            }
        } else {
            // This will end the touch sequence
            touchedOutside = YES;
            // This is optional really, but you can implement
            // touchesCancelled: to handle the end of the touch
            // sequence, and execute the code immediately rather than
            // waiting for the user to remove the finger from the screen
            [self touchesCancelled:touches withEvent:event];
        }
    }
}
So I have a mutable array that holds several circles, which are UIViews.
Right now I have my touchesBegan method set up like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint touchLocation = [touch locationInView:self.view];
    for (Circle *circle in playerOneCircles)
    {
        if ([circle.layer.presentationLayer hitTest:touchLocation])
        {
            [circle playerTap:nil];
            break;
        }
    }
}
This works fine, but it causes problems with overlapping views.
I want other UIViews to also respond to the touchesBegan method (which would then trigger other methods), but if two objects overlap, my touchesBegan can trigger the wrong method.
So I would like to handle touches that only respond to certain objects instead of anyObject. How do I restrict the UITouch handling so it only works with objects from my mutable array?
EDIT
Added comments to answer your comment to explain the code.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // We want to find all the circles that contain the touch point.
    // A circle does not have to be "on top" (or even visible) to be touched.
    // All circles that contain the touch point will be added to the touchedCircles set.
    NSMutableSet *touchedCircles = [NSMutableSet set];
    // To support multiple touches, we have to look at every touch object.
    for (UITouch *touch in touches) {
        CGPoint touchLocation = [touch locationInView:self.view];
        // Search through our collection of circle objects. Any circle that
        // contains this touch point gets added to our collection of touched
        // circles. If you need to know which UITouch is "touching" each circle,
        // you will need to store that as well.
        for (Circle *circle in playerOneCircles) {
            if ([circle containsPoint:touchLocation]) {
                [touchedCircles addObject:circle];
            }
        }
    }
    // We have completed our search for all touches and all circles. If
    // any circle was touched, then it will be in the set of touchedCircles.
    if (touchedCircles.count) {
        // When any circle has been touched, we want to call some special method
        // to process the touched circles. Send the method the set of circles, so it
        // knows which circles were touched.
        [self methodAWithTouchedCircles:touchedCircles];
    } else {
        // None of our circles were touched, so call a different method.
        [self methodB];
    }
}
You would implement containsPoint for a Circle something like this...
- (BOOL)containsPoint:(CGPoint)point
{
    // Since each of our objects is a circle, to determine if a point is inside
    // the circle, we just want to know if the distance between that point and
    // the center of the circle is less than the radius of the circle.
    // If your circle is really contained inside a view, you can compute the radius
    // as one-half the width of the frame.
    // Otherwise, your circle may actually have its own radius property, in which case
    // you can just use the known radius.
    CGFloat radius = self.frame.size.width * 0.5;
    // This is just the Pythagorean theorem, or the distance formula:
    //     distance = sqrt((x0-x1)^2 + (y0-y1)^2)
    // and we want to check that
    //     distance < radius
    // By simple algebra, that is the same as checking
    //     distance^2 < radius^2
    // which saves us from having to compute the square root.
    CGFloat diffX = self.center.x - point.x;
    CGFloat diffY = self.center.y - point.y;
    return (diffX * diffX) + (diffY * diffY) < radius * radius;
}
I am developing a simple animation where a UIImageView moves along a UIBezierPath. Now I want to provide user interaction to the moving UIImageView, so that the user can guide it by touching the UIImageView and dragging it around the screen.
Change the frame (and therefore the position) of the image view in touchesMoved:withEvent:.
Edit: Some code
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch.view isEqual:self.view] || touch.view == nil) {
        return;
    }
    lastLocation = [touch locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch.view isEqual:self.view]) {
        return;
    }
    CGPoint location = [touch locationInView:self.view];
    CGFloat xDisplacement = location.x - lastLocation.x;
    CGFloat yDisplacement = location.y - lastLocation.y;
    CGRect frame = touch.view.frame;
    frame.origin.x += xDisplacement;
    frame.origin.y += yDisplacement;
    touch.view.frame = frame;
    lastLocation = location;
}
You should also implement touchesEnded:withEvent: and touchesCancelled:withEvent:.
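Exactly what goes in them depends on your app; a minimal sketch, assuming the same lastLocation ivar used above, might be:

// Hypothetical end-of-touch handling: nothing needs to move any more, so we
// simply stop tracking. Add any cleanup (snapping back, restarting an
// animation, etc.) here if your app needs it.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    lastLocation = CGPointZero;   // reset the tracked location
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // Treat a cancelled touch (incoming call, gesture recognizer takeover, ...)
    // the same as an ended one.
    [self touchesEnded:touches withEvent:event];
}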
So you want the user to be able to touch an image in the middle of a keyframe animation along a curved path, and drag it to a different location? What do you want to happen to the animation at that point?
You have multiple challenges.
First is detecting the touch on the object while a keyframe animation is "in flight".
To do that, you want to use the parent view's layer's presentation layer's hitTest method.
A layer's presentation layer represents the state of the layer at any given instant, including animations.
Once you detect touches on your view, you will need to get the image's current location from the presentation layer, stop the animation, and take over with touchesMoved:-driven dragging.
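As a rough sketch of that hand-off, assuming an imageView property that is being animated along the path and is a direct subview of self.view (the names are illustrative, not from the original code):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view];
    // Hit test the presentation layer, which reflects the in-flight animation.
    if ([self.imageView.layer.presentationLayer hitTest:point]) {      // self.imageView is assumed
        // Freeze the view where it is currently drawn...
        CGPoint currentPosition = [self.imageView.layer.presentationLayer position];
        [self.imageView.layer removeAllAnimations];        // ...stop the keyframe animation...
        self.imageView.layer.position = currentPosition;   // ...and sync the model layer to it.
        // From here on, touchesMoved: can drag the view directly.
    }
}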
I wrote a demo application that shows how to detect touches on an object that's being animated along a path. That would be a good starting point.
Take a look here:
Core Animation demo including detecting touches on a view while an animation is "in flight".
The easiest way would be subclassing UIImageView.
For simple dragging take a look at the code here (code borrowed from user MHC):
UIView drag (image and text)
Since you want to drag along a Bezier path, you'll have to modify touchesMoved:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *aTouch = [touches anyObject];
    // here you have the location of the user's finger
    CGPoint location = [aTouch locationInView:self.superview];
    [UIView beginAnimations:@"Dragging A DraggableView" context:nil];
    // the commented line would simply move the view to that point
    //self.frame = CGRectMake(location.x - offset.x, location.y - offset.y, self.frame.size.width, self.frame.size.height);
    // you need some kind of a function
    CGPoint calculatedPosition = [self calculatePositionForPoint:location];
    self.frame = CGRectMake(calculatedPosition.x, calculatedPosition.y, self.frame.size.width, self.frame.size.height);
    [UIView commitAnimations];
}
What exactly you do in -(CGPoint)calculatePositionForPoint:(CGPoint)location
is up to you. You could, for example, calculate the point on the Bezier path that is closest to location. For a simple test you can do:
-(CGPoint)calculatePositionForPoint:(CGPoint)location {
    return location;
}
Along the way you're going to have to decide what happens if the user wanders off too far from your precalculated Bezier path.
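One way to implement the closest-point idea, sketched here under the assumption that you keep an array of CGPoint values sampled from the Bezier path (self.pathPoints, a boxed-NSValue array invented for this example):

// Hypothetical: snap the drag location to the nearest precomputed sample
// point on the path. self.pathPoints is assumed to hold boxed CGPoints.
- (CGPoint)calculatePositionForPoint:(CGPoint)location {
    CGPoint bestPoint = location;
    CGFloat bestDistanceSquared = CGFLOAT_MAX;
    for (NSValue *value in self.pathPoints) {
        CGPoint candidate = [value CGPointValue];
        CGFloat dx = candidate.x - location.x;
        CGFloat dy = candidate.y - location.y;
        CGFloat distanceSquared = dx * dx + dy * dy;
        if (distanceSquared < bestDistanceSquared) {
            bestDistanceSquared = distanceSquared;
            bestPoint = candidate;   // closest sample so far
        }
    }
    return bestPoint;
}

The finer you sample the path when building pathPoints, the smoother the constrained drag will feel; comparing squared distances avoids a square root per sample.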