Control the animation by touch - objective-c

I have this type of code:
// create a UIImageView
UIImageView *rollDiceImageMainTemp = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"rollDiceAnimationImage1.png"]];
// position and size the UIImageView
rollDiceImageMainTemp.frame = CGRectMake(0, 0, 100, 100);
// create an array of images that will represent your animation (in this case the array contains 2 images but you will want more)
NSArray *savingHighScoreAnimationImages = [NSArray arrayWithObjects:
[UIImage imageNamed:@"rollDiceAnimationImage1.png"],
[UIImage imageNamed:@"rollDiceAnimationImage2.png"],
nil];
// set the new UIImageView to a property in your view controller
self.viewController.rollDiceImageMain = rollDiceImageMainTemp;
// release the UIImageView that you created with alloc and init to avoid a memory leak
[rollDiceImageMainTemp release];
// set the animation images, duration, and repeat count on your UIImageView
[self.viewController.rollDiceImageMain setAnimationImages:savingHighScoreAnimationImages];
[self.viewController.rollDiceImageMain setAnimationDuration:2.0];
[self.viewController.rollDiceImageMain setAnimationRepeatCount:3];
// start the animation
[self.viewController.rollDiceImageMain startAnimating];
// show the new UIImageView
[self.viewController.view addSubview:self.viewController.rollDiceImageMain];
Instead of calling startAnimating directly, is there any way to control this code using touchesMoved?

Actually, what you want is the - (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event method. Below is a sample of code that allows a UIView to move along the x-axis only. Adapt it to whatever you need your code to do.
- (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
CGPoint original = self.center;
UITouch* touch = [touches anyObject];
CGPoint location = [touch locationInView:self.superview];
// Taking the delta will give us a nice, smooth movement that will
// track with the touch.
float delta = location.x - original.x;
// We need to subtract half of the width of the view
// because we are using the view's center to reposition it.
float maxPos = self.superview.bounds.size.width
- (self.frame.size.width * 0.5f);
float minPos = self.frame.size.width * 0.5f;
float intendedPos = delta + original.x;
// Make sure they can't move the view off-screen
if (intendedPos > maxPos)
{
intendedPos = maxPos;
}
// Make sure they can't move the view off-screen
if (intendedPos < minPos)
{
intendedPos = minPos;
}
self.center = CGPointMake(intendedPos, original.y);
// We want to cancel all other touches for the view
// because we don't want the touchInside event firing.
[self touchesCancelled:touches withEvent:event];
// Pass on the touches to the super
[super touchesMoved:touches withEvent:event];
}
Notice here that I am applying the delta of the movement rather than snapping the view to the finger's absolute position. If you track the finger's absolute position you will get very erratic behavior that is very undesirable. Applying the delta gives nice, fluid movement of the view that tracks perfectly with the touch input.
Update: Also, for those wondering why I chose to multiply by 0.5f rather than divide by 2, the ARM processor doesn't support division in hardware, so there is a minuscule performance bump from using multiplication. An optimization like this wouldn't matter for code called only a few times during the life of a program, but this particular method is called many, many times while dragging, so in this case the multiplication might be worth it.

You can detect touches by overriding UIResponder methods like - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; in that method you can call [self.viewController.rollDiceImageMain startAnimating];.
And once the animation starts, you can stop it after some time and ignore further touches.
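For example, a minimal sketch, assuming rollDiceImageMain is configured as in the code above and that these overrides live in an object in the responder chain (a custom view or view controller) that actually receives the touches:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Start the dice animation when the user first touches the screen.
    if (![self.viewController.rollDiceImageMain isAnimating]) {
        [self.viewController.rollDiceImageMain startAnimating];
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Keep the animation running while the finger is moving.
    if (![self.viewController.rollDiceImageMain isAnimating]) {
        [self.viewController.rollDiceImageMain startAnimating];
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Stop the animation when the finger lifts.
    [self.viewController.rollDiceImageMain stopAnimating];
}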

Related

Background Sprite Image keeps being centred on screen

I'm trying to implement a scrollable background into my current GameScene. This is supposed to be done via Gesture Recognition, which I'm already using for Taps and moving other scene objects.
Unlike pretty much every other result returned by my Google searches, I don't want an infinitely scrolling background. It just needs to move with your finger, and stay where it's been moved.
The Problem:
I can move the background SKSpriteNode in my scene, but as soon as I try to move it again it snaps to the center and your scrolling effectively becomes useless. It keeps resetting itself.
Here's what I've got so far for moving my Sprites:
-(void)selectTouchedNode:(CGPoint)location
{
SKSpriteNode *node = (SKSpriteNode *)[self nodeAtPoint:location];
if ([self.selectedNode isEqual:node]){
if (![self.selectedNode isEqual:self.background]){
self.selectedNode = NULL;
}
}
if ([node isKindOfClass:[SKLabelNode class]]){
self.selectedNode = node.parent;
} else {
self.selectedNode = node;
}
NSLog(#"Node Selected: %# | Position: %f, %f",node.name,node.position.x,node.position.y);
}
- (void)respondToPan:(UIPanGestureRecognizer *)recognizer {
if (recognizer.state == UIGestureRecognizerStateBegan) {
// Get Touch Location in the View
CGPoint touchLocation = [recognizer locationInView:recognizer.view];
// Convert that Touch Location
touchLocation = [self convertPointFromView:touchLocation];
// Select Node at said Location.
[self selectTouchedNode:touchLocation];
} else if (recognizer.state == UIGestureRecognizerStateChanged) {
// Get the translation being performed on the sprite.
CGPoint translation = [recognizer translationInView:recognizer.view];
// Copy to another CGPoint
translation = CGPointMake(translation.x, -translation.y);
// Translate the currently selected object
[self translateMotion:recognizer Translation:translation];
// Reset translation to zero.
[recognizer setTranslation:CGPointZero inView:recognizer.view];
} else if (recognizer.state == UIGestureRecognizerStateEnded) {
// Fetch Current Location in View
CGPoint touchLocation = [recognizer locationInView:recognizer.view];
// Convert to location in game.
CGPoint correctLocation = [self convertPointFromView:touchLocation];
// If the selected node is the background node
if ([self.selectedNode isEqual:self.background]) {
NSLog(#"Scrolling the background: Node is: %#",self.selectedNode.name);
// Set up a scroll duration
float scrollDuration = 0.2;
// Get the new position based on what is allowed by the function
CGPoint newPos = [self backgroundPanPos:correctLocation];
NSLog(#"New Position: %f, %f",newPos.x,newPos.y);
// Remove all Actions from the background
[_selectedNode removeAllActions];
// Move the background to the new position with defined duration.
SKAction *moveTo = [SKAction moveTo:newPos duration:scrollDuration];
// SetTimingMode for a smoother transition
[moveTo setTimingMode:SKActionTimingEaseOut];
// Run the action
[_selectedNode runAction:moveTo];
} else {
// Otherwise, just put the damn node where the touch occurred.
self.selectedNode.position = correctLocation;
}
}
}
// NEW PLAN: Kill myself
- (void)translateMotion:(UIGestureRecognizer *)recognizer Translation:(CGPoint)translation {
// Fetch Location being touched
CGPoint touchLocation = [recognizer locationInView:recognizer.view];
// Convert to place in View
CGPoint location = [self convertPointFromView:touchLocation];
// Set node to that location
self.selectedNode.position = location;
}
- (CGPoint)backgroundPanPos:(CGPoint)newPos {
// Create a new point based on the touched location
CGPoint correctedPos = newPos;
return correctedPos;
}
What do I know so far?
I've tried printing the positions before the scrolling, when it ends, and when it gets initiated again.
The results are that the background does change position, and once you try to move it again it starts at those new coordinates; the screen has just repositioned itself over the centre of the sprite.
I'm not sure I 100% understand the situation you are describing, but I believe that it might be related to the anchor point of your background sprite node.
In your method:
- (void)translateMotion:(UIGestureRecognizer *)recognizer Translation:(CGPoint)translation
you have the line:
self.selectedNode.position = location;
Since your background sprite's anchor point is set to its center by default, any time you drag with a new touch it will snap the background sprite's center to the location of your finger.
In other words, background.sprite.position sets the coordinates for the background sprite's anchor point (which by default is the center of the sprite), and in this case any time you set a new position, it is moving the center to that position.
The solution in this case would be to shift the anchor point of the background sprite to be directly under the touch each time, so you are changing the position of the background relative to the point of the background the touch started on.
It's a little hard to explain, so here's some sample code to show you:
1) Create a new project in Xcode using the Sprite Kit Game template
2) Replace the contents of GameScene.m with the following:
#import "GameScene.h"
@interface GameScene ()
@property (nonatomic, strong) SKSpriteNode *sprite;
@end
@implementation GameScene
-(void)didMoveToView:(SKView *)view {
/* Setup your scene here */
self.sprite = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship"];
self.sprite.position = CGPointMake(CGRectGetMidX(self.frame),
CGRectGetMidY(self.frame));
[self addChild:self.sprite];
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
/* Called when a touch begins */
UITouch *touch = [touches anyObject];
CGPoint nodelocation = [touch locationInNode:self.sprite];
//[self adjustAnchorPointForSprite:self.sprite toLocation:nodelocation];
CGPoint location = [touch locationInNode:self];
self.sprite.position = location;
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInNode:self];
self.sprite.position = location;
}
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
}
-(void)adjustAnchorPointForSprite:(SKSpriteNode *)sprite toLocation:(CGPoint)location {
// Remember the sprite's current position
CGPoint originalPosition = sprite.position;
// Convert the coordinates of the passed-in location to be relative to the bottom left of the sprite (instead of its current anchor point)
CGPoint adjustedNodeLocation = CGPointMake(sprite.anchorPoint.x * sprite.size.width + location.x, sprite.anchorPoint.y * sprite.size.height + location.y);
// Move the anchor point of the sprite to match the passed-in location
sprite.anchorPoint = CGPointMake(adjustedNodeLocation.x / self.sprite.size.width, adjustedNodeLocation.y / self.sprite.size.height);
// Undo any change of position caused by moving the anchor point
self.sprite.position = CGPointMake(sprite.position.x - (sprite.position.x - originalPosition.x), sprite.position.y - (sprite.position.y - originalPosition.y));
}
-(void)update:(CFTimeInterval)currentTime {
/* Called before each frame is rendered */
}
@end
3) Run the project in the sim or on a device and click / touch the top tip of the space ship and start dragging.
4) Notice that the spaceship snaps its center point to where your touch is. If you drag and release it to a new location, and then touch the screen again, it snaps its center back to your finger.
5) Now uncomment the line:
[self adjustAnchorPointForSprite:self.sprite toLocation:nodelocation];
and run the project again.
6) See how you can now drag the ship where you want it, and when you touch it later, it stays in place and follows your finger from the point you touched it. This is because the anchor point is now being adjusted to the point under the touch each time a new touch begins.
Hopefully this gives you a solution you can use in your game as well.

How do I detect a touch event on a moving UIImageView?

When researching "How do I detect a touch event on a moving UIImageView?" I've come across several answers and tried to implement them in my app. Nothing I've come across seems to work. I'll explain what I'm trying to do and then post my code. Any thoughts, suggestions, comments or answers are appreciated!
My app has several cards floating across the screen from left to right. These cards are various colors, and the object of the game is to drag the cards down to their similarly colored corresponding containers. If the user doesn't touch and drag the cards fast enough, the cards will simply drift off the screen and points will be lost. The more cards contained in the correct containers, the better the score.
I've written code using Core Animation to have my cards float from left to right. This works. However, when attempting to touch a card and drag it toward its container, the touch isn't correctly detected on the card's UIImageView.
To test whether I'm properly implementing the code to move a card, I've also written some code that allows movement of a non-moving card. In that case my touch is detected and acted on accordingly.
Why can I only interact with stationary cards? After researching this quite a bit it seems that the code:
options:UIViewAnimationOptionAllowUserInteraction
is the key ingredient to get touches on my moving UIImageViews detected. However, when I tried this it didn't seem to have any effect.
Another key thing I may be doing wrong is not properly utilizing the correct presentation layer. I've added code like this to my project, and it also only works on non-moving objects:
UITouch *t = [touches anyObject];
UIView *myTouchedView = [t view];
CGPoint thePoint = [t locationInView:self.view];
if([_card.layer.presentationLayer hitTest:thePoint])
{
NSLog(#"You touched a Card!");
}
else{
NSLog(#"backgound touched");
}
After trying these types of things I'm getting stuck. Here is my code to understand this a bit more completely:
#import "RBViewController.h"
#import <QuartzCore/QuartzCore.h>
@interface RBViewController ()
@property (nonatomic, strong) UIImageView *card;
@end
@implementation RBViewController
- (void)viewDidLoad
{
srand(time (NULL)); // will be used for random colors, drift speeds, and locations of cards
[super viewDidLoad];
[self setOutFirstCardSet]; // this sends out 4 floating cards across the screen
// the following creates a stationary (non-drifting) card that I can drag around.
_card = [[UIImageView alloc] initWithFrame:CGRectMake(400,400,100,100)];
_card.image = [UIImage imageNamed:#"goodguyPINK.png"];
_card.userInteractionEnabled = YES;
[self.view addSubview:_card];
}
The following method sends out cards from a random location on the left side of the screen and uses Core Animation to drift the cards across the screen. Notice that the color of each card and the speed of the drift are randomly generated as well.
-(void) setOutFirstCardSet
{
for(int i=1; i < 5; i++) // sends out 4 shapes
{
CGRect cardFramei;
int startingLocation = rand() % 325;
CGRect cardOrigini = CGRectMake(-100,startingLocation + 37, 92, 87);
cardFramei.size = CGSizeMake(92, 87);
CGPoint origini;
origini.y = startingLocation + 37;
origini.x = 1200;
cardFramei.origin = origini;
_card.userInteractionEnabled = YES;
_card = [[UIImageView alloc] initWithFrame:cardOrigini];
int randomColor = rand() % 7;
if(randomColor == 0)
{
_card.image = [UIImage imageNamed:@"goodguy.png"];
}
else if (randomColor == 1)
{
_card.image = [UIImage imageNamed:@"goodguyPINK.png"];
}
else if (randomColor == 2)
{
_card.image = [UIImage imageNamed:@"goodGuyPURPLE.png"];
}
else if (randomColor == 3)
{
_card.image = [UIImage imageNamed:@"goodGuyORANGE.png"];
}
else if (randomColor == 4)
{
_card.image = [UIImage imageNamed:@"goodGuyLightPINK.png"];
}
else if (randomColor == 5)
{
_card.image = [UIImage imageNamed:@"goodGuyBLUE.png"];
}
else if (randomColor == 6)
{
_card.image = [UIImage imageNamed:@"goodGuyGREEN.png"];
}
_card.userInteractionEnabled = YES; // this is also written in my viewDidLoad method
[[_card.layer presentationLayer] hitTest:origini]; // not really sure what this does
[self.view addSubview:_card];
int randomSpeed = rand() % 20;
int randomDelay = rand() % 2;
[UIView animateWithDuration:randomSpeed + 10
delay: randomDelay + 4
options:UIViewAnimationOptionAllowUserInteraction // here is the method that I thought would allow me to interact with the moving cards. Not sure why I can't
animations: ^{
_card.frame = cardFramei;
}
completion:NULL];
}
}
Notice that the following method is where I put the CALayer and hit-test information. I'm not sure if I'm doing this correctly.
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *t = [touches anyObject];
UIView *myTouchedView = [t view];
CGPoint thePoint = [t locationInView:self.view];
thePoint = [self.view.layer convertPoint:thePoint toLayer:self.view.layer.superlayer];
CALayer *theLayer = [self.view.layer hitTest:thePoint];
if([_card.layer.presentationLayer hitTest:thePoint])
{
NSLog(#"You touched a Shape!"); // This only logs when I touch a non-moving shape
}
else{
NSLog(#"backgound touched"); // this logs when I touch the background or an moving shape.
}
if(myTouchedView == _card)
{
NSLog(#"Touched a card");
_boolHasCard = YES;
}
else
{
NSLog(#"Didn't touch a card");
_boolHasCard = NO;
}
}
I want the following method to work on moving shapes. It only works on non-moving shapes. Many answers say to have the touch ask which class the card is from. As of now all my cards are of the same class (the view controller's class). When trying to make the cards their own class, I had trouble getting that view to appear on my main background controller. Must the various cards be of different classes for this to work, or can it work without needing to do so?
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [[event allTouches] anyObject];
if([touch view]==self.card)
{
CGPoint location = [touch locationInView:self.view];
self.card.center=location;
}
}
This next method resets the movement of a card if the user starts moving it and then lifts up on it.
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
if(_boolHasCard == YES)
{
[UIView animateWithDuration:3
delay: 0
options:UIViewAnimationOptionAllowUserInteraction
animations: ^{
CGRect newCardOrigin = CGRectMake(1200,_card.center.y - 92/2, 92, 87);
_card.frame = newCardOrigin;
}
completion:NULL];
}
}
@end
The short answer is, you can't.
Core Animation does not actually move the objects along the animation path; it moves the presentation layer of the object's layer.
The moment the animation begins, the system thinks the object is at its destination.
There is no way around this if you want to use Core Animation.
You have a couple of choices.
You can set up a CADisplayLink on your view controller and roll your own animation, where you move the center of your views by a small amount on each call to the display link. This might lead to poor performance and jerky animation if you're animating a lot of objects, however.
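A minimal sketch of that display-link approach, purely for illustration (movingViews and step: are hypothetical names, not part of the original code):
- (void)startRollingMyOwnAnimation
{
    // The display link fires once per screen refresh.
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self
                                                             selector:@selector(step:)];
    [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)step:(CADisplayLink *)link
{
    // Because the views' real frames change on every frame, normal
    // hit testing and touch handling keep working while they move.
    for (UIView *view in self.movingViews) {
        view.center = CGPointMake(view.center.x + 1.0f, view.center.y);
    }
}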
You can add a gesture recognizer to the parent view that contains all your animations, and then use layer hit testing on the parent view's presentation layer to figure out which animating layer got tapped, then fetch that layer's delegate, which will be the view you are animating. I have a project on github that shows how to do this second technique. It only detects taps on a single moving view, but it will show you the basics: Core Animation demo project on github.
(up-votes always appreciated if you find this post helpful)
It looks to me like your problem is really just an incomplete understanding of how to convert a point between coordinate spaces. This code works exactly as expected:
- (void)viewDidLoad
{
[super viewDidLoad];
CGPoint endPoint = CGPointMake([[self view] bounds].size.width,
[[self view] bounds].size.height);
CABasicAnimation *animation = [CABasicAnimation
animationWithKeyPath:#"position"];
animation.fromValue = [NSValue valueWithCGPoint:[[_imageView layer] position]];
animation.toValue = [NSValue valueWithCGPoint:endPoint];
animation.duration = 30.0f;
[[_imageView layer] addAnimation:animation forKey:#"position"];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *t = [touches anyObject];
CGPoint thePoint = [t locationInView:self.view];
thePoint = [[_imageView layer] convertPoint:thePoint
toLayer:[[self view] layer]];
if([[_imageView layer].presentationLayer hitTest:thePoint])
{
NSLog(#"You touched a Shape!");
}
else{
NSLog(#"backgound touched");
}
}
Notice the line in particular:
thePoint = [[_imageView layer] convertPoint:thePoint
toLayer:[[self view] layer]];
When I tap on the image view while it's animating, I get "You touched a Shape!" in the console window, and I get "background touched" when I tap around it. That's what you want, right?
Here's a sample project on Github
UPDATE
To help with your follow-up question in the comments, I've written the touchesBegan code a little differently. Imagine that you've added all of your image views to an array (cleverly named imageViews) when you create them. You would alter your code to look something like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *t = [touches anyObject];
CGPoint thePoint = [t locationInView:self.view];
for (UIImageView *imageView in [self imageViews]) {
thePoint = [[imageView layer] convertPoint:thePoint
toLayer:[[self view] layer]];
if([[imageView layer].presentationLayer hitTest:thePoint]) {
NSLog(#"Found it!!");
break; // No need to keep iterating, we've found it
} else{
NSLog(#"Not this one!");
}
}
}
I'm not sure how expensive this is, so you may have to profile it, but it should do what you're expecting.

How can I detect which subview is touched when dealing various moving subviews?

The following code produces an animation of an image of a shape from the top of the screen and it drifts downward using core animation. When the user taps, it will log whether the user tapped the image (the shape) or if they missed the shape and therefore touched the background. This seems to work fine. However what about when I add in other images of shapes? I'm looking for suggestions as to how to build onto this code to allow for more detailed information to be logged.
Let's say I want to programmatically add in a UIImage of triangle, a UIImage of a square, and a UIImage of a circle. I want all three images to start drifting from top to bottom. They may even overlap each other as they transition. I want to be able to log "You touched the square!" or whatever the appropriate shape I've touched. I want to be able to do so even if the square is positioned in between the triangle and the circle but part of the square is showing so I can tap it. (This example shows I'm not just wanting to interact with the top-most layer)
How do I tweak this code to programmatically add in different UIImages (various shape images perhaps) and be able to log which shape I'm touching?
- (void)viewDidLoad
{
[super viewDidLoad];
CGPoint endPoint = CGPointMake([[self view] bounds].size.width,
[[self view] bounds].size.height);
CABasicAnimation *animation = [CABasicAnimation
animationWithKeyPath:#"position"];
animation.fromValue = [NSValue valueWithCGPoint:[[_imageView layer] position]];
animation.toValue = [NSValue valueWithCGPoint:endPoint];
animation.duration = 30.0f;
[[_imageView layer] addAnimation:animation forKey:@"position"];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *t = [touches anyObject];
CGPoint thePoint = [t locationInView:self.view];
thePoint = [[_imageView layer] convertPoint:thePoint toLayer:[[self view] layer]];
if([[_imageView layer].presentationLayer hitTest:thePoint])
{
NSLog(#"You touched a Shape!");
// for now I'm just logging this information. Eventually I want to have the shape follow my figure as I move it to a new location. I want everything else to continue animating but I when I touch a particular shape I want to have complete control on repositioning that specific shape. That's just some insight beyond the scope of this question. However feel free to comment about this if you have suggestions.
}
else{
NSLog(#"backgound touched");
}
}
I'm thinking the answer to this may have something to do with looping through the various subviews. Look at how I'm thinking I might change the -touchesBegan method:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *t = [touches anyObject];
CGPoint thePoint = [t locationInView:self.view];
for (UIView *myView in viewArray) {
if (CGRectContainsPoint(myView.frame, thePoint)) {....
Notice here I set up a viewArray and have put all my subviews in the viewArray. Is this something I should be using? Or perhaps something like the following if I was going to loop through my layers:
for(CALayer *mylayer in self.view.layer.sublayers)
No matter how much I try looping through my views and/or layers, I can't seem to get this to work. I feel like I may just be missing something obvious...
I think the culprit is the line where you change the coordinate system for thePoint. It should probably read convertPoint:fromLayer:, because prior to the execution of that line your point is in the coordinate system of self.view, and I'm assuming that you would like it to be in that of the imageView. Alternately, you might skip that line altogether and call [t locationInView:_imageView] instead.
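For reference, the check can also be written without any manual point conversion, since CALayer's -hitTest: expects a point in the receiver's superlayer's coordinate space. A sketch, assuming (as in the earlier answer) that the animating image views are direct subviews of self.view, are stored in an imageViews array, and that self.view itself is not being animated:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    // A point in self.view is already in the coordinate space of each
    // image view layer's superlayer, which is what -hitTest: wants.
    CGPoint pointInSuperview = [t locationInView:self.view];
    for (UIImageView *imageView in [self imageViews]) {
        if ([[imageView layer].presentationLayer hitTest:pointInSuperview]) {
            NSLog(@"You touched a Shape!");
            break;
        } else {
            NSLog(@"Not this one!");
        }
    }
}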

If statement to choose between multiple CALayer

I have a Core Animation image on boxLayer, and I'm duplicating it, changing the action of the second layer (boxLayer2) and shifting its position, so that someone can choose between the two.
I want the user to be able to tap the image for boxLayer so that the boxLayer2 image does nothing but boxLayer moves (I didn't include my animation code beyond receiving the touch), and vice versa.
I cannot get an if statement to work. I've tried multiple variations like self.layer == boxLayer or CALayer == boxLayer ... sublayers is an array, so that's out. Any help or explanation would be greatly appreciated, as I know I'm missing something.
Thanks!
UIView *BounceView is declared in the VC
In BounceView I have 2 CALayers declared: boxLayer & boxLayer2
BounceView.m
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
[self setBackgroundColor:[UIColor clearColor]];
// Create the new layer object
boxLayer = [[CALayer alloc] init];
boxLayer2 = [[CALayer alloc] init];
// Give it a size
[boxLayer setBounds:CGRectMake(0.0, 0.0, 185.0, 85.0)];
[boxLayer2 setBounds:CGRectMake(0.0, 0.0, 185.0, 85.0)];
// Give it a location
[boxLayer setPosition:CGPointMake(150.0, 140.0)];
[boxLayer2 setPosition:CGPointMake(150.0, 540.0)];
// Create a UIImage
UIImage *layerImage = [UIImage imageNamed:@"error-label.png"];
UIImage *layerImage2 = [UIImage imageNamed:@"error-label.png"];
// Get the underlying CGImage
CGImageRef image = [layerImage CGImage];
CGImageRef image2 = [layerImage2 CGImage];
// Put the CGImage on the layer
[boxLayer setContents:(__bridge id)image];
[boxLayer2 setContents:(__bridge id)image2];
// Let the image resize (without changing the aspect ratio)
// to fill the contentRect
[boxLayer setContentsGravity:kCAGravityResizeAspect];
[boxLayer2 setContentsGravity:kCAGravityResizeAspect];
// Make it a sublayer of the view's layer
[[self layer] addSublayer:boxLayer];
[[self layer] addSublayer:boxLayer2];
}
return self;
}
- (void)touchesBegan:(NSSet *)touches
withEvent:(UIEvent *)event
{
if (CAlayer == boxLayer)
{
// do something
}
else
{
// do something else
}
}
It looks to me like you are trying to figure out which layer the user tapped inside touchesBegan, and that this is your problem.
How to find out what layer was tapped
CALayer has an instance method - (CALayer *)hitTest:(CGPoint)thePoint that
Returns the farthest descendant of the receiver in the layer hierarchy (including itself) that contains a specified point.
So to find out what layer you tapped you should do something like
- (void)touchesBegan:(NSSet *)touches
withEvent:(UIEvent *)event {
UITouch *anyTouch = [[event allTouches] anyObject];
CGPoint pointInView = [anyTouch locationInView:self];
// Now you can test what layer that was tapped ...
if ([boxLayer hitTest:pointInView]) {
// do something with boxLayer
}
// the rest of your code
}
This works because hitTest will return nil if the point is outside the layer's bounds.
David Rönnqvist's post tells you how to use hitTest on the layer to figure out which layer was touched. That should work. I would code that method slightly differently, though. I would have my view's layer include boxLayer and boxLayer2 as sub-layers, and then send the hitTest method to the parent layer. It would then return the layer that contains the touch.
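A sketch of that variant, assuming (as in the init code above) that boxLayer and boxLayer2 are sublayers of the view's own layer and that the view sits inside a superview:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *anyTouch = [[event allTouches] anyObject];
    // -hitTest: takes a point in the receiver's superlayer coordinates,
    // so for self.layer we use the superview's coordinate space.
    CGPoint pointInSuperview = [anyTouch locationInView:self.superview];
    CALayer *tappedLayer = [self.layer hitTest:pointInSuperview];
    if (tappedLayer == boxLayer) {
        // do something with boxLayer
    } else if (tappedLayer == boxLayer2) {
        // do something with boxLayer2
    }
}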
It would be much simpler, though, if you use separate views, each with a layer that contains your content. Then you can use gesture recognizers on each view, and higher level Cocoa Touch code rather than CA code to manage taps. Cleaner and easier to maintain.
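For illustration, a sketch of that separate-view approach (setUpBoxViews and boxTapped: are hypothetical names, and the frame values are placeholders):
- (void)setUpBoxViews
{
    // One UIImageView per box, each with its own tap recognizer.
    UIImageView *boxView = [[UIImageView alloc] initWithImage:
                               [UIImage imageNamed:@"error-label.png"]];
    boxView.frame = CGRectMake(57.0, 97.0, 185.0, 85.0); // position as needed
    boxView.userInteractionEnabled = YES; // UIImageView disables this by default
    [self addSubview:boxView];

    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
                                       initWithTarget:self
                                               action:@selector(boxTapped:)];
    [boxView addGestureRecognizer:tap];
}

- (void)boxTapped:(UITapGestureRecognizer *)recognizer
{
    // recognizer.view tells you which box was tapped.
    NSLog(@"Tapped %@", recognizer.view);
}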

Dragging a UIView created after touchesBegan

I have a few UIView subclasses that exist in a kind of inventory. When you tap one, another draggable version is placed over top of it. I want the inventory versions to stay put while you drag others around. The code to make a duplicate works but dragging your finger around doesn't move it.
If I release and then start dragging the newly created version it moves as expected. I think this is because the original touches (that made the dupe) didn't have the draggable version in the responder chain.
A little code...
in my stationary "icon"...
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
[self.viewController placeDraggableItem:self.item WithPoint:self.frame.origin];
}
in my viewController...
- (void)placeDraggableItem:(Item *)item WithPoint:(CGPoint)point
{
DraggableItem *draggableItem = [[DraggableItem alloc] initWithImage:[UIImage imageNamed:item.graphic]];
draggableItem.frame = CGRectMake(point.x, scrollView.frame.origin.y + point.y, 64.0f, 64.0f);
[self.view addSubview:draggableItem];
[draggableItem release];
}
in my DraggableItem...
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
currentPoint = [[touches anyObject] locationInView:self];
}
- (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
CGPoint activePoint = [[touches anyObject] locationInView:self];
CGPoint newPoint = CGPointMake(self.center.x + (activePoint.x - currentPoint.x), self.center.y + (activePoint.y - currentPoint.y));
float midPointX = CGRectGetMidX(self.bounds);
if (newPoint.x > self.superview.bounds.size.width - midPointX)
newPoint.x = self.superview.bounds.size.width - midPointX;
else if (newPoint.x < midPointX) // If too far left...
newPoint.x = midPointX;
float midPointY = CGRectGetMidY(self.bounds);
if (newPoint.y > self.superview.bounds.size.height - midPointY)
newPoint.y = self.superview.bounds.size.height - midPointY;
else if (newPoint.y < midPointY) // If too far up...
newPoint.y = midPointY;
self.center = newPoint;
}
Now, again, creating the draggable version works, and the draggable version can be moved after you first release your initial touch. But I think I need to get the newly created UIView to respond to the touches that were originally made for the "icon".
Any ideas?
I'm aware this question is kind of similar to this one: How to "transfer" first responder from one UIView to another? but in that case the view which should receive the touches is already there whereas I need to pass touches onto a newly created view.
I don't know of any way to hand touches off to a newly created view, though that doesn't mean there isn't a way to do it. It looks like you really only want your draggable view to handle the touches, I would probably create each draggable view with with an alpha of 0 at the same time you create the non draggable views, and in touchesBegan set it to 1. If your non-draggable views don't need to handle touches, then it doesn't make sense to make them handle touches just to pass them along.