Converting touchesBegan to ccTouchesBegan? - objective-c

I am not sure whether an approach that works in touchesBegan can also be used in ccTouchesBegan or the other touch callbacks. For example, I have this code:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSUInteger numTaps = [[touches anyObject] tapCount];
    touchPhaseText.text = @"Phase: Touches began";
    touchInfoText.text = @"";
    if (numTaps >= 2) {
        touchInfoText.text = [NSString stringWithFormat:@"%lu taps", (unsigned long)numTaps];
        if ((numTaps == 2) && piecesOnTop) {
            // A double tap positions the three pieces in a diagonal.
            // The user will want to double tap when two or more pieces are on top of each other.
            if (firstPieceView.center.x == secondPieceView.center.x)
                secondPieceView.center = CGPointMake(firstPieceView.center.x - 50, firstPieceView.center.y - 50);
            if (firstPieceView.center.x == thirdPieceView.center.x)
                thirdPieceView.center = CGPointMake(firstPieceView.center.x + 50, firstPieceView.center.y + 50);
            if (secondPieceView.center.x == thirdPieceView.center.x)
                thirdPieceView.center = CGPointMake(secondPieceView.center.x + 50, secondPieceView.center.y + 50);
            touchInstructionsText.text = @"";
        }
    } else {
        touchTrackingText.text = @"";
    }
    // Enumerate through all the touch objects.
    NSUInteger touchCount = 0;
    for (UITouch *touch in touches) {
        // Send to the dispatch method, which will make sure the appropriate subview is acted upon.
        [self dispatchFirstTouchAtPoint:[touch locationInView:self] forEvent:nil];
        touchCount++;
    }
}
// Checks to see which view, or views, the point is in and then calls a method to perform the opening animation,
// which makes the piece slightly larger, as if it is being picked up by the user.
-(void)dispatchFirstTouchAtPoint:(CGPoint)touchPoint forEvent:(UIEvent *)event
{
    if (CGRectContainsPoint([firstPieceView frame], touchPoint)) {
        [self animateFirstTouchAtPoint:touchPoint forView:firstPieceView];
    }
    if (CGRectContainsPoint([secondPieceView frame], touchPoint)) {
        [self animateFirstTouchAtPoint:touchPoint forView:secondPieceView];
    }
    if (CGRectContainsPoint([thirdPieceView frame], touchPoint)) {
        [self animateFirstTouchAtPoint:touchPoint forView:thirdPieceView];
    }
}
If my code looks like this, how should I convert it to ccTouchesBegan?

The same code should be reusable for the ccTouches* callbacks. The one additional requirement is that you return either
kEventHandled - to indicate that the event has been handled and stop forwarding it to the next handler in the chain, or
kEventIgnored - to continue forwarding the event to the next handler in the chain.
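For illustration, here is a minimal sketch of the converted callback, assuming a cocos2d version whose ccTouches* callbacks return the event-handling codes described above (the dispatch method is the one from the question; everything else is unchanged):
-(BOOL)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in touches) {
        CGPoint point = [touch locationInView:[touch view]];
        // In a cocos2d layer you would normally convert this point into
        // cocos2d's flipped coordinate space via the Director before hit
        // testing (the exact conversion call depends on the cocos2d version).
        [self dispatchFirstTouchAtPoint:point forEvent:nil];
    }
    return kEventHandled; // or kEventIgnored to keep forwarding the event
}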

Related

objective c: detect touch began/down when touch started from outside of view/image/button

I have something that resembles a keypad (one master view, nine subviews). The touch may begin in any of the nine subviews. I need to detect the touch down, track which subview the finger is dragged into, and detect when the user lifts their finger.
I found parts of the answer across many Stack Overflow replies. I am pasting my approach here (a) to help the next person and (b) as an invitation for better solutions.
// In IB, set the long press gesture's minimum duration to 0.1.
-(IBAction)handleLongPressOnVoices:(UILongPressGestureRecognizer *)gestureRecognizer
{
    CGPoint location = [gestureRecognizer locationInView:gestureRecognizer.view];
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        [self whichVoiceSelected:location];
    } else if (gestureRecognizer.state == UIGestureRecognizerStateChanged) {
        [self whichVoiceSelected:location];
    } else if (gestureRecognizer.state == UIGestureRecognizerStateEnded) {
        [self.avPlayer stop];
    }
}
-(void)whichVoiceSelected:(CGPoint)location {
    int updatedSelectedVoiceNumber;
    if (CGRectContainsPoint(self.voice0.frame, location)) {
        updatedSelectedVoiceNumber = 0;
        self.statusLabelOutlet.text = @"...";
    } else if (CGRectContainsPoint(self.voice1.frame, location)) {
        updatedSelectedVoiceNumber = 1;
        self.statusLabelOutlet.text = @"...";
    } else if (CGRectContainsPoint(self.voice2.frame, location)) {
        updatedSelectedVoiceNumber = 2;
        self.statusLabelOutlet.text = @"...";
    } ...
    ...
}
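As a simpler alternative sketch (assuming the nine subviews have userInteractionEnabled set to NO, so the master view receives all the touches), the same dispatch could be driven by the plain touch callbacks instead of a long-press recognizer:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self whichVoiceSelected:[[touches anyObject] locationInView:self.view]];
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Fires repeatedly as the finger drags into other subviews.
    [self whichVoiceSelected:[[touches anyObject] locationInView:self.view]];
}
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self.avPlayer stop];
}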

How do I detect a touch event on a moving UIImageView?

When researching "How do I detect a touch event on a moving UIImageView?" I've come across several answers and tried to implement them in my app. Nothing I've come across seems to work. I'll explain what I'm trying to do and then post my code. Any thoughts, suggestions, comments or answers are appreciated!
My app has several cards floating across the screen from left to right. The cards are various colors, and the object of the game is to drag the cards down to their similarly colored corresponding containers. If the user doesn't touch and drag the cards fast enough, the cards simply drift off the screen and points are lost. The more cards placed in the correct containers, the better the score.
I've written code using Core Animation to have my cards float from left to right. This works. However, when attempting to touch a card and drag it toward its container, the touch isn't correctly detected on the card's UIImageView.
To test whether I'm properly implementing the code to move a card, I've also written some code that allows movement of a non-moving card. In this case my touch is detected and handled accordingly.
Why can I only interact with stationary cards? After researching this quite a bit, it seems that the option
options:UIViewAnimationOptionAllowUserInteraction
is the key ingredient to get touches on my moving UIImageViews detected. However, trying it doesn't seem to have any effect.
Another key thing I may be doing wrong is not hit testing against the correct presentation layer. I've added code like this to my project, and it also only works on non-moving objects:
UITouch *t = [touches anyObject];
UIView *myTouchedView = [t view];
CGPoint thePoint = [t locationInView:self.view];
if ([_card.layer.presentationLayer hitTest:thePoint])
{
    NSLog(@"You touched a Card!");
}
else {
    NSLog(@"background touched");
}
After trying these types of things, I'm stuck. Here is my code, to give a more complete picture:
#import "RBViewController.h"
#import <QuartzCore/QuartzCore.h>
#interface RBViewController ()
#property (nonatomic, strong) UIImageView *card;
#end
#implementation RBViewController
- (void)viewDidLoad
{
srand(time (NULL)); // will be used for random colors, drift speeds, and locations of cards
[super viewDidLoad];
[self setOutFirstCardSet]; // this sends out 4 floating cards across the screen
// the following creates a non-moving image that I can move.
_card = [[UIImageView alloc] initWithFrame:CGRectMake(400,400,100,100)];
_card.image = [UIImage imageNamed:#"goodguyPINK.png"];
_card.userInteractionEnabled = YES;
[self.view addSubview:_card];
}
The following method sends out cards from a random location on the left side of the screen and uses Core Animation to drift each card across the screen. Notice that the color of the card and the speed of the drift are randomly generated as well.
-(void)setOutFirstCardSet
{
    for (int i = 1; i < 5; i++) // sends out 4 shapes
    {
        CGRect cardFramei;
        int startingLocation = rand() % 325;
        CGRect cardOrigini = CGRectMake(-100, startingLocation + 37, 92, 87);
        cardFramei.size = CGSizeMake(92, 87);
        CGPoint origini;
        origini.y = startingLocation + 37;
        origini.x = 1200;
        cardFramei.origin = origini;
        _card = [[UIImageView alloc] initWithFrame:cardOrigini];
        int randomColor = rand() % 7;
        if (randomColor == 0)
        {
            _card.image = [UIImage imageNamed:@"goodguy.png"];
        }
        else if (randomColor == 1)
        {
            _card.image = [UIImage imageNamed:@"goodguyPINK.png"];
        }
        else if (randomColor == 2)
        {
            _card.image = [UIImage imageNamed:@"goodGuyPURPLE.png"];
        }
        else if (randomColor == 3)
        {
            _card.image = [UIImage imageNamed:@"goodGuyORANGE.png"];
        }
        else if (randomColor == 4)
        {
            _card.image = [UIImage imageNamed:@"goodGuyLightPINK.png"];
        }
        else if (randomColor == 5)
        {
            _card.image = [UIImage imageNamed:@"goodGuyBLUE.png"];
        }
        else if (randomColor == 6)
        {
            _card.image = [UIImage imageNamed:@"goodGuyGREEN.png"];
        }
        _card.userInteractionEnabled = YES; // this is also written in my viewDidLoad method
        [[_card.layer presentationLayer] hitTest:origini]; // not really sure what this does
        [self.view addSubview:_card];
        int randomSpeed = rand() % 20;
        int randomDelay = rand() % 2;
        [UIView animateWithDuration:randomSpeed + 10
                              delay:randomDelay + 4
                            options:UIViewAnimationOptionAllowUserInteraction // here is the option that I thought would allow me to interact with the moving cards. Not sure why I can't.
                         animations:^{
                             _card.frame = cardFramei;
                         }
                         completion:NULL];
    }
}
Notice that the following method is where I put the CALayer and hit test logic. I'm not sure if I'm doing this correctly.
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    UIView *myTouchedView = [t view];
    CGPoint thePoint = [t locationInView:self.view];
    thePoint = [self.view.layer convertPoint:thePoint toLayer:self.view.layer.superlayer];
    CALayer *theLayer = [self.view.layer hitTest:thePoint];
    if ([_card.layer.presentationLayer hitTest:thePoint])
    {
        NSLog(@"You touched a Shape!"); // This only logs when I touch a non-moving shape.
    }
    else {
        NSLog(@"background touched"); // This logs when I touch the background or a moving shape.
    }
    if (myTouchedView == _card)
    {
        NSLog(@"Touched a card");
        _boolHasCard = YES;
    }
    else
    {
        NSLog(@"Didn't touch a card");
        _boolHasCard = NO;
    }
}
I want the following method to work on moving shapes, but it only works on non-moving ones. Many answers say to have the touch ask which class the touched view belongs to. As of now all my cards are created in the same class (the view controller). When I tried making the cards their own class, I had trouble getting that view to appear on my main background controller. Must the various cards be their own classes for this to work, or can it work without that?
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch view] == self.card)
    {
        CGPoint location = [touch locationInView:self.view];
        self.card.center = location;
    }
}
This next method resets the movement of a card if the user starts moving it and then lifts their finger.
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (_boolHasCard == YES)
    {
        [UIView animateWithDuration:3
                              delay:0
                            options:UIViewAnimationOptionAllowUserInteraction
                         animations:^{
                             CGRect newCardOrigin = CGRectMake(1200, _card.center.y - 92/2, 92, 87);
                             _card.frame = newCardOrigin;
                         }
                         completion:NULL];
    }
}

@end
The short answer is, you can't.
Core Animation does not actually move objects along the animation path; it moves the presentation layer of each object's layer.
The moment the animation begins, the system considers the object to already be at its destination.
There is no way around this if you want to use Core Animation.
You have a couple of choices.
You can set up a CADisplayLink on your view controller and roll your own animation, moving the center of your views by a small amount on each display link callback. This might lead to poor performance and jerky animation if you're animating a lot of objects, however.
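A minimal sketch of that display-link approach (the cards array and the per-frame step are hypothetical names for illustration, not from the question):
- (void)startDrifting
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(step:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}
- (void)step:(CADisplayLink *)link
{
    for (UIImageView *card in self.cards) { // self.cards: hypothetical array of card views
        // Because the model layer itself moves each frame, ordinary touch
        // handling and hit testing see the card's real on-screen position.
        card.center = CGPointMake(card.center.x + 1.0, card.center.y);
    }
}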
You can add a gesture recognizer to the parent view that contains all your animations, use layer hit testing on the parent view's presentation layer to figure out which animating layer got tapped, and then fetch that layer's delegate, which will be the view you are animating. I have a project on GitHub that shows how to do this second technique. It only detects taps on a single moving view, but it will show you the basics: Core Animation demo project on github.
(up-votes always appreciated if you find this post helpful)
It looks to me that your problem is really just an incomplete understanding of how to convert a point between coordinate spaces. This code works exactly as expected:
- (void)viewDidLoad
{
    [super viewDidLoad];
    CGPoint endPoint = CGPointMake([[self view] bounds].size.width,
                                   [[self view] bounds].size.height);
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
    animation.fromValue = [NSValue valueWithCGPoint:[[_imageView layer] position]];
    animation.toValue = [NSValue valueWithCGPoint:endPoint];
    animation.duration = 30.0f;
    [[_imageView layer] addAnimation:animation forKey:@"position"];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint thePoint = [t locationInView:self.view];
    thePoint = [[_imageView layer] convertPoint:thePoint
                                        toLayer:[[self view] layer]];
    if ([[_imageView layer].presentationLayer hitTest:thePoint])
    {
        NSLog(@"You touched a Shape!");
    }
    else {
        NSLog(@"background touched");
    }
}
Notice this line in particular:
thePoint = [[_imageView layer] convertPoint:thePoint
                                    toLayer:[[self view] layer]];
When I tap on the image view while it's animating, I get "You touched a Shape!" in the console, and I get "background touched" when I tap around it. That's what you're wanting, right?
Here's a sample project on Github
UPDATE
To help with your follow-up question in the comments, I've written the touchesBegan code a little differently. Imagine that you've added all of your image views to an array (cleverly named imageViews) when you create them. You would alter your code to look something like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint thePoint = [t locationInView:self.view];
    for (UIImageView *imageView in [self imageViews]) {
        // Convert into a local variable so the original point isn't
        // clobbered between loop iterations.
        CGPoint convertedPoint = [[imageView layer] convertPoint:thePoint
                                                         toLayer:[[self view] layer]];
        if ([[imageView layer].presentationLayer hitTest:convertedPoint]) {
            NSLog(@"Found it!!");
            break; // No need to keep iterating, we've found it.
        } else {
            NSLog(@"Not this one!");
        }
    }
}
I'm not sure how expensive this is, so you may have to profile it, but it should do what you're expecting.

How do I use two actions in a UIPanGestureRecognizer?

I am working with two subviews. Each will be unique and have its own "action".
Subview 1 = User can drag around the view, rotate, and zoom it
Subview 2 = When the user moves a finger across the screen, an image is added at each point the finger touches.
I have both of these working using UIPanGestureRecognizer. My question is, how can I separate these two actions? I want to be able to add one subview, do what is required, and then when I add the other subview, prevent the previous actions from occurring.
Here is what I have tried; this is done in my panGesture method:
for (UIView *subview in imageView.subviews)
{
    if ([subview isKindOfClass:[UIImageView class]])
    {
        if (subview == _aImageView)
        {
            CGPoint translation = [panRecognizer translationInView:self.view];
            CGPoint imageViewPosition = _aImageView.center;
            imageViewPosition.x += translation.x;
            imageViewPosition.y += translation.y;
            _aImageView.center = imageViewPosition;
            [panRecognizer setTranslation:CGPointZero inView:self.view];
        }
        else if (subview == _bImageView)
        {
            currentTouch = [panRecognizer locationInView:self.view];
            CGFloat distance = [self distanceFromPoint:currentTouch ToPoint:prev_touchPoint];
            accumulatedDistance += distance;
            CGFloat fixedDistance = 60;
            if ([self distanceFromPoint:currentTouch ToPoint:prev_touchPoint] > fixedDistance)
            {
                [self addbImage];
                prev_touchPoint = currentTouch;
            }
        }
    }
}
If you want different gesture recognition in two different views, put separate recognizers on each view.
Usually, you want to have your view controller own and manage gesture recognizers, e.g.
- (void)viewDidLoad {
    self.panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
    self.panGesture.delegate = self;
    [self.viewX addGestureRecognizer:self.panGesture];
    // repeat with other recognizers...
}
Note that setting your controller as the delegate of the gesture recognizer is important: this enables you to implement the following delegate method from the view controller (which was the main question):
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    // handle your logic, which gestureRecognizer should proceed...
    return NO;
}
The handler method is the same in this example, but you can set up your own handlers as you like:
- (void)handleGesture:(UIGestureRecognizer *)gestureRecognizer {
    // handle gesture (usually sorted by state), e.g.
    // if (gestureRecognizer.state == UIGestureRecognizerStateEnded) { ... }
}
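For the two subviews in the question, a sketch of the separate-recognizer idea could look like this (the handler names are hypothetical; the bodies reuse the logic from the question's panGesture method):
- (void)viewDidLoad {
    [super viewDidLoad];
    // UIImageView has userInteractionEnabled == NO by default; enable it
    // or the recognizers will never fire.
    _aImageView.userInteractionEnabled = YES;
    _bImageView.userInteractionEnabled = YES;
    [_aImageView addGestureRecognizer:
        [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handleDragPan:)]];
    [_bImageView addGestureRecognizer:
        [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handleStampPan:)]];
}
- (void)handleDragPan:(UIPanGestureRecognizer *)recognizer {
    // Subview 1: drag the view around by the pan translation.
    CGPoint translation = [recognizer translationInView:self.view];
    CGPoint center = recognizer.view.center;
    recognizer.view.center = CGPointMake(center.x + translation.x, center.y + translation.y);
    [recognizer setTranslation:CGPointZero inView:self.view];
}
- (void)handleStampPan:(UIPanGestureRecognizer *)recognizer {
    // Subview 2: add an image as the finger moves (distance check omitted).
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        [self addbImage]; // method from the question
    }
}
Because each recognizer is attached to its own subview, neither handler needs to inspect which subview is being touched.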

CAEmitterLayer emits random unwanted particles on touch events

I'm trying to set up a CAEmitterLayer to make a confetti effect, and I've run into two issues:
Whenever I set the birthRate on my cells to something non-zero to start the animation, I get a flurry of cells placed randomly on screen; these animate normally, and the emitter continues to emit properly after that.
Whenever the emitter cells are drawing on screen, any time I touch the screen the emitter draws cells in (seemingly) random locations that live for a (seemingly) random amount of time. Nothing in the emitter is tied to touch events (i.e. I'm not intentionally drawing anything on a touch event), but the layer is in a view that has multiple embedded views. The more I touch, the more cells show up.
Here's my code for setting up the emitter, and for starting and stopping it (once I've called the stop function, taps on the screen cease creating new random elements):
- (void)setupConfetti
{
    self.confettiLayer = [CAEmitterLayer layer];
    [self.view.layer addSublayer:self.confettiLayer];
    [self.view.layer setNeedsDisplay];
    self.confettiLayer.emitterPosition = CGPointMake(1024.0/2, -50.0);
    self.confettiLayer.emitterSize = CGSizeMake(1000.0, 10.0);
    self.confettiLayer.emitterShape = kCAEmitterLayerLine;
    self.confettiLayer.renderMode = kCAEmitterLayerUnordered;

    CAEmitterCell *confetti = [CAEmitterCell emitterCell];
    confetti.contents = (id)[[UIImage imageNamed:@"confetti.png"] CGImage];
    confetti.emissionLongitude = M_PI;
    confetti.emissionLatitude = 0;
    confetti.lifetime = 5;
    confetti.birthRate = 0.0;
    confetti.velocity = 125;
    confetti.velocityRange = 50;
    confetti.yAcceleration = 50;
    confetti.spin = 0.0;
    confetti.spinRange = 10;
    confetti.name = @"confetti"; // must match the key path used to toggle birthRate below
    self.confettiLayer.emitterCells = [NSArray arrayWithObjects:confetti, nil];
}
To start the confetti:
- (void)startConfettiAnimation
{
    [self.confettiLayer setValue:[NSNumber numberWithInt:10] forKeyPath:@"emitterCells.confetti.birthRate"];
}
And to stop it:
- (void)stopConfettiAnimation
{
    [self.confettiLayer setValue:[NSNumber numberWithInt:0] forKeyPath:@"emitterCells.confetti.birthRate"];
}
Again, once it gets started, after the initial flurry of random elements this works just fine: everything animates normally, and when the birthRate is later set to zero it ends gracefully. It just seems to respond to touch events, and I have no idea why. I've tried adding the emitter layer to a different view, disabling user interaction on that view, and then adding it as a subview of the main view, and that didn't seem to work.
Any help/insight would be much appreciated!
Thanks,
Sam
I know this is an old post, but I also had this problem.
Jackslash answers it well in this post:
iOS 7 CAEmitterLayer spawning particles inappropriately
You need to set beginTime on your emitter layer to the current time with CACurrentMediaTime(). The problem occurs because the emitter's timeline starts in the past.
emitter.beginTime = CACurrentMediaTime();
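Applied to the setup code in the question, that is one extra line in setupConfetti:
// Anchor the emitter's timeline to "now" so cells are not back-dated and
// rendered mid-flight at random positions.
self.confettiLayer.beginTime = CACurrentMediaTime();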
Could it be that you aren't checking whether the particle system should be emitting, as in the Ray Wenderlich example Artur Ozieranski posted? I'm not seeing the doubling as long as the check is in place.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [fireView setEmitterPositionFromTouch:[touches anyObject]];
    [fireView setIsEmitting:YES];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [fireView setIsEmitting:NO];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [fireView setIsEmitting:NO];
}

-(void)setIsEmitting:(BOOL)isEmitting
{
    // turn on/off the emitting of particles
    [fireEmitter setValue:[NSNumber numberWithInt:isEmitting ? 200 : 0] forKeyPath:@"emitterCells.fire.birthRate"];
}
.h file
#import <UIKit/UIKit.h>

@interface DWFParticleView : UIView

-(void)setEmitterPositionFromTouch:(CGPoint *)t;
-(void)setIsEmitting:(BOOL)isEmitting;

@end
.m file
#import "DWFParticleView.h"
#import <QuartzCore/QuartzCore.h>
#implementation DWFParticleView
{
CAEmitterLayer* fireEmitter; //1
}
-(void)awakeFromNib
{
//set ref to the layer
fireEmitter = (CAEmitterLayer*)self.layer; //2
//configure the emitter layer
fireEmitter.emitterPosition = CGPointMake(50, 50);
fireEmitter.emitterSize = CGSizeMake(10, 10);
CAEmitterCell* fire = [CAEmitterCell emitterCell];
fire.birthRate = 0;
fire.lifetime = 1.5;
fire.lifetimeRange = 0.3;
fire.color = [[UIColor colorWithRed:255 green:255 blue:255 alpha:0.1] CGColor];
fire.contents = (id)[[UIImage imageNamed:#"Particles_fire.png"] CGImage];
[fire setName:#"fire"];
fire.velocity =5;
fire.velocityRange = 20;
fire.emissionRange = M_PI_2;
fire.scaleSpeed = 0.1;
fire.spin = 0.5;
fireEmitter.renderMode = kCAEmitterLayerAdditive;
//add the cell to the layer and we're done
fireEmitter.emitterCells = [NSArray arrayWithObject:fire];
}
+ (Class) layerClass //3
{
//configure the UIView to have emitter layer
return [CAEmitterLayer class];
}
-(void)setEmitterPositionFromTouch: (CGPoint*)t
{
//change the emitter's position
fireEmitter.emitterPosition = (*t);
}
-(void)setIsEmitting:(BOOL)isEmitting
{
//turn on/off the emitting of particles
[fireEmitter setValue:[NSNumber numberWithInt:isEmitting?100:0] forKeyPath:#"emitterCells.fire.birthRate"];
}
#end
I used this code to create a custom view and to emit particles on touch.
Here is the call for emitting particles on touch:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self.view];
    [fireView setEmitterPositionFromTouch:&p];
    [fireView setIsEmitting:YES];
}
Maybe it will work for you.

iPhone OS3 changes to UIScrollView subclasses

I have a subclass of UIScrollView that overrides
touchesBegan:withEvent:
touchesMoved:withEvent:
touchesEnded:withEvent:
Overriding these three seems to be a widely used technique (based on my observations in forums). However, as soon as I compiled this code against OS 3.0, these methods stopped being called. Has anyone else seen this problem? Is there a known fix that doesn't use undocumented methods?
My first attempt at a solution was to move all the touchesBegan/Moved/Ended methods down into my content view and set
delaysContentTouches = NO;
canCancelContentTouches = NO;
This worked partially, but left me unable to pan when zoomed in. My second attempt set canCancelContentTouches = NO only when there were two touches (thus passing the pinch gesture through to the content). This approach was sketchy and didn't work very well.
Any ideas? My requirement is that the scroll view must handle the pan touches, and I must handle the zoom touches.
My solution is not pretty. Basically, there is a scroll view that contains a content view. The scroll view does not implement touchesBegan/Moved/Ended at all. The content view maintains a pointer to its parent (called "parentScrollView" in this example). The content view handles the logic and uses [parentScrollView setCanCancelContentTouches:...] to determine whether to let the parent view cancel a touch event (and thus perform a scroll event). The tap-count logic is there because users rarely place both fingers on screen at exactly the same time, so the first touch must be ignored if it is very quickly followed by a second.
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (parentViewIsUIScrollView)
    {
        UIScrollView *parentScrollView = (UIScrollView *)self.superview;
        if ([touches count] == 1)
        {
            if ([[touches anyObject] tapCount] == 1)
            {
                if (numberOfTouches > 0)
                {
                    [parentScrollView setCanCancelContentTouches:NO];
                    //NSLog(@"cancel NO - touchesBegan - second touch");
                    numberOfTouches = 2;
                }
                else
                {
                    [parentScrollView setCanCancelContentTouches:YES];
                    //NSLog(@"cancel YES - touchesBegan - first touch");
                    numberOfTouches = 1;
                }
            }
            else
            {
                numberOfTouches = 1;
                [parentScrollView setCanCancelContentTouches:NO];
                //NSLog(@"cancel NO - touchesBegan - doubletap");
            }
        }
        else
        {
            [parentScrollView setCanCancelContentTouches:NO];
            //NSLog(@"cancel NO - touchesBegan");
            numberOfTouches = 2;
        }
        //NSLog(@"numberOfTouches_touchesBegan = %i", numberOfTouches);
    }
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (touchesCrossed)
        return;
    if (parentViewIsUIScrollView)
    {
        UIScrollView *parentScrollView = (UIScrollView *)self.superview;
        NSArray *thoseTouches = [[event touchesForView:self] allObjects];
        if ([thoseTouches count] != 2)
            return;
        numberOfTouches = 2;
        /* compute and perform pinch event */
        [self setNeedsDisplay];
        [parentScrollView setContentSize:self.frame.size];
    }
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    touchesCrossed = NO;
    if (parentViewIsUIScrollView)
    {
        numberOfTouches = MAX(numberOfTouches - [touches count], 0);
        [(UIScrollView *)self.superview setCanCancelContentTouches:YES];
        //NSLog(@"cancel YES - touchesEnded");
        //NSLog(@"numberOfTouches_touchesEnded = %i", numberOfTouches);
    }
}