Cocos2d is pretty new to me, so I don't know what to do in this situation:
I want to make a game that's something like Risk. I've made a background image of a world map (just to test), and on this map I want a swipe gesture so I can move across the map on my iPad (the map is pretty big, so I want to drag it around).
My problem is that I don't know which objects I should use, or how best to implement the gestures (do I need to calculate the movement myself?).
Thanks!
Stefan.
You could connect UIKit's pan gesture recognizer to CCDirector's view and handle the pan gesture in your CCLayer subclass. That way you have a handler method that moves the background with every pan movement. (The code is for cocos2d 1.0.1; something similar can be done with the 2.0 version.)
UIPanGestureRecognizer *pan = [[[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)] autorelease];
CCDirector *director = [CCDirector sharedDirector];
[[director openGLView] addGestureRecognizer:pan];
The handler method would look like this:
- (void)handlePanGesture:(UIGestureRecognizer *)gestureRecognizer {
    // If more than one pan gesture recognizer is connected to this method,
    // keep a reference to pan and check that gestureRecognizer is equal to it
    switch (gestureRecognizer.state) {
        case UIGestureRecognizerStateBegan: {
            // Do whatever needs to be done when the pan gesture starts
            break;
        }
        case UIGestureRecognizerStateChanged: {
            // Get the pan gesture recognizer's translation
            CGPoint translation = [(UIPanGestureRecognizer *)gestureRecognizer translationInView:gestureRecognizer.view];
            // Invert Y since position and offset are calculated in GL coordinates
            translation = ccp(translation.x, -translation.y);
            // Here you should move your background, probably in the opposite direction of the translation vector, something like
            background.position = ccp(background.position.x - translation.x, background.position.y - translation.y);
            // Reset the pan gesture recognizer's translation
            [(UIPanGestureRecognizer *)gestureRecognizer setTranslation:CGPointZero inView:gestureRecognizer.view];
            break;
        }
        case UIGestureRecognizerStateEnded: {
            // Do any work that should happen after panning finishes
            break;
        }
        default:
            break;
    }
}
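If the map is bigger than the screen, you will probably also want to clamp the background position inside the UIGestureRecognizerStateChanged case so the map edges never scroll into view. A small sketch of that idea, assuming the background sprite keeps its default anchorPoint of (0.5, 0.5) and is a direct child of the layer:

// Clamp so the map always covers the whole screen
// (assumes the default 0.5/0.5 anchorPoint on the background sprite)
CGSize winSize = [[CCDirector sharedDirector] winSize];
CGSize mapSize = background.contentSize;

float minX = winSize.width  - mapSize.width  / 2;  // right edge of map at right edge of screen
float maxX = mapSize.width  / 2;                   // left edge of map at left edge of screen
float minY = winSize.height - mapSize.height / 2;
float maxY = mapSize.height / 2;

CGPoint pos = background.position;
pos.x = MIN(MAX(pos.x, minX), maxX);
pos.y = MIN(MAX(pos.y, minY), maxY);
background.position = pos;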
I think you're looking for this to add an object:
CCSprite *objectName = [CCSprite spriteWithFile:@"fileName.png"];
[self addChild:objectName];
By default, I believe the object will be in the bottom left corner.
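If you want it somewhere else, set its position (and, if convenient, its anchorPoint). A minimal sketch, assuming the cocos2d 1.x API and the objectName sprite from above:

CGSize winSize = [[CCDirector sharedDirector] winSize];

// Option 1: center the sprite on the screen
objectName.position = ccp(winSize.width / 2, winSize.height / 2);

// Option 2: anchor the sprite at its bottom-left corner and pin it to the screen origin
objectName.anchorPoint = ccp(0, 0);
objectName.position = CGPointZero;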
Reading through the documentation here.
I know how to successfully set up PAN gestures for a C4 object. How would I disable a PAN gesture, though?
Using...
[object setUserInteractionEnabled:NO]
... disables all gestures, including TAP events, and...
object.gestureRecognizers = NO
... doesn't allow me to reinitialize PAN gestures.
If anyone could share how to disable PAN gestures (toggle PAN on/off) without affecting other gesture events, it would be greatly appreciated.
You can get access to the gestures that you add to an object by using the gestureForName: method, which returns a UIGestureRecognizer object. From there, you can interact with that gesture recognizer and change its properties directly.
To toggle on/off a gesture recognizer, all you have to do is change the value of its enabled property.
The following works for me:
#import "C4WorkSpace.h"

@implementation C4WorkSpace {
    UIGestureRecognizer *gesture;
    C4Shape *square, *circle;
}

-(void)setup {
    square = [C4Shape rect:CGRectMake(0, 0, 100, 100)];
    square.center = self.canvas.center;

    circle = [C4Shape ellipse:square.frame];
    circle.center = CGPointMake(square.center.x, square.center.y + 200);
    [self listenFor:@"touchesBegan" fromObject:circle andRunMethod:@"toggle"];

    [self.canvas addObjects:@[square, circle]];

    [square addGesture:PAN name:@"thePan" action:@"move:"];
    gesture = [square gestureForName:@"thePan"];
}

-(void)toggle {
    gesture.enabled = !gesture.isEnabled;
    if (gesture.enabled == YES) square.fillColor = C4GREY;
    else square.fillColor = C4RED;
}

@end
The key part of this example is the following:
[square addGesture:PAN name:@"thePan" action:@"move:"];
gesture = [square gestureForName:@"thePan"];
Notice that in the implementation there is a UIGestureRecognizer variable called gesture. What we do on the second line is find the PAN gesture associated with the square object and keep a reference to it.
Then, whenever we toggle by touching the circle we do the following:
gesture.enabled = !gesture.isEnabled;
That is, if the gesture is enabled then disable it (and vice-versa).
You can check out more in the UIGestureRecognizer Class Reference.
I'm a newbie to this and am remaking an app. I'm trying to use UITapGestureRecognizer. It works in the initial project file but not in the new one. The only difference is that the old one uses a navigation controller and mine doesn't.
In the new one, the result of [self distance:location to:centre] is stuck at 640 no matter where you press on the screen.
Can anyone help? I have no idea why it isn't working.
- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer {
    CGPoint location = [recognizer locationInView:[recognizer.view superview]];
    CGPoint centre = CGPointMake(512, 768 / 2);
    NSInteger msg = [self distance:location to:centre];
    NSLog(@"location to centre: %d", msg);
    if ([self distance:location to:centre] < 330)
The part that looks suspicious to me is [recognizer.view superview].
When the gesture recognizer is added to self.view in a UIViewController that is not inside a container controller (e.g. a navigation controller), that view does not have a superview. If you send the superview message to self.view without a container view controller, it returns nil.
So your message basically looked like this:
CGPoint location = [recognizer locationInView:nil];
This returns the location in the window, which is also a valid CGPoint that tells you where you tapped the screen.
Since this didn't work, I guess that in [self distance:location to:centre] you do something that only works with coordinates relative to the view. Maybe it's related to rotation, because the coordinates of the window don't rotate when you rotate the device.
Without knowing your code I'm not sure what the problem is, but it probably doesn't matter.
Just replace [recognizer.view superview] with recognizer.view.
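Put together, the handler would look something like this (a sketch based on your snippet, assuming your distance:to: method expects coordinates relative to the tapped view):

- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer {
    // Location relative to the view the recognizer is attached to
    CGPoint location = [recognizer locationInView:recognizer.view];
    CGPoint centre = CGPointMake(512, 768 / 2);
    NSInteger distanceToCentre = [self distance:location to:centre];
    NSLog(@"location to centre: %d", (int)distanceToCentre);
    if (distanceToCentre < 330) {
        // handle a tap near the centre
    }
}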
Refer to the link below; you may find your answer there. It's an example of gesture recognition.
http://www.techotopia.com/index.php/An_iPhone_iOS_6_Gesture_Recognition_Tutorial
This looks like a simple task, but when I try to resize using the setFrame method I get glitches. Some other UIViews are resized using setFrame and work perfectly. I made a custom application with a slide bar and a map view. The slide bar changes the X position of the MKMapView but keeps its width equal to the screen width. This approach works fine for all other views, but the MKMapView resizes with noticeable stutter. Can anyone give me a clue why this is happening and how to solve it?
I had the same problem, which seems to occur only on iOS 6 (Apple Maps) and not on iOS 5 (Google Maps).
My solution was to take a "screenshot" of the map when the user starts dragging the divider handle, replace the map with this screenshot during the drag, and put the map back when the finger is released.
I used the code from How to capture UIView to UIImage without loss of quality on retina display for UIView screenshot and Nikolai Ruhe's answer at How do I release a CGImageRef in iOS for a nice background color.
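For reference, imageWithView: is not part of UIKit; it comes from that first linked answer. A rough sketch of what such a category looks like (the category name is illustrative):

#import <QuartzCore/QuartzCore.h>

@interface UIImage (Screenshot)
+ (UIImage *)imageWithView:(UIView *)view;
@end

@implementation UIImage (Screenshot)

+ (UIImage *)imageWithView:(UIView *)view
{
    // Render the view's layer into an image context at the screen's scale
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end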
My UIPanGestureRecognizer action is something like this (the MKMapView self.map is a subview of the UIView self.mapContainer, with autoresizing set in Interface Builder):
- (IBAction)handleMapPullup:(UIPanGestureRecognizer *)sender
{
    CGPoint translation = [sender translationInView:self.mapContainer];

    // save current map center
    static CLLocationCoordinate2D centerCoordinate;

    switch (sender.state) {
        case UIGestureRecognizerStateBegan: {
            // Save the map center coordinate
            centerCoordinate = self.map.centerCoordinate;

            // Take a "screenshot" of the map and set the size adjustments
            UIImage *mapScreenshot = [UIImage imageWithView:self.map];
            self.mapImage = [[UIImageView alloc] initWithImage:mapScreenshot];
            self.mapImage.autoresizingMask = self.map.autoresizingMask;
            self.mapImage.contentMode = UIViewContentModeCenter;
            self.mapImage.clipsToBounds = YES;
            self.mapImage.backgroundColor = [mapScreenshot mergedColor];

            // Replace the map with the screenshot
            [self.map removeFromSuperview];
            [self.mapContainer insertSubview:self.mapImage atIndex:0];
        } break;

        case UIGestureRecognizerStateChanged:
            break;

        default:
            // Resize the map to the new dimensions
            self.map.frame = self.mapImage.frame;

            // Replace the screenshot with the resized map
            [self.mapImage removeFromSuperview];
            [self.mapContainer insertSubview:self.map atIndex:0];

            // Release the screenshot
            self.mapImage = nil;
            break;
    }

    // Resize the map container according to the translation value
    CGRect mapFrame = self.mapContainer.frame;
    mapFrame.size.height += translation.y;

    // Reset the translation so the next event gives a relative reading
    [sender setTranslation:CGPointZero inView:self.mapContainer];

    self.mapContainer.frame = mapFrame;
    self.map.centerCoordinate = centerCoordinate; // keep the center
}
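The mergedColor method used above is also not stock UIKit; the idea from Nikolai Ruhe's answer is to scale the image down to a single pixel and use that pixel as the average color. A rough sketch of that idea (not the exact code from the linked answer):

@implementation UIImage (MergedColor)

- (UIColor *)mergedColor
{
    // Draw the whole image into a 1x1 bitmap; the resulting pixel is the average color
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), self.CGImage);
    CGContextRelease(context);

    return [UIColor colorWithRed:pixel[0] / 255.0
                           green:pixel[1] / 255.0
                            blue:pixel[2] / 255.0
                           alpha:pixel[3] / 255.0];
}

@end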
Let me show you an example (a 360° 3D object rotator). Demo: http://activeden.net/item/interactive-renders-360-deg-3d-object-rotator/39718?ref=mixDesign
As you can see, a camera rotates in 3D on mouse events. It is actually a collection of images (frames) animated frame by frame depending on the mouse movement.
I want to implement this animation in Objective-C using a swipe gesture (or maybe I should use another gesture?), so that I can rotate it with my finger to the left and to the right (I want the animation to have a smooth ease effect that depends on the swipe velocity).
Note: I have the images ready for each frame.
Sample code or online tutorials on doing this would really help me.
Should I use an external graphics library to keep performance up? I have hundreds of images (PNG), each around 300 KB.
Thank you in advance, I really need your help!
Maybe it would be easier to go with touchesBegan:, touchesMoved:, and touchesEnded: here? That lets you react to velocity and direction changes very quickly.
Update: an example can be found here.
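A rough sketch of that touches-based approach, in a view controller that shows one pre-rendered frame at a time (frames, imageView, frameIndex, and lastX are assumed ivars, not anything from the question):

// Assumed ivars:
//   NSArray *frames;        // the pre-rendered UIImages, in rotation order
//   UIImageView *imageView; // shows the current frame
//   NSInteger frameIndex;   // index of the frame currently displayed
//   CGFloat lastX;          // finger position at the last frame change

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    lastX = [[touches anyObject] locationInView:self.view].x;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGFloat nowX = [[touches anyObject] locationInView:self.view].x;
    CGFloat delta = nowX - lastX;

    // Advance one frame for every 5 points of horizontal movement
    NSInteger count = (NSInteger)[frames count];
    NSInteger step = (NSInteger)(delta / 5.0);
    if (count > 0 && step != 0) {
        frameIndex = ((frameIndex + step) % count + count) % count;
        imageView.image = [frames objectAtIndex:frameIndex];
        lastX = nowX;
    }
}

From touchesEnded: you could keep stepping frames with a timer, scaling the step by the last measured velocity, to get the ease-out effect.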
I don't think you should use a swipe gesture here. I recommend a UILongPressGestureRecognizer with a short minimumPressDuration.
Here is some example code:
longPress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressGesture:)];
longPress.delegate = self;
longPress.minimumPressDuration = 0.05;
[viewWithImage addGestureRecognizer:longPress];

float startX;
float displacement = 0;

- (IBAction)handleLongPressGesture:(UILongPressGestureRecognizer *)sender
{
    float nowX;

    if (sender.state == UIGestureRecognizerStateBegan)
    {
        // Remember where the finger went down
        startX = [sender locationInView:viewWithImage].x;
    }
    if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled)
    {
        // ... do something at the end ...
    }

    nowX = [sender locationInView:viewWithImage].x;
    displacement = nowX - startX;

    // set the right rotation frame for this displacement value
    [self rotateImageWith:displacement];
}
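rotateImageWith: is not shown above; a minimal sketch of what it could do is map the displacement onto one of the pre-rendered frames (this assumes viewWithImage is a UIImageView, and that frames and startFrameIndex are your own ivars, not part of the answer's code):

// Assumed ivars: NSArray *frames (the pre-rendered UIImages) and
// NSInteger startFrameIndex (the frame that was showing when the press began).
- (void)rotateImageWith:(float)displacement
{
    NSInteger count = (NSInteger)[frames count];
    if (count == 0) return;

    // One frame per 10 points of horizontal movement, wrapped into [0, count)
    NSInteger offset = (NSInteger)(displacement / 10.0f);
    NSInteger index = ((startFrameIndex + offset) % count + count) % count;
    viewWithImage.image = [frames objectAtIndex:index];
}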
I've noticed a lack of questions related to true multi-touch on iOS. I'm not talking about touch events for one finger; I'm talking about touch events for 3 or more fingers. Are there any sources or documentation articles about gesture handling for larger numbers of touches? And if not, are there any basic approaches any of you have used in the past that work?
(P.S. My ultimate goal is to NSLog a 3-finger swipe down.)
Use gesture recognizers: they handle touch processing for you, and most of them let you specify the minimum number of fingers required for the gesture to be recognized. In your case, for example:
// -viewDidLoad
UISwipeGestureRecognizer *swipeRecognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(swiped:)];
swipeRecognizer.direction = UISwipeGestureRecognizerDirectionDown;
swipeRecognizer.numberOfTouchesRequired = 3;
[self.view addGestureRecognizer:swipeRecognizer];
[swipeRecognizer release];
…
- (void)swiped:(UISwipeGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateRecognized)
    {
        // got a three-finger swipe
    }
}