Making Sprite respond to mouse clicks - objective-c

I have a scene (SKScene) in which, whenever a click is performed, a ball (SKSpriteNode) is dropped from that point.
Now what I want is for the ball to bounce (or do something similar) whenever the ball itself is clicked.
What I have in GameScene.m is
- (void)mouseDown:(NSEvent *)theEvent {
    CGPoint location = [theEvent locationInNode:self];
    [self addBallAtLocation:location];
}
- (void)addBallAtLocation:(CGPoint)location {
    Ball *ball = [Ball new];
    ball.position = location;
    [self addChild:ball];
}
And in Ball.m I add the bounce action to the mouseDown method:
- (void)mouseDown:(NSEvent *)theEvent {
    CGPoint point = [theEvent locationInNode:self];
    CGVector impulse = CGVectorMake(point.x * 5.0, point.y * 5.0);
    [self.physicsBody applyImpulse:impulse];
}
Right now a new ball is created even when I click on an existing ball. I thought the ball's mouseDown method would be called since I clicked on it, and that if it didn't exist, the scene's mouseDown method would be called instead.
P.S. I have a feeling this could be solved with a delegate. I could very easily be wrong, but since I am not totally clear on how to use them, I didn't try. If you think that might be a good way to resolve this issue, please do use one, as it may help me understand how delegates work.

By default, SKSpriteNode has its userInteractionEnabled property set to NO (presumably for performance reasons). You simply have to set it to YES and your event-handling methods will be called :)
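For instance, assuming Ball is an SKSpriteNode subclass, a minimal sketch of its initializer might look like this (the color and size here are placeholders, not the asker's actual values):
- (instancetype)init {
    self = [super initWithColor:[NSColor whiteColor] size:CGSizeMake(20.0, 20.0)];
    if (self) {
        // Without this, the ball never receives mouseDown: and the event
        // falls through to the scene, which then adds a new ball.
        self.userInteractionEnabled = YES;
    }
    return self;
}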

Related

UITapGestureRecognizer every value the same

I'm a newbie to this and am remaking an app. I am trying to use UITapGestureRecognizer. It works in the initial project file but not in the new one. The only difference is that the old one uses a navigation controller and mine doesn't.
In the new one, [self distance:location to:centre] is stuck at 640 no matter where you press on the screen.
Can anyone help? I have no idea why it isn't working.
- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer {
    CGPoint location = [recognizer locationInView:[recognizer.view superview]];
    CGPoint centre = CGPointMake(512, 768 / 2);
    NSInteger msg = [self distance:location to:centre];
    NSLog(@"location to centre: %ld", (long)msg);
    if ([self distance:location to:centre] < 330)
The part that looks suspicious to me is [recognizer.view superview].
When the gesture recognizer was added to self.view in a UIViewController that is not inside a container controller (e.g. a Navigation Controller) that view does not have a superview. If you send the superview message to self.view without a container view controller it will return nil.
So your message looked basically like this
CGPoint location = [recognizer locationInView:nil];
This will return the location in the window, which is also a valid CGPoint that tells you where you tapped the screen.
Since this didn't work, I guess [self distance:location to:centre] does something that only works with coordinates relative to the view. Maybe it's related to rotation, because the coordinates of the window don't rotate when you rotate the device.
Without knowing your code I'm not sure what the problem is, but it probably doesn't matter.
Just replace [recognizer.view superview] with recognizer.view.
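With that change, the handler would look something like this (distance:to: is the asker's own helper, assumed unchanged):
- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer {
    // locationInView: with the recognizer's own view returns coordinates
    // relative to that view, whether or not it has a superview.
    CGPoint location = [recognizer locationInView:recognizer.view];
    CGPoint centre = CGPointMake(512, 768 / 2);
    NSInteger msg = [self distance:location to:centre];
    NSLog(@"location to centre: %ld", (long)msg);
}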
Refer to the link below; you may find your answer there. It's an example of gesture recognition.
http://www.techotopia.com/index.php/An_iPhone_iOS_6_Gesture_Recognition_Tutorial

How to differentiate the mouseDown event from mouseDragged in a Transparent NSWindow

I have a transparent NSWindow with a simple icon in it that can be dragged around the screen.
My code is:
.h:
@interface CustomView : NSWindow {
}
@property (assign) NSPoint initialLocation;
@end
.m:
@synthesize initialLocation;
- (id)initWithContentRect:(NSRect)contentRect
                styleMask:(NSUInteger)aStyle
                  backing:(NSBackingStoreType)bufferingType
                    defer:(BOOL)flag {
    // Always use the result of the superclass initializer.
    self = [super initWithContentRect:contentRect
                            styleMask:NSBorderlessWindowMask
                              backing:bufferingType
                                defer:flag];
    if (!self) return nil;
    [self setBackgroundColor:[NSColor clearColor]];
    [self setOpaque:NO];
    [NSApp activateIgnoringOtherApps:YES];
    return self;
}
- (void)mouseDragged:(NSEvent *)theEvent {
    NSRect screenVisibleFrame = [[NSScreen mainScreen] visibleFrame];
    NSRect windowFrame = [self frame];
    NSPoint newOrigin = windowFrame.origin;
    // Get the mouse location in window coordinates.
    NSPoint currentLocation = [theEvent locationInWindow];
    // Update the origin with the difference between the new mouse location and the old mouse location.
    newOrigin.x += (currentLocation.x - initialLocation.x);
    newOrigin.y += (currentLocation.y - initialLocation.y);
    // Don't let the window get dragged up under the menu bar.
    if ((newOrigin.y + windowFrame.size.height) > (screenVisibleFrame.origin.y + screenVisibleFrame.size.height)) {
        newOrigin.y = screenVisibleFrame.origin.y + (screenVisibleFrame.size.height - windowFrame.size.height);
    }
    // Move the window to the new location.
    [self setFrameOrigin:newOrigin];
}
- (void)mouseDown:(NSEvent *)theEvent {
    // Get the mouse location in window coordinates.
    self.initialLocation = [theEvent locationInWindow];
}
I want to display an NSPopover when the user clicks the image in the transparent window. But, as you can see in the code, the mouseDown event is already used to get the mouse location (the code above was taken from an example).
How can I tell whether the user clicked the icon to drag it around or simply clicked it to display the NSPopover?
Thank you
This is the classic situation of receiving the defining event after you need it in order to begin the action. Specifically, you can't know if the mouseDown is the beginning of a drag until after the drag starts. However, you want to act upon that mouseDown if a drag doesn't start.
In iOS (I realize that's not directly relevant to the code here, but it is instructional), there's an entire API built around letting iOS attempt to make these kinds of decisions for you. The entire Gesture system is based on the idea that the user starts to do something that might be one of many different actions, and thus needs to be resolved over time, possibly resulting in cancelled actions during the tracking period.
On OS X, we don't have many systems to help out with this, so if you have something that needs to handle a click and a drag differentially, you will need to defer your next action until a guard time has passed, and if that passes, you can perform the original action. In this case, you will likely want to do the following:
In the mouseDown, begin an NSTimer set for an appropriate guard time (not so long that people will accidentally move the pointer, and not so short that you'll trigger before the user drags) in order to call you back later to trigger the popover.
In the mouseDragged, use a guard area to make sure that if the user just twitches a little, it doesn't count as a drag. This can be irritating, as it sometimes results in needing to drag something farther than it seems necessary in order to begin a drag, so you'll want to either find a magic constant somewhere, or do some experimentation. When the guard area is exceeded, then begin your legitimate drag operation by canceling the NSTimer with [timer invalidate] and do your drag.
In the callback for the timer, display your popover. If the user dragged, the NSTimer will have been invalidated, causing it not to fire, and so the popover won't be displayed.
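A minimal sketch of that scheme, assuming clickTimer, popover, and iconView properties and a guard distance of a few points (all names and constants here are illustrative, not from the question):
- (void)mouseDown:(NSEvent *)theEvent {
    self.initialLocation = [theEvent locationInWindow];
    // Start the guard timer; if no drag cancels it, the popover is shown.
    self.clickTimer = [NSTimer scheduledTimerWithTimeInterval:0.25
                                                       target:self
                                                     selector:@selector(showPopover:)
                                                     userInfo:nil
                                                      repeats:NO];
}

- (void)mouseDragged:(NSEvent *)theEvent {
    NSPoint p = [theEvent locationInWindow];
    // Guard area: treat tiny movements as jitter, not a drag.
    if (fabs(p.x - self.initialLocation.x) < 4.0 &&
        fabs(p.y - self.initialLocation.y) < 4.0) {
        return;
    }
    // A real drag has begun, so the click action must not fire.
    [self.clickTimer invalidate];
    self.clickTimer = nil;
    // ...perform the window-moving code from the question here...
}

- (void)showPopover:(NSTimer *)timer {
    self.clickTimer = nil;
    // Fires only when no drag invalidated the timer in time.
    [self.popover showRelativeToRect:self.iconView.bounds
                              ofView:self.iconView
                       preferredEdge:NSMaxYEdge];
}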

Trouble with tap detection on CCLayer subclass

I have a CCLayer subclass MyLayer in which I handle touch events:
- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
I set the content size of MyLayer instances like this:
myLayer.contentSize = CGSizeMake(30.0, 30.0);
I then add MyLayer instances as children of ParentLayer. For some reason I can tap anywhere on the screen and a MyLayer instance will detect the tap. I want to only detect taps on the visible portion/content size. How can I do this?
Are the MyLayer instances somehow inheriting a "tappable area" from somewhere else? I've verified that the contentSize of the instance just tapped is (30, 30) as expected. Perhaps contentSize isn't the way to specify the tappable area of a CCLayer subclass.
When touch is enabled on a particular CCLayer, it receives all of the touch events in the window; if there are multiple layers, all of them receive the same touches.
To alleviate this, obtain the location from the UITouch, convert it to Cocos2d coordinates, and then check whether it falls within the bounds of the layer you are concerned with.
Here's some code to work with:
CCLayer *ccl = [[CCLayer alloc] init]; // stand-in for the layer you want to hit-test
// Convert the touch from UIKit coordinates to Cocos2d GL coordinates.
CGPoint location = [touch locationInView:[touch view]];
location = [[CCDirector sharedDirector] convertToGL:location];
// Hit-test against the layer's bounds (position treated as its center here).
if (CGRectContainsPoint(CGRectMake(ccl.position.x - ccl.contentSize.width / 2,
                                   ccl.position.y - ccl.contentSize.height / 2,
                                   ccl.contentSize.width,
                                   ccl.contentSize.height),
                        location)) {
    // continue from there...
}

Dragging the view

I have an NSView that I am adding as a subview of another NSView. I want to be able to drag the first NSView around the parent view. I have some code that is partially working, but there's an issue: the NSView moves in the opposite direction on the Y axis from my mouse drag (i.e., when I drag down, it goes up, and vice versa).
Here's my code:
// -------------------- MOUSE EVENTS ------------------- \\
- (BOOL)acceptsFirstMouse:(NSEvent *)e {
    return YES;
}
- (void)mouseDown:(NSEvent *)e {
    // Get the mouse point.
    lastDragLocation = [e locationInWindow];
}
- (void)mouseDragged:(NSEvent *)theEvent {
    NSPoint newDragLocation = [theEvent locationInWindow];
    NSPoint thisOrigin = [self frame].origin;
    thisOrigin.x += (-lastDragLocation.x + newDragLocation.x);
    thisOrigin.y += (-lastDragLocation.y + newDragLocation.y);
    [self setFrameOrigin:thisOrigin];
    lastDragLocation = newDragLocation;
}
The view is flipped, though I changed that back to the default and it didn't seem to make a difference. What am I doing wrong?
The best way to approach this problem is by starting with a solid understanding of coordinate spaces.
First, it is critical to understand that when we talk about the "frame" of a view, it is in the coordinate space of the superview. This means that adjusting the flippedness of the view itself won't actually make a difference, because we haven't changed anything inside the view itself.
But your intuition that the flippedness is important here is correct.
Your code, as typed, seems like it should work; perhaps your superview is flipped (or not flipped) and is in a different coordinate space than you expect.
Rather than just flipping and unflipping views at random, it is best to convert the points you're dealing with into a known coordinate space.
I've edited your above code to always convert into the superview's coordinate space, because we're working with the frame origin. This will work if your draggable view is placed in a flipped, or non-flipped superview.
// -------------------- MOUSE EVENTS ------------------- \\
- (BOOL)acceptsFirstMouse:(NSEvent *)e {
    return YES;
}
- (void)mouseDown:(NSEvent *)e {
    // Convert to the superview's coordinate space.
    self.lastDragLocation = [[self superview] convertPoint:[e locationInWindow] fromView:nil];
}
- (void)mouseDragged:(NSEvent *)theEvent {
    // We're working only in the superview's coordinate space, so we always convert.
    NSPoint newDragLocation = [[self superview] convertPoint:[theEvent locationInWindow] fromView:nil];
    NSPoint thisOrigin = [self frame].origin;
    thisOrigin.x += (-self.lastDragLocation.x + newDragLocation.x);
    thisOrigin.y += (-self.lastDragLocation.y + newDragLocation.y);
    [self setFrameOrigin:thisOrigin];
    self.lastDragLocation = newDragLocation;
}
Additionally, I'd recommend refactoring your code to deal with the original mouse-down location and the current location of the pointer, rather than the deltas between mouseDragged events, which could lead to unexpected results down the line.
Instead, simply store the offset between the origin of the dragged view and the mouse pointer (i.e., where the pointer is within the view), and set the frame origin to the pointer location minus that offset, as in the sketch below.
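A sketch of that offset-based variant, assuming a dragOffset property added to the same draggable view (the property name is illustrative):
- (void)mouseDown:(NSEvent *)e {
    NSPoint p = [[self superview] convertPoint:[e locationInWindow] fromView:nil];
    // Remember where inside the view the user grabbed it.
    self.dragOffset = NSMakePoint(p.x - self.frame.origin.x,
                                  p.y - self.frame.origin.y);
}

- (void)mouseDragged:(NSEvent *)theEvent {
    NSPoint p = [[self superview] convertPoint:[theEvent locationInWindow] fromView:nil];
    // New origin = pointer location minus the grab offset; no accumulated deltas.
    [self setFrameOrigin:NSMakePoint(p.x - self.dragOffset.x,
                                     p.y - self.dragOffset.y)];
}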
Here is some additional reading:
Cocoa Drawing Guide
Cocoa Event Handling Guide
I think you should calculate according to the position of the mouse instead; in my testing it is smoother, because the expression below only provides the position inside the application's window coordinate system:
[[self superview] convertPoint:[theEvent locationInWindow] fromView:nil];
What I am suggesting is something like this:
lastDragLocation = [NSEvent mouseLocation];
The rest of the code stays the same.
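Put together, that variant might look like the following. Note that it assumes a non-flipped, unscaled superview, since [NSEvent mouseLocation] returns screen coordinates and only the deltas between events are applied:
- (void)mouseDown:(NSEvent *)e {
    self.lastDragLocation = [NSEvent mouseLocation]; // screen coordinates
}

- (void)mouseDragged:(NSEvent *)theEvent {
    NSPoint current = [NSEvent mouseLocation];
    NSPoint thisOrigin = [self frame].origin;
    // Screen-coordinate deltas match superview deltas here by assumption.
    thisOrigin.x += (current.x - self.lastDragLocation.x);
    thisOrigin.y += (current.y - self.lastDragLocation.y);
    [self setFrameOrigin:thisOrigin];
    self.lastDragLocation = current;
}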

Drag and drop UIButton but limit to boundaries

I’m not asking for givemesamplecodenow responses; just a nudge in the right direction would be much appreciated. My searches haven’t turned up anything useful.
I have a UIButton that I am able to move freely around the screen.
I would like to limit the drag area of this object so it can only be moved up and down or side to side. I believe I need to get the x,y coordinates of the boundaries and then restrict movement outside that area, but that’s as far as I have got. My knowledge doesn’t stretch any further than that.
Has anyone implemented something similar in the past?
Adam
So let's say you're in the middle of the drag operation. You're moving the button instance around by setting its center to the center of whatever gesture is causing the movement.
You can impose restrictions by testing the gesture's center and resetting the center values if you don't like them. The below assumes a button wired to an action for all Touch Drag events but the principle still applies if you're using gesture recognizers or touchesBegan: and friends.
- (IBAction)handleDrag:(UIButton *)sender forEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    if (point.y > 200)
    {
        point.y = 200; // No dragging this button lower than 200px from the origin!
    }
    sender.center = point;
}
If you want a button that slides only on one axis, that's easy enough:
- (IBAction)handleDrag:(UIButton *)sender forEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    point.y = sender.center.y; // Always stick to the same y value
    sender.center = point;
}
Or perhaps you want the button draggable only inside the region of a specific view. This might be easier to define if your boundaries are complicated.
- (IBAction)handleDrag:(UIButton *)sender forEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.someView];
    if ([self.someView pointInside:point withEvent:nil])
    {
        // Only if the gesture center is inside the specified view will the button be moved.
        sender.center = point;
    }
}
Presumably you'd be using touchesBegan:, touchesMoved:, etc., so it should be as simple as testing whether the touch point is outside your view's bounds in touchesMoved:. If it is outside, ignore it, but if it's inside, adjust the position of the button.
I suspect you may find this function useful:
bool CGRectContainsPoint(CGRect rect, CGPoint point);
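A sketch tying the two together, assuming the button lives in self.view and that allowedArea and draggableButton are properties you define (both names are illustrative):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self.view];
    // Move the button only while the touch stays inside the boundary rect.
    if (CGRectContainsPoint(self.allowedArea, point)) {
        self.draggableButton.center = point;
    }
}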