I'm trying to make a SpriteKit game where the player can drag groups of sprites around, but I can't figure out how to get the sprite to follow the mouse cursor. Using the SpriteKit boilerplate, the sprite doesn't track the cursor correctly while dragging.
Here is the relevant logic for how I move the "Hello, world!" sprite inside the SKNode babies:
SKNode *babies;
CGPoint dragStart;    // mouse location when the drag began
CGPoint babiesStart;  // babies' position when the drag began

-(void)mouseDown:(NSEvent *)theEvent {
    dragStart   = [theEvent locationInWindow];
    babiesStart = babies.position;
}

-(void)mouseDragged:(NSEvent *)theEvent {
    CGPoint translation = CGPointMake([theEvent locationInWindow].x - dragStart.x,
                                      [theEvent locationInWindow].y - dragStart.y);
    float adjust = 1.0;
    babies.position = CGPointMake(babiesStart.x + translation.x * adjust,
                                  babiesStart.y + translation.y * adjust);
}
I've tried a number of different methods, such as using deltaX and deltaY on theEvent, but I get the same result. The only solution I've found is to play with the adjust variable, but that's clearly a hack.
NSEvent has another method in SpriteKit, - (CGPoint)locationInNode:(SKNode *)node. By using this I was able to get correct offset values for moving the SKNode along with the mouse.
Please try [theEvent locationInNode:self];
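For example, keeping the question's ivars, the drag handlers might look like this (assuming they live in the SKScene subclass, so self is the scene and babies is one of its children):

-(void)mouseDown:(NSEvent *)theEvent {
    // Both points are taken in the scene's coordinate space.
    dragStart   = [theEvent locationInNode:self];
    babiesStart = babies.position;
}

-(void)mouseDragged:(NSEvent *)theEvent {
    CGPoint current = [theEvent locationInNode:self];
    // Same coordinate space on both sides, so the node tracks the cursor one-to-one.
    babies.position = CGPointMake(babiesStart.x + (current.x - dragStart.x),
                                  babiesStart.y + (current.y - dragStart.y));
}

If babies isn't a direct child of the scene, use [theEvent locationInNode:babies.parent] instead.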
If you don't accumulate the delta in mouseDragged, losing movement between frames is inevitable.
In my case, the following works quite well.
var previous = CGPoint.zero
var delta = CGPoint.zero

override func mouseDown(with event: NSEvent) {
    previous = event.location(in: self)
}

override func mouseDragged(with event: NSEvent) {
    let current = event.location(in: self)
    // accumulate the movement so nothing is lost between frames
    delta.x += current.x - previous.x
    delta.y += current.y - previous.y
    previous = current
}

override func update(_ currentTime: TimeInterval) {
    // use up your delta here, e.g. move the dragged node by (delta.x, delta.y)
    delta = .zero
}
cheers
My guess is that the issue is with coordinate spaces. You're performing calculations based on -[NSEvent locationInWindow] which is, of course, in the window coordinate system. In what coordinate system is babies.position? It's at least in a view's coordinate system, although maybe SpriteKit also imposes another coordinate space.
To convert the point to the view's coordinate space, you will want to use NSPoint point = [theView convertPoint:[NSEvent locationInWindow] fromView:nil];. To convert the point from the view's coordinate space to the scene's, you'd use CGPoint cgpoint = [theScene convertPointFromView:NSPointToCGPoint(point)];. If babies is not the scene object, then to convert to the coordinate system used by babies.position, you'd do cgpoint = [babies.parent convertPoint:cgpoint fromNode:scene];. You'd then compute translation by taking the difference between babiesStart and cgpoint.
Update: actually, you wouldn't compare the result with babiesStart as such. You'd compare it with the result of the same coordinate transformation done on the original cursor location. So, you'd compute dragStart similar to how you'd compute cgpoint. Later, you'd take the difference between those.
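A rough sketch of that chain, assuming the handlers live in your SKScene subclass (so self is the scene and self.view is the SKView), reusing the question's ivars, and with dragPointForEvent: as a made-up helper name:

// Convert window coordinates -> view -> scene -> the space babies.position lives in.
- (CGPoint)dragPointForEvent:(NSEvent *)theEvent {
    NSPoint viewPoint  = [self.view convertPoint:[theEvent locationInWindow] fromView:nil];
    CGPoint scenePoint = [self convertPointFromView:NSPointToCGPoint(viewPoint)];
    // If babies is a direct child of the scene, this last conversion is a no-op.
    return [babies.parent convertPoint:scenePoint fromNode:self];
}

-(void)mouseDown:(NSEvent *)theEvent {
    dragStart   = [self dragPointForEvent:theEvent];
    babiesStart = babies.position;
}

-(void)mouseDragged:(NSEvent *)theEvent {
    CGPoint p = [self dragPointForEvent:theEvent];
    babies.position = CGPointMake(babiesStart.x + (p.x - dragStart.x),
                                  babiesStart.y + (p.y - dragStart.y));
}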
This is normal behavior.
When you take the mouse position as reported by the event, some time passes before the Sprite Kit view is redrawn, at which point the cursor has already moved to a new position.
There isn't much you can do about it, except perhaps predict the position during very fast movement: factor in the distances from previous mouse events to estimate where the next position is likely to be, then nudge the actual mouse position a little in the most recent general movement direction.
Usually this is overkill though.
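If you did want to try it anyway, a crude sketch might look like this (lastDragPoint and the 0.5 factor are arbitrary choices for illustration, not anything SpriteKit provides):

static CGPoint lastDragPoint;

-(void)mouseDragged:(NSEvent *)theEvent {
    CGPoint p = [theEvent locationInNode:self];
    // Nudge the target a little further along the most recent movement direction.
    CGPoint predicted = CGPointMake(p.x + (p.x - lastDragPoint.x) * 0.5,
                                    p.y + (p.y - lastDragPoint.y) * 0.5);
    lastDragPoint = p;
    babies.position = predicted; // or feed 'predicted' into the drag-offset math instead
}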
Related
How can I accept touch input beyond the scene's bounds, so that no matter what I set self.position to, touches can still be detected?
I'm creating a tile based game from Ray Wenderlich on Cocos2d version 3.0. I am at the point of setting the view of the screen to a zoomed-in state on my tile map. I have successfully been able to do that, although now my touches are not responding since I'm outside the coordinate space the touches used to work in.
This method is called to set the zoomed view to the player's position:
-(void)setViewPointCenter:(CGPoint)position {
    CGSize winSize = [CCDirector sharedDirector].viewSizeInPixels;
    int x = MAX(position.x, winSize.width / 2);
    int y = MAX(position.y, winSize.height / 2);
    x = MIN(x, (_tileMap.mapSize.width * _tileMap.tileSize.width) - winSize.width / 2);
    y = MIN(y, (_tileMap.mapSize.height * _tileMap.tileSize.height) - winSize.height / 2);
    CGPoint actualPosition = ccp(x, y);
    CGPoint centerOfView = ccp(winSize.width / 2, winSize.height / 2);
    NSLog(@"centerOfView%@", NSStringFromCGPoint(centerOfView));
    CGPoint viewPoint = ccpSub(centerOfView, actualPosition);
    NSLog(@"viewPoint%@", NSStringFromCGPoint(viewPoint));

    // This changes the position of the helloworld layer/scene so that
    // we can see the portion of the tilemap we're interested in.
    // That however makes my touchBegan method stop firing.
    self.position = viewPoint;
}
This is what the NSLog prints from the method:
2014-01-30 07:05:08.725 TestingTouch[593:60b] centerOfView{512, 384}
2014-01-30 07:05:08.727 TestingTouch[593:60b] viewPoint{0, -832}
As you can see, the y coordinate is -832. If I comment out the line self.position = viewPoint then self.position reads {0, 0} and touches are detectable again, but then we don't have a zoomed view on the character. Instead it shows the view on the bottom left of the map.
Here's a video demonstration.
How can I fix this?
Update 1
Here is the github page to my repository.
Update 2
Mark has been able to come up with a temporary solution so far by setting the hitAreaExpansion to a large number like so:
self.hitAreaExpansion = 10000000.0f;
This will cause touches to respond again all over! However, if there is a solution that would not require me to set the property with an absolute number then that would be great!
-edit 3- (tl;dr version):
Setting the contentSize of the scene/layer to the size of the tilemap solves this issue:
[self setContentSize: self.tileMap.contentSize];
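For context, this is roughly where that call goes (the sketch assumes your scene's init already creates _tileMap, as in the question's project):

- (id)init {
    self = [super init];
    if (self) {
        // ... existing setup that creates _tileMap and adds it as a child ...
        // Make the touchable area cover the whole map, not just the view.
        [self setContentSize:_tileMap.contentSize];
    }
    return self;
}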
original replies below:
You would take the touch coordinate and subtract the layer position.
Generally something like:
touchLocation = ccpSub(touchLocation, self.position);
If you were to scale the layer, you would also need to compensate for that as well, as in the sketch below.
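A sketch of that adjustment (touchLocation here is assumed to already be in world/screen coordinates, however you obtain it in your touch handler):

// Undo the layer's offset, and its scale if the layer has been zoomed,
// to get back into the tile map's coordinate space.
CGPoint mapLocation = ccpSub(touchLocation, self.position);
if (self.scale != 1.0f) {
    mapLocation = ccpMult(mapLocation, 1.0f / self.scale);
}
// mapLocation can now be used to work out which tile was touched.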
-edit 1-:
So, I had a chance to take another look, and it looks like my 'ridiculous' number was not ridiculous enough, or I had made another change. Anyway, if you simply add
self.hitAreaExpansion = 10000000.0f; // I'll let you find a more reasonable number
the touches will now get registered.
As for the underlying issue, I believe it to be one of content scale that is not set correctly, but again, I'll leave that to you. While looking through some of the tilemap class, I did find out that tileSize is said to be in pixels, not points, which I guess is somehow related to this.
-edit 2-:
The sub-optimal answer bugged me, so I looked a little further. Forgive me, I hadn't looked at v3 until I saw this question. :p
After inspecting the base class and observing the scene/layer's value of:
- (BOOL)hitTestWithWorldPos:(CGPoint)pos;
it became obvious that the content size of the scene/layer was being set to the current view size, which in the case of an iPad is (1024, 768).
The position of the layer after the setViewPointCenter call is fully above the initial view's position, hence the touch was being suppressed. By setting the layer/scene contentSize to the size of the tilemap, the touchable area is now expanded over the entire map, which allows the node to process the touch.
What is the correct way to "zoom out" on your scene?
I have an object that I apply an impulse to, to fire it across the screen. It will, for example, fly about 100 px across. This works as expected - increase the force and it flies further, increase the density and it flies less, etc.
The problem I have is zooming. The only way I know to zoom out on a scene is setScale, and that shrinks all my nodes as expected.
But then, instead of the object flying the same amount (just zoomed out), it flies more than double the distance.
When I log the mass / density etc. of the object before and after I scale, they are the same, as expected.
So why doesn't it fly the same amount? I tried changing the impulse to match the scale, but it doesn't work: yes, it flies less distance, but it's not one-for-one with the scaling.
Tricky question...
Thanks for ideas.
I believe you're not supposed to scale the SKScene (as it hints if you try the setScale method on an SKScene). Try resizing it instead.
myScene.scaleMode = SKSceneScaleModeAspectFill;
And then while zooming:
myScene.size = CGSizeMake(myScene.size.width + dx, myScene.size.height + dy);
Apple documentation says:
Set the scaleMode property to SKSceneScaleModeResizeFill. Sprite Kit automatically resizes the scene so that it always matches the view’s size.
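If you want to zoom by a factor rather than by pixel deltas, a tiny helper on the scene could look roughly like this (zoomBy: is a made-up name, and this assumes a scale mode other than SKSceneScaleModeResizeFill, since that mode keeps the scene locked to the view's size):

// Sketch: grow or shrink the scene's size to zoom; node coordinates and
// physics are untouched. factor > 1 zooms out, factor < 1 zooms in.
- (void)zoomBy:(CGFloat)factor {
    self.size = CGSizeMake(self.size.width * factor, self.size.height * factor);
}

For example, [myScene zoomBy:2.0]; shows twice as much of the world.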
The easy fix (thanks to Chris LaPollo, an author on RW)
[self runAction:[SKAction scaleTo:0.5 duration:0]];
Nothing else needed.
The odd thing is you cannot do
[self setScale:0.5];
As you get this warning, and it doesn't work - but running an action does. Weird!
SKScene: Setting the scale of a SKScene has no effect.
For those like me who ended up here after a search, changing the scale of the scene to zoom out no longer works.
Instead, encapsulate all your nodes in an empty SKNode and run actions on this one:
self.rootNode = [SKNode node];
// Add your children nodes here to the rootnode.
[self addChild:self.rootNode];
// Zoom out
[self.rootNode runAction:[SKAction scaleBy:2 duration:5]];
// Zoom in
[self.rootNode runAction:[SKAction scaleBy:.5 duration:5]];
self is the SKScene.
I hope this helps.
Okay, so I'm trying to make a 'Wheel of Fortune' type of effect with a wheel shape in iOS, where I can grab and spin a wheel. I can currently drag and spin the wheel around to my heart's content, but upon releasing my finger, it stops dead. I need to apply some momentum or inertia to it, to simulate the wheel spinning down naturally.
I've got the velocity calculation in place, so when I lift my finger up I NSLog out a velocity (between a minimum of 1 and a maximum of 100), which ranges from anywhere between 1 and over 1800 (at my hardest flick!), now I'm just trying to establish how I would go about converting that velocity into an actual rotation to apply to the object, and how I'd go about slowing it down over time.
My initial thoughts were something like: begin rotating full circles on a loop at the same speed as the velocity that was given, then on each subsequent rotation, slow the speed by some small percentage. This should give the effect that a harder spin goes faster and takes longer to slow down.
I'm no mathematician, so my approach may be wrong, but if anybody has any tips on how I could get this to work, at least in a basic state, I'd be really grateful. There's a really helpful answer here: iPhone add inertia/momentum physics to animate "wheel of fortune" like rotating control, but it's more theoretical and lacking in practical information on how exactly to apply the calculated velocity to the object, etc. I'm thinking I'll need some animation help here, too.
EDIT: I'm also going to need to work out if they were dragging the wheel clockwise or anti-clockwise.
Many thanks!
I have written something analogous for my program Bit, but my case I think is a bit more complex because I rotate in 3D: https://itunes.apple.com/ua/app/bit/id366236469?mt=8
Basically what I do is set up an NSTimer that calls some method regularly. I just take the direction and speed to create a rotation matrix (as I said, 3D is a bit nastier :P ), and I multiply the speed by some number smaller than 1 so it goes down. The reason for multiplying instead of subtracting is that you don't want the object to rotate twice as long if the spin from the user is twice as hard, since I find that becomes annoying to wait on.
As for figuring out which direction the wheel is spinning, just store that in the touchesEnded:withEvent: method where you have all the information. Since you say you already have the tracking working as long as the user has the finger down this should hopefully be obvious.
What I have in 3D is something like:
// MyView.h
@interface MyView : UIView {
    NSTimer *animationTimer;
}
- (void)startAnimation;
@end

// MyAppDelegate.m
@implementation MyAppDelegate
- (void)applicationDidFinishLaunching:(UIApplication *)application {
    [myView startAnimation];
}
@end

// MyView.m
GLfloat rotationMomentum = 0;
GLfloat rotationDeltaX = 0.0f;
GLfloat rotationDeltaY = 0.0f;

@implementation MyView

- (void)startAnimation {
    // animationFrameInterval is assumed to be defined elsewhere
    animationTimer = [NSTimer scheduledTimerWithTimeInterval:(NSTimeInterval)((1.0 / 60.0) * animationFrameInterval)
                                                      target:self
                                                    selector:@selector(drawView:)
                                                    userInfo:nil
                                                     repeats:TRUE];
}
- (void)drawView:(id)sender {
    addRotationByDegree(rotationMomentum);
    rotationMomentum /= 1.05;
    if (rotationMomentum < 0.1)
        rotationMomentum = 0.1; // never stop rotating completely
    [renderer render];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *aTouch = [touches anyObject];
    CGPoint loc = [aTouch locationInView:self];
    CGPoint prevloc = [aTouch previousLocationInView:self];

    rotationDeltaX = loc.x - prevloc.x;
    rotationDeltaY = loc.y - prevloc.y;

    // how far the finger moved since the last event; this becomes the momentum
    GLfloat distance = sqrt(rotationDeltaX * rotationDeltaX + rotationDeltaY * rotationDeltaY) / 4;
    rotationMomentum = distance;
    addRotationByDegree(distance);
    self->moved = TRUE;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
}
I've left out the addRotationByDegree function, but what it does is use the global variables rotationDeltaX and rotationDeltaY, apply a rotation matrix to an already stored matrix, and save the result. In your example you probably want something much simpler, like (I'm assuming now that only movements in the X direction spin the wheel):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *aTouch = [touches anyObject];
    CGPoint loc = [aTouch locationInView:self];
    CGPoint prevloc = [aTouch previousLocationInView:self];

    GLfloat distance = loc.x - prevloc.x; // signed: the sign gives the spin direction
    rotationMomentum = distance;
    addRotationByDegree(distance);
    self->moved = TRUE;
}

void addRotationByDegree(GLfloat distance) {
    angleOfWheel += distance; // probably need to divide the number by something reasonable here to make the spin nicer
}
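For the 2D wheel, note that distance is signed, so rotationMomentum already carries the spin direction; the decay step just has to preserve that sign and, unlike the 3D version above, let the wheel actually stop. A sketch of a matching drawView: (wheelView and angleOfWheel are placeholder names):

- (void)drawView:(id)sender {
    angleOfWheel += rotationMomentum;   // degrees per tick, signed
    rotationMomentum /= 1.05f;          // exponential slow-down
    if (fabs(rotationMomentum) < 0.01f) {
        rotationMomentum = 0.0f;        // let the wheel come to rest
    }
    wheelView.transform = CGAffineTransformMakeRotation(angleOfWheel * M_PI / 180.0f);
}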
It's going to be a rough answer as I don't have any detailed example at hand.
If you have the velocity when you lift your finger already then it should not be hard.
The velocity you have is in pixels per second or something like that.
First you need to convert that linear speed to an angular speed. That can be done by taking the perimeter of the circle, 2*PI*radius, and computing 2*PI/perimeter*velocity (which simplifies to velocity/radius) to get the angular speed in radians per second.
If your wheel didn't have any friction in its axis it would run forever at that speed. You can just pick an arbitrary value for this friction, which is an acceleration (a deceleration, really) and can be represented in pixels per second squared, or radians per second squared for an angular acceleration. Then it's just a matter of dividing the angular speed by this angular acceleration and you get the time until it stops.
With the animation time you can use the equation finalAngle = initialAngle + angularSpeed*animationTime - angularAcceleration/2*animationTime*animationTime to get the final angle your wheel is going to be at the end of the animation. Then just do an animation on the transformation and rotate it by that angle for the time you got and say that your animation should ease out.
This should look realistic enough. If not, you'll need to provide an animation path for the rotation property of your wheel based on some samples from the equation above.
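A rough sketch of that calculation (the radius, the friction value, wheelView, initialAngle and velocity are all placeholders for your own values; needs QuartzCore):

// Convert the fling velocity into an angular speed, work out how long the
// wheel should keep spinning, and animate to the resulting final angle.
CGFloat radius              = 100.0f;                         // wheel radius in points
CGFloat direction           = (velocity >= 0) ? 1.0f : -1.0f; // clockwise vs anti-clockwise
CGFloat angularSpeed        = fabs(velocity) / radius;        // = 2*PI/perimeter*velocity, in rad/s
CGFloat angularAcceleration = 4.0f;                           // the arbitrary "friction", rad/s^2
CGFloat animationTime       = angularSpeed / angularAcceleration;
CGFloat finalAngle = initialAngle + direction * (angularSpeed * animationTime
                   - 0.5f * angularAcceleration * animationTime * animationTime);

// A CABasicAnimation on transform.rotation.z copes with spins of more than
// half a turn, unlike a plain UIView transform animation.
CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
spin.fromValue      = @(initialAngle);
spin.toValue        = @(finalAngle);
spin.duration       = animationTime;
spin.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseOut];
[wheelView.layer addAnimation:spin forKey:@"spin"];
wheelView.layer.transform = CATransform3DMakeRotation(finalAngle, 0, 0, 1); // keep the final angle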
This has really been a hassle. I have two sprites, both of them 17-pixel-long arms. Both of them have anchor points at ccp(0.5f, 0.0f), and what I want is, when arm1 rotates, for arm2's CGPoint to be equal to the end of arm1 opposite its anchor point. Like, at a 45 degree angle, the CGPoint would just be ccp(arm1.position.y, arm1.position.x + 17);
So I update the rotation for arm1 in my ccTime function, and it calls another method to do the math for the angle rotation. Basically what happens is... arm2 rotates reaaaally fast around in the correct circular area, meaning something is right, but the rotation is just super fast.
-(void)callEveryFrame:(ccTime)dt {
    // if the user is holding down on the screen, arm1 rotates
    if (held == true) {
        _theArm2.position = [self fixAngle];
        timer = timer + .0166; // this gets reset after touchesEnded
        _theArm1.rotation = _theArm1.rotation - (timer * 10);
    }
}

-(CGPoint)fixAngle {
    CGFloat theAngle = _theArm1.rotation;
    // I'm not too sure how the degrees work in cocos2d, so I added 90 to the angle of
    // rotation's original position, and it works until the rotation variable changes.
    CGFloat thyAngle = theAngle + 90;
    CGFloat theMathx = (17 * cosf(thyAngle)); // 17 is the length of the arm
    CGFloat theMathy = (17 * sinf(thyAngle));
    theMathx = theMathx + 100; // these 2 updates just change the position, because arm1's
    theMathy = theMathy + 55;  // CGPoint is located at ccp(100,57)
    return CGPointMake(theMathx, theMathy);
}
Sorry if the code is... bad. I'm relatively new to programming, but everything works except the stupid arm2 likes to rotate really fast in a circle.
I will love whoever solves my problem for the rest of my/their lives.
EDIT:
Per discussion on this thread, it looks like you are using degrees where radians should be used, but not where I thought. Try this:
CGFloat theMathx = (17*cosf(CC_DEGREES_TO_RADIANS(thyAngle))); //17 is the length of the arm
CGFloat theMathy = (17*sinf(CC_DEGREES_TO_RADIANS(thyAngle)));
Originally I suggested converting with CC_DEGREES_TO_RADIANS when adding the 90 degrees to the rotation, since cosf and sinf work in radians rather than degrees, but the conversion belongs where shown in the edit above.
With a little help from the last question regarding drawing in Cocoa, I've implemented some basic shapes, as well as dragging / resizing.
So, right now I'm trying to figure out how to create an effect like in Keynote, where a shape being resized automatically snaps to the size of another shape next to it and then "locks" the mouse for a bit of time.
The first attempt is to use a delay function, like
NSDate *future = [NSDate dateWithTimeIntervalSinceNow: 0.5 ];
[NSThread sleepUntilDate:future];
reacting to the desired event (e.g. shape width == height). But this does not produce the desired effect, since the whole app freezes for the specified amount of time. In addition, I think the user won't recognize it as something saying "you've reached a special size". Showing guidelines only at the event is not a solution either, since the guidelines are shown as soon as the shape is selected.
For snap to guides, I don't think you actually want the cursor to stop. Just that the resizing should stop reacting to the cursor movements, within a small range of your target.
The solution in that other question is more or less what you want, I think. Essentially, when you get close enough to the guide, you just change the point's coordinates to those of the guide. So, building on the sample code I posted in your earlier question, this becomes your view's mouseDragged:, and mouseUp:. You can leave the new checks out of mouseDragged: if you want the point to snap only on mouse up, a different but just as valid behavior.
If you're matching the edges of rectangles, you'll probably find the Foundation Rect Functions, like NSMaxX and NSMaxY, useful.
- (void)mouseDragged:(NSEvent *)event {
    if( !currMovingDot ) return;

    NSPoint spot = [self convertPoint:[event locationInWindow]
                             fromView:nil];
    spot.x = MAX(0, MIN(spot.x, self.bounds.size.width));
    spot.y = MAX(0, MIN(spot.y, self.bounds.size.height));

    // Look for Dots whose centerlines are close to
    // the current mouse position
    for( Dot * dot in dots ){
        if (dot == currMovingDot) {
            // Don't snap to myself! Leaving this out causes
            // a "snap to grid" effect.
            continue;
        }
        // Where SNAP_DIST is #define'd somewhere;
        // something under 10 seems to be a good value
        if( fabs(spot.x - dot.position.x) <= SNAP_DIST ){
            spot.x = dot.position.x;
        }
        if( fabs(spot.y - dot.position.y) <= SNAP_DIST ){
            spot.y = dot.position.y;
        }
    }

    currMovingDot.position = spot;
    [self setNeedsDisplay:YES];
}
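For matching rectangle edges rather than centerlines, the same check with the Foundation rect functions might look roughly like this inside the loop (draggedRect and otherRect are hypothetical NSRects derived from your shapes):

// Sketch: snap the dragged shape's right/top edges to another shape's edges.
CGFloat dx = NSMaxX(otherRect) - NSMaxX(draggedRect);
if( fabs(dx) <= SNAP_DIST ){
    draggedRect.origin.x += dx;   // right edges now line up
}
CGFloat dy = NSMaxY(otherRect) - NSMaxY(draggedRect);
if( fabs(dy) <= SNAP_DIST ){
    draggedRect.origin.y += dy;   // top edges now line up
}
// Repeat with NSMinX / NSMinY for the left and bottom edges.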