What is the correct way to "zoom out" on your scene?
I have an object that I apply an impulse to in order to fire it across the screen. It will fly about 100 px, for example. This works as expected: increase the force and it flies farther, increase the density and it flies less, etc.
The problem I have is zooming. The only way I know to zoom out on a scene is setScale, and that shrinks all my nodes as expected.
But then, instead of the object flying the same distance (just zoomed out), it flies more than double the distance.
When I log the mass/density etc. of the object before and after I scale, they are the same, as expected.
So why doesn't it fly the same distance? I tried changing the impulse to match the scale, but it doesn't work; yes, it flies a shorter distance, but it's not one-for-one with the scaling.
Tricky question...
Thanks for ideas.
I believe you're not supposed to scale the SKScene itself (the framework hints at this if you try the setScale method on an SKScene). Try resizing it instead:
myScene.scaleMode = SKSceneScaleModeAspectFill;
And then while zooming:
myScene.size = CGSizeMake(myScene.size.width + dx, myScene.size.height + dy);
Apple's documentation says:
Set the scaleMode property to SKSceneScaleModeResizeFill. Sprite Kit automatically resizes the scene so that it always matches the view’s size.
The easy fix (thanks to Chris LaPollo, an author on raywenderlich.com):
[self runAction:[SKAction scaleTo:0.5 duration:0]];
Nothing else needed.
The odd thing is you cannot do
[self setScale:0.5];
As you get this warning, and it doesn't work -- but running an action does -- weird!
SKScene: Setting the scale of a SKScene has no effect.
For those like me who ended up here after a search, changing the scale of the scene to zoom out no longer works.
Instead, encapsulate all your nodes in an empty SKNode and run actions on this one:
self.rootNode = [SKNode node];
// Add your child nodes to the root node here.
[self addChild:self.rootNode];
// Zoom in (content appears twice as large)
[self.rootNode runAction:[SKAction scaleBy:2 duration:5]];
// Zoom out (content appears half as large)
[self.rootNode runAction:[SKAction scaleBy:0.5 duration:5]];
self is the SKScene.
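One note: with the content scaled, convert touch locations into the root node's space; locationInNode: does the scale adjustment for you. A minimal sketch, assuming the rootNode property above (iOS touch handling shown):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // Returns the point in rootNode's coordinate system, so the
    // current zoom scale is already accounted for.
    CGPoint location = [touch locationInNode:self.rootNode];
    SKNode *tappedNode = [self.rootNode nodeAtPoint:location];
    // ...
}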
I hope this helps.
Related
I'm trying to make a SpriteKit game where the player can drag groups of sprites around, but I can't figure out how to get the sprite to follow the mouse cursor. Using the SpriteKit boilerplate, I get this:
Here is the relevant logic for how I move the "Hello, world!" sprite in the SKNode babies
SKNode *babies;

- (void)mouseDown:(NSEvent *)theEvent {
    dragStart = [theEvent locationInWindow];
    babiesStart = babies.position;
}

- (void)mouseDragged:(NSEvent *)theEvent {
    CGPoint translation = CGPointMake([theEvent locationInWindow].x - dragStart.x,
                                      [theEvent locationInWindow].y - dragStart.y);
    float adjust = 1.0;
    babies.position = CGPointMake(babiesStart.x + translation.x * adjust,
                                  babiesStart.y + translation.y * adjust);
}
I've tried a number of different methods, such as deltaX and deltaY on theEvent, but I get the same result. The only solution I've found is to play with the adjust variable, but that's clearly a hack.
NSEvent has another method in SpriteKit, - (CGPoint)locationInNode:(SKNode *)node. By using this I was able to get correct offset values for moving the SKNode along with the mouse.
Please try this: [theEvent locationInNode:self];
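Put together with the code from the question, that might look like this (a sketch; dragStart and babiesStart remain ivars, now holding scene coordinates):

- (void)mouseDown:(NSEvent *)theEvent {
    dragStart = [theEvent locationInNode:self];
    babiesStart = babies.position;
}

- (void)mouseDragged:(NSEvent *)theEvent {
    // Both points are in the scene's coordinate space, so no manual
    // window-to-scene conversion or fudge factor is needed.
    CGPoint location = [theEvent locationInNode:self];
    babies.position = CGPointMake(babiesStart.x + (location.x - dragStart.x),
                                  babiesStart.y + (location.y - dragStart.y));
}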
If you don't accumulate the delta in mouseDragged, losing movement is inevitable.
In my case, the following works quite well.
override func mouseDown(with event: NSEvent) {
    previous = event.location(in: self)
}

override func mouseDragged(with event: NSEvent) {
    let current = event.location(in: self)
    ...
    // accumulate (CGPoint has no + or += in Swift; define small helper operators)
    delta += (current - previous)
    previous = current
    ...
}

override func update(_ currentTime: TimeInterval) {
    ...
    // use up your delta here, then reset it
    delta = .zero
}
cheers
My guess is that the issue is with coordinate spaces. You're performing calculations based on -[NSEvent locationInWindow], which is, of course, in the window coordinate system. In what coordinate system is babies.position? It's at least in a view's coordinate system, although SpriteKit may also impose another coordinate space.
To convert the point to the view's coordinate space, you will want to use NSPoint point = [theView convertPoint:[NSEvent locationInWindow] fromView:nil];. To convert the point from the view's coordinate space to the scene's, you'd use CGPoint cgpoint = [theScene convertPointFromView:NSPointToCGPoint(point)];. If babies is not the scene object, then to convert to the coordinate system used by babies.position, you'd do cgpoint = [babies.parent convertPoint:cgpoint fromNode:scene];. You'd then compute translation by taking the difference between babiesStart and cgpoint.
Update: actually, you wouldn't compare the result with babiesStart as such. You'd compare it with the result of the same coordinate transformation done on the original cursor location. So, you'd compute dragStart similar to how you'd compute cgpoint. Later, you'd take the difference between those.
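Put concretely, the whole chain might look like this (a sketch; skView, scene, dragStart, and babiesStart are illustrative ivar names, not from the question):

- (void)mouseDragged:(NSEvent *)theEvent {
    // window -> view
    NSPoint viewPoint = [skView convertPoint:[theEvent locationInWindow] fromView:nil];
    // view -> scene
    CGPoint scenePoint = [scene convertPointFromView:NSPointToCGPoint(viewPoint)];
    // scene -> the coordinate system babies.position lives in
    CGPoint localPoint = [babies.parent convertPoint:scenePoint fromNode:scene];
    // dragStart must have gone through the same conversions in mouseDown
    babies.position = CGPointMake(babiesStart.x + (localPoint.x - dragStart.x),
                                  babiesStart.y + (localPoint.y - dragStart.y));
}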
This is normal behavior.
When you take the mouse position as reported by the event, some time passes before the Sprite Kit view is redrawn, at which point the cursor has already moved to a new position.
There isn't much you can do about it, except maybe predict the position during very fast movement: factor in the distances of the previous mouse events to estimate where the next position is likely to be, then adjust the actual mouse position a little in the most recent general movement direction.
Usually this is overkill though.
How can I accept touch input beyond the scene's bounds, so that no matter what I set self.position to, touches can still be detected?
I'm creating a tile-based game from a Ray Wenderlich tutorial on Cocos2d version 3.0. I am at the point of setting the view of the screen to a zoomed-in state on my tile map. I have successfully been able to do that, although now my touches are not responding since I'm outside the coordinate space the touches used to work in.
This method is called to set the zoomed view to the player's position:
- (void)setViewPointCenter:(CGPoint)position {
    CGSize winSize = [CCDirector sharedDirector].viewSizeInPixels;
    int x = MAX(position.x, winSize.width / 2);
    int y = MAX(position.y, winSize.height / 2);
    x = MIN(x, (_tileMap.mapSize.width * _tileMap.tileSize.width) - winSize.width / 2);
    y = MIN(y, (_tileMap.mapSize.height * _tileMap.tileSize.height) - winSize.height / 2);
    CGPoint actualPosition = ccp(x, y);
    CGPoint centerOfView = ccp(winSize.width / 2, winSize.height / 2);
    NSLog(@"centerOfView%@", NSStringFromCGPoint(centerOfView));
    CGPoint viewPoint = ccpSub(centerOfView, actualPosition);
    NSLog(@"viewPoint%@", NSStringFromCGPoint(viewPoint));
    // This changes the position of the helloworld layer/scene so that
    // we can see the portion of the tilemap we're interested in.
    // That however makes my touchBegan method stop firing.
    self.position = viewPoint;
}
This is what the NSLog prints from the method:
2014-01-30 07:05:08.725 TestingTouch[593:60b] centerOfView{512, 384}
2014-01-30 07:05:08.727 TestingTouch[593:60b] viewPoint{0, -832}
As you can see, the y coordinate is -832. If I comment out the line self.position = viewPoint, then self.position reads {0, 0} and touches are detectable again, but then we don't have a zoomed view on the character; instead it shows the view at the bottom left of the map.
Here's a video demonstration.
How can I fix this?
Update 1
Here is the github page to my repository.
Update 2
Mark has been able to come up with a temporary solution so far by setting the hitAreaExpansion to a large number like so:
self.hitAreaExpansion = 10000000.0f;
This will cause touches to respond again all over! However, if there is a solution that would not require me to set the property with an absolute number then that would be great!
-edit 3- (tl;dr version):
Setting the contentSize of the scene/layer to the size of the tilemap solves this issue:
[self setContentSize: self.tileMap.contentSize];
original replies below:
You would take the touch coordinate and subtract the layer position.
Generally something like:
touchLocation = ccpSub(touchLocation, self.position);
If you were to scale the layer, you would also need to account for that scale as well, as in the sketch below.
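For instance (a sketch, assuming a uniform layer scale):

// Undo the layer's position offset, then its scale, to get
// a touch location in the layer's local, unscaled space.
touchLocation = ccpSub(touchLocation, self.position);
touchLocation = ccpMult(touchLocation, 1.0f / self.scale);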
-edit 1-:
So, I had a chance to take another look, and it looks like my 'ridiculous' number was not ridiculous enough, or I had made another change. Anyway, if you simply add
self.hitAreaExpansion = 10000000.0f; // I'll let you find a more reasonable number
the touches will now get registered.
As for the underlying issue, I believe it to be one of content scale not being set correctly, but again, I'll leave that to you. I did, however, find out while looking through some of the tilemap classes that tileSize is said to be in pixels, not points, which I guess is somehow related to this.
-edit 2-:
It bugged me with the sub-optimal answer, so I looked a little further. Forgive me, I hadn't looked at v3 until I saw this question. :p
After inspecting the base class and observing the scene/layer's value of:
- (BOOL)hitTestWithWorldPos:(CGPoint)pos;
it became obvious that the content size of the scene/layer was being set to the current view size, which in the case of an iPad is (1024, 768).
The position of the layer after the setViewPointCenter call is fully above the initial view's position; hence, the touch was being suppressed. By setting the layer/scene contentSize to the size of the tilemap, the touchable area is now expanded over the entire map, which allows the node to process the touch.
I have a view (it's the view that is zoomed in a scrollview) that I am trying to make only stretch in the x direction when the user pinches or double-taps to zoom. This view is being constantly drawn on: up to 10 times per second, using Core Graphics to draw a graph.
So I override setTransform like so:
- (void)setTransform:(CGAffineTransform)newValue
{
    CGAffineTransform constrainedTransform = CGAffineTransformIdentity;
    constrainedTransform.a = newValue.a;
    [super setTransform:constrainedTransform];
}
And this generally gives me the behavior I want, except for one big problem. When the view is being drawn on very often and the user double taps to zoom in, the whole view will often just go black. This happens very rarely if I don't override the above method (although it still does happen once in a while). Also interesting is that when the user zooms using a pinch gesture, this does not happen. This is the function triggered by the double tap:
- (void)handleDoubleTap:(UIGestureRecognizer *)gestureRecognizer
{
    CGRect frame = [[self tiledScrollView] frame];
    float newScale = [[self tiledScrollView] zoomScale] * kZoomStep;
    CGRect zoomRect = [self zoomRectForFrame:frame
                                   withScale:newScale
                                  withCenter:[gestureRecognizer locationInView:[[self tiledScrollView] tileContainerView]]];
    [[self tiledScrollView] zoomToRect:zoomRect animated:YES];
}
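zoomRectForFrame:withScale:withCenter: isn't shown in the question; a typical implementation, along the lines of Apple's scroll view sample code, looks roughly like this (a sketch, not necessarily the asker's version):

- (CGRect)zoomRectForFrame:(CGRect)frame withScale:(float)scale withCenter:(CGPoint)center
{
    CGRect zoomRect;
    // The rect is in the zoomed view's coordinates; dividing by the
    // scale yields the region that will fill the frame at that scale.
    zoomRect.size.width = frame.size.width / scale;
    zoomRect.size.height = frame.size.height / scale;
    zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0);
    zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0);
    return zoomRect;
}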
And the pinch gestures are just handled by UIScrollView's stock pinch recognizer. One thing that does bother me about the above function is that zoomRect is scaled in both the x and y directions even though my view only scales in one direction. I have tried scaling zoomRect in only the x direction and then calling zoomToRect, but then the graph won't zoom.
So it seems that UIScrollView's pinch recognizer and my double-tap recognizer scale the view in two different ways, and only the pinch recognizer's way works with my code. Also, this problem becomes very rare when the drawing rate is slowed, and nonexistent when there is no drawing going on. I've tried using queues to make drawing and zooming sequential, but I haven't had much luck with that. I suspect that zoomToRect and the other zooming methods may dispatch tasks off to other threads, so I don't think they can be serialized that way. But is this something I should look into more?
Any help would be greatly appreciated. I've wasted days trying to fix this already -_- Thanks
FUTURE VIEWERS:
I have managed to finish this rotation animation; the code, with a description, can be found in this question: NSImage rotation in NSView is not working.
Before you proceed, please upvote Duncan C's answer, as I managed to achieve this rotation based on it.
I have an image like this,
I want to keep rotating this sync icon, on a different thread. I tried using Quartz Composer and adding the animation to a QCView, but it produces a very odd effect and is very slow too.
Question:
How do I rotate this image continuously with as little processing expense as possible?
Effort
I read the Core Animation and Quartz 2D documentation, but I failed to find a way to make it work. The only thing I know so far is that I have to use:
CALayer
CGImageRef
AffineTransform
NSAnimationContext
Now, I am not expecting code, but an explanation with pseudocode would be great!
Getting an object to rotate more than 180 degrees is actually a little bit tricky. The problem is that you specify a transformation matrix for the ending rotation, and the system decides to rotate in the other direction.
What I've done is to create a CABasicAnimation of less than 180 degrees, set up to be additive, and with a repeat count. Each step in the animation rotates the object a bit more.
The following code is taken from an iOS application, but the technique is identical in Mac OS.
CABasicAnimation *rotate = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
rotate.removedOnCompletion = FALSE;
rotate.fillMode = kCAFillModeForwards;

// Animate in quarter turns; with repeatCount = 11 this totals 2.75 turns.
// (2π is a full turn, so π/2 is a quarter turn.)
[rotate setToValue:[NSNumber numberWithFloat:-M_PI / 2]];
rotate.repeatCount = 11;
rotate.duration = duration / 2;
rotate.beginTime = start;
rotate.cumulative = TRUE;
rotate.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
CAAnimation objects operate on layers, so for Mac OS you'll need to set the "wants layer" property in Interface Builder, and then add the animation to your view's layer.
To make your view rotate forever, you'd set repeat count to some very large number like 1e100.
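For what it's worth, Core Animation also documents a dedicated constant for this:

rotate.repeatCount = HUGE_VALF; // documented value for "repeat forever"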
Once you've created your animation, you'd add it to your view's layer with code something like this:
[myView.layer addAnimation:rotate forKey:@"rotateAnimation"];
That's about all there is to it.
Update:
I've recently learned of another way to handle rotations of greater than 180 degrees, or continuous rotations.
There is a special object called a CAValueFunction that lets you apply a change to your layer's transform using an arbitrary value, including values that specify multiple full rotations.
You create a CABasicAnimation on your layer's transform property, but then instead of providing a transform, the value you supply is an NSNumber that gives the new rotation angle. If you provide a new angle like 20π, your layer will rotate 10 full rotations (2π per rotation). The code looks like this:
// Create a CABasicAnimation object to manage our rotation.
CABasicAnimation *rotation = [CABasicAnimation animationWithKeyPath:@"transform"];
rotation.duration = 10.0;
CGFloat angle = 20 * M_PI;

// Set the ending value of the rotation to the new angle.
rotation.toValue = @(angle);

// Have the rotation use linear timing.
rotation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];

/*
 This is the magic bit. We add a CAValueFunction that tells the CAAnimation we are
 modifying the transform's rotation around the Z axis.
 Without this, we would supply a transform as the fromValue and toValue, and
 for rotations > a half-turn, we could not control the rotation direction.
 By using a value function, we can specify arbitrary rotation amounts and
 directions, and even rotations greater than 360 degrees.
 */
rotation.valueFunction = [CAValueFunction functionWithName:kCAValueFunctionRotateZ];

/*
 Set the layer's transform to its final state before submitting the animation, so
 it is in its final state once the animation completes.
 */
imageViewToAnimate.layer.transform =
    CATransform3DRotate(imageViewToAnimate.layer.transform, angle, 0, 0, 1.0);

[imageViewToAnimate.layer addAnimation:rotation forKey:@"transform.rotation.z"];
(I extracted the code above from a working example application and took out some things that weren't directly related to the subject. You can see this code in use in the KeyframeViewAnimations project (link) on GitHub. The code that does the rotation is in a method called `handleRotate`.)
I have an application where, in one window, there is an NSImageView. The user should be able to drag and drop ANY FILE/FOLDER (not only images) into the image view, so I subclassed NSImageView class to add support for those types.
The reason why I chose an NSImageView instead of a normal view is because I also wanted to display an animation (say an arrow pointing downwards and going up and down) when the user hovers over with files ready to drop. My question is this: what would be the best way (most efficient, quickest, least CPU usage, etc) to do this?
In fact, I have already done it, but what made me ask this question is the fact that when I set the images to change at a rate below 0.02 sec it starts to lag. Here is how I did it:
In the NSImageView subclass:
- Have an ivar: NSTimer *animTimer;
- Override awakeFromNib, calling [super awakeFromNib] and loading the images into an array (about 45 images) using NSImage.
- Whenever the user enters with files, start animTimer with frequency = 0.025 (less and it lags) and a selector that sets the next image in the array (called drawNextImage); see the sketch after this list.
- Whenever the user exits or ends the drag and drop, call [animTimer invalidate] to stop updating images.
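The timer setup itself isn't shown in the question; a minimal sketch of what it might look like (method names here are illustrative):

- (void)startDragAnimation
{
    // Fires every 0.025 s; each tick advances to the next frame.
    animTimer = [NSTimer scheduledTimerWithTimeInterval:0.025
                                                 target:self
                                               selector:@selector(animTimerFired:)
                                               userInfo:nil
                                                repeats:YES];
}

- (void)animTimerFired:(NSTimer *)timer
{
    [self drawNextImage];
}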
Here is how I set the image in the subclass:
- (void)drawNextImage
{
    currentImageIndex++; // ivar; kNumberDNDImages is a constant defined as 46
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }
    [super setImage:[imagesArray objectAtIndex:currentImageIndex]]; // imagesArray is an ivar
}
So, how would I make this quick enough? I'd like the frequency to be about 0.01 sec, but anything below 0.025 lags, so that is what I have set for the moment. Oh, and my images are the correct size (plus or minus a pixel) and they are in .png (I need the transparency; JPEGs, for example, won't do).
EDIT:
I have tried to follow NSResponder's suggestion, and have updated my method to this:
- (void)drawNextImage
{
    currentImageIndex++;
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }
    NSRect smallImgRect;
    smallImgRect.origin = NSMakePoint(kSmallImageWidth * currentImageIndex,
                                      [self.bigDNDImage size].height); // Upper left corner - ??
    smallImgRect.size = NSMakeSize(kSmallImageWidth, [self.bigDNDImage size].height); // Bottom left corner - ??
    NSPoint imgPoint = NSMakePoint(([self bounds].size.width - kSmallImageWidth) / 2, 0);
    [bigDNDImage drawAtPoint:imgPoint fromRect:smallImgRect operation:NSCompositeCopy fraction:1];
}
I have also moved this method and the other drag and drop methods from the NSImageView subclass to an NSView subclass I already had. Everything is exactly the same, except for the superclass and this method. I also modified some constants.
In my early testing of this, I got some error/warning messages that didn't stop execution, talking about NSGraphicsContext or something. These have disappeared now, but just so you know. I have absolutely no idea why they were appearing or what they mean. If they ever appear again I'll worry about them, not now :)
EDIT 2:
This is what I'm doing now:
- (void)drawNextImage
{
    currentImageIndex++;
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }
    [self drawCurrentImage];
}

- (void)drawCurrentImage
{
    NSRect smallImgRect;
    smallImgRect.origin = NSMakePoint(kSmallImageWidth * currentImageIndex, 0); // Bottom left, for sure
    smallImgRect.size = NSMakeSize(kSmallImageWidth, [self.bigDNDImage size].height); // Bottom left as well
    NSPoint imgPoint = NSMakePoint(([self bounds].size.width - kSmallImageWidth) / 2, 0);
    [bigDNDImage drawAtPoint:imgPoint fromRect:smallImgRect operation:NSCompositeCopy fraction:1];
}
And the catch here is to call drawCurrentImage when drawRect is called (see, it actually was easier to solve than I thought).
Now, I must say I haven't tried this with the composite image, because I couldn't find a good and quick way to merge 40+ images the way I wanted (one next to the other). But for those interested: I modified this to do the same thing as my NSImageView subclass (reading 40+ images from an array and displaying them) and I found no speed bump; NSView is as laggy below 0.025 as NSImageView. I also found some problems when using Core Animation (the image is drawn in weird places instead of where I tell it to) and some warnings about NSGraphicsContext, which I don't know how to solve at all (I'm a complete noob when it comes to drawing with Objective-C's tools). So for the time being I'm using NSImageView, unless I find a way to merge all those images and try it with NSView.
Core Animation would probably be quickest, since it'll do everything on the GPU. Create a layer for each image, setting each layer's contents to the CGImage you can make from each image, add them all as sublayers of a single top-level layer, host the top-level layer in a plain NSView, and then just toggle each image layer's hidden property in turn.
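A rough sketch of that layer-per-frame idea (all names here are illustrative, not from the question; on OS X 10.6+ a layer's contents can be set to an NSImage directly):

// Build one layer per frame, all hidden except the current one.
hostView.layer = [CALayer layer]; // for a layer-hosting view, set the
hostView.wantsLayer = YES;        // layer before setting wantsLayer
NSMutableArray *frameLayers = [NSMutableArray array];
for (NSImage *image in imagesArray) {
    CALayer *frameLayer = [CALayer layer];
    frameLayer.frame = hostView.bounds;
    frameLayer.contents = image;
    frameLayer.hidden = YES;
    [hostView.layer addSublayer:frameLayer];
    [frameLayers addObject:frameLayer];
}

// Per tick: swap visibility inside a transaction to avoid the implicit fade.
[CATransaction begin];
[CATransaction setDisableActions:YES];
[(CALayer *)frameLayers[previousIndex] setHidden:YES];
[(CALayer *)frameLayers[currentIndex] setHidden:NO];
[CATransaction commit];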
I'd probably draw all of the component images into one long image, and draw segments into a view using -drawAtPoint:fromRect:operation:fraction:. I'm sure you could make it faster than that by resorting to OpenGL, though.