Troubles Prepositioning Node Using Coordinate System Conversions - objective-c

Okay, so I have been trying to preposition a sprite node before adding it to the scene. The only problem is that I need to know the (0, 0.5), or (left, middle), position of the node in scene coordinates before I can position it properly.
I know about the convertPoint:(CGPoint) toNode/fromNode:(SKSpriteNode *) methods, and currently I have worked out the following within the scene's code:
[node convertPoint:CGPointMake(0, 0.5) toNode:self];
I also wasn't sure if it was confusing self (the scene) with self (the node), so I tried
SKScene *scene = self;
[node convertPoint:CGPointMake(0, 0.5) toNode:scene];
I am pretty sure that I didn't have to make the distinction, but I tried anyway.
The logged result of both attempts was (0,0.5).
The node.position is (50, 100).
In case the above is not clear, I am trying to find the position on the edge of the frame, which should be equal to the node's width. The reason I am not using the width directly is that I am placing it with respect to another node, and the two nodes may be rotated.
The theories I am trying to reference are from Apple's SpriteKit Programming Guide.
If there is an easier way to establish a distance between two nodes based on the width of one node, taking rotation into account, feel free to post it; I would love to know, although I still need the node conversion for other methods.
Thank you in advance for all of your help.

You shouldn't change the anchor point once the node has been added, as doing so will inherently change its position. If you are using an anchor point of (0.5, 0.5) to rotate the nodes, leave it like that. If you want to get the maxX point from a rotated node, you could do something like this:
// sprite is the rotated node; sprite2 is the node to place next to it.
SKSpriteNode *sprite2 = [SKSpriteNode spriteNodeWithColor:[UIColor blackColor] size:CGSizeMake(50, 50)];

// Unit direction vector derived from the first sprite's rotation.
float angle = -sprite.zRotation;
CGPoint dirVect = CGPointMake(cosf(angle), sinf(angle));

// Desired distance between the two sprites' anchor points (half of each width).
CGFloat distance = sprite.frame.size.width / 2 + sprite2.frame.size.width / 2;

CGPoint destPoint = CGPointMake(sprite.position.x + (dirVect.x * distance),
                                sprite.position.y + (dirVect.y * distance));
sprite2.position = destPoint;
[self addChild:sprite2];
Here sprite is the node you have rotated and sprite2 is the node you want to add relative to the first node. distance should be the distance (excuse the pun) between the anchor points of the two nodes.
Let me know if this is what you are looking for. If not, a screenshot would help :)

Related

How could I make an "iPod Wheel" type control on iPhone?

I want to create a sort of "iPod Wheel" control in a Swift project that I'm doing. I've got everything drawn out, but now it's time to actually make this thing work.
What would be the best way to recognize "spinning", so to speak; or, to describe that more clearly, to detect when the user is actively pressing the wheel and spinning his/her thumb around the wheel in a clockwise or counter-clockwise direction?
I will no doubt want to use touchesBegan/touchesMoved/touchesEnded. What's the best way to figure out spinning though?
I'm thinking:
a) Determine in touchesMoved if the user's touch is within the circle, by determining the radius from the center point. The center point and radius are easily obtainable. Using these, however, how can I determine the outer edge of the circle/wheel, to know whether the user is within the actual circle? (Their touch could still be in the view, but outside the actual wheel portion.)
b) Determine the current angle and how it has changed from the previous angle. By that I mean I would use the center point of the circle as one point and the user's current touch as the second point, which gives me my vector. I would also have a baseline angle, likely center point to 12 o'clock. I would compare the two vectors (I already have a VectorMath class for this from something else I'm doing) and see that my angle is 0. If the user's touch were at 3 o'clock and I compared it to the baseline angle, I would see that the angle is 90 degrees. I would continually calculate the angle, and perhaps every 5 degrees of change would warrant a change in the control's output (depending on desired sensitivity).
Does this seem like the best way to do this? I think this would be an ideal way, but I am still not sure how to calculate the circle's outer edge, or how to determine whether a user's touch is within it.
You are on the right track. I think approach b) will work.
Remember the starting position of the finger at the touchesBegan event. Imagine a line from the finger position to the middle of the button circle. For the touchesMoved event, again imagine a virtual line from the new position to the center of the circle. Using the formula from http://mathworld.wolfram.com/Line-LineAngle.html (or some code) you can determine the angle between the two lines. If it's a negative angle the user is turning the wheel counter-clockwise; otherwise it's clockwise.
To determine if the touch event was inside the ring, calculate the distance from the center of the circle to the point of touch. It should be between the minimum and the maximum distance (the inner circle radius and the outer circle radius). Calculating the distance between two points is explained at https://www.mathsisfun.com/algebra/distance-2-points.html
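For reference, here is a minimal sketch of both checks. The function names and the center, innerRadius, and outerRadius parameters are illustrative placeholders, not part of the original answer:
#import <UIKit/UIKit.h>
#import <math.h>

// Hypothetical helper: is the touch inside the ring between the two radii?
static BOOL TouchIsOnWheel(CGPoint touch, CGPoint center,
                           CGFloat innerRadius, CGFloat outerRadius)
{
    CGFloat dx = touch.x - center.x;
    CGFloat dy = touch.y - center.y;
    CGFloat distance = sqrt(dx * dx + dy * dy);   // distance from the wheel's center
    return distance >= innerRadius && distance <= outerRadius;
}

// Hypothetical helper: signed angle between the previous and current touch,
// measured around the wheel's center.
static CGFloat SignedAngleBetweenTouches(CGPoint previous, CGPoint current, CGPoint center)
{
    CGFloat a1 = atan2(previous.y - center.y, previous.x - center.x);
    CGFloat a2 = atan2(current.y - center.y, current.x - center.x);
    CGFloat delta = a2 - a1;
    // Normalize to (-pi, pi] so a small movement across the boundary isn't misread.
    while (delta > M_PI)   delta -= 2 * M_PI;
    while (delta <= -M_PI) delta += 2 * M_PI;
    // The sign gives the rotation direction; which sign means clockwise depends
    // on the view's coordinate system.
    return delta;
}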
I think you're almost there, although I'd do something slightly different on your point b.
If you think about it, when you start "spinning" on your iPod you don't need to start from a precise position; you start spinning from "where you started". Therefore I wouldn't set my "baseline angle" at π/2; I'd set my baseline (or 0°) angle at the point the user taps for the first time, and from then on I'd count the offset angles, clockwise and counterclockwise.
Practically speaking, I don't think there would be much difference between the two approaches, except maybe in some of the calculations you'll do on the angles; it just makes more sense, imho, to start counting from the first input rather than setting a baseline at π/2 and counting the first angle.
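A rough sketch of that idea, assuming a UIControl subclass with hypothetical wheelCenter, startAngle, and totalRotation properties (none of these names come from the answer):
// Store the angle at the first touch, then report offsets from it as the finger moves.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self];
    self.startAngle = atan2(p.y - self.wheelCenter.y, p.x - self.wheelCenter.x);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self];
    CGFloat angle = atan2(p.y - self.wheelCenter.y, p.x - self.wheelCenter.x);
    // Offset from where the finger first landed, not from a fixed 12 o'clock baseline.
    self.totalRotation = angle - self.startAngle;
    [self sendActionsForControlEvents:UIControlEventValueChanged];
}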
I am answering in parts.
// Get a position based on the angle
float xPosition = center.x + (radiusX * sinf(angleInRadians)) - (CGRectGetWidth([cell frame]) / 2);
float yPosition = center.y + (radiusY * cosf(angleInRadians)) - (CGRectGetHeight([cell frame]) / 2);
float scale = 0.75f + 0.25f * (cosf(angleInRadians) + 1.0);
next
[cell setTransform:CGAffineTransformScale(CGAffineTransformMakeTranslation(xPosition, yPosition), scale, scale)];
// Tweak alpha using the same system as applied for scale, this
// time with 0.3 the minimum and a semicircle range of 0.5
[cell setAlpha:(0.3f + 0.5f * (cosf(angleInRadians) + 1.0))];
and
- (void)spin:(SpinGestureRecognizer *)recognizer
{
    CGFloat angleInRadians = -[recognizer rotation];
    CGFloat degrees = 180.0 * angleInRadians / M_PI; // Radians to degrees
    [self setCurrentAngle:[self currentAngle] + degrees];
    [self setAngle:[self currentAngle]];
}
Again, check wheelview.m in the PhotoWheel project on GitHub.

Sprite Kit: SKSpriteNodes hanging off left side of screen despite (0,0) anchor points

I'm adding an array of SKSpriteNodes (640px wide) directly to my SKScene:
for (backgroundTile in backgroundTiles) {
    backgroundTile.anchorPoint = CGPointMake(0.0, 0.0);
    backgroundTile.position = CGPointMake(BG_TILE_X_POS, tilePlacementPositionY);
    [self addChild:backgroundTile];
    tilePlacementPositionY += tileHeight;
}
(BG_TILE_X_POS is 0.0)
But despite having an X position of 0.0 and their anchor points being set to (0.0,0.0) they still hang off the left side of the screen by 150px.
I can compensate for that by giving them an X position of 150, and I have also tried:
self.size = view.bounds.size;
…but that only enlarges the visible parts of the sprites so that they fill the screen, cropping off the top sprite.
I assume I'm making a rookie mistake, but looking through the documentation, nothing strikes me as obvious (which I guess it should be).
So, how do I position the sprites flush to the left edge? Any help would be appreciated.
Thanks.
I'd overlooked the obvious. I'd only created placeholder sprites at 2x resolution, but stored them in their image sets as 1x. Finally, with the setting:
self.size = view.bounds.size;
everything behaves as expected.
Silly mistake. I was convinced the problem was with the code, not the assets.

How can I move a CCNode along an arc, with linear animation speed?

I'm trying to animate a CCNode in a semicircular motion and have it constantly move at the same speed. I thought I could achieve this with a Bezier animation.
I'm trying to find the correct implementation to run an action with CCActionBezierBy (ref) that will not have an ease rate at all.
CGFloat duration = 5;
// bezierConfig is already set
CGFloat rate = 0.0f;
id action = [CCActionBezierBy actionWithDuration:duration bezier:bezierConfig];
id ease = [CCActionEaseRate actionWithAction:action rate:rate];
id spawn = [CCActionSpawn actions:action, ease, nil];
As I manipulate the rate I can see results, with 0 being the lowest ease animation. But how can I make the animation completely linear?
Place the moving node inside a parent node. The moving node's offset from the parent's origin becomes the radius of the motion. Then run two rotation actions: one rotates the parent at a constant speed, and the other rotates the node itself in the opposite direction so that it keeps its own orientation.
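A rough sketch of that setup with cocos2d v3 style actions follows; arcCenter, radius, and the duration are made-up values, and yourNode stands for the node you actually want to move:
// The pivot rotates at constant angular speed; the child counter-rotates so it
// keeps its own orientation while tracing a circular arc at constant linear speed.
CCNode *pivot = [CCNode node];
pivot.position = arcCenter;          // center of the circular path (assumed)
[self addChild:pivot];

CCNode *mover = yourNode;            // the node to move (assumed)
mover.position = ccp(radius, 0);     // offset from the pivot = radius of the arc
[pivot addChild:mover];

CGFloat duration = 5;
CGFloat degrees = 180;               // half a circle over the duration
[pivot runAction:[CCActionRotateBy actionWithDuration:duration angle:degrees]];
[mover runAction:[CCActionRotateBy actionWithDuration:duration angle:-degrees]];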

applyForce(0, 400) - SpriteKit inconsistency

So I have an object that has a physicsBody and gravity affects it. It is also dynamic.
Currently, when the users touches the screen, I run the code:
applyForce(0, 400)
The object moves up about 200 and then falls back down due to gravity. This only happens some of the time. Other times, it results in the object only moving 50ish units in the Y direction.
I can't find a pattern... I put my project on dropbox so it can be opened if anyone is willing to look at it.
https://www.dropbox.com/sh/z0nt79pd0l5psfg/bJTbaS2JpY
EDIT: It seems this happens when the player is bouncing off of the ground slightly for a moment after impact. Is there a way I can make it so the player doesn't bounce at all?
EDIT 2: I tried to solve this using the friction parameter, only allowing the player to "jump" when friction was 0 (you would think this would cover all cases where the player was airborne), but friction appears to be greater than 0 at all times. How else might I detect whether the player is touching an object (other than by using the y location)?
Thanks
Suggested Solution
If you're trying to implement a jump feature, I suggest you look at applyImpulse instead of applyForce. Here's the difference between the two, as described in the Sprite Kit Programming Guide:
You can choose to apply either a force or an impulse:
A force is applied for a length of time based on the amount of simulation time that passes between when you apply the force and when the next frame of the simulation is processed. So, to apply a continuous force to a body, you need to make the appropriate method calls each time a new frame is processed. Forces are usually used for continuous effects.
An impulse makes an instantaneous change to the body’s velocity that is independent of the amount of simulation time that has passed. Impulses are usually used for immediate changes to a body’s velocity.
A jump is really an instantaneous change to a body's velocity, meaning that you should apply an impulse instead of a force. To use the applyImpulse: method, figure out the desired instantaneous change in velocity, multiply by the body's mass, and use that as the impulse parameter into the function. I think you'll see better results.
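For example, a jump might look roughly like this; player and the 300 points-per-second change in velocity are arbitrary placeholders:
// Impulse = mass * desired instantaneous change in velocity.
CGFloat desiredDeltaVy = 300.0;   // made-up jump speed
CGVector impulse = CGVectorMake(0, player.physicsBody.mass * desiredDeltaVy);
[player.physicsBody applyImpulse:impulse];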
Explanation for Unexpected Behavior
If you're calling applyForce: outside of your update: function, what's happening is that your force is being multiplied by the amount of time passed between when you apply the force and when the next frame of the simulation is processed. This multiplier is not a constant, so you're seeing a different change in velocity every time you call applyForce: in this manner.
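By contrast, a continuous force only behaves predictably when it is reapplied every simulation step, for example in the scene's update: method (thrusting and player are hypothetical properties):
- (void)update:(NSTimeInterval)currentTime
{
    // Reapply the force each frame so the effect is continuous and does not
    // depend on when, relative to the frame, the touch happened.
    if (self.thrusting) {
        [self.player.physicsBody applyForce:CGVectorMake(0, 400)];
    }
}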
@godel9 has a good suggested solution, although, in my own testing, the explanation given for the unexpected behaviour is not correct.
From the SKPhysicsBody Class Reference:
The force is applied for a single simulation step (one frame).
Referring back to the SKScene Class Reference's section on the -update method:
...it is called exactly once per frame, so long as the scene is presented in a view and is not paused.
So we can assume that calling -applyForce: in SKScene's -update method should not cause a problem. But as observed, the force does not exceed gravity, despite applying an upward force much greater than gravity (400 newtons vs 9.81).
I created a test project that would create two nodes, one that falls naturally, setting affectedByGravity to TRUE, and another that calls -applyForce with the same expected gravity vector (0 newtons in the x direction, and -9.81 in the y direction). I then calculated the difference in velocity of each node in one time step, and the length of time step. From this, I then logged the acceleration (change in velocity / change in time).
Here is a snippet from my SKScene subclass:
- (id)initWithSize:(CGSize)size
{
    if (self = [super initWithSize:size])
    {
        self.backgroundColor = [UIColor purpleColor];

        SKShapeNode *node = [[SKShapeNode alloc] init];
        node.path = CGPathCreateWithEllipseInRect(CGRectMake(0, 0, 10, 10), nil);
        node.name = @"n";
        node.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:5];
        node.position = CGPointMake(0, 450);
        node.physicsBody.linearDamping = 0;
        node.physicsBody.affectedByGravity = NO;
        [self addChild:node];

        node = [[SKShapeNode alloc] init];
        node.path = CGPathCreateWithEllipseInRect(CGRectMake(0, 0, 10, 10), nil);
        node.name = @"n2";
        node.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:5];
        node.position = CGPointMake(20, 450);
        node.physicsBody.linearDamping = 0;
        [self addChild:node];
    }
    return self;
}

- (void)update:(NSTimeInterval)currentTime
{
    SKNode *node = [self childNodeWithName:@"n"];
    SKNode *node2 = [self childNodeWithName:@"n2"];

    // Acceleration of each node since the last frame (change in velocity / change in time).
    CGFloat acc1 = (node.physicsBody.velocity.dy - self.previousVelocity) / (currentTime - self.previousTime);
    CGFloat acc2 = (node2.physicsBody.velocity.dy - self.previousVelocity2) / (currentTime - self.previousTime);

    [node2.physicsBody applyForce:CGVectorMake(0, node.physicsBody.mass * -150 * self.physicsWorld.gravity.dy)];

    NSLog(@"x:%f, y:%f, acc1:%f, acc2:%f", node.position.x, node.position.y, acc1, acc2);

    self.previousVelocity = node.physicsBody.velocity.dy;
    self.previousTime = currentTime;
    self.previousVelocity2 = node2.physicsBody.velocity.dy;
}
The results are unusual. The node that is affected by gravity in the simulation has an acceleration that is consistently multiplied by a factor of 150 when compared to the node whose force was manually applied. I have attempted this with nodes of varying size and density, but the same scalar multiplier exists.
From this I must deduce that SpriteKit internally has a default 'pixel-to-meter' ratio; that is to say, each 'meter' is equal to exactly 150 pixels. This is sometimes useful, as otherwise the scene is often too large, meaning forces react slowly (think of watching an airplane from the ground: it is travelling very fast but seemingly moving very slowly).
Sprite Kit documentation frequently suggests that exact physics calculations are not recommended (seen specifically in the section 'Fudging the Numbers'), but this inconsistency took me a long time to pin down. Hope this helps!

Visualizing the Anchor Point of a UIImageView

Is there an easy way of putting a mark (a cross, for example) on the anchor point of a UIImageView? I'm trying to line up several rotating images by their anchor points, and being able to see these points would make the job a lot easier.
Many thanks.
You are asking how to visualize the anchor point within a view, but it seems to me that you really want to align the anchor points. I'll try to answer both questions.
Visualizing the anchor point.
Every view on iOS has an underlying layer, and that layer has an anchor point. The anchor point is in the unit coordinate space of the layer (x and y go from 0 to 1). This means that you can multiply x by the width and y by the height to get the position of the anchor point inside the layer, in the coordinate space of the view/layer. You can then place a subview/sublayer there to show the location of the anchor point.
In code you could do something like this to display a small black dot where the anchor point is.
CALayer *anchorPointLayer = [CALayer layer];
anchorPointLayer.backgroundColor = [UIColor blackColor].CGColor;
anchorPointLayer.bounds = CGRectMake(0, 0, 6, 6);
anchorPointLayer.cornerRadius = 3;

// Convert the unit-space anchor point into a point in the layer's bounds.
CGPoint anchor = viewWithVisibleAnchorPoint.layer.anchorPoint;
CGSize size = viewWithVisibleAnchorPoint.layer.bounds.size;
anchorPointLayer.position = CGPointMake(anchor.x * size.width,
                                        anchor.y * size.height);

[viewWithVisibleAnchorPoint.layer addSublayer:anchorPointLayer];
You can see the result in the image below for four different rotations.
Aligning layers by their anchor point
That is cool and all, but it's actually easier than that to align anchor points.
The key trick is that the position and the anchorPoint are always the same point, only expressed in two different coordinate spaces. The position is specified in the coordinate space of the superlayer. The anchor point is specified in the unit coordinate space of the layer itself.
The nice thing about this is that views that have their position properties aligned will automatically have their anchorPoints aligned. Note that the content is drawn relative to the anchor point. Below is an example of a bunch of views that all have the same y component of their position, and thus they are aligned in y.
There really isn't any special code to do this. Just make sure that the position properties are aligned.
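For instance, with made-up view names and coordinates, aligning three layers in y is just a matter of giving their positions the same y value:
// Because position and anchorPoint refer to the same point, layers that share a
// y position are automatically aligned at their anchor points, whatever those are.
CGFloat sharedY = 200.0;
viewA.layer.position = CGPointMake(60.0, sharedY);
viewB.layer.position = CGPointMake(160.0, sharedY);
viewC.layer.position = CGPointMake(260.0, sharedY);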