SpriteKit: simulating fluid currents - Objective-C

Is there an easy way to simulate currents within a scene? I know SpriteKit offers gravity, mass, density, drag and other attributes, but what I am not seeing is how I would simulate a current affecting objects in the scene.
For example, say I have an object dropping under gravity and density from the top of the screen to the bottom, as though it were falling through water; setting this up should be fairly straightforward. However, let's say I want to add a current flowing left to right through the middle of the scene, so that as the object falls it begins to encounter the influence of the current and its fall starts to drift with it. As it gets closer to the center of the current its left/right movement increases, and as it passes through the center and moves away from it, the left/right movement slows down again.
Is this possible? Do I need to create invisible/transparent nodes moving left to right with gravity to simulate this? Or is there a different approach I should use?

This can be handled in your -update method. You can check whether a certain node is located within a certain region and apply a force on it, as if it were being affected by a current.
EDIT: I have posted a way you can differentiate the force based on the center of the current.
-(void)update:(CFTimeInterval)currentTime
{
    for (SKNode *node in self.children)
    {
        if (node.position.y > 300 && node.position.y < 500) // Let's say the current is between these values; modify to your situation.
        {
            float diff = ABS(node.position.y - 400);            // Difference from the center of the current.
            CGVector force = CGVectorMake(2 * (100 - diff), 0); // Variable force, strongest at the center.
            [node.physicsBody applyForce:force];
        }
    }
}
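For context, a node this force would act on might be set up roughly as below; the ball node, its size, and the physics values are placeholders to adapt to your scene:

-(void)didMoveToView:(SKView *)view
{
    // A falling "ball" that gravity pulls down and the -update force above pushes sideways.
    SKSpriteNode *ball = [SKSpriteNode spriteNodeWithColor:[SKColor whiteColor]
                                                      size:CGSizeMake(20, 20)];
    ball.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMaxY(self.frame) - 20);
    ball.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:ball.size];
    ball.physicsBody.density = 2.0;       // heavier bodies need a stronger current
    ball.physicsBody.linearDamping = 1.0; // crude stand-in for water drag
    [self addChild:ball];
}

As the ball falls through the y = 300-500 band, the horizontal force in -update produces the drift you describe: it increases toward the center of the current and fades out again past it.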

Find out when SCNNode disappears from SCNScene

Does a particular method get called when an SCNNode is removed from the scene?
-(void)removeFromParentNode;
Does not get called on the SCNNode object.
To set the scene:
I am using gravity to pull down an object. When an object goes too far down, it automatically disappears and the draw calls and polygon counts decrease. So the SCNNode is definitely being destroyed, but is there a way I could hook into the destruction?
Other answers covered this pretty well already, but to go a bit further:
First, your node isn't being removed from the scene — its content is passing outside the camera's viewing frustum, which means SceneKit knows it doesn't need to issue draw calls to the GPU to render it. If you enumerate the child nodes of the scene (or of whatever parent contains the nodes you're talking about), you'll see that they're still there. You lose some of the rendering performance cost because SceneKit doesn't need to issue draw calls for stuff that it knows won't be visible in the frame.
(As noted in Tanguy's answer, this may be because of your zFar setting. Or it may not; it depends on which direction the nodes are falling out of the camera's view.)
But if you keep adding nodes and letting physics drop them off the screen, you'll accumulate a pre-render performance cost, as SceneKit has to walk the scene graph every frame and figure out which nodes it'll need to issue draw calls for. This cost is pretty small for each node, but it could eventually add up to something you don't want to deal with.
And since you want to have something happen when the node falls out of frame anyway, you just need to find a good opportunity to both deal with that and clean up the disappearing node.
So where to do that? Well, you have a few options. As has been noted, you could put something into the render loop to check the visibility of every node on every frame:
- (void)renderer:(id<SCNSceneRenderer>)renderer didSimulatePhysicsAtTime:(NSTimeInterval)time {
    if (![renderer isNodeInsideFrustum:myNode withPointOfView:renderer.pointOfView]) {
        // it's gone, remove it from the scene
    }
}
But that's a somewhat expensive check to be running on every frame (remember, you're targeting 30 or 60 fps here). A better way might be to let the physics system help you:
Create a node with an SCNBox geometry that's big enough to "catch" everything that falls off the screen.
Give that node a static physics body, and set up the category and collision bit masks so that your falling nodes will collide with it.
Position that node just outside of the viewing frustum so that your falling objects hit it soon after they fall out of view.
Implement a contact delegate method to destroy the falling nodes:
- (void)physicsWorld:(SCNPhysicsWorld *)world didBeginContact:(SCNPhysicsContact *)contact {
    if (/* sort out which node is which */) {
        [fallingNode removeFromParentNode];
        // ... and do whatever else you want to do when it falls offscreen.
    }
}
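For the first three steps, the catch node setup might look roughly like the sketch below; the box size, its position, and the bitmask values are assumptions you would adapt to your scene:

// A large static "catcher" placed just below the viewing frustum, so it's never rendered.
SCNBox *catchBox = [SCNBox boxWithWidth:200 height:10 length:200 chamferRadius:0];
SCNNode *catchNode = [SCNNode nodeWithGeometry:catchBox];
catchNode.position = SCNVector3Make(0, -60, 0); // just outside the visible area (placeholder)
catchNode.physicsBody = [SCNPhysicsBody staticBody];
catchNode.physicsBody.categoryBitMask = 1 << 1;  // assumed "catcher" category
catchNode.physicsBody.collisionBitMask = 1 << 0; // assumed "falling object" category
[scene.rootNode addChildNode:catchNode];

// Falling nodes need matching masks (e.g. categoryBitMask = 1 << 0); depending on your
// SDK version you may also need to set contactTestBitMask so the delegate is called.
scene.physicsWorld.contactDelegate = self; // self adopts SCNPhysicsContactDelegate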
Your object will disappear if it goes further than the zFar property of your active camera (the default value is 100.0).
As David Rönnqvist said in the comments, your node is not destroyed, and you can still modify its properties.
If you want to hook into the moment your node's geometry disappears, you can calculate the distance between your active camera and your node every frame in your rendering loop and trigger an action once it exceeds the camera's zFar.
If you want to render your node at a greater distance, just increase the zFar property of your camera.
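A minimal sketch of that per-frame check, assuming you keep references to the camera node and the falling node (cameraNode and fallingNode are placeholder names, both assumed to be children of the scene's root node):

- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time
{
    // Physics moves the presentation node, so read the simulated position from there.
    SCNVector3 c = cameraNode.position;
    SCNVector3 p = fallingNode.presentationNode.position;
    float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    float distance = sqrtf(dx * dx + dy * dy + dz * dz);
    if (distance > cameraNode.camera.zFar) {
        // the geometry is no longer rendered; trigger your action here
    }
}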

Box2D - mouse joint set to a specific point on an object

I have a Box2D iPhone app (using cocos2d) that has a mouse joint in it, and whenever you use the mouse joint, it applies the force to the object at whatever point you clicked. I was wondering if there is a way to always have the force applied at a certain point on the object, for example its center. The reason is that when my shape is a box, for instance, and I pick it up using the mouse joint, it spins so that wherever I clicked ends up toward the top. I would like to always pick up the object from, say, its center. I've tried amping up the inertial damping, but wasn't quite getting the effect I wanted. Here is my current code:
b2Vec2 locationWorld = b2Vec2(newtouch.x/PTM_RATIO, newtouch.y/PTM_RATIO);
b2MouseJointDef md;
md.bodyA = physicsLayer.groundBody;
md.bodyB = touchedObject.body;
md.target = locationWorld;
md.maxForce = 2000;
b2World* w = [physicsLayer getWorld];
mouseJoint = (b2MouseJoint*)w->CreateJoint(&md);
touchedObject is the object I'm moving, and it stores its body as a property. This code works fine as is; it just has the nasty spinning.
You could change the target point to be the center of mass of the box:
md.target = touchedObject.body->GetWorldCenter();
This will make the joint pull the center of the box to where your finger is, instead of the point you touched. It will not stop it from spinning if it's already spinning though, so you might still want to use some angular damping on the body to make it feel a bit more like the user is actually touching it, rather than holding a pin stuck through the center of it :)
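Putting both suggestions together, the adjusted joint setup might look like the sketch below; the angular damping value is just a guess to tune by feel:

b2MouseJointDef md;
md.bodyA = physicsLayer.groundBody;
md.bodyB = touchedObject.body;
md.target = touchedObject.body->GetWorldCenter(); // anchor at the center of mass, not the touch point
md.maxForce = 2000;
touchedObject.body->SetAngularDamping(3.0f);      // resist spinning; tune to taste
b2World* w = [physicsLayer getWorld];
mouseJoint = (b2MouseJoint*)w->CreateJoint(&md);
// In your touch-moved handler, keep calling mouseJoint->SetTarget(...) with the new
// touch position exactly as before.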

Draw with OpenGL less often than every frame

Is there any way, in cocos2d, to have a CCLayer draw via OpenGL less frequently than every frame? I have tried:
-(void) draw
{
    glEnable(GL_LINE_SMOOTH);
    if (iShouldUpdate) {
        ccDrawLine(ccp(50,50), ccp(200,200));
        iShouldUpdate = false;
    }
}

-(void) updateTheMap
{
    iShouldUpdate = true;
}
and then call updateTheMap whenever needed, but the line only displays for one frame.
Thanks.
Yes and no.
Normally the frame contents are cleared before a new frame renders; every frame begins from a clean state. Now, if you were to disable clearing of the screen, you could draw something once and it would stay on screen. But then you wouldn't be able to move what you've drawn without clearing exactly the parts you had drawn before.
Since this gets overly complex and error-prone very quickly, clearing the frame contents before beginning to draw a new frame has been the standard way for games ever since game engines have existed. One notable exception is Doom, whose engine assumed that every pixel on screen would be updated every frame; when there were missing textures, you could see the famous Hall of Mirrors (HOM) effect that occurs when the framebuffer isn't cleared every frame.
So the standard is to redraw the entire screen contents every frame. Because of that, you can't draw something only every couple of frames: whatever you've drawn will be visible on screen for one frame and then it will be gone.
In summation: you have to draw everything that should be visible on screen repeatedly every frame. That's the way almost all game engines work.
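In practice that means the draw method should draw the current state every frame, and updateTheMap should only change that state. A minimal sketch, assuming lineStart and lineEnd are CGPoint ivars of the layer:

-(void) draw
{
    glEnable(GL_LINE_SMOOTH);
    ccDrawLine(lineStart, lineEnd); // draw the cached line every frame
}

-(void) updateTheMap
{
    // change *what* gets drawn here instead of gating the draw call
    lineStart = ccp(50, 50);
    lineEnd = ccp(200, 200);
}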

How to code random movement in a limited area

I have a limited area (the screen) populated with a few moving objects (3-20 of them, so it's not like 10,000 :). The objects should move with constant speed in random directions. But there are a few limitations:
objects shouldn't exit the area, so if one is close to the edge it should move away from it
objects shouldn't bump into each other, so when one gets close to another it should move away (without getting too close to a different one).
On the image below I have marked the allowed moves in this situation: for example, object D shouldn't move straight up, as that would take it into the "wall".
What I would like is a way to move them (one by one). Is there any simple way to achieve this without too many calculations?
The density of objects in the area would be rather low.
There are a number of ways you might programmatically enforce your desired behavior, given that you have such a small number of objects. However, I'm going to suggest something slightly different.
What if you ran the whole thing as a physics simulation? For instance, you could set up a Box2D world with no gravity, no friction, and perfectly elastic collisions. You could model your enclosed region and populate it with objects that are proportionally larger than their on-screen counterparts so that the on-screen versions never get too close to each other (because the underlying objects in the physics simulation will collide and change direction before that can happen), and assign each object a random initial position and velocity.
Then all you have to do is step the physics simulation, and map its current state into your UI. All the tricky stuff is handled for you, and the result will probably be more believable/realistic than what you would get by trying to come up with your own movement algorithm (or if you wanted it to appear more random and less believable, you could also just periodically apply a random impulse to a random object to keep things changing unpredictably).
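A rough sketch of that kind of setup with Box2D; the object count, sizes, and the random helpers are placeholders, and depending on your Box2D version the b2World constructor may also take a sleep flag:

// A gravity-free world with frictionless, perfectly elastic bodies.
b2World *world = new b2World(b2Vec2(0.0f, 0.0f));
// (Also add a static body with edge fixtures around the region so nothing escapes.)

for (int i = 0; i < objectCount; i++) {
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(randomX(), randomY()); // random start position (helpers assumed)
    b2Body *body = world->CreateBody(&bodyDef);

    b2CircleShape circle;
    circle.m_radius = 0.5f; // slightly larger than the on-screen object, in Box2D units

    b2FixtureDef fixture;
    fixture.shape = &circle;
    fixture.density = 1.0f;
    fixture.friction = 0.0f;    // no friction
    fixture.restitution = 1.0f; // perfectly elastic collisions
    body->CreateFixture(&fixture);

    body->SetLinearVelocity(b2Vec2(randomVX(), randomVY())); // random direction
}

// Each frame: world->Step(dt, 8, 3); then copy the body positions onto your sprites.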
You can use the hitTest: method of UIView:
UIView *touchedView = [self.superview hitTest:currentOrigin withEvent:nil];
Pass the ball's current origin as the first argument; the second argument can be nil.
The method returns the view the ball has hit, if any. If there is a hit view, just change the ball's direction.
For the borders, check the ball's frame against the boundary of the area, and if the ball goes outside it, change the ball's direction as well.
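A rough per-tick sketch of that idea; ballView, containerView, and velocity are placeholder names, and each ball would keep its own velocity:

CGPoint next = CGPointMake(ballView.center.x + velocity.x,
                           ballView.center.y + velocity.y);

// Bounce off the edges of the area.
if (next.x < 0 || next.x > containerView.bounds.size.width)  velocity.x = -velocity.x;
if (next.y < 0 || next.y > containerView.bounds.size.height) velocity.y = -velocity.y;

// If another object's view sits at the next position, change direction.
// (hitTest: returns the container itself when nothing else is hit.)
UIView *hitView = [containerView hitTest:next withEvent:nil];
if (hitView != nil && hitView != containerView && hitView != ballView) {
    velocity.x = -velocity.x;
    velocity.y = -velocity.y;
}

ballView.center = CGPointMake(ballView.center.x + velocity.x,
                              ballView.center.y + velocity.y);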

How to set/correct the resting position of UIScrollView contents after scrolling

I am implementing a view similar to a table view, containing rows of data. What I am trying to do is make each row snap, after scrolling ends, to one of a set of correct positions, so the top and bottom rows are completely visible rather than clipped, as normally happens. Is there a way to get the scroll destination before the scrolling starts? That way I would be able to correct the final y-position, for example to a multiple of the row height.
I asked the same question a couple of weeks ago.
There is definitely no public API to determine the final resting Y offset of a scroll deceleration. After researching it further, I wasn't able to figure out Apple's formula for how they manage deceleration. I gathered a bunch of data from scrolling events, recording the beginning velocity and how far the deceleration traveled, and from that made some rough estimates of where it was likely to stop.
My goal was to predict well in advance where it would stop, and to convert the deceleration into a specific move to an offset. The problem with this technique is that scrollRectToVisible:animated: always occurs over a set period of time, so instead of the velocity the user expects from a flick gesture, it's either much faster or much slower, depending on the strength of the flick.
Another choice is to observe the deceleration and wait until it slows down to some threshold, then call scrollRectToVisible:animated:, but even this is difficult to get "just right."
A third choice is to wait until the deceleration completes on its own, check to see if it happened to stop at your desired offset multiple, and then adjust if not. I don't care for this personally, as you either coast to a stop and then speed up or coast to a stop and reverse direction.
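A minimal sketch of that third option, assuming a fixed rowHeight and that your controller is the scroll view's delegate:

- (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView
{
    CGFloat rowHeight = 44.0; // placeholder row height
    CGFloat snappedY = roundf(scrollView.contentOffset.y / rowHeight) * rowHeight;
    [scrollView setContentOffset:CGPointMake(scrollView.contentOffset.x, snappedY)
                        animated:YES];
}

// If the user lifts their finger without a flick there is no deceleration phase, so do
// the same snapping in -scrollViewDidEndDragging:willDecelerate: when decelerate is NO.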