In a game I'm making, I have a moving Mars lander and I want to detect when it hits the top of a refuelling station. I want to know the best way to tell exactly when they hit.
What I am doing at the moment is using 3 timers controlling horizontal, vertical and gravitational movement. Each timer moves the Mars lander a bit, so I put the crash-detection code at the top and bottom of each of them, but since each one is firing every 50 milliseconds, a) it won't detect exactly when the collision happens, and b) it will call the crash-detection code a lot of times.
This is what one of the timers looks like, and it's pretty much the same for all 3 (I am making the game in VB 2008; this is just some pseudo-code):
gravity timer:
    detectCrash()
    Move ship
    detectCrash()
End timer
What would be a more accurate way to see when they are colliding, in terms of the response time from the collision to the code that handles it?
How would I call the detection code fewer times?
I could possibly make another timer that fires every 10 milliseconds or so and check the collision there, but then this will run the code an awful lot of times (100 times a second), won't it?
I am also curious how large games would handle something like this, which most likely happens many times a second.
Thanks.
One thing you could do is use a single timer to control all movement, firing every 16 or 17 milliseconds (roughly 60 fps), though even that may be faster than you need; you should be fine with maybe 40 milliseconds (25 fps). In this timer event, calculate the new position of your object before moving it, check whether there would be a collision at that new position, and only move the object if there isn't.
At least, that is how I would do it in VB.NET. I would probably go with XNA, however.
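If it helps, here is a rough sketch of that predict-then-test idea (in Python rather than VB 2008, so treat it as pseudo-code with real syntax). The Rect/Lander structures, the GRAVITY value and the tick() function are made up for illustration; they are not part of your project.

from dataclasses import dataclass

GRAVITY = 9.8  # example value, units per second squared

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def intersects(self, other):
        # simple axis-aligned rectangle overlap test
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

@dataclass
class Lander:
    box: Rect
    vx: float = 0.0
    vy: float = 0.0

def tick(lander, pad, dt):
    """One timer tick: apply gravity, predict the new position, test it,
    and only commit the move if there is no collision."""
    lander.vy += GRAVITY * dt
    new_box = Rect(lander.box.x + lander.vx * dt,
                   lander.box.y + lander.vy * dt,
                   lander.box.w, lander.box.h)
    if new_box.intersects(pad):
        return True          # hit: let the caller run the crash/landing code once
    lander.box = new_box     # no hit: commit the move
    return False

The point is that the collision test runs exactly once per tick, against the position the ship is about to occupy, instead of twice per timer across three separate timers.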
You don't want to have separate timers for moving the object and for detecting collisions. This approach has a couple of obvious problems:
It doesn't scale at all; imagine adding more objects. Would you have two timers for each (one each for the X and Y planes)? Would you add yet another timer to detect collisions for each object?
It's possible for the timers to get out of sync - imagine your code for moving your ship in the X plane takes much longer to run than the code for the Y plane. Suddenly your ship is moving vertically more often than it's moving horizontally.
'Real' games have a single loop, which eliminates most of your problems. In pseudo-code:
1. Check for user input
2. Work out new X and Y position of ship
3. Check for collisions
4. Draw result on screen
5. Go to 1
Obviously, there's more to it than that for anything more than a simple game! But this is the approach you want, rather than trying to have separate loops for everything you've got going on.
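As a concrete illustration of that loop, here is a minimal sketch in Python; handle_input, update_ship, check_collisions and draw are placeholders for your own game code, not anything from a specific library.

import time

def game_loop(handle_input, update_ship, check_collisions, draw):
    previous = time.monotonic()
    running = True
    while running:
        now = time.monotonic()
        dt = now - previous           # seconds since the last iteration
        previous = now

        running = handle_input()      # 1. check for user input
        update_ship(dt)               # 2. work out the new X and Y position of the ship
        check_collisions()            # 3. check for collisions
        draw()                        # 4. draw the result on screen
        # 5. the while loop takes us back to step 1

Everything happens in a known order, once per iteration, so the vertical movement can never run more often than the horizontal movement and the collision test always runs right after the ship has moved.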
Related
For some time now, I've been thinking about how games calculate physics. Take as an example the game TrackMania. There are special routes where you only have to accelerate from the beginning to get to the finish. As an example, I take the following YouTube video (https://www.youtube.com/watch?v=uK7Y7zyP_SY). Unfortunately, I'm not a specialist in game development, but I know roughly how an engine works.
Most engines use a game loop, which means they use the delta value between the last call and the current call. This delta value is used to move objects, detect collisions and so on. The higher the delta value, the farther the object has to be moved. The principle works fine for many games, but not for TrackMania.
A PC that can only display 25 FPS would calculate the physics differently from a PC running at 120 FPS, because at 120 FPS the collision detection is more accurate (the impact is detected earlier, the speed adjusted accordingly, and so on). Now you could assume that the delta value is always the same (as with Super Mario Maker, at least that's my assumption); then this would work, but it would cause problems similar to old games (https://superuser.com/questions/630769/why-do-some-old-games-run-much-to-quickly-on-modern-hardware/).
Now my question, why do such maps work on every PC and why is the physics always exactly the same? Did I miss any aspect of game development / engine development?
The answer is simple. First, the physics of the game is deterministic: given the same input, the result will always be the same.
Second, the physics loop is not the same as the render loop; the game ensures the physics loop is called with exactly the same period every time throughout the whole execution. So yes, a delta is needed for the rendering part, but the physics has a constant time in ms between each iteration.
One last thing: you won't find "Press Forward" maps in multiplayer; these kinds of maps would not work correctly there, which is directly linked to specifics of the physics intended to prevent TAS (Tool-Assisted Speedruns).
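To make that concrete, here is a small sketch (not TrackMania's actual code, just an illustration of the principle, with a made-up FIXED_DT and a toy 1-D "car") of why a fixed physics period makes replays deterministic: the end state depends only on the recorded inputs and the fixed step, never on how fast frames are rendered.

FIXED_DT = 0.01  # hypothetical physics period: one physics step every 10 ms

def simulate(inputs):
    """Run a trivial 1-D 'car' for len(inputs) fixed physics steps.
    inputs[i] is True when the player holds accelerate during step i."""
    position, speed = 0.0, 0.0
    for accelerate in inputs:
        if accelerate:
            speed += 5.0 * FIXED_DT   # constant acceleration while pressed
        speed *= 0.999                # a little drag
        position += speed * FIXED_DT
    return position

# The same recorded input always produces exactly the same end position,
# whether the renderer runs at 25 fps or 120 fps, because rendering never
# feeds into the physics step.
replay = [True] * 1000
assert simulate(replay) == simulate(replay)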
I know about runtime code execution fundamentals in Flash Player and Air Debugger. But I want to know how Timer code execution is done.
Would it be better to use Timer rather than the enterFrame event for similar actions? Which one is better for performance?
It depends on what you want to use it for. Most will vehemently say to use Event.ENTER_FRAME. In most instances, this is what you want. It will only be called once as each frame begins construction. If your app is running at 24 fps (the default), that code will run once every 41.7 ms, assuming no dropped frames. For almost all GUI-related cases, you do not want to run that code more often than this because it is entirely unnecessary (you can run it more often, sure, but it won't do any good since the screen is only updated that often).
There are times when you need code to run more often, however, mostly in non-GUI related cases. This can range from a systems check that needs to happen in the background to something that needs to be absolutely precise, such as an object that needs to be displayed/updated on an exact interval (Timer is accurate to the ms, I believe, whereas ENTER_FRAME is only accurate to the 1000/framerate ms).
In the end, it doesn't make much sense to use Timer with an interval shorter than the one ENTER_FRAME is called at. Any more often than that and you risk dropping frames. ENTER_FRAME is ideal for nearly everything graphics-related, other than making something appear/update at a precise time. And even then, you should use ENTER_FRAME as well, since it would only be rendered on the next frame anyway.
You need to evaluate each situation on a case-by-case basis and determine which is best for that particular situation because there is no best answer for all cases.
EDIT
I threw together a quick test app to see when Timer fires. Framerate is 24fps (42ms) and I set the timer to run every 10ms. Here is a selection of times it ran at.
2121
2137
2154
2171
2188
2203
2221
2237
As you can see, it is running every 15-18ms instead of the 10ms I wanted it to. I also tested 20ms, 100ms, 200ms, and 1000ms. In the end, each one fired within about 10ms of when it should have. So this is not nearly as precise as I had originally thought it was.
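For what it's worth, the same kind of jitter shows up with OS timers in general, not just in Flash. Here is a quick sketch (in Python, purely for illustration) that asks for a 10 ms timer and prints the gaps between the times it actually fired:

import threading
import time

start = time.monotonic()
timestamps = []

def fire():
    timestamps.append((time.monotonic() - start) * 1000)    # elapsed ms
    if len(timestamps) < 10:
        threading.Timer(0.010, fire).start()                # re-arm the 10 ms timer
    else:
        gaps = [round(b - a, 1) for a, b in zip(timestamps, timestamps[1:])]
        print(gaps)   # expect values at or above 10 ms, with a few ms of jitter

threading.Timer(0.010, fire).start()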
I've got a countdown clock, which appears above an object in my game that has been planted/started to be built.
I'm using the Sparrow Framework.
Right now I'm just running a method in onEnterFrame that checks every frame (I've actually set it to every other frame, but anyway) and updates the countdown clock, but I'm wondering if this is just a slow and inefficient way to do it.
Is there a way to update my counter, without it clogging things up?
I should add there will be a number of these countdown clocks on the game screen.
A different approach would be to use an NSTimer that ticks at the same interval as you want your countdown counter to decrease at. Each tick, you decrement the counter and call setNeedsDisplay.
If all you are doing is checking if the value of the countdown has changed onEnterFrame, and redrawing if the counter has changed, then you should be fine.
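A minimal sketch of that idea (in Python rather than Sparrow; the CountdownClock class and redraw() method are made-up stand-ins): compute the remaining time from the clock each frame, but only touch the display when the shown value actually changes.

import time

class CountdownClock:
    def __init__(self, duration_seconds):
        self.end_time = time.monotonic() + duration_seconds
        self.last_shown = None

    def on_enter_frame(self):
        remaining = max(0, int(self.end_time - time.monotonic()))
        if remaining != self.last_shown:   # has the displayed value changed?
            self.last_shown = remaining
            self.redraw(remaining)         # only now update the label/sprite

    def redraw(self, remaining):
        print(f"{remaining} s left")       # stand-in for the real drawing code

With this structure, a dozen clocks on screen cost a dozen subtractions and comparisons per frame, and a redraw only happens when a second actually ticks over.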
Premature optimization: you don't know it's slow, you just think it might be.
Anyway, I don't follow. Are you worried about drawing? If so, all views only call setNeedsDisplay and are refreshed in batch mode on every run-loop iteration.
The fewer threads you run, the smoother it should go. A counter is merely a variable; it has very minimal overhead, which you really shouldn't worry about.
Rendering operations, loading images, creating new objects, destroying objects and other things are far more resource-consuming, so incrementing a counter shouldn't be a performance worry at all.
I am trying to write a simple game, but I'm stuck on what I think is simple physics. I have an object at point 0,0,0 that is travelling at, say, 1 unit per second. If I give an instruction that the object must turn 15 degrees per second for 6 seconds (so it ends up 90 degrees right of its starting position), and accelerate at 1 unit per second per second for 4 seconds (so its final speed is 5 units per second), how do I calculate its end point?
I think I know how to answer this for an object that isn't accelerating, because it's just a circle. In the example above, I know that the circumference of the circle is 4 * distance (because it is traversing 1/4 of a circle), and from that I can calculate the radius and angles and use simple trig to solve the answer.
However, because at any given moment in time the object is travelling slightly faster than it was in the previous moment, my end result wouldn't be a circle; it would be some sort of arc. I suppose I could estimate the end point by looping through each step (say 60 steps per second), but this sounds error-prone and inefficient.
Could anybody point me in the right direction?
Your notion of stepping through is exactly what you do.
Almost all games operate under what's known as a "game tick". There are actually a number of different ticks that could be going on.
"Game tick" - each game tick, a set of requests are fired, AI is re-evaluated, and overall the game state has changed.
"physics tick" - each physics tick, each physical object is subject to a state change based on its current physical state.
"graphics tick" - also known as a rendering loop, this is simply drawing the game state to the screen.
The game tick and the physics tick often coincide, but they don't need to. You could have a physics tick that moves objects at their current speed along their current movement vector and applies gravity if necessary (altering their speed), while adding additional acceleration (perhaps via rocket boosters?) in a completely separate loop. With proper multi-threading care, it would fit together nicely. The more de-coupled they are, the easier it will be to swap them out with better implementations later anyway.
Simulating via a time-step is how almost all physics are done in real-time gaming. I even used to do thermal modeling for the department of defense, and that's how we did our physics modelling there too (we just got to use bigger computers :-) )
Also, this allows you to implement complex rotations in your physics engine. The fewer special cases you have in your physics engine, the fewer things will break.
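Applied to the question above, a fixed-step integration might look like the sketch below (a Python illustration, assuming the turn and the acceleration both start at t = 0 and the simulation runs for the full 6 seconds of the turn; the numbers are taken straight from the question):

import math

def end_point(dt=1.0 / 60.0):
    x, y = 0.0, 0.0
    heading = 0.0                     # radians; 0 means facing along +y
    speed = 1.0                       # units per second
    turn_rate = math.radians(15.0)    # 15 degrees per second, turning right
    accel = 1.0                       # units per second per second

    t = 0.0
    while t < 6.0:
        if t < 4.0:
            speed += accel * dt       # accelerate for the first 4 seconds
        heading += turn_rate * dt     # keep turning for the whole 6 seconds
        x += speed * math.sin(heading) * dt
        y += speed * math.cos(heading) * dt
        t += dt
    return x, y

print(end_point())             # coarse steps (60 per second)
print(end_point(1.0 / 600.0))  # finer steps converge on a more accurate answer

Halving the step size and checking that the answer barely moves is the usual sanity check that the step is fine enough.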
What you're asking is actually a mathematical rate-of-change question. Every object in motion has a position (x, y, z). If you are able to break the velocity and acceleration down into their components in each plane, your final end point (x1, y1, z1) is the outcome of solving your equations in each respective plane.
Hope it helps (:
This is what happens:
The drawGL function is called at the exact end of the frame thanks to a usleep, as suggested. This already maintains a steady framerate.
The actual presentation of the renderbuffer takes place in drawGL(). Measuring the time it takes to do this gives me fluctuating execution times, resulting in a stutter in my animation. This timer uses mach_absolute_time, so it's extremely accurate.
At the end of my frame, I measure timeDifference. Yes, it's on average 1 millisecond, but it deviates a lot, ranging from 0.8 milliseconds to 1.2 with peaks of up to more than 2 milliseconds.
Example:
// Every something of a second I call tick
-(void)tick
{
    drawGL();
}

- (void)drawGL
{
    // startTime using mach_absolute_time;
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    // endTime using mach_absolute_time;
    // timeDifference = endTime - startTime;
}
My understanding is that once the framebuffer has been created, presenting the renderbuffer should always take the same effort, regardless of the complexity of the frame. Is this true? And if not, how can I prevent the fluctuation?
By the way, this is an example from an iPhone app, so we're talking OpenGL ES here, though I don't think it's a platform-specific problem. If it is, then what is going on? And shouldn't this not be happening? And again, if so, how can I prevent it from happening?
The deviations you encounter may be caused by a lot of factors, including the OS scheduler kicking in and giving the CPU to another process, or similar issues. In fact, a normal human won't be able to tell the difference between 1 ms and 2 ms render times. Motion pictures run at 25 fps, which means each frame is shown for roughly 40 ms, and they still look fluid to the human eye.
As for animation stuttering you should examine how you maintain constant animation speed. Most common approach I've seen looks roughly like this:
while(loop)
{
    lastFrameTime; // time it took for the last frame to render
    timeSinceLastUpdate += lastFrameTime;

    if(timeSinceLastUpdate > (1 second / DESIRED_UPDATES_PER_SECOND))
    {
        updateAnimation(timeSinceLastUpdate);
        timeSinceLastUpdate = 0;
    }

    // do the drawing
    presentScene();
}
Or you could just pass lastFrameTime to updateAnimation every frame and interpolate between animation states. The result will be even more fluid.
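Here is a sketch of that interpolation variant (Python for illustration; the "animation state" is reduced to a single number, FIXED_DT is a made-up update period, and update_animation/render are passed in as placeholders):

FIXED_DT = 1.0 / 30.0   # hypothetical fixed animation update period

def lerp(a, b, alpha):
    return a + (b - a) * alpha

def run_frame(states, accumulator, last_frame_time, update_animation, render):
    """Advance the animation by last_frame_time seconds, then render once,
    blending between the last two fixed-step states."""
    previous, current = states
    accumulator += last_frame_time
    while accumulator >= FIXED_DT:
        previous, current = current, update_animation(current, FIXED_DT)
        accumulator -= FIXED_DT
    alpha = accumulator / FIXED_DT            # how far we are into the next update
    render(lerp(previous, current, alpha))    # draw the blended state
    return (previous, current), accumulator

# tiny usage example: the "animation state" is just an x position moving right
states, acc = (0.0, 0.0), 0.0
for frame_time in (0.016, 0.021, 0.040):      # uneven frame times
    states, acc = run_frame(states, acc, frame_time,
                            update_animation=lambda x, dt: x + 100.0 * dt,
                            render=lambda x: print(f"draw at x = {x:.2f}"))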
If you're already using something like the above, maybe you should look for culprits in other parts of your render loop. In Direct3D the costly things were calls for drawing primitives and changing render states, so you might want to check around OpenGL analogues of those.
My favorite OpenGL expression of all times: "implementation specific". I think it applies here very well.
A quick search for mach_absolute_time results in this article: Link
Looks like the precision of that timer on an iPhone is only 166.67 ns (and maybe worse).
While that may explain the large difference, it doesn't explain that there is a difference at all.
The three main reasons are probably:
Different execution paths during renderbuffer presentation. A lot can happen in 1ms and just because you call the same functions with the same parameters doesn't mean the exact same instructions are executed. This is especially true if other hardware is involved.
Interrupts/other processes, there is always something else going on that distracts the CPU. As far as I know the iPhone OS is not a real-time OS and so there's no guarantee that any operation will complete within a certain time limit (and even a real-time OS will have time variations).
Any other OpenGL calls still being processed by the GPU might delay presentRenderbuffer. That's the easiest one to test: just call glFinish() before getting the start time.
It is best not to rely on a high constant frame rate, for a number of reasons, the most important being that the OS may do something in the background that slows things down. Better to sample a timer and work out how much time has passed each frame; this should ensure smooth animation.
Is it possible that the timer is not accurate to the sub-millisecond level, even though it is returning decimal values in the 0.8 to 2.0 range?