I've got a countdown clock that appears above an object in my game once it has been planted or construction has started.
I'm using the Sparrow Framework.
Right now I'm just running a method in the onEnterFrame handler that checks every frame (I've actually set it to every other frame, but anyway) and updates the countdown clock. I'm wondering whether this is a slow and inefficient way to do it.
Is there a way to update my counter without it clogging things up?
I should add that there will be a number of these countdown clocks on the game screen.
A different approach would be to use an NSTimer that ticks at the same interval you want your countdown counter to decrease at. On each tick you decrement the counter and call setNeedsDisplay.
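As a rough illustration of that approach (the class, property and method names here are invented, and Sparrow draws with its own display objects, so treat the UIView/setNeedsDisplay part as the UIKit framing used in the answer above):

#import <UIKit/UIKit.h>

// One NSTimer per countdown clock, ticking at the same rate the counter
// decreases. The view is only asked to redraw when its value changes.
@interface CountdownView : UIView
@property (nonatomic) NSInteger secondsRemaining;
@property (nonatomic, strong) NSTimer *timer;
- (void)startCountdown;
@end

@implementation CountdownView

- (void)startCountdown
{
    self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                  target:self
                                                selector:@selector(tick:)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)tick:(NSTimer *)timer
{
    self.secondsRemaining -= 1;
    [self setNeedsDisplay];   // the actual redraw happens with the next screen update

    if (self.secondsRemaining <= 0) {
        [self.timer invalidate];
        self.timer = nil;
    }
}

@end

With several clocks on screen, each one only triggers a redraw when its own value actually changes.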
If all you are doing is checking if the value of the countdown has changed onEnterFrame, and redrawing if the counter has changed, then you should be fine.
Premature optimization... you don't know it's slow, you just think it might be.
Anyway, I don't follow. Are you worried about drawing? If so: all views only do setNeedsDisplay, and they are refreshed in batch mode once every run-loop iteration.
The fewer threads you run, the smoother it should go. A counter is merely a variable; it has very minimal overhead, which you really shouldn't worry about.
Rendering operations, loading images, creating new objects, destroying objects and other such things are far more resource-consuming, so incrementing a counter shouldn't be a performance worry at all.
For example, I need a timer on the page, so that I can have an action every 100 ms:
type Action = Tick Time
and I have a time field in my Model. The Model could be big, but I need to recreate it and the whole view every 100 ms because of the time field. I think that would not be effective performance-wise.
Is there another approach, or shouldn't I worry about such a thing?
The whole view isn't necessarily being recreated every time. Elm uses a Virtual DOM and does diffs to change only the bare minimum of the actual DOM. If large parts of your view are actually changing on every 100ms tick, then that could obviously cause problems, but I'm guessing you're only making smaller adjustments every 100ms, and you probably have nothing to worry about. Take a look at your developer tools to see whether the process utilization is spiking.
Your model isn't being recreated every 100ms either. There are optimizations around the underlying data structures (see this related conversation about foldp internals) that let you think in terms of immutability and pureness, but are optimized under the hood.
I'm working on an IRC bot in VB.NET 2012. I know, I know, but it's not a botnet, lol.
I wanted to ask your advice on how to manage too many timers. Right now I have a timer that rewards points at a user-specified interval and a timer that autosaves those points at a user-specified interval. I also have one that displays advertisements and broadcasts a collection of responses at specified intervals.
I feel like it's getting out of hand, and I would love to know if there is a way I could do all of these things with a single timer, or at least fewer timers.
Please keep in mind I learn as I go and don't always understand all the terminology, but I am a quick learner if you have the patience to explain. :)
Thank you.
Yes, of course you can do them with a single timer. In fact, that is what you should do—timers are a limited resource, and there's hardly ever a reason for a single application to use more than one of them.
What you do is create a single timer that ticks at the most frequent interval required by all of your logic. Then, inside of that timer's Tick event handler, you set/check flags that indicate how much time has elapsed. Depending on which interval has expired, you perform the appropriate action and update the state of the flags. By "flags", I mean module-level variables that keep track of the various intervals you want to track—the ones you're tracking now with different timers.
It is roughly the same way that you keep track of time using a clock. You don't use separate clocks for every task you want to time, you use a single clock that ticks every second. You operate off of displacements from this tick—60 of these ticks is 1 minute, 3600 of these ticks is 1 hour, etc.
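The answer is aimed at VB.NET, but the pattern itself is language-agnostic; purely for illustration, here is a minimal sketch of it in Objective-C (the intervals and task names are invented):

#import <Foundation/Foundation.h>

// One timer ticking at the most frequent interval required (here 1 second).
// Counters track how many ticks each task has waited; when a counter
// reaches its threshold, the task runs and the counter resets.
@interface BotScheduler : NSObject
- (void)start;
@end

@implementation BotScheduler
{
    NSTimer *_timer;
    NSUInteger _ticksSinceReward;
    NSUInteger _ticksSinceAutosave;
}

- (void)start
{
    _timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                              target:self
                                            selector:@selector(tick:)
                                            userInfo:nil
                                             repeats:YES];
}

- (void)tick:(NSTimer *)timer
{
    _ticksSinceReward   += 1;
    _ticksSinceAutosave += 1;

    if (_ticksSinceReward >= 60) {       // e.g. reward points every 60 seconds
        [self rewardPoints];
        _ticksSinceReward = 0;
    }
    if (_ticksSinceAutosave >= 300) {    // e.g. autosave every 5 minutes
        [self autosavePoints];
        _ticksSinceAutosave = 0;
    }
}

- (void)rewardPoints   { /* award points to active users */ }
- (void)autosavePoints { /* persist the point totals */ }

@end

Each additional scheduled task then becomes another counter and another threshold rather than another timer.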
However, I strongly recommend figuring out a way to do as many of these things as possible in response to events, rather than at regular intervals tracked by a timer. You could, for example, reward points in response to specific user actions. This type of polling is very resource-intensive and rarely the best solution.
I know the fundamentals of runtime code execution in Flash Player and the AIR Debugger, but I want to know how Timer code execution is done.
Would it be better to use a Timer rather than the enterFrame event for similar actions? Which one is better for maximizing performance?
It depends on what you want to use it for. Most will vehemently say to use Event.ENTER_FRAME. In most instances, this is what you want. It will only be called once, as each frame begins construction. If your app is running at 24fps (the default), that code will run once every 41.7ms, assuming no dropped frames. For almost all GUI-related cases, you do not want to run that code more often than this because it is entirely unnecessary (you can run it more often, sure, but it won't do any good since the screen is only updated that often).
There are times when you need code to run more often, however, mostly in non-GUI related cases. This can range from a systems check that needs to happen in the background to something that needs to be absolutely precise, such as an object that needs to be displayed/updated on an exact interval (Timer is accurate to the ms, I believe, whereas ENTER_FRAME is only accurate to the 1000/framerate ms).
In the end, it doesn't make much sense to use Timer for anything that runs less often than ENTER_FRAME would be called, and running it any more often than that risks dropping frames. ENTER_FRAME is ideal for nearly everything graphics-related, other than making something appear/update at a precise time. And even then, you should use ENTER_FRAME as well, since the result would only be rendered in the next frame anyway.
You need to evaluate each situation on a case-by-case basis and determine which is best for that particular situation because there is no best answer for all cases.
EDIT
I threw together a quick test app to see when Timer fires. Framerate is 24fps (42ms) and I set the timer to run every 10ms. Here is a selection of times it ran at.
2121
2137
2154
2171
2188
2203
2221
2237
As you can see, it is running every 15-18ms instead of the 10ms I wanted it to. I also tested 20ms, 100ms, 200ms, and 1000ms. In the end, each one fired within about 10ms of when it should have. So this is not nearly as precise as I had originally thought it was.
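For comparison only, the same kind of drift measurement can be reproduced on other platforms; a hypothetical Objective-C version of that 10ms test might look like this (it simply logs the real interval between timer callbacks):

#import <Foundation/Foundation.h>
#include <mach/mach_time.h>

// Convert a mach_absolute_time() delta to milliseconds.
static double MachToMilliseconds(uint64_t delta)
{
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    return (double)delta * timebase.numer / timebase.denom / 1e6;
}

int main(void)
{
    @autoreleasepool {
        __block uint64_t lastFire = mach_absolute_time();

        // Ask for a 10ms timer and log how long each interval really was.
        [NSTimer scheduledTimerWithTimeInterval:0.010 repeats:YES block:^(NSTimer *timer) {
            uint64_t now = mach_absolute_time();
            NSLog(@"%.2f ms since the previous fire", MachToMilliseconds(now - lastFire));
            lastFire = now;
        }];

        [[NSRunLoop currentRunLoop] run];   // keep the run loop alive so the timer can fire
    }
    return 0;
}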
In a game I'm making, I have a moving Mars lander, and I want to detect when it hits the top of a refuelling station. I want to know the best way to know exactly when they hit.
What I am doing at the moment is this: I have 3 timers controlling horizontal, vertical and gravitational movement. Each timer makes the Mars lander move a bit, so I put the crash-detection code at the top and bottom of each of them, but since each one is firing every 50 milliseconds, a) it won't detect exactly when it collides and b) it will call the crash-detection code a lot of times.
This is what one of the timers looks like, and it's pretty much the same for all 3 (I am making the game in VB 2008, this is just some pseudo-code):
gravity timer:
    detectCrash()
    Move ship
    detectCrash()
End timer
What would be a more accurate way to see if they are colliding, in terms of the response time between the collision and the code that handles it?
And how would I call the detection code fewer times?
I could possibly make another timer that fires every 10 milliseconds or so and check the collision there, but then this will run the code an awful lot of times (100 times a second), won't it?
I am also curious how large games would handle something like this, which most likely happens many times a second.
Thanks.
One thing you could do is just use one timer to control all movement, at 16 or 17 milliseconds (roughly 60 fps), but even that may be too fast; you should be fine with maybe 40 milliseconds (25 fps). In this timer's event handler, calculate the new position of your object before moving it, check whether there is going to be a collision at the new position, and if not, move the object.
At least that is how I would do it in VB.NET, though I would probably go with XNA instead.
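Language aside, the per-tick logic described above looks roughly like this (an Objective-C sketch of just the tick handler; the property names and the collision test are placeholders):

// Single timer callback driving all movement (e.g. every 40 milliseconds).
- (void)gameTick:(NSTimer *)timer
{
    // Apply horizontal, vertical and gravitational movement in one place.
    CGFloat newX = self.landerX + self.velocityX;
    CGFloat newY = self.landerY + self.velocityY + self.gravity;

    // Check the position the lander is about to occupy...
    if ([self wouldCollideAtX:newX y:newY]) {
        [self handleCrashOrLanding];
        return;                      // ...and don't move into the obstacle.
    }

    // ...and only move if that position is clear.
    self.landerX = newX;
    self.landerY = newY;
}

Because the collision test runs against the next position rather than the current one, it fires exactly once per tick and catches the collision before the lander overlaps the station.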
You don't want to have separate timers for moving the object and for detecting collisions. This approach has a couple of obvious problems:
It doesn't scale at all; imagine adding more objects. Would you have two timers for each (one each for the X and Y planes)? Would you add yet another timer to detect collisions for each object?
It's possible for the timers to get out of sync - imagine your code for moving your ship in the X plane takes much longer to run than the code for the Y plane. Suddenly your ship is moving vertically more often than it's moving horizontally.
'Real' games have a single loop, which eliminates most of your problems. In pseudo-code:
1. Check for user input
2. Work out new X and Y position of ship
3. Check for collisions
4. Draw result on screen
5. Goto 1
Obviously, there's more to it than that for anything more than a simple game! But this is the approach you want, rather than trying to have separate loops for everything you've got going on.
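Concretely, such a loop might be sketched like this (plain C-style code; every function here is a hypothetical stand-in for game-specific logic):

#include <stdbool.h>

// Hypothetical stand-ins for game-specific code.
void readUserInput(void);
void updateShipPosition(void);
bool shipCollidesWithStation(void);
void handleCollision(void);
void drawFrame(void);

// One main loop: input, update, collision check, draw - every iteration.
void runGameLoop(void)
{
    bool running = true;                    // set to false when the game ends

    while (running) {
        readUserInput();                    // 1. check for user input
        updateShipPosition();               // 2. work out the new X and Y position

        if (shipCollidesWithStation()) {    // 3. check for collisions
            handleCollision();
        }

        drawFrame();                        // 4. draw the result on screen
    }                                       // 5. and go around again
}

Everything that used to live in separate timers becomes a step inside this one loop, so the order of operations is the same on every frame.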
This is what happens:
The drawGL function is called at the exact end of the frame thanks to a usleep, as suggested. This already maintains a steady framerate.
The actual presentation of the renderbuffer takes place in drawGL(). Measuring the time it takes to do this gives me fluctuating execution times, resulting in a stutter in my animation. This timing uses mach_absolute_time, so it's extremely accurate.
At the end of my frame, I measure timeDifference. Yes, it's 1 millisecond on average, but it deviates a lot, ranging from 0.8 to 1.2 milliseconds, with peaks of more than 2 milliseconds.
Example:
#include <mach/mach_time.h>   // for mach_absolute_time()

// Every something of a second I call tick
- (void)tick
{
    [self drawGL];
}

- (void)drawGL
{
    uint64_t startTime = mach_absolute_time();

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    uint64_t endTime = mach_absolute_time();

    // timeDifference is in mach time units; convert via mach_timebase_info
    // to get milliseconds.
    uint64_t timeDifference = endTime - startTime;
}
My understanding is that once the framebuffer has been created, presenting the renderbuffer should always take the same effort, regardless of the complexity of the frame. Is this true? And if not, how can I prevent this?
By the way, this is an example from an iPhone app, so we're talking OpenGL ES here, though I don't think it's a platform-specific problem. If it is, then what is going on? And shouldn't this not be happening? And again, if so, how can I prevent it from happening?
The deviations you encounter may be caused by a lot of factors, including the OS scheduler kicking in and giving the CPU to another process, or similar issues. In fact, a normal human won't tell the difference between 1 and 2 ms render times. Motion pictures run at 25 fps, which means each frame is shown for roughly 40ms, and it looks fluid to the human eye.
As for the animation stuttering, you should examine how you maintain a constant animation speed. The most common approach I've seen looks roughly like this:
while (loop)
{
    lastFrameTime = /* time it took for the last frame to render */;
    timeSinceLastUpdate += lastFrameTime;

    // Advance the animation in fixed steps, independent of render speed
    if (timeSinceLastUpdate > (1 second / DESIRED_UPDATES_PER_SECOND))
    {
        updateAnimation(timeSinceLastUpdate);
        timeSinceLastUpdate = 0;
    }

    // do the drawing
    presentScene();
}
Or you could just pass lastFrameTime to updateAnimation every frame and interpolate between animation states. The result will be even more fluid.
If you're already using something like the above, maybe you should look for culprits in other parts of your render loop. In Direct3D the costly things were the calls for drawing primitives and changing render states, so you might want to check the OpenGL analogues of those.
My favorite OpenGL expression of all time: "implementation specific". I think it applies here very well.
A quick search for mach_absolute_time results in this article: Link
Looks like the precision of that timer on an iPhone is only 166.67 ns (and maybe worse).
While that may explain the large difference, it doesn't explain why there is a difference at all.
The three main reasons are probably:
Different execution paths during renderbuffer presentation. A lot can happen in 1ms and just because you call the same functions with the same parameters doesn't mean the exact same instructions are executed. This is especially true if other hardware is involved.
Interrupts/other processes: there is always something else going on that distracts the CPU. As far as I know, iPhone OS is not a real-time OS, so there's no guarantee that any operation will complete within a certain time limit (and even a real-time OS will have time variations).
Any other OpenGL calls still being processed by the GPU might delay presentRenderbuffer. That's the easiest one to test: just call glFinish() before getting the start time.
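For that third point, the timed section of drawGL from the question could be adjusted like this (still just a sketch):

- (void)drawGL
{
    glFinish();                    // wait for any queued OpenGL work to complete first
    uint64_t startTime = mach_absolute_time();

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    uint64_t endTime = mach_absolute_time();
    // timeDifference (endTime - startTime) now measures only the presentation
    // itself, not earlier OpenGL calls that were still pending on the GPU.
}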
It is best not to rely on a high constant frame rate, for a number of reasons, the most important being that the OS may do something in the background that slows things down. It is better to sample a timer and work out how much time has passed each frame; this should ensure smooth animation.
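For example, a sketch of that idea using mach_absolute_time, since the question already uses it (the instance variables here are invented):

#include <mach/mach_time.h>

// Convert a mach_absolute_time() delta to seconds.
static double MachToSeconds(uint64_t delta)
{
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    return (double)delta * timebase.numer / timebase.denom / 1e9;
}

- (void)tick
{
    uint64_t now = mach_absolute_time();
    double dt = MachToSeconds(now - _lastFrameTime);   // real time elapsed since the last frame
    _lastFrameTime = now;

    // Scale movement by dt so animation speed stays constant even when
    // an individual frame takes longer than expected.
    _positionX += _velocityX * dt;

    [self drawGL];
}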
Is it possible that the timer is not accurate to the sub-millisecond level, even though it is returning decimal values in the 0.8 to 2.0 range?