I know the fundamentals of runtime code execution in Flash Player and the AIR Debugger, but I want to know how Timer code execution is handled.
Would it be better to use Timer rather than the enterFrame event for similar actions? Which one is better for performance?
It depends on what you want to use it for. Most people will vehemently say to use Event.ENTER_FRAME, and in most instances that is what you want. It is dispatched only once, as each frame begins construction. If your app is running at 24 fps (the default), that code will run once every 41.7 ms, assuming no dropped frames. For almost all GUI-related cases, you do not want to run code more often than this because it is entirely unnecessary (you can run it more often, sure, but it won't do any good since the screen is only updated that often).
There are times when you need code to run more often, however, mostly in non-GUI related cases. This can range from a systems check that needs to happen in the background to something that needs to be absolutely precise, such as an object that needs to be displayed/updated on an exact interval (Timer is accurate to the ms, I believe, whereas ENTER_FRAME is only accurate to the 1000/framerate ms).
In the end, it doesn't make much sense to use a Timer with an interval shorter than the one ENTER_FRAME already runs at; fire any more often than that and you risk dropping frames. ENTER_FRAME is ideal for nearly everything graphics related, other than making something appear/update at a precise time. And even then, you should use ENTER_FRAME as well, since the change would only be rendered on the next frame anyway.
You need to evaluate each situation on a case-by-case basis and determine which is best for that particular situation because there is no best answer for all cases.
EDIT
I threw together a quick test app to see when Timer actually fires. The framerate is 24 fps (42 ms per frame) and I set the timer to run every 10 ms. Here is a selection of the times (in milliseconds) at which it fired:
2121
2137
2154
2171
2188
2203
2221
2237
As you can see, it is firing every 15-18 ms instead of the 10 ms I asked for. I also tested 20 ms, 100 ms, 200 ms, and 1000 ms. In the end, each one fired within about 10 ms of when it should have, so this is not nearly as precise as I had originally thought.
If real-time planning and daemon mode are enabled, then whenever a planning entity is updated or added, a problem fact change must be invoked.
Say the average rate of change is 1 per second; then a problem fact change must be submitted every second, which restarts the solver every second.
Do we just invoke or schedule a problem fact change every second, restarting the solver every second, or, if we know there will be a huge number of changes, stop the solver first, apply the changes, and then start the solver again?
In the scenario you describe, the solver will likely be restarted every time. It's not a complete restart, as it would be if you simply called Solver.solve() again with the updated last known solution, but the ScoreDirector, the component responsible for score calculation, is restarted each time a problem change is applied.
If problem changes come in faster, they may be processed as a batch. The solver checks for problem changes between the evaluation of individual moves, so if multiple changes arrive before the solver finishes evaluating the current move, they are all applied and the solver restarts just once. In the opposite case, when changes arrive only rarely, the restart doesn't matter much, as there is enough time for the solver to improve the solution.
But the rate of 1 change/sec will likely lead to frequent solver restarts and will affect its ability to produce better solutions.
The solver does not know whether a bigger batch of changes is going to arrive in the next second. The current behavior could be improved by processing problem changes periodically, at a predefined time interval, rather than between move evaluations.
Of course, the periodic grouping of problem changes can be done outside the solver as well.
For example, I need a timer on the page, so I have an action that fires every 100 ms:
type Action = Tick Time
and I have a time field in my Model. The Model could be big, but I would need to recreate it and the whole view every 100 ms because of the time field. I think that would not be efficient performance-wise.
Is there another approach, or should I not worry about such a thing?
The whole view isn't necessarily being recreated every time. Elm uses a Virtual DOM and does diffs to change only the bare minimum of the actual DOM. If large parts of your view are actually changing on every 100ms tick, then that could obviously cause problems, but I'm guessing you're only making smaller adjustments every 100ms, and you probably have nothing to worry about. Take a look at your developer tools to see whether the process utilization is spiking.
Your model isn't being recreated every 100ms either. There are optimizations around the underlying data structures (see this related conversation about foldp internals) that let you think in terms of immutability and pureness, but are optimized under the hood.
In my app I have a worker thread which sits around doing a lot of processing. While it's processing, it sends updates to the main thread which uses the information to update GUI elements. This is done with performSelectorOnMainThread. For simplicity in the code, there are no restrictions on these updates and they get sent at a high rate (hundreds or thousands per second), and waitUntilDone is false. The methods called simply take the variable and copy it to a private member of the view controller. Some of them update the GUI directly (because I'm lazy!). Once every few seconds, the worker thread calls performSelectorOnMainThread with waitUntilDone set to true (this is related to saving the output of the current calculation batch).
My question: is this a safe use of performSelectorOnMainThread? I ask because I recently encountered a problem where my displayed values stopped updating, despite the background thread continuing to work without issues (and produce the correct output). Since they are fed values this way, I wondered if it might have hit a limit in the number of messages. I already checked the usual suspects (overflows, leaks, etc) and everything's clean. I haven't been able to reproduce the problem, however.
For simplicity in the code, there are no restrictions on these updates and they get sent at a high rate (hundreds or thousands per second), and waitUntilDone is false.
Yeah. Don't do that. Not even for the sake of laziness in an internal only application.
It can cause all kinds of potential problems beyond making the main run loop unresponsive.
Foremost, it will starve your worker thread of CPU cycles, as your main thread constantly spins trying to update the UI as rapidly as the messages arrive. Given that drawing is often done on a secondary thread, this will likely cause yet more thread contention, slowing things down even more.
Secondly, all those messages consume resources. Potentially lots of them and potentially ones that are relatively scarce, depending on implementation details.
While there shouldn't be a hard limit, there is likely a practical limit beyond which things stop working. If this is the case, it would be a bug in the system, but one that is unlikely to be fixed beyond a console log that says "Too many messages, too fast, make fewer.".
It may also be a bug in your code, though. Transfer of state between threads is an area rife with pitfalls. Are you sure your cross-thread-communication code is bulletproof? (And if it is bulletproof, it quite likely carries a huge performance cost for your thousands-per-second update notifications.)
It isn't hard to throttle updates. While the suggestions in the comments are all reasonable, it can be done much more easily (NSNotificationQueue is fantastic, but likely overkill unless you are updating the main thread from many different places in your computation). A sketch follows the steps below.
create an NSDate whenever you notify the main thread and store the date in an ivar
next time you go to notify main thread, check if more than N seconds have passed
if they have, update your ivar
[bonus performance] if all that date comparison is too expensive, consider revisiting your algorithm to move the "update now" trigger to somewhere less frequent. Barring that, create an int ivar counter and only check the date every N iterations
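A minimal sketch of those steps (including the bonus counter), assuming ARC and two ivars on the worker's class, an NSDate _lastUIUpdate and an integer _updateCounter; the method and selector names here (maybeNotifyMainThreadWithValue:, updateDisplay:) are illustrative, not from the original app:

static const NSTimeInterval kMinUIUpdateInterval = 0.1; // at most ~10 UI posts per second
static const NSUInteger kDateCheckStride = 100;         // only do the date math every N calls

- (void)maybeNotifyMainThreadWithValue:(NSNumber *)value
{
    // Cheap integer check first, so NSDate is only consulted occasionally.
    if (++_updateCounter % kDateCheckStride != 0)
        return;

    NSDate *now = [NSDate date];
    if (_lastUIUpdate && [now timeIntervalSinceDate:_lastUIUpdate] < kMinUIUpdateInterval)
        return; // too soon since the last UI post; drop this update

    _lastUIUpdate = now;
    [self performSelectorOnMainThread:@selector(updateDisplay:)
                           withObject:value
                        waitUntilDone:NO];
}

Updates that arrive in between are simply dropped, which is fine for a progress display where only a recent value matters.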
I have instrumented my application with "stop watches". There is usually one such stop watch per (significant) function. These stop watches measure real time, thread time (and process time, but process time seems less useful) and call count. I can obviously sort the individual stop watches using any of the four values as a key. However, that is not always useful and requires me to, e.g., disregard top-level functions when looking for optimization opportunities, as top-level functions/stop watches measure pretty much all of the application's run time.
I wonder if there is any research regarding a score or heuristic that would point out the functions/stop watches that are worth looking at and optimizing?
The goal is to find code worth optimizing, and that's good, but the question presupposes what many people think, which is that they are looking for "slow methods". However, there are other ways for programs to spend time unnecessarily than by having certain methods that are recognizably in need of optimizing. What's more, you can't ignore them, because however much time they take will become a larger and larger fraction of the total if you find and fix other problems.
In my experience performance tuning, measuring time can tell if what you fixed helped, but it is not much use for telling you what to fix.
For example, there are many questions on SO from people trying to make sense of profiler output.
The method I rely on is outlined here.
This is what happens:
The drawGL function is called at the exact end of the frame thanks to a usleep, as suggested. This already maintains a steady framerate.
The actual presentation of the renderbuffer takes place in drawGL(). Measuring the time this takes gives me fluctuating execution times, resulting in a stutter in my animation. This timer uses mach_absolute_time, so it's extremely accurate.
At the end of my frame, I measure timeDifference. Yes, it's on average 1 millisecond, but it deviates a lot, ranging from 0.8 to 1.2 milliseconds, with peaks of more than 2 milliseconds.
Example:
// Every something of a second I call tick
- (void)tick
{
    [self drawGL];
}

- (void)drawGL
{
    // startTime using mach_absolute_time;
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    // endTime using mach_absolute_time;
    // timeDifference = endTime - startTime;
}
My understanding is that once the framebuffer has been created, presenting the renderbuffer should always take the same effort, regardless of the complexity of the frame. Is this true? And if not, how can I prevent this?
By the way, this is an example from an iPhone app, so we're talking OpenGL ES here, though I don't think it's a platform-specific problem. If it is, then what is going on? Shouldn't this not be happening? And again, if so, how can I prevent it?
The deviations you encounter may be caused by a lot of factors, including the OS scheduler kicking in and giving CPU time to another process, or similar issues. In fact, a normal human won't tell the difference between 1 and 2 ms render times. Motion pictures run at roughly 24-25 fps, which means each frame is shown for about 40 ms, and that looks fluid to the human eye.
As for the animation stuttering, you should examine how you maintain a constant animation speed. The most common approach I've seen looks roughly like this:
while(loop)
{
    lastFrameTime = ...; // time it took for the last frame to render
    timeSinceLastUpdate += lastFrameTime;

    if(timeSinceLastUpdate > (1 second / DESIRED_UPDATES_PER_SECOND))
    {
        updateAnimation(timeSinceLastUpdate);
        timeSinceLastUpdate = 0;
    }

    // do the drawing
    presentScene();
}
Or you could just pass lastFrameTime to updateAnimation every frame and interpolate between animation states. The result will be even more fluid.
If you're already using something like the above, maybe you should look for culprits in other parts of your render loop. In Direct3D the costly things were draw-primitive calls and render-state changes, so you might want to check the OpenGL analogues of those.
My favorite OpenGL expression of all times: "implementation specific". I think it applies here very well.
A quick search for mach_absolute_time results in this article: Link
Looks like the precision of that timer on an iPhone is only 166.67 ns (and maybe worse).
While that might account for some of the measured variation, it doesn't explain why there is a difference at all.
The three main reasons are probably:
Different execution paths during renderbuffer presentation. A lot can happen in 1ms and just because you call the same functions with the same parameters doesn't mean the exact same instructions are executed. This is especially true if other hardware is involved.
Interrupts/other processes: there is always something else going on that distracts the CPU. As far as I know, iPhone OS is not a real-time OS, so there's no guarantee that any operation will complete within a certain time limit (and even a real-time OS will have timing variations).
If there are any other OpenGL calls still being processed by the GPU, they might delay presentRenderbuffer. That's the easiest one to test: just call glFinish() before getting the start time, as sketched below.
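That test could look roughly like this, reusing the drawGL method from the question; the tick-to-milliseconds conversion and variable names are illustrative:

#import <mach/mach_time.h>

- (void)drawGL
{
    // Drain any queued GL work first, so the timer below measures only the present call.
    glFinish();

    uint64_t startTime = mach_absolute_time();
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    uint64_t endTime = mach_absolute_time();

    // mach_absolute_time() returns ticks; convert to milliseconds before comparing frames.
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    double elapsedMs = (double)(endTime - startTime) * timebase.numer / timebase.denom / 1e6;
    NSLog(@"presentRenderbuffer took %.3f ms", elapsedMs);
}

If the fluctuation disappears with glFinish() in place, the variation was coming from earlier GL commands still being processed, not from presentRenderbuffer itself.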
It is best not to rely on a high constant frame rate, for a number of reasons, the most important being that the OS may do something in the background that slows things down. It is better to sample a timer and work out how much time has passed each frame; this should ensure smooth animation.
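A minimal sketch of that idea, based on the tick/drawGL example from the question; updateAnimation: and the _lastTicks ivar are illustrative, not from the original code:

#import <mach/mach_time.h>

- (void)tick
{
    uint64_t now = mach_absolute_time();

    if (_lastTicks != 0)
    {
        // Convert elapsed mach ticks to seconds and advance the animation by exactly
        // that much, so the animation speed stays constant even when a frame runs long.
        mach_timebase_info_data_t timebase;
        mach_timebase_info(&timebase);
        double elapsedSeconds = (double)(now - _lastTicks) * timebase.numer / timebase.denom / 1e9;
        [self updateAnimation:elapsedSeconds];
    }
    _lastTicks = now;

    [self drawGL];
}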
Is it possible that the timer is not accurate to the sub-millisecond level, even though it is returning fractional values in the 0.8-2.0 range?