Is there any sort of "sleep" method that is more accurate than the Stopwatch? Or is there a way to make the Stopwatch class more accurate? It doesn't have to be in .NET; it can be in C++, but whatever language it is in has to have exactly 1 ms accuracy; I don't need more than that. Say I want my program to "sleep" for 300 ms: I would like it to sleep for 300 ms, at least most of the time.
Currently I use:
Dim StopWatch As New Stopwatch
StopWatch.Start()
Do
    ' Busy-wait (burns a full core) until at least 300 ms have elapsed
Loop Until StopWatch.ElapsedMilliseconds >= 300
StopWatch.Stop()
My results running it 5 times were: 306, 305, 315, 327, 304.
The results stayed in that range on longer runs.
I set my thread and process priorities to "Realtime" / "High".
The Stopwatch class has a property, IsHighResolution. If it returns true, Stopwatch is using the high-resolution performance counter (QueryPerformanceCounter); availability depends on hardware and OS. With it you can measure time very accurately. BUT! Windows (like typical Linux systems) is NOT a real-time OS; it uses preemptive multitasking. Whenever the OS decides it needs to, it will put your current thread on hold to do other work, and some time later it will return to your thread and let it continue. If this switch happens somewhere inside your loop, you still measure the correct time, but it includes a stretch of inactivity.
Since a time slice under Windows is somewhere between 15 and 30 ms, your thread might be suspended after 299 ms, and 15-30 ms later you will get control back. That's the effect you see. The Stopwatch IS accurate; it just measures things you didn't expect.
How to overcome this: you can't. As said, Windows is NOT a real-time OS, even if you assign "Realtime" priority to your process.
What you are seeing is completely normal. The delay will never be exactly 300 ms; it will always be more than that. Sleep itself is accurate, but the actual delay depends on your operating system and on the other processes running in parallel with yours.
If you want a more accurate timer, you need to use the current date and time as a reference. Here is a simple equation that you can run every millisecond:
currentTime - startTime = elapsedTime
...where currentTime is System.DateTime.Now, startTime is the time the timer was started, and elapsedTime is a System.TimeSpan.
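Here is a rough C sketch of the same equation. Note that it uses a monotonic clock (clock_gettime with CLOCK_MONOTONIC) instead of the wall clock, so it keeps working even if the system time is adjusted; the helper name is illustrative:

#include <stdio.h>
#include <time.h>

/* elapsedTime = currentTime - startTime, in milliseconds.
   CLOCK_MONOTONIC never jumps when the wall clock is adjusted. */
static long elapsed_ms(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);              /* currentTime */
    return (now.tv_sec - start->tv_sec) * 1000L
         + (now.tv_nsec - start->tv_nsec) / 1000000L;
}

int main(void)
{
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);            /* startTime */
    /* ... do work ... */
    printf("elapsed: %ld ms\n", elapsed_ms(&start));   /* elapsedTime */
    return 0;
}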
For more details on how to do this, check out the source of a program I made in VB.Net, E-Tech Timer: http://etechtimer.codeplex.com
I am currently working on an app that requires a sound file to be played at exact intervals, which are variable in duration.
I seem to remember being told that NSTimer simply places an operation onto the stack after the specified duration, rather than the operation being run after the specified duration. This would mean that if there were lots of other operations on the stack before it, it would not be called on time.
I would like to know if this is indeed correct, and if so, is there any way of getting an operation to be guaranteed to run after a specified duration.
Before anyone comments that the sound files may be delayed the first time they are played: they are already pre-loaded to avoid this.
In the first paragraph of Apple's NSTimer documentation you'll find this:
A timer is not a real-time mechanism; it fires only when one of the run loop modes to which the timer has been added is running and able to check if the timer's firing time has passed. Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds. If a timer's firing time occurs during a long callout or while the run loop is in a mode that is not monitoring the timer, the timer does not fire until the next time the run loop checks the timer. Therefore, the actual time at which the timer fires potentially can be a significant period of time after the scheduled firing time.
So you will definitely run (or play your sound, in your case) at or after your scheduled firing time, but not necessarily exactly at it.
I am writing an application for OS X (Obj-C/Cocoa) that runs a simulation and displays the results to the user. In one case, I want the simulation to run in "real time" so that the user can watch it go by at the same speed it would happen in real life. The simulation is run with a specific timestep, dt. Right now, I am using mach_absolute_time() to slow down the simulation. When I profile this code, I see that by far most of my CPU time is spent in mach_absolute_time(), and my CPU is pegged at 100%. Am I doing this right? I figured that if I'm slowing down the simulation so that the program isn't simulating anything most of the time, then CPU usage should be down, but mach_absolute_time() obviously isn't a "free call", so I feel like there might be a better way?
uint64_t nextT = mach_absolute_time();
while (runningSimulation)
{
    if (mach_absolute_time() >= nextT)
    {
        // dt_ns must be expressed in mach ticks here, not nanoseconds
        // (convert via mach_timebase_info() if needed)
        nextT += dt_ns;
        // Compute the next "frame" of the simulation
        // ....
    }
}
Do not spin at all.
That is the first rule of writing GUI apps where battery life and app responsiveness matter.
sleep() or nanosleep() can be made to work, but only if used on something other than the main thread.
A better solution is to use any of the time-based constructs in GCD, as that will make more efficient use of system resources.
If you want the simulation to appear smooth to the user, you'll really want to lock the slowed version to the refresh rate of the screen. On iOS, there is CADisplayLink. I don't know of a direct equivalent on the Mac.
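For example, here is a minimal C sketch of a repeating GCD timer (the interval and leeway values are illustrative; compile with clang on macOS, which supports blocks):

#include <dispatch/dispatch.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t dt_ns = 16666667; /* ~60 Hz step, purely illustrative */

    dispatch_source_t timer = dispatch_source_create(
        DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
    dispatch_source_set_timer(timer,
        dispatch_time(DISPATCH_TIME_NOW, dt_ns), /* first fire */
        dt_ns,                                   /* interval   */
        dt_ns / 10);                             /* leeway the OS may coalesce */
    dispatch_source_set_event_handler(timer, ^{
        /* Compute the next "frame" of the simulation here.
           No thread spins while waiting for the next tick. */
        printf("tick\n");
    });
    dispatch_resume(timer);

    dispatch_main(); /* park the main thread; timers fire on the queue */
    return 0;
}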
You are busy-spinning. If there is a lot of time before you need to simulate again, consider sleeping instead.
But no sleep call guarantees that it will sleep for exactly the duration specified. Depending on how accurate you need to be, you can sleep for a little less than the remaining time and then spin for the rest.
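A minimal sketch of that hybrid approach on OS X might look like this (the 2 ms spin window and the helper name are arbitrary choices, not a recommendation):

#include <mach/mach_time.h>
#include <stdint.h>
#include <time.h>

/* Wait until 'deadline' (in mach ticks): sleep away most of the interval,
   then busy-wait the last short stretch for accuracy. */
static void sleep_then_spin(uint64_t deadline)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);             /* ticks -> ns conversion factors */
    const uint64_t spin_ns = 2000000;    /* spin for the final ~2 ms */

    for (;;) {
        uint64_t now = mach_absolute_time();
        if (now >= deadline)
            return;
        uint64_t left_ns = (deadline - now) * tb.numer / tb.denom;
        if (left_ns <= spin_ns)
            break;                       /* close enough: spin from here */
        uint64_t sleep_ns = left_ns - spin_ns;
        struct timespec ts = { (time_t)(sleep_ns / 1000000000ULL),
                               (long)(sleep_ns % 1000000000ULL) };
        nanosleep(&ts, NULL);            /* coarse; may oversleep slightly */
    }
    while (mach_absolute_time() < deadline)
        ;                                /* busy-wait the remainder */
}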
I'm writing a Cocoa OS X (Leopard 10.5+) end-user program that's using timestamps to calculate statistics for how long something is being displayed on the screen. Time is calculated periodically while the program runs using a repeating NSTimer. [NSDate date] is used to capture timestamps, Start and Finish. Calculating the difference between the two dates in seconds is trivial.
A problem occurs if an end-user or ntp changes the system clock. [NSDate date] relies on the system clock, so if it's changed, the Finish variable will be skewed relative to the Start, messing up the time calculation significantly. My question:
How can I accurately calculate the time between Start and Finish, in seconds, even when the system clock is changed midway?
I'm thinking that I need a non-changing reference point in time so I can calculate how many seconds have passed since then. For example, system uptime. 10.6 has - (NSTimeInterval)systemUptime, part of NSProcessInfo, which provides system uptime. However, this won't work, as my app must also run on 10.5.
I've tried creating a time counter using NSTimer, but this isn't accurate. A run loop can run in several different modes, only one at a time, and an NSTimer is (by default) scheduled in the default run loop mode. If a user manipulates the UI for long enough, the run loop enters NSEventTrackingRunLoopMode and skips the default mode, which can lead to NSTimer firings being skipped, making it an inaccurate way of counting seconds.
I've also thought about creating a separate thread (NSRunLoop) to run a NSTimer second-counter, keeping it away from UI interactions. But I'm very new to multi-threading and I'd like to stay away from that if possible. Also, I'm not sure if this would work accurately in the event the CPU gets pegged by another application (Photoshop rendering a large image, etc...), causing my NSRunLoop to be put on hold for long enough to mess up its NSTimer.
I appreciate any help. :)
Depending on what's driving this code, you have 2 choices:
For absolute precision, use mach_absolute_time(). It will give the time interval exactly between the points at which you called the function.
But in a GUI app, this is often actually undesirable. Instead, you want the time difference between the events that started and finished your duration. If so, compare the [[NSApp currentEvent] timestamp] values.
Okay, so this is a long shot, but you could try implementing something like the NSSystemClockDidChangeNotification available in Snow Leopard.
So bear with me here, because this is a strange idea and is definitely non-deterministic. But what if you had a watchdog thread running for the duration of your program? Every n seconds, this thread would read the system time and store it. For the sake of argument, let's make it 5 seconds. So every 5 seconds, it compares the previous reading to the current system time. If there's a "big enough" difference ("big enough" would definitely need to be greater than 5, but not too much greater, to account for the non-determinism of process scheduling and thread prioritization), post a notification that there has been a significant time change. You would need to play around with fuzzing the value that constitutes "big enough" (or small enough, if the clock was reset to an earlier time) for your accuracy needs.
I know this is kind of hacky, but barring any other solution, what do you think? Might that, or something like it, solve your issue?
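A rough sketch of such a watchdog thread, assuming POSIX threads (the 5 s period and the 3 s/7 s tolerance band are exactly the kind of fuzz values you would tune):

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Every 5 s, compare actual wall-clock progress against the expected 5 s.
   A large mismatch suggests someone changed the system clock. */
static void *watchdog(void *arg)
{
    time_t last = time(NULL);
    for (;;) {
        sleep(5);
        time_t now = time(NULL);
        double delta = difftime(now, last);
        if (delta > 7.0 || delta < 3.0)   /* tolerance band: tune to taste */
            printf("clock jumped: saw %.0f s instead of ~5 s\n", delta);
        last = now;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, watchdog, NULL); /* launch the watchdog */
    pause(); /* the real app would do its work here */
    return 0;
}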
Edit
Okay so you modified your original question to say that you'd rather not use a watchdog thread because you are new to multithreading. I understand the fear of doing something a bit more advanced than you are comfortable with, but this might end up being the only solution. In that case, you might have a bit of reading to do. =)
And yeah, I know that something such as Photoshop pegging the crap out of the processor is a problem. Another (even more complicated) solution would be to, instead of having a watchdog thread, have a separate watchdog process that has top priority so it is a bit more immune to processor pegging. But again, this is getting really complicated.
Final Edit
I'm going to leave all my other ideas above for completeness' sake, but it seems that using the system's uptime will also be a valid way to deal with this. Since [[NSProcessInfo processInfo] systemUptime] only works in 10.6+, you can just call mach_absolute_time(). To get access to that function, just #include <mach/mach_time.h>. It is driven by the same underlying clock, but note that it returns ticks in a CPU-specific unit, not seconds.
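For reference, a small sketch of the conversion from mach ticks to seconds (mach_timebase_info() supplies the numerator/denominator that turn ticks into nanoseconds):

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);           /* ticks-to-nanoseconds fraction */

    uint64_t start = mach_absolute_time();
    /* ... the interval being measured ... */
    uint64_t end = mach_absolute_time();

    /* ticks * numer / denom = nanoseconds; divide by 1e9 for seconds */
    double seconds = (double)(end - start) * tb.numer / tb.denom / 1e9;
    printf("elapsed: %f s\n", seconds);
    return 0;
}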
I figured out a way to do this using the UpTime() C function, provided in <CoreServices/CoreServices.h>. This returns Absolute Time (CPU-specific), which can easily be converted into Duration Time (milliseconds or nanoseconds). Details here: http://www.meandmark.com/timingpart1.html (look under part 3 for UpTime)
I couldn't get mach_absolute_time() to work properly, likely due to my lack of knowledge about it and not being able to find much documentation on the web. It appears to grab the same time as UpTime(), but converting it into a double left me dumbfounded.
[[NSApp currentEvent] timestamp] did work, but only while the application was receiving NSEvents. If the application went into the background, it wouldn't receive events, and [[NSApp currentEvent] timestamp] would simply continue to return the same old timestamp again and again in an NSTimer firing method, until the end-user decided to interact with the app again.
Thanks for all your help Marc and Mike! You both definitely sent me in the right direction leading to the answer. :)
I have a snippet of code I want to execute repeatedly, but with the ability to pause and resume it. To do this, I have utilised an NSTimer, which I can stop and start as required.
Within the snippet, I use a sleep command to wait for something to update (0.3 seconds). The timer is firing every 0.5 seconds.
What would be ideal is to keep the stop and start functionality, and be firing every 0.3 seconds, but to not have to explicitly say I want it to fire every x seconds. The 0.5 is completely arbitrary, and is just set to be > 0.3.
If I set the timer to fire every 0.01 seconds, but keep the sleep command within the code fired to 0.3 seconds, will I get the desired behaviour? Or will the timer back up all the events, with unexpected results (for example multiple firings after I have stopped it)? This way I could make the 0.3 sec sleep a variable, and not have to change the timer whenever I increase it over 0.5.
Or is there a better way to get this functionality?
One of the biggest problems I see with this is that NSTimer consumes time on whatever thread it's registered with.
In most cases that is the main thread, and sleeping on the main thread would be a bad thing as far as I can see.
A few alternative designs that may be better for you.
1. Don't sleep. Have the selector called by the timer do an "am I paused?" check and exit early if it is.
2. Invalidate the NSTimer when pausing and recreate it when unpausing (note: you can't just reschedule the old timer).
3. Spawn a background thread with its own run loop, or even your own while loop, that handles the rescheduling by sleeping.
Those are also roughly in the order I'd try them, as No. 1 seems the cleanest to me, followed by No. 2. No. 3, while viable, could introduce a lot of potential nastiness (threading issues, clean shutdown, etc.). A sketch of the first two follows.
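Here is a minimal C sketch of No. 1 and No. 2 combined, using a GCD timer rather than NSTimer (GCD sources can be suspended and resumed directly; the 0.3 s interval is taken from the question, everything else is illustrative, and it assumes a Cocoa app whose main queue is running):

#include <dispatch/dispatch.h>
#include <stdbool.h>
#include <stdio.h>

static dispatch_source_t timer;
static bool paused = false;

static void start_timer(void)
{
    timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
                                   dispatch_get_main_queue());
    dispatch_source_set_timer(timer, DISPATCH_TIME_NOW,
                              300 * NSEC_PER_MSEC,  /* fire every 0.3 s */
                              10 * NSEC_PER_MSEC);  /* allowed leeway   */
    dispatch_source_set_event_handler(timer, ^{
        if (paused)
            return;             /* No. 1: early-exit instead of sleeping */
        /* ... do the 0.3 s unit of work here; no sleep needed ... */
        printf("tick\n");
    });
    dispatch_resume(timer);
}

/* No. 2 equivalent: dispatch_suspend(timer) to pause,
   dispatch_resume(timer) to continue; no recreation required. */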
I need an accurate timer to interface a Windows application to a piece of lab equipment.
I used System.Timers.Timer() to create a timer that ticks every 10 msec, but this clock runs slow. For example, 1000 ticks with an interval of 10 msec should take 10 wall-clock seconds, but it actually takes more like 20 wall-clock seconds (on my PC). I am guessing this is because System.Timers.Timer() is an interval timer that is reset every time it elapses. Since some time will always pass between when the timer elapses and when it is reset (to another 10 msec), the clock will run slow. This is probably fine if the interval is large (seconds or minutes) but unacceptable for very short intervals.
Is there a function on Windows that will trigger a procedure every time the system clock crosses a 10 msec (or whatever) boundary?
This is a simple console application.
Thanks
Norm
UPDATE: System.Timers.Timer() is extremely inaccurate for small intervals.
I wrote a simple program that counted 10 seconds several ways:
Interval=1, Count=10000, Run time = 160 sec, msec per interval=16
Interval=10, Count=1000, Run time = 16 sec, msec per interval=15
Interval=100, Count=100, Run time = 11 sec, msec per interval=110
Interval=1000, Count=10, Run time = 10 sec, msec per interval=1000
It seems like System.Timers.Timer() cannot tick faster than about 15 msec, regardless of the interval setting.
Note that none of these tests seemed to use any measurable CPU time, so the limit is not the CPU, just a .NET limitation (bug?)
For now I think I can live with an inaccurate timer that triggers a routine every 15 msec or so and the routine gets an accurate system time. Kinda strange, but...
I also found a shareware product ZylTimer.NET that claims to be a much more accurate .net timer (resolution of 1-2 msec). This may be what I need. If there is one product there are likely others.
Thanks again.
You need to use a high-resolution timer such as QueryPerformanceCounter.
On the surface of it, the answer is something like "use a high-resolution timer"; however, this is incorrect. The question requires regular tick generation, and the Windows high-resolution performance counter API does not generate such ticks.
I know this is not an answer in itself, but the popular answer to this question so far is wrong enough that I feel a simple comment on it is not enough.
The limitation is imposed by the system's heartbeat (the clock interrupt). This typically defaults to 64 ticks/s, which is 15.625 ms. However, there are ways to modify these system-wide settings to achieve timer resolutions down to 1 ms, or even 0.5 ms on newer platforms:
Going for 1 ms resolution by means of the multimedia timer interface (timeBeginPeriod()):
See Obtaining and Setting Timer Resolution.
Going to 0.5 ms resolution by means of NtSetTimerResolution():
See Inside Windows NT High Resolution Timers.
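As a rough illustration of the timeBeginPeriod() route (a C sketch; the measurement loop is only there to show the effect, and the call affects the whole system until timeEndPeriod() is called):

#include <windows.h>
#include <mmsystem.h>   /* timeBeginPeriod/timeEndPeriod/timeGetTime */
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

int main(void)
{
    timeBeginPeriod(1);          /* request 1 ms timer granularity */

    DWORD t0 = timeGetTime();
    for (int i = 0; i < 100; i++)
        Sleep(10);               /* now sleeps close to 10 ms per call */
    printf("100 x Sleep(10) took %lu ms\n",
           (unsigned long)(timeGetTime() - t0));

    timeEndPeriod(1);            /* restore the default granularity */
    return 0;
}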
You may obtain 0.5 ms resolution by means of the hidden API NtSetTimerResolution().
I've given all the details in this SO answer.
In System.Diagnostics, you can use the Stopwatch class.
Off the top of my head, I could suggest running a thread that mostly sleeps, but when it wakes, it checks a running QueryPerformanceCounter and occasionally triggers your procedure.
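A bare-bones C sketch of that idea (start it with CreateThread(); the 10 ms step and the do_work() name are placeholders):

#include <windows.h>

static void do_work(void) { /* your procedure goes here */ }

/* Mostly sleeps; polls QueryPerformanceCounter and calls do_work()
   each time a 10 ms boundary has been crossed. */
static DWORD WINAPI timer_thread(LPVOID arg)
{
    LARGE_INTEGER freq, now, next;
    QueryPerformanceFrequency(&freq);
    LONGLONG step = freq.QuadPart / 100;   /* 10 ms in QPC ticks */
    QueryPerformanceCounter(&next);
    next.QuadPart += step;

    for (;;) {
        Sleep(1);                          /* ~1 ms with timeBeginPeriod(1) */
        QueryPerformanceCounter(&now);
        while (now.QuadPart >= next.QuadPart) {
            do_work();                     /* fire once per boundary crossed */
            next.QuadPart += step;         /* advance: no cumulative drift */
        }
    }
    return 0;
}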
There's a nice write-up on MSDN: Implement a Continuously Updating, High-Resolution Time Provider for Windows
Here's the sample source code for the article (C++).