I need an accurate timer to interface a Windows application to a piece of lab equipment.
I used System.Timers.Timer() to create a timer that ticks every 10 msec, but this clock runs slow. For example, 1000 ticks with an interval of 10 msec should take 10 wall-clock seconds, but it actually takes more like 20 wall-clock seconds (on my PC). I am guessing this is because System.Timers.Timer() is an interval timer that is reset every time it elapses. Since it will always take some time between when the timer elapses and when it is reset (to another 10 msec), the clock will run slow. This is probably fine if the interval is large (seconds or minutes), but it is unacceptable for very short intervals.
Is there a function on Windows that will trigger a procedure every time the system clock crosses a 10 msec (or whatever) boundary?
This is a simple console application.
Thanks
Norm
UPDATE: System.Timers.Timer() is extremely inaccurate for small intervals.
I wrote a simple program that counted 10 seconds several ways:
Interval=1, Count=10000, Run time = 160 sec, msec per interval=16
Interval=10, Count=1000, Run time = 16 sec, msec per interval=15
Interval=100, Count=100, Run time = 11 sec, msec per interval=110
Interval=1000, Count=10, Run time = 10 sec, msec per interval=1000
It seems like System.Timers.Timer() cannot tick faster than about 15 msec, regardless of the interval setting.
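For reference, here is a minimal sketch in C# of the kind of test described above; it is an illustration only (the class name and structure are placeholders), not the exact program behind the numbers:

using System;
using System.Diagnostics;
using System.Threading;

class TimerDriftTest
{
    static void Main()
    {
        const double intervalMs = 10;   // requested timer interval
        const int tickTarget = 1000;    // 1000 ticks x 10 msec should be ~10 seconds

        int ticks = 0;
        var done = new ManualResetEventSlim(false);
        var wallClock = Stopwatch.StartNew();

        var timer = new System.Timers.Timer(intervalMs);
        timer.Elapsed += (sender, args) =>
        {
            // Count ticks and stop once the target is reached.
            if (Interlocked.Increment(ref ticks) >= tickTarget)
                done.Set();
        };
        timer.Start();

        done.Wait();
        timer.Stop();
        wallClock.Stop();

        Console.WriteLine("Requested interval: {0} msec, actual msec per tick: {1:F1}",
            intervalMs, wallClock.Elapsed.TotalMilliseconds / tickTarget);
    }
}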
Note that none of these tests seemed to use any measurable CPU time, so the limit is not the CPU, just a .net limitation (bug?)
For now I think I can live with an inaccurate timer that triggers a routine every 15 msec or so and the routine gets an accurate system time. Kinda strange, but...
I also found a shareware product ZylTimer.NET that claims to be a much more accurate .net timer (resolution of 1-2 msec). This may be what I need. If there is one product there are likely others.
Thanks again.
You need to use a high resolution timer such as QueryPerformanceCounter
On the surface of it, the answer is something like "use a high resolution timer"; however, this is incorrect. The answer requires regular tick generation, and the Windows high-resolution performance counter API does not generate such a tick.
I know this is not an answer in itself, but the popular answer to this question so far is wrong enough that I feel a simple comment on it is not enough.
The limitation is given by the system's heartbeat. This typically defaults to 64 beats/s, which is 15.625 ms. However, there are ways to modify these system-wide settings to achieve timer resolutions down to 1 ms, or even 0.5 ms on newer platforms:
Going for 1 ms resolution by means of the multimedia timer interface (timeBeginPeriod()):
See Obtaining and Setting Timer Resolution.
Going to 0.5 ms resolution by means of NtSetTimerResolution():
See Inside Windows NT High Resolution Timers.
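As an illustration only (not a drop-in solution), requesting 1 ms resolution from managed code via P/Invoke could look roughly like this; the surrounding class and the Sleep(10) call are just placeholders for the example:

using System.Runtime.InteropServices;
using System.Threading;

class MultimediaTimerResolution
{
    // winmm.dll multimedia timer API: request/release a minimum timer resolution.
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uMilliseconds);

    static void Main()
    {
        timeBeginPeriod(1);   // ask the system for 1 ms timer resolution
        try
        {
            // While the request is active, Sleep(10) tends to wake after ~10 ms
            // instead of the default ~15.6 ms heartbeat.
            Thread.Sleep(10);
        }
        finally
        {
            timeEndPeriod(1); // always pair every timeBeginPeriod() with a timeEndPeriod()
        }
    }
}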
You may obtain 0.5 ms resolution by means of the hidden API NtSetTimerResolution().
I've given all the details in this SO answer.
In System.Diagnostics, you can use the Stopwatch class.
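For example, a quick, illustrative way to time a piece of code with it:

using System;
using System.Diagnostics;

var sw = Stopwatch.StartNew();
// ... the work you want to time ...
sw.Stop();
Console.WriteLine("Elapsed: {0:F3} ms", sw.Elapsed.TotalMilliseconds);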
Off the top of my head, I could suggest running a thread that mostly sleeps, but when it wakes, it checks a running QueryPerformanceCounter and occasionally triggers your procedure.
There's a nice write-up on MSDN: Implement a Continuously Updating, High-Resolution Time Provider for Windows
Here's the sample source code for the article (C++).
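A rough sketch of that idea in C#, using Stopwatch as the managed wrapper around QueryPerformanceCounter; the 10 ms interval and the console callback are placeholders, not something from the article:

using System;
using System.Diagnostics;
using System.Threading;

class PollingTicker
{
    static void Main()
    {
        const double intervalMs = 10;          // desired tick spacing
        Action onTick = () => Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));

        var clock = Stopwatch.StartNew();      // backed by QueryPerformanceCounter when available
        double nextDue = intervalMs;

        var worker = new Thread(() =>
        {
            while (true)
            {
                // Sleep briefly so the loop does not burn a whole core.
                Thread.Sleep(1);

                // Fire the callback for every boundary we have crossed.
                while (clock.Elapsed.TotalMilliseconds >= nextDue)
                {
                    onTick();
                    nextDue += intervalMs;     // schedule from the ideal boundary, so errors do not accumulate
                }
            }
        });
        worker.IsBackground = true;
        worker.Start();

        Console.ReadLine();                    // keep the process alive for the demo
    }
}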
I am creating an application for Mac, in Objective C, which will run in the Menu-bar and do periodic Desktop operations (such as changing the wallpaper). I am creating the application so that it stays in the Menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page in the Apple Developer docs). Using an NSTimer will prevent the laptop from going to sleep, and will need an always-running thread to check for when the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think that the second way is better, but I am not sure if it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1 second frequency, but they're offset from each other by 100ms (just by chance of what time it was when they happened to start). That means the longest possible "gap" between wake-ups is 100ms. But if they were configured at 1 second with a 0.9 second tolerance (i.e. between 1s and 1.9s), then the system could schedule them all together, do a bunch of work, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval you really want to do work. If it is common for your timer to fire, but all you do is check some condition and reschedule the timer, then you're wasting power. (Sounds like you already have this in hand.) And the second thing you should do is set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not actually "0 tolerance," but it's very small compared to minutes). For your kind of problem, I'd probably use a tolerance of at least 1s.
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is possible of course to do this with launchd, but it adds a lot of complexity, especially on installation. I don't recommend it for the problem you're describing.
What's the framerate in GM games, and how often is the game code executed per frame? I can't find the answer explained anywhere in a way I understand. Are there ways to change these? I'm used to a solid 60 fps and game code executed once per frame. This is important since I'm used to programming in frame timing, meaning that one frame is the smallest unit of time that can be used, and counters are incremented (or decremented) once per frame. This also means that drops in framerate will create slowdown instead of frames being skipped. The game I've been programming in another tool basically runs the game code once and then waits for a VBlank to happen before running the game code again.
Let me explain how GM works. It's complicated.
When the game starts, the game start event from every object is called. I believe this happens AFTER the create event.
Once per (virtual) frame, the step event is called. You see, GM doesn't lock itself down to whatever the room_speed is; it will run as fast as it can (if not compiled). fps_real shows you how many frames per second the engine is ACTUALLY pumping through in any given second.
So every second, assuming the processor and GPU can keep up with the room_speed, room_speed step events occur. Take this situation for example:
room_speed is 60
fps is 60
fps_real is 758
Let's assume there is an ObjPlayer with a step event.
In this situation, ObjPlayer's step event will be run 60 times in that second.
This is a problem, however. Let's say that if the space key is held down, the player moves 3 pixels to the left. So assuming the game is running at full speed (room_speed), the player will move 3 * 60 = 180 pixels in any given second.
However, let's say that the CPU and/or GPU can't keep up with 60 FPS. Let's say it's holding steady at 30. That would mean that the player will move a mere 3 * 30, or 90 pixels a second, making the game look much slower than it should. But if you have played a AAA game like Hitman Absolution, you will notice that the game looks just as fast at 30 FPS as it does at 120 FPS. How? Delta time.
Using delta time, you set the room_speed to the max (9999), and every time you use a pixels-per-frame speed, you multiply it by a delta time that has been normalized to 60 FPS. I am explaining it terribly, and it's a lot easier than I make it out to be. Check out this guide on the GMC to see how to do it.
When you have delta time, you don't need to worry (as much) about the FPS -- it looks the same in terms of speed no matter what.
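To illustrate the core idea outside of GML, here is a sketch in C# (the 60 FPS baseline, the 3 pixels-per-frame speed, and all names are just for the example):

using System.Diagnostics;

class DeltaTimeExample
{
    static void Main()
    {
        const double baselineFps = 60.0;     // speeds below are tuned for 60 FPS
        double speedPerFrame = 3.0;          // "3 pixels per frame" at the baseline
        double playerX = 0.0;

        var frameClock = Stopwatch.StartNew();
        double last = 0.0;

        while (playerX < 180.0)              // run until the player has moved 180 px
        {
            double now = frameClock.Elapsed.TotalSeconds;
            double delta = (now - last) * baselineFps;  // 1.0 when running at exactly 60 FPS
            last = now;

            // Movement scales with how long the frame actually took,
            // so 30 FPS and 120 FPS cover the same distance per second.
            playerX += speedPerFrame * delta;
        }
    }
}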
But as for refresh rate, I am not 100% sure; I believe GM games run at 60 Hz, but I could be wrong, that's just what I've heard. All GM games are also 32-bit.
The variables room_speed and fps may be what you are looking for. There is no point in increasing fps to anything higher than room_speed, which can be modified during program execution or statically through the room editor and is 30 by default.
You can change the room speed in the room settings; this is the number of steps per second. This property can also be accessed and changed mid-game with the room_speed global.
You can also access the actual screen update speed with the fps read-only global. Note: screen fps is totally independent from the draw event, which happens at the end of each step.
On normal hardware today this likely never hurts, but on a Raspberry Pi it is a bit annoying that the CPU is woken up every 50 milliseconds, even for a Java application which currently does absolutely nothing.
I verified with strace that the "VM Periodic Task Thread" is active every 50 milliseconds. A rough answer of what it does is given here, but can I tune the 50 milliseconds somehow?
Try setting -XX:PerfDataSamplingInterval=xxx; the default is 50, and performance sampling matches the description you linked, so that might be it.
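For example, on the command line (the jar name and the 500 value are placeholders, not something from your setup):

java -XX:PerfDataSamplingInterval=500 -jar MyApp.jar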
Is there any sort of "sleep" method that is more accurate than the stopwatch? Or, is there a way to make the Stopwatch class more accurate? It doesn't have to be in .NET, it can be in C++, but whatever language it is in has to have exactly 1ms accuracy; I don't need more than that. Say if I want my program to "sleep" for 300ms, I would like it to sleep for 300ms at least most of the time.
Currently I use:
' Requires Imports System.Diagnostics
Dim stopWatch As New Stopwatch()
stopWatch.Start()
Do
    ' Busy-wait until at least 300 ms have elapsed
Loop Until stopWatch.ElapsedMilliseconds >= 300
stopWatch.Stop()
My results running it 5 times were: 306, 305, 315, 327, 304.
It stayed like that if I ran it more.
I put my thread and process priority on "Realtime" / "High".
The Stopwatch class has a property IsHighResolution. If it returns 'true' you are timing with the high-resolution performance counter (QueryPerformanceCounter); availability depends on hardware and OS. Using this, you can measure times very accurately. BUT! Windows (like the usual Linux distributions) is NOT a realtime OS; it uses preemptive multitasking. Whenever the OS thinks it needs to, it will put your current thread on hold to do other work, and after some time it will return to your thread and let it continue. If this switch happens somewhere inside your loop, you still measure the correct time, but it includes an amount of inactivity time.
Since a time slice under Windows is something between 15 and 30 ms, you(r thread) might be suspended after 299 ms and 15-30 ms later you will get back. And that's the effect you see. The Stopwatch IS accurate. It just measures stuff you didn't expect.
How to overcome this: You can't. As said: Windows IS NOT a realtime OS! Even if you assign "Realtime" priority to your process.
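If you want to see what your Stopwatch is backed by, an illustrative check:

using System;
using System.Diagnostics;

Console.WriteLine("IsHighResolution: {0}", Stopwatch.IsHighResolution);
Console.WriteLine("Frequency: {0} ticks/s", Stopwatch.Frequency);
Console.WriteLine("Tick length: {0:F1} ns", 1e9 / Stopwatch.Frequency);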
What you are seeing is completely normal. The delay will never be exactly 300ms; it will always be more than that. Sleep itself is accurate, but the actual delay depends on your operating system and on other processes running in parallel to yours.
If you want a more accurate timer, you need to use the current date and time as a reference. Here is a simple equation that you can run every millisecond:
currentTime - startTime = elapsedTime
...where currentTime is System.DateTime.Now, startTime is the time that the timer was started, and elapsedTime is a System.TimeSpan.
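A sketch of that approach in C# (an illustration only, not code from the program linked below):

using System;
using System.Threading;

DateTime startTime = DateTime.Now;                   // when the "timer" was started
TimeSpan target = TimeSpan.FromMilliseconds(300);

while (DateTime.Now - startTime < target)            // elapsedTime = currentTime - startTime
{
    Thread.Sleep(1);                                 // check roughly every millisecond
}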
For more details on how to do this, check out the source of a program I made in VB.Net, E-Tech Timer: http://etechtimer.codeplex.com
How can we work with a timer that deals with milliseconds (0.001 s)? How can we divide the second the way we want, and how do we deal with the second itself?
http://computer.howstuffworks.com/question319.htm
In your computer (as well as other gadgets), the battery powers a chip called the Real Time Clock (RTC) chip. The RTC is essentially a quartz watch that runs all the time, whether or not the computer has power. The battery powers this clock. When the computer boots up, part of the process is to query the RTC to get the correct time and date. A little quartz clock like this might run for five to seven years off of a small battery. Then it is time to replace the battery.
Your PC will have a hardware clock, powered by a battery so that it keeps ticking even while the computer is switched off. The PC knows how fast its clock runs, so it can determine when a second goes by.
Initially, the PC doesn't know what time it is (i.e. it just starts counting from zero), so it must be told what the current time is - this can be set in the BIOS settings and is stored in the CMOS, or can be obtained via the Internet (e.g. by synchronizing with the clocks at NIST).
Some recap, and some more info:
1) The computer reads the Real-Time Clock during boot-up, and uses that to set its internal clock.
2) From then on, the computer uses its CPU clock only - it does not re-read the RTC (normally).
3) The computer's internal clock is subject to drift - due to thermal instability, power fluctuations, inaccuracies in finding an exact divisor for seconds, interrupt latency, cosmic rays, and the phase of the moon.
4) The magnitude of the clock drift could be in the order of seconds per day (tens or hundreds of seconds per month).
5) Most computers are capable of connecting to a time server (over the internet) to periodically reset their clock.
6) Using a time server can increase the accuracy to within tens of milliseconds (normally). My computer updates every 15 minutes.
Computers know the time because, like you, they have a digital watch they look at from time to time.
When you get a new computer or move to a new country you can set that watch, or your computer can ask the internet what the time is, which helps to stop it from running slow or fast.
As a user of the computer, you can ask the current time, or you can ask the computer to act as an alarm clock. Some computers can even turn themselves on at a particular time, to back themselves up, or wake you up with a favourite tune.
Internally, the computer is able to tell the time in milliseconds, microseconds or sometimes even nanoseconds. However, this is not entirely accurate, and two computers next to each other would have different ideas about the time in nanoseconds. But it can still be useful.
The computer can set an alarm for a few milliseconds in the future, and commonly does this so it knows when to stop thinking about your e-mail program and spend some time thinking about your web browser. Then it sets another alarm so it knows to go back to your e-mail a few milliseconds later.
As a programmer you can use this facility too; for example, you could set a time limit on a level in a game, using a 'timer'. Or you could use a timer to tell when you should put the next frame of the animation on the display - perhaps 25 times a second (i.e. every 40 milliseconds).
To answer the main question, the BIOS clock has a battery on your motherboard, like Jian's answer says. That keeps time when the machine is off.
To answer what I think your second question is, you can get the second from the millisecond value by doing an integer division by 1000, like so:
second = (int) (milliseconds / 1000);
If you're asking how we're able to get the time with that accuracy, look at Esteban's answer... the quartz crystal vibrates at a certain time period, say 0.00001 seconds. We just make a circuit that counts the vibrations. When we have reached 100000 vibrations, we declare that a second has passed and update the clock.
We can get any accuracy by counting the vibrations this way... any accuracy that is no finer than the period of vibration of the crystal we're using.
The motherboard has a clock that ticks. Every tick represents a unit of time.
To be more precise, the clock is usually a quartz crystal that oscillates at a given frequency; some common CPU clock frequencies are 33.33 and 40 MHz.
Absolute time is archaically measured using a 32-bit counter of seconds from 1970. This can cause the "2038 problem," where it simply overflows. Hence the 64-bit time APIs used on modern Windows and Unix platforms (this includes BSD-based MacOS).
Quite often a PC user is interested in time intervals rather than the absolute time since a profound event took place. A common implementation of a computer has things called timers that allow just that to happen. These timers might even run when the PC isn't fully on, with the purpose of polling hardware for wake-up status, switching sleep modes, or coming out of sleep. Intel's processor docs go into incredible detail about these.