is it possible to synchronize two computers at better than 1 ms accuracy using any internet protocols? - gps

Say I have two computers - one located in Los Angeles and another located in Boston. Are there any known protocols or linux commands that could synchronize both of those computer clocks to better than 1 ms and NOT use GPS at all? I am under the impression the answer is no (What's the best way to synchronize times to millisecond accuracy AND precision between machines?)
Alternatively, are there any standard protocols (like NTP) in which the relative time difference between these two computers could be known accurately, even if the absolute time synchronization is off?
I am just wondering if there are any free or inexpensive ways to get better than 1 ms time accuracy without having to resort to GPS.

I don't know of a named protocol for this (perhaps one exists), but I can offer a method similar to the way scientists measure speeds close to that of light:
Have a process on both servers "ping" the other server, wait for the response, and time how long the round trip took. Then start pinging periodically, timed so that each ping goes out exactly when you expect the previous one to arrive. By averaging (and discarding any far-off samples), the two servers will after a while be "thumping away" at the same rhythm. Each side can also measure the length of a "beat" very accurately by dividing a long elapsed period by the number of beats it contained.
After the "rhythm" is established, if you know that one server's time is correct, or you simply want to use it as the base, then you know when your server's signal reaches the other server; along with its response, the other server can send you the time it holds. You can then use that timestamp (plus half the measured round trip) to synchronize your own time system.
Last but not least, most operating systems only let non-kernel (user-mode) code act with a timer granularity of tens of milliseconds (on the order of 15-32 ms by default): you cannot expect something to happen within a smaller interval than that. The usual way to overcome it is a "native" DLL or similar component that works closer to the kernel's clock, and even that only buys you a certain reaction speed, depending on the system (hardware and software).
Read about Real-Time systems and the "server" you are talking about (Windows? Linux? Embedded software on a microchip? Something else?)
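As a side note on the timer-granularity point above: on Windows, ordinary user-mode code can at least ask for a finer timer resolution through the multimedia timer API before resorting to native/kernel components. A rough sketch, assuming a Windows target and linking against winmm.lib (this only tightens the granularity; it does not make the scheduler deterministic):

    /* Minimal Windows-only sketch: request ~1 ms timer resolution for this
     * process via the multimedia timer API (link with winmm.lib). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        if (timeBeginPeriod(1) == TIMERR_NOERROR) {  /* ask for 1 ms resolution */
            puts("timer resolution raised to ~1 ms");
            Sleep(5);                    /* Sleep() now wakes much closer to 5 ms */
            timeEndPeriod(1);            /* always pair with timeEndPeriod() */
        } else {
            puts("could not change the timer resolution");
        }
        return 0;
    }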

Related

how to prevent cpu usage from changing timing in labview?

I'm trying to write code in which, every 1 ms, a number is incremented by one and replaces the old number (something like a chronometer!).
The problem is that whenever the CPU usage increases because of other programs running on the PC, this 1 millisecond also increases and the timing in my program changes!
Is there any way to prevent CPU load changes from affecting the timing in my program?
It sounds as though you are trying to generate an analogue output waveform with a digital-to-analogue converter card using software timing, where your software is responsible for determining what value should be output at any given time and updating the output accordingly.
This is OK for stationary or low-speed signals but you are trying to do it at 1 ms intervals, in other words to output 1000 samples per second or 1 ks/s. You cannot do this reliably on a desktop operating system - there are too many other processes going on which can use CPU time and block your program from running for many milliseconds (or even seconds, e.g. for network access).
Here are a few ways you could solve this:
Use buffered, hardware-clocked output if your analogue output device supports it. Instead of writing one sample at a time, you send the device a waveform or array of samples and it outputs them at regular intervals using a timing signal generated in hardware. Unfortunately, low-end DAQ devices often don't support hardware-clocked output.
Instead of expecting the loop that writes your samples to the AO to run every millisecond, read LabVIEW's Tick Count (ms) value in the loop and use that as an index into your array of samples: rather than trying to output every sample, your code now asks "what time is it now, and therefore what should the output be?" (a rough sketch of this appears after this list). That won't give you a perfect signal out, but at least it should keep the correct frequency rather than be "slowed down"; instead you will see glitches imposed on the signal whenever the loop can't keep up. This is easy to test and may well be adequate for your needs.
Use a real-time operating system instead of a desktop OS. In the case of LabVIEW this would mean using the Real-Time software module and either a National Instruments hardware device that supports RT, such as the CompactRIO series, or installing the RT OS on a dedicated PC if the hardware is compatible. This is not a cheap option, obviously (unless it's strictly for personal, home use). In any case you would need to have an RT-compatible driver for your output device.
Use your computer's sound output as the output device. LabVIEW has functions for buffered sound output and you should be able to get reliable results. You'll need to upsample your signal to one of the sound output's available sample rates, probably 44.1 ks/s. The drawbacks are that the output level is limited in range and is not calibrated, and will probably be AC-coupled so you can't output a DC or very low-frequency signal. However if the level is OK for what you want to connect it to, or you can add suitable signal conditioning, this could be a neat solution. If you need the output level to be calibrated you could simultaneously measure it with your DAQ card and scale the sound waveform you're outputting to keep it correct.
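Here is the rough sketch of the "what time is it now, and therefore what should the output be?" idea from the second option above, written in C for illustration since the original is a LabVIEW diagram. now_ms() and write_analog_output() are hypothetical placeholders for a millisecond tick source and your single-sample AO write call:

    /* Index the output waveform by elapsed time instead of by loop iteration,
     * so a stalled loop causes a glitch but not a slower signal. */
    #include <stdint.h>

    #define N_SAMPLES 1000                     /* one waveform period at 1 kS/s */

    extern uint64_t now_ms(void);              /* e.g. LabVIEW's Tick Count (ms) */
    extern void write_analog_output(double v); /* your single-sample AO write    */

    static double waveform[N_SAMPLES];         /* filled in elsewhere */

    void output_loop(uint64_t start_ms, volatile int *stop)
    {
        while (!*stop) {
            uint64_t elapsed = now_ms() - start_ms;   /* "what time is it now?" */
            write_analog_output(waveform[elapsed % N_SAMPLES]);
        }
    }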
The answer to your question is "not on a desktop computer." This is why products like LabVIEW Real-Time and dedicated deterministic hardware exist: you need a computer built around dedication to a particular process in order to consistently serve that process. Every application in a regular Windows/Mac/Linux desktop system has the problem you are seeing of potentially being interrupted by other system processes, particularly in its UI layer.
There is no way to prevent CPU load changes from affecting the timing in your program unless the computer has real-time (deterministic) scheduling.
If it doesn't, there is no reason to expect it to behave deterministically. Do you actually need your program to run at exactly that pace?

Is it possible to change the guest wall clock speed in a virtualized environment?

We're undertaking a large project that is focused on delivering automated testing of the software that we produce.
We have a lot of "events" that trigger certain behavior at specific times. Ideally, we would be able to exercise these tests in an automated fashion without the need to move the system clock in intervals to specific points in time.
To that end, I'm wondering if there is a way (with VMWare, or any other virtualization software) to increase the speed of the system clock of the guest operating system. I'm not interested in measuring performance in these tests, only functionality.
Is there anything out there that would allow for this behavior?
This works for VirtualBox:
VBoxManage setextradata "VM name" "VBoxInternal/TM/WarpDrivePercentage" x
where x is the percentage you want (for instance, 200 doubles the guest clock speed and 50 halves it).
You can also find more information here, in the section "Accelerate or slow down the guest clock". Regards.
I was able to work around this using the Win32 API SetSystemTimeAdjustment()
This allows you to increase the amount of time added to the system clock for each OS tick interval. It's meant generally for addressing clock skew, but can be used outside of that particular context.
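For illustration, a hedged sketch of what that workaround can look like in C. Note that changing the adjustment affects the whole machine, requires the SeSystemtimePrivilege (typically an elevated process), and should be restored afterwards; the doubling factor below is just an example:

    /* Sketch: add extra time on every OS tick so the wall clock runs fast. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD adjustment = 0, increment = 0;
        BOOL  disabled   = FALSE;

        /* Current per-tick adjustment and tick length, in 100 ns units. */
        if (!GetSystemTimeAdjustment(&adjustment, &increment, &disabled)) {
            fprintf(stderr, "GetSystemTimeAdjustment failed: %lu\n", GetLastError());
            return 1;
        }

        /* Add twice the normal increment per tick -> clock runs roughly 2x. */
        if (!SetSystemTimeAdjustment(increment * 2, FALSE)) {
            fprintf(stderr, "SetSystemTimeAdjustment failed: %lu\n", GetLastError());
            return 1;
        }
        printf("tick length: %lu (100 ns units), adjustment now: %lu\n",
               increment, increment * 2);

        /* ... run the time-dependent tests here, then put things back ... */
        SetSystemTimeAdjustment(adjustment, disabled);
        return 0;
    }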
I don't see the benefit of testing this in a fast-forwarded VM instead of unit testing the event trigger with a mock implementation of the date/time dependency.
The only thing you "gain" by testing it in a fast-forwarded VM is that you also test the system's and the programming language's date/time implementation, which I think you are safe to trust because it has been used, developed and tested by so many people for so long.
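To make the mock-clock suggestion concrete, here is a minimal sketch in C (the names clock_fn, event_due and fake_now are made up for this example): the event logic asks an injected clock for "now", so a test can fast-forward time instantly instead of warping a VM clock.

    /* The event logic depends only on an injected clock, so a unit test can
     * jump time forward instantly. */
    #include <assert.h>
    #include <stdint.h>

    typedef int64_t (*clock_fn)(void);        /* returns "now", e.g. in seconds */

    /* Production code: fire when the injected clock reaches the trigger time. */
    static int event_due(int64_t trigger_at, clock_fn now)
    {
        return now() >= trigger_at;
    }

    /* --- unit test with a fake clock --- */
    static int64_t fake_time;
    static int64_t fake_now(void) { return fake_time; }

    int main(void)
    {
        fake_time = 100;
        assert(!event_due(200, fake_now));    /* not yet */

        fake_time = 250;                      /* "fast-forward" instantly */
        assert(event_due(200, fake_now));     /* fires, no waiting, no VM tricks */
        return 0;
    }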

How to synchronize host's and client's games in an online RTS game? (VB.NET)

So my project is an online RTS (real-time strategy) game (VB.NET) where, after a matchmaking server has matched two players, one player is assigned as host and the other as client in a socket communication (TCP).
My design is that server and client only record and send information about input, and the game takes care of all the animations/movements according to the information from the inputs.
This works well, except when it comes to having both the server's and client's games run at exactly the same speed. The operations in the two games seem to be handled at different speeds, regardless of whether I have both client and server on the same computer or use different computers as server/client. This is obviously crucial for the game to work.
Since the game at times has 300-500 units, I thought it would be too much to send an entire game state from server/client 10 times a second.
So my questions are:
How to synchronize the games while sending only inputs? (if possible)
What other designs are doable in this case, and how do they work?
(In VB.NET I use timers for operations, so that every 100 ms (the timer interval) a character moves and changes animation, and so on.)
Thanks in advance for any help on this issue, my project really depends on it!
Timers are not guaranteed to tick at exactly the set rate. The speed at which they tick can be affected by how busy the system and, in particular, your process are. However, while you cannot get a timer to tick at an accurate pace, you can calculate, exactly, how long it has been since some particular point in time, such as the point in time when you started the timer. The simplest way to measure the time, like that, is to use the Stopwatch class.
When you do it this way, that means that you don't have a smooth order of events where, for instance, something moves one pixel per timer tick, but rather, you have a path where you know something is moving from one place to another over a specified span of time and then, given your current exact time according to the stopwatch, you can determine the current position of that object along that path. Therefore, on a system which is running faster, the events would appear more smooth, but on a system which is running slower, the events would occur in a more jumpy fashion.
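To illustrate the idea, here is a small sketch in C using a monotonic clock; in VB.NET you would read Stopwatch.ElapsedMilliseconds instead, but the principle is the same: compute the position from elapsed time along a path rather than counting timer ticks.

    /* Position is derived from elapsed wall time, not from timer ticks, so a
     * busy machine shows jumpier animation but the same game-world state. */
    #include <stdio.h>
    #include <time.h>

    static double elapsed_ms(const struct timespec *start)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return (now.tv_sec  - start->tv_sec)  * 1000.0 +
               (now.tv_nsec - start->tv_nsec) / 1.0e6;
    }

    /* Where is a unit that left (x0,y0) for (x1,y1) at 'start', needing
     * 'duration_ms' for the whole trip? */
    static void unit_position(const struct timespec *start, double duration_ms,
                              double x0, double y0, double x1, double y1,
                              double *x, double *y)
    {
        double t = elapsed_ms(start) / duration_ms;
        if (t > 1.0) t = 1.0;                 /* clamp at the destination */
        *x = x0 + (x1 - x0) * t;
        *y = y0 + (y1 - y0) * t;
    }

    int main(void)
    {
        struct timespec start;
        clock_gettime(CLOCK_MONOTONIC, &start);

        double x, y;
        unit_position(&start, 2000.0, 0, 0, 100, 50, &x, &y);  /* a 2 s walk */
        printf("unit is at (%.1f, %.1f)\n", x, y);
        return 0;
    }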

Accurate way to detect rendering speed

I'm currently brainstorming an idea of mine that involves a P2P render farm, somewhat like renderfarm.fi, with the difference that you pay for the service and contributors to the processing pool get paid.
Currently render farms price work based on GHz/h, but when the rendering computers are untrusted, is there a good way to measure the equivalent GHz/h of a computer, considering the computers could be partially loaded with other programs slowing down the true time spent rendering, etc.?
Because your worker process can ask the OS counters how much execution time it has received, and that can be matched up with progress on the work package, you can pay out based on work units completed but still charge in GHz/h. You know you can't trust the user's clock (or anything else, for that matter), but you can verify the work units returned and approximate their computational complexity by comparing the execution-time counters reported by multiple peers.
You have no way to know for sure whether the system is particularly loaded, but you do know whether work went out and came back. However, you will have to verify that the work was done correctly, which probably means over-provisioning and running every render twice on two different machines to ensure nobody is inserting garbage results that are faster to compute.
Good luck. I don't know how you'll be able to beat the likes of Amazon, who charge roughly $0.10 per GHz/h.
The operating system can, and most likely will, measure the actual CPU time taken up by the process. That can be used as a measure of how much time the process itself has really spent running on the machine's CPU. CPU time isn't skewed in either direction by other processes running in the background, so it is well suited to this purpose.
CPU time is itself the resource such rendering services sell, so it's logical to measure it on a per-user/client basis and price the service according to the CPU time each user/client of the render farm consumes.
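As a concrete illustration, a process can ask the OS for its own CPU time like this (a POSIX sketch using getrusage(); on Windows, GetProcessTimes() plays the same role):

    /* Ask the OS how much CPU time this process has actually consumed;
     * background load on the machine does not inflate this number. */
    #include <stdio.h>
    #include <sys/resource.h>

    static double cpu_seconds_used(void)
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return (double)(ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) +
               (double)(ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1.0e6;
    }

    int main(void)
    {
        /* Burn a little CPU so there is something to measure. */
        volatile double x = 0;
        for (long i = 0; i < 50000000L; i++) x += i * 0.5;

        printf("CPU time consumed by this worker: %.3f s\n", cpu_seconds_used());
        return 0;
    }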

Testing Real Time Operating System for Hardness

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer, maybe:
Use the signal generator to generate a pulse at some varying frequency.
A random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Set the scope to persist/collect mode.
Get the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you; a LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the standard deviation of the response-time variation is the SOFTNESS.
(This won't really hold in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is not very hard: it does not meet deadlines well and is unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That is indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. If there are no outliers, keep repeating the experiment with faster traces to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, a delta of say 1 µs is okay, and you measure 0.5 µs, it's all cool.
Anyway, you can publish the results (probably even in the formal publishing sense, but certainly on the web).
Link from this Question to the paper when you've written it.
Hard real-time has more to do with how your software works than with the hardware on its own. When asking whether something is hard real-time, the question must be applied to the complete system (hardware, RTOS and application). In other words, hard versus soft real-time is a system design issue.
Under loading that exceeds the specification, even a hard real-time system will fail (hopefully with a proper failure indication), while a soft real-time system under low load can give hard real-time results. How much processing must happen on time, and how much pre/post-processing can be deferred, is the real key to hard/soft real-time.
In some real-time applications some data loss is not a failure; it just has to stay below a certain level. Again, that is a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data is going to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing your computational load increases and you now have a different hard real-time limit.
This board, running a bare-bones scheduler, will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and to timestamp the start and end of my cycle, or, if you run a full RTOS, to timestamp your acquisition and transition. Save your max time and keep a running average of the values over a second. If your average is around 50% of your available cycle time and your max is within 20% of your average, you are OK; if not, it is time to refactor your application. As your application grows, the cycle time will grow, and you can monitor the effect of all your software changes on it.
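A rough sketch of that cycle-timestamping scheme, with read_timer_us() standing in for whatever free-running hardware timer the board provides and the 50%/20% rule of thumb from above noted in the comments:

    /* Timestamp the start and end of each cycle with a free-running timer,
     * track the worst case, and report an average roughly once per second. */
    #include <stdint.h>
    #include <stdio.h>

    extern uint32_t read_timer_us(void);      /* free-running microsecond counter */

    static uint32_t max_cycle_us;
    static uint64_t sum_cycle_us;
    static uint32_t cycle_count;

    void run_one_cycle(void (*do_work)(void))
    {
        uint32_t start = read_timer_us();
        do_work();                                    /* acquisition + processing */
        uint32_t cycle = read_timer_us() - start;     /* unsigned math handles wrap */

        if (cycle > max_cycle_us) max_cycle_us = cycle;
        sum_cycle_us += cycle;
        cycle_count++;
    }

    /* Call about once per second from a low-priority/background task. */
    void report_and_reset(void)
    {
        uint32_t avg = cycle_count ? (uint32_t)(sum_cycle_us / cycle_count) : 0;
        printf("cycle time: avg %lu us, max %lu us\n",
               (unsigned long)avg, (unsigned long)max_cycle_us);
        /* Rule of thumb above: average near 50% of the budget,
         * max within ~20% of the average. */
        max_cycle_us = 0;
        sum_cycle_us = 0;
        cycle_count  = 0;
    }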
Another way is to use a hardware timer to generate a cyclical interrupt. If you are in time, reset the interrupt; if you miss the deadline, have the interrupt handler signal a failure. This only warns you once your application is taking too long, but because it relies on hardware and interrupts you can't miss it.
These solutions also remove the need to hook up a scope to monitor the output, since the timing information can be displayed on any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, and you end up fixing timing problems as soon as they are introduced rather than at the end.
Hope this helps
I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and measure phase shift and variance for each input type. Worst case is then used in specifications of the system.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
Now, that won't say anything about the OS being real-time: it could be running vanilla Linux or even Windows CE and still produce good, stable results in those tests if the hardware is fast enough.
So you need to simulate heavy and varying loads on the processor, memory and all the ports, let it heat up and eat memory for a few hours, and then repeat the tests. If the latency stays constant, it's hard real-time. If it varies but never rises above an acceptable limit under any load or input signal type, it's soft. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if the hardware is fast enough.