How does the function time() tell the current time, even when the computer has been powered off earlier? - hardware

A follow-up: how can a timer work in milliseconds (0.001 s)? How can we divide a second as finely as we want, and how does the computer keep track of the second itself?

http://computer.howstuffworks.com/question319.htm
In your computer (as well as other gadgets), the battery powers a chip called the Real Time Clock (RTC) chip. The RTC is essentially a quartz watch that runs all the time, whether or not the computer has power. The battery powers this clock. When the computer boots up, part of the process is to query the RTC to get the correct time and date. A little quartz clock like this might run for five to seven years off of a small battery. Then it is time to replace the battery.

Your PC will have a hardware clock, powered by a battery so that it keeps ticking even while the computer is switched off. The PC knows how fast its clock runs, so it can determine when a second goes by.
Initially, the PC doesn't know what time it is (i.e. it just starts counting from zero), so it must be told what the current time is - this can be set in the BIOS settings and is stored in the CMOS, or can be obtained via the Internet (e.g. by synchronizing with the clocks at NIST).

Some recap, and some more info:
1) The computer reads the Real-Time Clock during boot-up, and uses that to set its internal clock.
2) From then on, the computer uses its CPU clock only - it does not normally re-read the RTC.
3) The computer's internal clock is subject to drift - due to thermal instability, power fluctuations, inaccuracies in finding an exact divisor for seconds, interrupt latency, cosmic rays, and the phase of the moon.
4) The magnitude of the clock drift could be in the order of seconds per day (tens or hundreds of seconds per month).
5) Most computers are capable of connecting to a time server (over the internet) to periodically reset their clock.
6) Using a time server can increase the accuracy to within tens of milliseconds (normally). My computer updates every 15 minutes.
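To put point 4 in perspective, clock drift is usually quoted in parts per million (ppm). A quick sketch in Python, with illustrative numbers rather than measurements, converting a drift in seconds over an interval into ppm:

```python
# Convert an observed clock drift into parts per million (ppm).
# The numbers below are illustrative, not measurements.

SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def drift_ppm(drift_seconds, interval_seconds):
    """Drift rate expressed in parts per million."""
    return drift_seconds / interval_seconds * 1e6

# A clock that loses 1 second per day is off by about 11.6 ppm:
print(round(drift_ppm(1, SECONDS_PER_DAY), 1))        # 11.6

# Ten seconds per month (point 4) is a similar order of magnitude:
print(round(drift_ppm(10, 30 * SECONDS_PER_DAY), 1))  # 3.9
```

Consumer-grade crystal oscillators are typically specified in the tens of ppm, which is consistent with the "seconds per day" figure above.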

Computers know the time because, like you, they have a digital watch they look at from time to time.
When you get a new computer or move to a new country you can set that watch, or your computer can ask the internet what the time is, which helps to stop it from running slow or fast.
As a user of the computer, you can ask the current time, or you can ask the computer to act as an alarm clock. Some computers can even turn themselves on at a particular time, to back themselves up, or wake you up with a favourite tune.
Internally, the computer is able to tell the time in milliseconds, microseconds or sometimes even nanoseconds. However, this is not entirely accurate, and two computers next to each other would have different ideas about the time in nanoseconds. But it can still be useful.
The computer can set an alarm for a few milliseconds in the future, and commonly does this so it knows when to stop thinking about your e-mail program and spend some time thinking about your web browser. Then it sets another alarm so it knows to go back to your e-mail a few milliseconds later.
As a programmer you can use this facility too; for example, you could set a time limit on a level in a game using a 'timer'. Or you could use a timer to tell when to put the next frame of an animation on the display - perhaps 25 times a second (i.e. every 40 milliseconds).
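That 40 ms frame timer can be sketched like this (Python used for illustration; the idea is the same with any timer API). Scheduling each frame against a monotonic clock, instead of simply sleeping a fixed 40 ms between frames, keeps small sleep overruns from accumulating into drift:

```python
import time

FRAME_INTERVAL = 0.040  # 40 ms, i.e. 25 frames per second

def run_frames(num_frames, render, now=time.monotonic, sleep=time.sleep):
    """Call render() num_frames times, once every FRAME_INTERVAL seconds.

    Each deadline is computed from the start time, so sleep overruns
    on one frame don't push every later frame back.
    """
    start = now()
    for frame in range(num_frames):
        render(frame)
        deadline = start + (frame + 1) * FRAME_INTERVAL
        delay = deadline - now()
        if delay > 0:
            sleep(delay)

frames = []
run_frames(5, frames.append)
print(frames)  # [0, 1, 2, 3, 4]
```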

To answer the main question, the BIOS clock has a battery on your motherboard, like Jian's answer says. That keeps time when the machine is off.
To answer what I think your second question is, you can get the second from the millisecond value by doing an integer division by 1000, like so:
second = (int) (milliseconds / 1000);
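The same integer division works in any language; in Python, divmod gives both the whole seconds and the leftover milliseconds in one step:

```python
# Split a millisecond count into whole seconds and remaining milliseconds.
milliseconds = 3750
seconds, remainder_ms = divmod(milliseconds, 1000)
print(seconds, remainder_ms)  # 3 750
```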
If you're asking how we're able to get the time with that accuracy, look at Esteban's answer... the quartz crystal vibrates at a certain time period, say 0.00001 seconds. We just make a circuit that counts the vibrations. When we have reached 100000 vibrations, we declare that a second has passed and update the clock.
We can get any accuracy by counting the vibrations this way - any accuracy no finer than the period of vibration of the crystal we're using.
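That counting circuit is easy to model. A sketch using the hypothetical numbers above: with a 100 kHz crystal (period 0.00001 s), a counter that increments on every vibration reads out elapsed time with 10 µs resolution:

```python
CRYSTAL_HZ = 100_000  # hypothetical 100 kHz crystal, period 0.00001 s

def ticks_to_seconds(ticks, frequency_hz=CRYSTAL_HZ):
    """Convert a count of crystal vibrations into elapsed seconds."""
    return ticks / frequency_hz

print(ticks_to_seconds(100_000))  # 1.0    -> one full second has passed
print(ticks_to_seconds(50))       # 0.0005 -> resolution limited by the period
```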

The motherboard has a clock that ticks. Every tick represents a unit of time.
To be more precise, the clock is usually a quartz crystal that oscillates at a given frequency; some common CPU clock frequencies are 33.33 and 40 MHz.

Absolute time is traditionally measured using a 32-bit counter of seconds since 1970. This can cause the "2038 problem," where the counter simply overflows. Hence the 64-bit time APIs used on modern Windows and Unix platforms (including BSD-based macOS).
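The 2038 overflow date is easy to verify: a signed 32-bit counter of seconds tops out at 2^31 - 1 seconds after the 1970 epoch. In Python:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Largest value a signed 32-bit seconds counter can hold:
max_seconds = 2**31 - 1  # 2147483647
overflow_moment = EPOCH + timedelta(seconds=max_seconds)
print(overflow_moment)  # 2038-01-19 03:14:07+00:00
```

One second after that moment, a 32-bit time_t wraps to a negative value, which naive code interprets as a date in 1901.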
Quite often a PC user is interested in time intervals rather than the absolute time since some reference event. A common computer implementation has devices called timers that allow just that. These timers may even run while the PC is powered down, for purposes such as polling hardware for wake-up status, switching sleep modes, or coming out of sleep. Intel's processor documentation goes into incredible detail about these.

Related

ALSA Sample Rate drift vs monotonic clock

I have an application that samples audio at 8Khz using ALSA. This is set via snd_pcm_hw_params() and can be confirmed by looking at /proc:
cat /proc/asound/card1/pcm0c/sub0/hw_params
access: MMAP_INTERLEAVED
format: S32_LE
subformat: STD
channels: 12
rate: 8000 (8000/1)
period_size: 400
buffer_size: 1200
The count of samples read over time is effectively a monotonic clock.
If I compare the number of samples read with the system monotonic clock I note there is a drift over time. The sample clock appears to lose 1s roughly every 5 hours relative to the monotonic clock.
I have code to compensate for this at the application level (i.e. to correctly map sample counts to wall clock times) but I am wondering if we can or why we can't do better at a lower level?
Both clocks are based on oscillators of some kind, which may have some small error. So we are likely sampling at, say, 7999.5 Hz rather than 8000 Hz, and the error builds up over time. Equally, the system clock may have some small error in it.
The system clock is corrected periodically by NTP, which perhaps permits more error, but even so this deviation seems much larger than I would intuitively expect.
However, see for example http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
In theory NTP can generate a drift file which you could use to see the drift rate of your system clock.
I would have thought that, knowing there is some small error, something would try to autocorrect - either by alternating between two sample rates that are wrong in opposite directions (e.g. 8000.5 Hz and 7999.5 Hz) or by dropping the occasional sample. In fact I thought this kind of thing was done at the hardware or firmware level in order to stabilize the average frequency given a crystal with a known error.
Also, I would have thought quartz crystals are put in circuits these days with at least temperature compensation.
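The drift described above can be quantified directly from the two counters. A sketch in Python with the question's illustrative numbers: losing 1 s of audio every 5 hours corresponds to an error of roughly 56 ppm, i.e. an effective rate of about 7999.6 Hz instead of 8000 Hz:

```python
NOMINAL_RATE_HZ = 8000.0

def effective_rate(samples_read, monotonic_elapsed_s):
    """Actual sample rate implied by the monotonic clock."""
    return samples_read / monotonic_elapsed_s

# Illustrative: after 5 hours the sample clock is 1 second behind,
# i.e. we received one second's worth (8000) fewer samples than nominal.
elapsed = 5 * 3600.0
samples = NOMINAL_RATE_HZ * (elapsed - 1.0)

rate = effective_rate(samples, elapsed)
ppm = (NOMINAL_RATE_HZ - rate) / NOMINAL_RATE_HZ * 1e6
print(round(rate, 1), round(ppm, 1))  # 7999.6 55.6
```

An application-level resampler can use this measured ratio to map sample indices onto monotonic time, which is essentially the compensation the question describes.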

Is it possible to synchronize two computers at better than 1 ms accuracy using any internet protocols?

Say I have two computers - one located in Los Angeles and another located in Boston. Are there any known protocols or linux commands that could synchronize both of those computer clocks to better than 1 ms and NOT use GPS at all? I am under the impression the answer is no (What's the best way to synchronize times to millisecond accuracy AND precision between machines?)
Alternatively, are there any standard protocols (like NTP) in which the relative time difference could be known accurately between these two computers even if the absolutely time synchronization is off?
I am just wondering if there are any free or inexpensive ways to get better than 1 ms time accuracy without having to resort to GPS.
I don't know of any known protocol (perhaps there is) but can offer a method similar to the way scientists measure speed near that of light:
Have a process on both servers "ping" the other server and wait for a response, timing how long the response took. Then start pinging periodically, exactly when you expect the last ping to come in. By averaging (and discarding any outlying samples) the two servers will, after a while, be "thumping away" at the same rhythm. Each can also tell, to very high accuracy, how much time passes between "beats", by dividing a long span of time by the number of beats it contains.
Once the "rhythm" is established, if you know that one server's time is correct, or you want to use its time as the base, then you know what time it is when your server's signal reaches the other server; together with its response, the other server sends you what time it has. You can then use that time to synchronize your own clock.
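This is essentially how NTP estimates a remote clock's offset: timestamp the request send and the response arrival locally, have the remote report its own clock reading, and assume the network delay is symmetric. A minimal sketch with simulated timestamps (all names and numbers are hypothetical):

```python
def estimate_offset(t_send, t_remote, t_recv):
    """Estimate the remote clock's offset from ours, assuming the
    network delay is symmetric.

    t_send:   local time the request was sent
    t_remote: remote clock reading when it handled the request
    t_recv:   local time the response arrived
    """
    # Under symmetric delay, the remote read its clock at the local
    # midpoint of the round trip.
    midpoint = (t_send + t_recv) / 2.0
    return t_remote - midpoint

# Simulate: remote clock runs 0.250 s ahead, one-way delay 0.040 s.
t_send = 100.000
t_remote = (t_send + 0.040) + 0.250  # remote reads its clock mid-flight
t_recv = t_send + 0.080              # response arrives after the round trip
print(round(estimate_offset(t_send, t_remote, t_recv), 3))  # 0.25
```

The symmetric-delay assumption is exactly why sub-millisecond accuracy over the public internet is hard: any asymmetry in the two path delays appears directly as offset error.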
Last but not least, most operating systems only give non-kernel code about 32 milliseconds of timing accuracy; that is, you cannot expect something to happen within fewer milliseconds than that. The only way to overcome that is a "native" DLL that can react and run closer to the clock, and even that only gives you a certain speed of reaction, depending on the system (hardware and software).
Read about real-time systems, and consider what kind of "server" you are talking about (Windows? Linux? Embedded software on a microchip? Something else?).

How to synchronize host's and client's games in an online RTS game? (VB.NET)

So my project is an online RTS (real-time strategy) game in VB.NET where - after a matchmaking server matches two players - one player is assigned as host and the other as client in a socket communication (TCP).
My design is that server and client only record and send information about input, and the game takes care of all the animations/movements according to the information from the inputs.
This works well, except when it comes to having both the server's and the client's games run at exactly the same speed. It seems like the operations in the games are handled at different speeds, regardless of whether I have both client and server on the same computer or on different computers. This is obviously crucial for the game to work.
Since the game at times has 300-500 units, I thought it would be too much to send an entire gamestate between server and client 10 times a second.
So my questions are:
How to synchronize the games while sending only inputs? (if possible)
What other designs are doable in this case, and how do they work?
(In VB.NET I use timers for operations, such that every 100 ms (the timer interval) a character moves and changes animation, and so on.)
Thanks in advance for any help on this issue, my project really depends on it!
Timers are not guaranteed to tick at exactly the set rate. The speed at which they tick can be affected by how busy the system and, in particular, your process are. However, while you cannot get a timer to tick at an accurate pace, you can calculate, exactly, how long it has been since some particular point in time, such as the point in time when you started the timer. The simplest way to measure the time, like that, is to use the Stopwatch class.
When you do it this way, that means that you don't have a smooth order of events where, for instance, something moves one pixel per timer tick, but rather, you have a path where you know something is moving from one place to another over a specified span of time and then, given your current exact time according to the stopwatch, you can determine the current position of that object along that path. Therefore, on a system which is running faster, the events would appear more smooth, but on a system which is running slower, the events would occur in a more jumpy fashion.
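The question is about VB.NET, but the path-interpolation idea is language-independent; here is a sketch in Python with hypothetical names. Instead of moving one step per timer tick, compute the position from elapsed time, so machines with fast and slow timers agree on where a unit is:

```python
def position_at(start_pos, end_pos, duration_s, elapsed_s):
    """Position along a straight path after elapsed_s seconds.

    Deterministic in elapsed time, so fast and slow machines agree on
    where the object is; only the rendering smoothness differs.
    """
    t = min(max(elapsed_s / duration_s, 0.0), 1.0)  # clamp to [0, 1]
    return tuple(a + (b - a) * t for a, b in zip(start_pos, end_pos))

# A unit walking from (0, 0) to (100, 50) over 4 seconds:
print(position_at((0, 0), (100, 50), 4.0, 2.0))  # (50.0, 25.0)
print(position_at((0, 0), (100, 50), 4.0, 9.0))  # clamped at (100.0, 50.0)
```

In the real game, elapsed_s would come from something like the Stopwatch class mentioned above, measured since the move command was issued.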

iPhone hardware & user timers getting out of sync over time

I'm trying to detect if the user advances their clock while an app is running. I'm currently doing this by comparing how two timers change: [NSDate timeIntervalSinceReferenceDate] and mach_absolute_time.
The basic algorithm is this:
At the start of the app, save startUserClock (timeIntervalSinceReferenceDate) and startSystemClock (mach_absolute_time converted to seconds)
Periodically diff the current values of the timers against their respective start values.
If the diffs are different (within some margin of error), we know the timers are out of sync, which indicates a clock change - theoretically the only time this should be possible is if the user has modified their clock.
However, it seems that mach_absolute_time is growing at a slightly faster rate than timeIntervalSinceReferenceDate. In the short term, this isn't a huge issue, but over time the difference adds up and we start seeing a lot of false positives.
This problem appears to be hardware dependent. I don't see it at all on the iPad 1 I have, but a colleague sees it on his iPad 2 and I see it in the simulator.
I've confirmed that there isn't a problem with my conversion of mach_absolute_time to seconds by replacing it with CACurrentMediaTime (which uses mach_absolute_time under the hood). I've tried changing timeIntervalSinceReferenceDate to other timing methods (ie: CFAbsoluteTimeGetCurrent).
Is there something wrong with my base assumption that the timers should grow at the same rate? My thinking was that unless something is fundamentally wrong they shouldn't get so out of sync - they're both determining time, just starting from different points.
Is there a better way to do this? I need a completely offline solution - we can't assume an internet connection.
You could try using gettimeofday to get the value of the current system time. This returns the time since the epoch.
In general, timers aren't guaranteed to be in synch. Perhaps they're derived from different sources. Perhaps one is synchronized against a time server.
You only care about sudden changes in the difference between the clocks, not large drifts over time. So compute the rate of change of the difference between the clocks:
T_Diff := initial difference between clocks
T_Time := time as determined by one clock
Repeat forever:
    T_Diff' := new difference between clocks
    T_Time' := new time as determined by one clock
    # Check if the time changed
    if abs((T_Diff' - T_Diff) / (T_Time' - T_Time)) > threshold:
        time_changed()
    # Update state
    T_Diff := T_Diff'
    T_Time := T_Time'
    wait_for_next_update()
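The pseudocode above translates almost directly into Python. A sketch with simulated clock readings (the threshold and sample values are illustrative):

```python
def detect_clock_change(readings, threshold=0.05):
    """Given (monotonic, wall) clock reading pairs, return the indices
    where the difference between the two clocks changes faster than
    `threshold` (seconds of difference per second of elapsed time)."""
    changes = []
    t_time, wall = readings[0]
    t_diff = wall - t_time
    for i, (t_time2, wall2) in enumerate(readings[1:], start=1):
        t_diff2 = wall2 - t_time2
        rate = abs((t_diff2 - t_diff) / (t_time2 - t_time))
        if rate > threshold:
            changes.append(i)
        t_diff, t_time = t_diff2, t_time2
    return changes

# The monotonic clock ticks steadily; the wall clock jumps forward one
# hour at the third sample (e.g. the user advanced the device clock).
samples = [(0, 1000.0), (10, 1010.0), (20, 4620.0), (30, 4630.0)]
print(detect_clock_change(samples))  # [2]
```

Because the check uses the *rate* of change of the difference, slow relative drift between the clocks stays under the threshold while a sudden user-initiated jump trips it.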

iOS: recreating countdown timers after clock change?

I'm working on an iOS app that uses an NSTimer for a countdown. This is prone to user tampering: if, for example, the user switches out of the app, closes the app manually, changes the device clock, and comes back in, the timer will have to be recreated. Another scenario: the user locks the device, it goes into low-power mode (which requires timers to be recreated), and the clock auto-sets before the game is opened again. If that happens, I won't have an accurate way of determining how much time has passed since the app was closed, since the device clock has changed.
Tl;dr: countdown timers sometimes have to be recreated after a device clock change. How is this problem usually handled?
Any time you're relying on the system clock for accurate timing you're going to have troubles, even if the user isn't deliberately tampering with the clock. Typically clock drift is corrected by slightly increasing or decreasing the length of a second to allow the clock to drift back into alignment over a period of minutes. If you need accurate timing, you can either use something like mach_absolute_time() which is related to the system uptime rather than the system clock, or you can use Grand Central Dispatch. The dispatch_after() function takes a dispatch_time_t which can either be expressed using wall time (e.g. system clock) or as an offset against DISPATCH_TIME_NOW (which ignores wall clock).
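The same trick works in any environment: anchor the deadline to a monotonic clock (Python's time.monotonic here, analogous to mach_absolute_time), so wall-clock changes can't tamper with the countdown. A sketch with a fake injectable clock for demonstration:

```python
import time

class Countdown:
    """Countdown anchored to a monotonic clock, which changes to the
    wall clock cannot affect."""

    def __init__(self, duration_s, now=time.monotonic):
        self._now = now
        self._deadline = now() + duration_s

    def remaining(self):
        return max(self._deadline - self._now(), 0.0)

    def expired(self):
        return self.remaining() == 0.0

# Demonstrate with a fake clock instead of real elapsed time:
fake_time = [100.0]
timer = Countdown(30.0, now=lambda: fake_time[0])
fake_time[0] = 125.0
print(round(timer.remaining(), 1))  # 5.0
fake_time[0] = 200.0
print(timer.expired())  # True
```

Note that monotonic time resets across reboots, which is exactly the complication the question raises for iOS: persisting a deadline across app termination requires some trusted time source.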
For future reference, in regard to different systems of timekeeping in OSX (and consequently iOS):
One way to measure the speed of any operation, including launch times, is to use system routines to get the current time at the beginning and end of the operation. Once you have the two time values, you can take the difference and log the results.
The advantage of this technique is that it lets you measure the duration of specific blocks of code. Mac OS X includes several different ways to get the current time:
- mach_absolute_time reads the CPU time base register and is the basis for other time measurement functions.
- The Core Services UpTime function provides nanosecond resolution for time measurements.
- The BSD gettimeofday function (declared in <sys/time.h>) provides microsecond resolution. (Note, this function incurs some overhead but is still accurate for most uses.)
- In Cocoa, you can create an NSDate object with the current time at the beginning of the operation and then use the timeIntervalSinceDate: method to get the time difference.
Source: Launch Time Performance Guidelines, "Gathering Launch Time Metrics".