I have been looking for an answer to this one but no clear documentation yet.
The clock_gettime man page says that CLOCK_MONOTONIC is affected by the incremental adjustments performed by adjtime and NTP:
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time since some unspecified
starting point. This clock is not affected by discontinuous jumps in the system
time (e.g., if the system administrator manually changes the clock),
but is affected by the incremental adjustments performed by adjtime(3) and NTP.
What is not clear to me is whether it is affected by all sorts of adjustments made by NTP, or only the small incremental ones.
Say NTP makes a big time jump because the system clock was way off: will CLOCK_MONOTONIC reflect that?
I am not sure which system calls NTP makes to adjust the time on my CentOS system.
A quick test showed no change in the monotonic clock output even though NTP made the system clock jump by 10 hours.
Even making the time jump with a clock_settime call didn't affect CLOCK_MONOTONIC.
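For reference, a minimal sketch of that kind of test (assuming root/CAP_SYS_TIME and a machine where it's acceptable to step the realtime clock): sample CLOCK_MONOTONIC before and after stepping CLOCK_REALTIME forward, and the monotonic delta should stay near zero.

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec mono_before, mono_after, real;

    clock_gettime(CLOCK_MONOTONIC, &mono_before);

    /* Step the realtime clock forward by one hour (needs CAP_SYS_TIME). */
    clock_gettime(CLOCK_REALTIME, &real);
    real.tv_sec += 3600;
    if (clock_settime(CLOCK_REALTIME, &real) != 0) {
        perror("clock_settime");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &mono_after);

    /* Only the few microseconds the calls themselves took should show up. */
    printf("monotonic delta: %ld s\n",
           (long)(mono_after.tv_sec - mono_before.tv_sec));
    return 0;
}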
Related
Running a BeagleBone Black with a Debian distro, and I need to use ntp to set time since timesyncd causes jumps in system time. The system clock is slow enough so that ntp errors out after a few seconds. The system clock is slow by about 3.18 seconds every minute - WAY beyond the 500ppm that ntp can deal with.
The ntptime man page hints at a -f option to set a frequency offset, but the effect seems to be limited to the +/- 500ppm range.
Is there a way to adjust the system clock tick so that it's close enough for ntp to work?
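One possibility - a sketch I have not verified on a BeagleBone, so treat the numbers as assumptions - is to coarsely change the kernel's tick length with adjtimex(2) and ADJ_TICK, which allows corrections of roughly +/-10%, far beyond the +/-500 ppm frequency offset, and then let ntp trim the residual drift. Losing about 3.18 s per minute is roughly a 5.3% slow clock, so the nominal 10000 µs tick would need to be lengthened to somewhere around 10530 µs:

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { 0 };

    /* Lengthen each kernel tick so the slow oscillator still adds up to real
     * seconds: nominal 10000 us/tick * (1 + 3.18/60) is about 10530 us.
     * Requires root; the kernel rejects values outside roughly +/-10%. */
    tx.modes = ADJ_TICK;
    tx.tick  = 10530;

    if (adjtimex(&tx) == -1) {
        perror("adjtimex");
        return 1;
    }
    printf("tick is now %ld\n", tx.tick);
    return 0;
}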
What is the mechanism used to account for CPU time, including time spent in-kernel (sys in the output of top)?
I'm thinking about limitations here because I remember reading that processes can avoid having their CPU usage show up if they yield before completing their time slice.
Context
Specifically, I'm working on some existing code in KVM virtualization.
/* Busy-wait until the guest's TSC reaches the programmed deadline. */
if (guest_tsc < tsc_deadline)
    __delay(tsc_deadline - guest_tsc);
The code is called with interrupts disabled. I want to know if Linux will correctly account for long busy-waits with interrupts disabled.
If it does, it would help me worry less about certain edge-case configurations which might cause long, but bounded, busy-waits. System administrators could at least notice if it was bad enough to degrade throughput (though not necessarily latency), and identify the specific process responsible (in this case QEMU, and the process ID would allow identifying the specific virtual machine).
In Linux 4.6, I believe process times are still accounted by sampling in the timer interrupt.
/*
* Called from the timer interrupt handler to charge one tick to current
* process. user_tick is 1 if the tick is user time, 0 for system.
*/
void update_process_times(int user_tick)
So it may indeed be possible for a process to game this approximation.
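As a contrived illustration of that gaming (not taken from the kernel sources or any real exploit, and whether it actually works depends on hrtimers, NO_HZ and the exact kernel version): a process that wakes up just after a tick, burns CPU for less than one tick, and sleeps again is rarely the running task when the timer interrupt samples, so tick-based accounting attributes almost none of its CPU time to it.

#include <time.h>

int main(void)
{
    volatile unsigned long sink = 0;
    struct timespec wake, now, nap = { 0, 1000000 };   /* 1 ms */

    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &wake);
        do {                                           /* ~8 ms of busy work,    */
            sink++;                                    /* less than a 10 ms tick */
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - wake.tv_sec) * 1000000000L
                 + (now.tv_nsec - wake.tv_nsec) < 8000000L);
        nanosleep(&nap, NULL);                         /* yield before the tick */
    }
}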
In answer to my specific query, it looks like CPU time spent with interrupts disabled will not be accounted to the specific process :(.
I'm working on an iOS app that uses an NSTimer for a countdown. This is prone to user tampering: if, for example, the user switches out of the app, closes the app manually, changes the device clock, and comes back in, the timer will have to be recreated. Another scenario: the user locks the device, it goes into low-power mode (which requires timers to be recreated), and the clock auto-sets before the game is opened again. If that happens, I won't have an accurate way of determining how much time has passed since the app was closed, since the device clock has changed.
Tl;dr: countdown timers sometimes have to be recreated after a device clock change. How is this problem usually handled?
Any time you're relying on the system clock for accurate timing you're going to have trouble, even if the user isn't deliberately tampering with the clock. Typically clock drift is corrected by slightly increasing or decreasing the length of a second to allow the clock to drift back into alignment over a period of minutes. If you need accurate timing, you can either use something like mach_absolute_time(), which is related to the system uptime rather than the system clock, or you can use Grand Central Dispatch. The dispatch_after() function takes a dispatch_time_t, which can be expressed either using wall time (i.e. the system clock) or as an offset against DISPATCH_TIME_NOW (which ignores the wall clock).
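A minimal sketch of the mach_absolute_time() approach (plain C, callable from Objective-C or Swift via bridging; the conversion through mach_timebase_info is needed because the raw counter is in CPU-dependent units, and this only illustrates uptime-based elapsed-time measurement, not the questioner's full countdown logic):

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);                /* numer/denom convert ticks to ns */

    uint64_t start = mach_absolute_time();
    /* ... do work, or stash 'start' and compare against it later ... */
    uint64_t end = mach_absolute_time();

    uint64_t elapsed_ns = (end - start) * tb.numer / tb.denom;
    printf("elapsed: %llu ns\n", (unsigned long long)elapsed_ns);
    return 0;
}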
For future reference, in regard to different systems of timekeeping in OSX (and consequently iOS):
One way to measure the speed of any operation, including launch times, is to use system routines to get the current time at the beginning and end of the operation. Once you have the two time values, you can take the difference and log the results.
The advantage of this technique is that it lets you measure the duration of specific blocks of code. Mac OS X includes several different ways to get the current time:
mach_absolute_time reads the CPU time base register and is the basis for other time measurement functions.
The Core Services UpTime function provides nanosecond resolution for time measurements.
The BSD gettimeofday function (declared in <sys/time.h>) provides microsecond resolution. (Note, this function incurs some overhead but is still accurate for most uses.)
In Cocoa, you can create an NSDate object with the current time at the beginning of the operation and then use the timeIntervalSinceDate: method to get the time difference.
Source: Launch Time Performance Guidelines, "Gathering Launch Time Metrics".
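As a concrete example of the gettimeofday variant from that excerpt (the timed operation here is just a placeholder):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* ... the operation being measured goes here ... */
    gettimeofday(&end, NULL);

    long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
                    + (end.tv_usec - start.tv_usec);
    printf("elapsed: %ld us\n", elapsed_us);
    return 0;
}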
Can anyone tell me whether we should enable or disable the watchdog while the startup/boot code executes? My friend told me that we usually disable the watchdog in the boot code. Can anyone tell me what the advantages or disadvantages of doing so are?
It really depends on your project. The watchdog is there to help you ensure that your program won't get "stuck" while executing code. -- If there is a chance that your program may hang during the boot-procedure, it may make sense to incorporate the watchdog there too.
That being said, I generally start the watchdog at the end of my boot-up procedures.
Usually the WD (watchdog) is enabled after the boot-up procedure, because this is when the program enters its "loop" and periodically kicks the WD. During boot-up, by which I suppose you mean linear initialization of hardware and peripherals, there's much less periodicity in your code and it is hard to insert a WD kicking cycle.
Production code should always enable the watchdog. Hobby and/or prototype projects are obviously a special case that may not require the watchdog.
If the watchdog is enabled during boot, there is a special case which must be considered. Erasing and writing memory take a long time (erasing an entire device may take seconds to complete). So you must ensure that your erase and write routines periodically service the watchdog to prevent a reset.
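A sketch of that pattern; wdt_kick(), flash_erase_sector() and NUM_SECTORS are hypothetical placeholders for whatever your part's HAL actually provides:

/* Hypothetical HAL hooks - stand-ins for your MCU's real flash/WDT API. */
#define NUM_SECTORS 128                   /* placeholder sector count */
extern void wdt_kick(void);
extern void flash_erase_sector(unsigned sector);

/* Erase the device sector by sector, servicing the watchdog between sectors
 * so a multi-second bulk erase doesn't trip a reset. */
void erase_all_sectors(void)
{
    for (unsigned s = 0; s < NUM_SECTORS; s++) {
        wdt_kick();                       /* each single-sector erase finishes
                                             well inside the watchdog timeout */
        flash_erase_sector(s);
    }
}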
If you're debugging, you want it off, or the device will reboot on you when you try to step through code. Otherwise it's up to you. I've seen watchdogs save projects' butts, and I've seen watchdogs lead to inadvertent reboot loops that cause customers to clog up the support lines and thus cost the company a ton.
You make the call.
The best practice would be to have the watchdog activate automatically on power-up. If your hardware is not designed for that, then switch it on as soon as possible. Generally I set the watchdog up for a long duration during boot-up, but once I am past boot-up I go for a short timeout and service the watchdog regularly.
You might not always be around to reset a board that hung after a plant shutdown and restart at a remote location. Or the board is located in an inaccessible basement crawl space and it did not restart after a power dip. Easy lab practice is not real-world best practice.
Try and design your hardware so that your software can check the reset cause at boot up and report. If you get a watchdog timeout you need to know because it is a failure in your system and ignoring it can cause problems later.
It is easier to debug with the watchdog off but during development regularly test with the watchdog on to ensure everything is on track.
I always have it enabled. What is the advantage of disabling it? So what if I have to reset it during the bootup code?
Watchdogs IMHO serve two related, but distinct, primary purposes, along with a third, less-strongly-related purpose: (1) Ensure that in all cases where the system is knocked out of whack, it will recover, eventually; (2) Ensure that when hardware is enabled which must not go too long without service, anything that would prevent such servicing shuts down the system, reasonably quickly; (3) Provide a means by which a system can go to sleep for a while, without sleeping forever.
While disabling a watchdog during a boot loader may not interfere with purpose #2, it may interfere with purpose #1. My preference is to leave watchdogs enabled during a boot loader, and have the boot loader hit the watchdog any time something happens to indicate that the system is really supposed to be in the boot loader (e.g. every time it receives a valid boot-loader-command packet). On one project where I didn't do this, and just had the boot loader blindly feed the watchdog, static zaps could sometimes knock units into bootloader mode where they would sit, forever. Having the watchdog kick the system out of the boot loader when no actual boot-loading is going on alleviates that problem.
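A sketch of that boot-loader pattern, again with hypothetical names (packet_t, get_packet(), packet_is_valid_bootloader_cmd(), handle_bootloader_cmd() and wdt_kick() stand in for a real protocol and HAL):

typedef struct { unsigned char data[64]; unsigned len; } packet_t;  /* placeholder */

extern int  get_packet(packet_t *pkt);                        /* hypothetical HAL */
extern int  packet_is_valid_bootloader_cmd(const packet_t *pkt);
extern void handle_bootloader_cmd(const packet_t *pkt);
extern void wdt_kick(void);

void bootloader_main(void)
{
    for (;;) {
        packet_t pkt;
        if (get_packet(&pkt) && packet_is_valid_bootloader_cmd(&pkt)) {
            wdt_kick();              /* only valid boot-loader traffic feeds the WD */
            handle_bootloader_cmd(&pkt);
        }
        /* No valid traffic: the watchdog eventually resets us out of here. */
    }
}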
Incidentally, if I were designing my 'ideal' embedded-watchdog circuit, I would have a hardware-configurable parameter for maximum watchdog time, and would have software settings for 'requested watchdog time' and 'maximum watchdog time'. Initially, both software settings would be set to maximum; any time the watchdog is fed, the time would be set to the minimum of the three settings. Software could change the 'requested watchdog time' any time, to any value; the 'maximum watchdog time' setting could be decreased at any time, but could only be increased via system reset.
BTW, I might also include a "periodic reset" timer, which would force the system to unconditionally reset at some interval. Software would not be able to override the behavior of this timer, but would be able to query it and request a reset early. Even systems which try to do everything right with a watchdog can still fall into states which are 'broken' but where the watchdog gets fed just fine. If periodic scheduled downtime is acceptable, periodic resets can avoid such issues. One may minimize the effect of such resets on system usefulness by performing them early whenever doing so wouldn't disrupt some action in progress. For example, if the reset interval is set to seven hours one could, any time the clock got down to one hour, ask that no further actions be requested, wait a few seconds to see if anyone tried to send an action just as they were asked to stop, and if no actions were requested, reset, and then invite further requests. A request which would have been sent just as the system was about to reset would be delayed until after the reset occurred, but provided no requests would take longer than an hour to complete, no requests would be lost or disrupted.
Fewer transistors switching, I suppose, so minuscule power savings. Depending on how much you sleep, this might actually be a big savings. Your friend might be referring to the practice of turning off the WDT when you're actually doing something, then turning it on when you sleep. There's a nice little point that Microchip gives about their PICs:
"If the WDT is disabled during normal operation (FWDTEN = 0), then the SWDTEN bit (RCON<5>) can be used to turn on the WDT just before entering Sleep mode"
How can we work with a timer that deals with milliseconds (0.001 s)? How can we divide a second up however we want, and how do we deal with the second itself?
http://computer.howstuffworks.com/question319.htm
In your computer (as well as other gadgets), the battery powers a chip called the Real Time Clock (RTC) chip. The RTC is essentially a quartz watch that runs all the time, whether or not the computer has power. The battery powers this clock. When the computer boots up, part of the process is to query the RTC to get the correct time and date. A little quartz clock like this might run for five to seven years off of a small battery. Then it is time to replace the battery.
Your PC will have a hardware clock, powered by a battery so that it keeps ticking even while the computer is switched off. The PC knows how fast its clock runs, so it can determine when a second goes by.
Initially, the PC doesn't know what time it is (i.e. it just starts counting from zero), so it must be told what the current time is - this can be set in the BIOS settings and is stored in the CMOS, or can be obtained via the Internet (e.g. by synchronizing with the clocks at NIST).
Some recap, and some more info:
1) The computer reads the Real-Time Clock during boot-up, and uses that to set its internal clock.
2) From then on, the computer uses its CPU clock only - it does not re-read the RTC (normally).
3) The computer's internal clock is subject to drift - due to thermal instability, power fluctuations, inaccuracies in finding an exact divisor for seconds, interrupt latency, cosmic rays, and the phase of the moon.
4) The magnitude of the clock drift could be in the order of seconds per day (tens or hundreds of seconds per month).
5) Most computers are capable of connecting to a time server (over the internet) to periodically reset their clock.
6) Using a time server can increase the accuracy to within tens of milliseconds (normally). My computer updates every 15 minutes.
Computers know the time because, like you, they have a digital watch they look at from time to time.
When you get a new computer or move to a new country you can set that watch, or your computer can ask the internet what the time is, which helps to stop it from running slow, or fast.
As a user of the computer, you can ask the current time, or you can ask the computer to act as an alarm clock. Some computers can even turn themselves on at a particular time, to back themselves up, or wake you up with a favourite tune.
Internally, the computer is able to tell the time in milliseconds, microseconds or sometimes even nanoseconds. However, this is not entirely accurate, and two computers next to each other would have different ideas about the time in nanoseconds. But it can still be useful.
The computer can set an alarm for a few milliseconds in the future, and commonly does this so it knows when to stop thinking about your e-mail program and spend some time thinking about your web browser. Then it sets another alarm so it knows to go back to your e-mail a few milliseconds later.
As a programmer you can use this facility too; for example you could set a time limit on a level in a game, using a 'timer'. Or you could use a timer to tell when you should put the next frame of the animation on the display - perhaps 25 times a second (i.e. every 40 milliseconds).
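A small sketch of that kind of programmatic timer on a POSIX system; the 40 ms interval is just the frame-rate figure from above, and GUI toolkits or game engines normally wrap this in their own timer APIs:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t frames;

static void on_tick(int sig)
{
    (void)sig;
    frames++;                       /* "draw the next frame" would go here */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    /* Fire every 40 ms, i.e. 25 times per second. */
    struct itimerval it = { { 0, 40000 }, { 0, 40000 } };
    setitimer(ITIMER_REAL, &it, NULL);

    while (frames < 25)
        pause();                    /* wait for one second's worth of ticks */

    printf("25 timer ticks elapsed (about one second)\n");
    return 0;
}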
To answer the main question, the BIOS clock on your motherboard has a battery, like Jian's answer says. That keeps time when the machine is off.
To answer what I think your second question is, you can get the second from the millisecond value by doing an integer division by 1000, like so:
second = (int) (milliseconds / 1000);
If you're asking how we're able to get the time with that accuracy, look at Esteban's answer... the quartz crystal vibrates at a certain time period, say 0.00001 seconds. We just make a circuit that counts the vibrations. When we have reached 100000 vibrations, we declare that a second has passed and update the clock.
We can get any accuracy by counting the vibrations this way... any accuracy that's no finer than the period of vibration of the crystal we're using.
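A toy sketch of that counting idea (purely illustrative; real RTCs do the division in hardware, 100000 Hz is just the figure used above, and common RTC crystals actually run at 32768 Hz):

/* Toy software model of an RTC divider: imagine tick() called once per
 * crystal vibration; after CYCLES_PER_SECOND calls, one second is added. */
#define CYCLES_PER_SECOND 100000UL        /* figure from the text above */

static unsigned long cycle_count;
static unsigned long seconds_since_boot;

void tick(void)
{
    if (++cycle_count >= CYCLES_PER_SECOND) {
        cycle_count = 0;
        seconds_since_boot++;
    }
}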
The motherboard has a clock that ticks. Every tick represents a unit of time.
To be more precise, the clock is usually a quartz crystal that oscillates at a given frequency; some common CPU clock frequencies are 33.33 and 40 MHz.
Absolute time is archaically measured using a 32-bit counter of seconds from 1970. This can cause the "2038 problem," where it simply overflows. Hence the 64-bit time APIs used on modern Windows and Unix platforms (this includes BSD-based MacOS).
Quite often a PC user is interested in time intervals rather than the absolute time since a profound event took place. A common implementation of a computer has things called timers that allow just that to happen. These timers might even run when the PC isn't, for the purpose of polling hardware for wake-up status, switching sleep modes, or coming out of sleep. Intel's processor docs go into incredible detail about these.