I am trying to test the CPU consumption of an agent/daemon process written in Java. To avoid the results being skewed by garbage collection, I keep trying longer periods for each profiling run: I started with 15 minutes and later arrived at 2 hours. Yet I just found out that even with 2-hour runs I can get very inconsistent results - one 2-hour run gave me 6% CPU, another gave me 12%.
Any suggestions to get consistent results?
Are you controlling for CPU frequency? If there isn't much work to do, the OS (or the CPU itself) might reduce the clock frequency to save power. Only if the power-management policy runs the CPU at max speed whenever it's running at all can looking at CPU% be meaningful.
On Linux on a Skylake or later CPU, you might set the EPP (energy/performance preference) for each core to performance, to get it to run at max speed whenever it's running at all:
sudo sh -c 'for i in /sys/devices/system/cpu/cpufreq/policy[0-9]*/energy_performance_preference;do echo performance > "$i";done'
Otherwise, maybe measure in core clock cycles (e.g. Linux perf stat java ...) instead of CPU%, or at least look at the average clock speed while it was running. (A lower core clock speed relative to DRAM can also skew things, since a cache miss stalls for fewer core cycles.)
I have been looking for an answer to this but have found no clear documentation yet.
The CLOCK_MONOTONIC entry in the clock_gettime man page says that it's affected by the incremental adjustments performed by adjtime and NTP:
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time since some unspecified starting point. This clock is not affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the clock), but is affected by the incremental adjustments performed by adjtime(3) and NTP.
What is not clear to me is whether it is affected by all adjustments made by NTP or just the small, gradual ones.
Say NTP makes a big time jump because the system clock was way off - will CLOCK_MONOTONIC reflect that?
I am not sure which system calls NTP makes to adjust the time on my CentOS system.
A quick test showed no change in the monotonic clock output even though NTP made the system time jump by 10 hours.
Even making the time jump via a clock_settime() call didn't affect CLOCK_MONOTONIC.
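A minimal version of such a test might look like the sketch below (my own sketch, not from any documentation); it needs root and really does step the system clock, so use with care:
/* gcc -o monotest monotest.c   (add -lrt on older glibc) */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec mono_before, mono_after, real;

    clock_gettime(CLOCK_MONOTONIC, &mono_before);

    /* Step the wall clock forward by an hour, simulating a big NTP jump.
     * This really changes the system time, so restore it afterwards. */
    clock_gettime(CLOCK_REALTIME, &real);
    real.tv_sec += 3600;
    if (clock_settime(CLOCK_REALTIME, &real) != 0) {
        perror("clock_settime (needs root)");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &mono_after);

    /* Expect microseconds here, not an hour: CLOCK_MONOTONIC ignores
     * discontinuous jumps in CLOCK_REALTIME. */
    printf("monotonic delta: %.9f s\n",
           (mono_after.tv_sec - mono_before.tv_sec)
           + (mono_after.tv_nsec - mono_before.tv_nsec) / 1e9);
    return 0;
}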
I have an application that samples audio at 8 kHz using ALSA. This is set via snd_pcm_hw_params() and can be confirmed by looking at /proc:
cat /proc/asound/card1/pcm0c/sub0/hw_params
access: MMAP_INTERLEAVED
format: S32_LE
subformat: STD
channels: 12
rate: 8000 (8000/1)
period_size: 400
buffer_size: 1200
The count of samples read over time is effectively a monotonic clock.
If I compare the number of samples read with the system monotonic clock I note there is a drift over time. The sample clock appears to lose roughly 1 s every 5 hours relative to the monotonic clock (an error of about 56 ppm).
I have code to compensate for this at the application level (i.e. to correctly map sample counts to wall clock times) but I am wondering if we can or why we can't do better at a lower level?
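For illustration, the compensation is conceptually along the lines of the sketch below (simplified, with made-up figures, not my actual code): estimate the effective sample rate against CLOCK_MONOTONIC and use that, rather than the nominal 8000 Hz, to map sample indices to times.
#include <stdint.h>
#include <stdio.h>

#define NOMINAL_RATE 8000.0   /* Hz, as configured via snd_pcm_hw_params() */

/* Map a sample index to seconds on the monotonic timeline, using the
 * measured (effective) rate rather than the nominal one. */
static double sample_to_seconds(uint64_t sample_index, double effective_rate)
{
    return sample_index / effective_rate;
}

int main(void)
{
    /* Illustrative figures: after 5 hours of monotonic time we have read
     * one second's worth fewer samples than the nominal rate predicts. */
    double elapsed_monotonic = 5.0 * 3600.0;                      /* 18000 s */
    uint64_t samples_read = (uint64_t)((elapsed_monotonic - 1.0) * NOMINAL_RATE);

    double effective_rate = samples_read / elapsed_monotonic;     /* ~7999.56 Hz */

    printf("effective rate: %.2f Hz (%.1f ppm slow)\n",
           effective_rate, (NOMINAL_RATE - effective_rate) / NOMINAL_RATE * 1e6);
    printf("last sample maps to t = %.3f s\n",
           sample_to_seconds(samples_read, effective_rate));
    return 0;
}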
Both clocks are based on oscillators of some kind, each of which may have some small error. So we are likely sampling at something like 7999.5 Hz rather than 8000 Hz, and the error builds up over time. Equally, the system clock may have some small error in it.
The system clock is corrected periodically by NTP, so it can perhaps tolerate a larger inherent error, but even so this deviation seems much larger than I would intuitively expect.
However, see for example http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
In theory NTP can generate a drift file which you could use to see the drift rate of your system clock.
I would have thought that, knowing there is some small error, something would try to auto-correct, either by alternating between two slightly wrong sample rates (e.g. 8000.5 Hz and 7999.5 Hz) or by dropping the occasional sample. In fact I thought this kind of thing was done at the hardware or firmware level in order to stabilize the average frequency given a crystal with a known error.
Also I would have thought quartz crystals are put in circuits these days with at least temperature compensation.
When we switch off our PC and later switch it back on, it still shows the correct system time. How does that happen without power?
The CMOS battery will keep the clock running.
There is a battery on your motherboard that maintains the correct time and some settings.
You can google images of it.
How can we work with a timer that deals with milliseconds (0.001 s)? How can we divide the second however we want, and how do we deal with the second itself?
http://computer.howstuffworks.com/question319.htm
In your computer (as well as other gadgets), the battery powers a chip called the Real Time Clock (RTC) chip. The RTC is essentially a quartz watch that runs all the time, whether or not the computer has power. The battery powers this clock. When the computer boots up, part of the process is to query the RTC to get the correct time and date. A little quartz clock like this might run for five to seven years off of a small battery. Then it is time to replace the battery.
Your PC will have a hardware clock, powered by a battery so that it keeps ticking even while the computer is switched off. The PC knows how fast its clock runs, so it can determine when a second goes by.
Initially, the PC doesn't know what time it is (i.e. it just starts counting from zero), so it must be told what the current time is - this can be set in the BIOS settings and is stored in the CMOS, or can be obtained via the Internet (e.g. by synchronizing with the clocks at NIST).
Some recap, and some more info:
1) The computer reads the Real-Time Clock (RTC) during boot-up, and uses that to set its internal clock
2) From then on, the computer uses its CPU clock only - it does not re-read the RTC (normally).
3) The computer's internal clock is subject to drift - due to thermal instability, power fluctuations, inaccuracies in finding an exact divisor for seconds, interrupt latency, cosmic rays, and the phase of the moon.
4) The magnitude of the clock drift could be on the order of seconds per day (tens or hundreds of seconds per month); for example, a 50 ppm error works out to about 4.3 seconds per day.
5) Most computers are capable of connecting to a time server (over the internet) to periodically reset their clock.
6) Using a time server can increase the accuracy to within tens of milliseconds (normally). My computer updates every 15 minutes.
Computers know the time because, like you, they have a digital watch they look at from time to time.
When you get a new computer or move to a new country you can set that watch, or your computer can ask the internet what the time is, which helps to stop it from running slow, or fast.
As a user of the computer, you can ask the current time, or you can ask the computer to act as an alarm clock. Some computers can even turn themselves on at a particular time, to back themselves up, or wake you up with a favourite tune.
Internally, the computer is able to tell the time in milliseconds, microseconds or sometimes even nanoseconds. However, this is not entirely accurate, and two computers next to each other would have different ideas about the time in nanoseconds. But it can still be useful.
The computer can set an alarm for a few milliseconds in the future, and commonly does this so it knows when to stop thinking about your e-mail program and spend some time thinking about your web browser. Then it sets another alarm so it knows to go back to your e-mail a few milliseconds later.
As a programmer you can use this facility too; for example, you could set a time limit on a level in a game using a 'timer'. Or you could use a timer to tell when you should put the next frame of the animation on the display - perhaps 25 times a second (i.e. every 40 milliseconds).
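As a rough sketch (POSIX C, purely illustrative, with draw_frame standing in for real per-frame work), a 25-frames-per-second loop waking up every 40 milliseconds could look like this:
/* Compile with something like: gcc -D_POSIX_C_SOURCE=200112L frames.c */
#include <stdio.h>
#include <time.h>

static void draw_frame(int n)
{
    printf("frame %d\n", n);   /* placeholder for the real per-frame work */
}

int main(void)
{
    const long frame_ns = 40L * 1000 * 1000;   /* 40 ms = 1/25 s */
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int frame = 0; frame < 25; frame++) {
        draw_frame(frame);

        /* Advance the deadline by 40 ms and sleep until that absolute time,
         * so small scheduling delays don't accumulate into drift. */
        next.tv_nsec += frame_ns;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}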
To answer the main question, the BIOS clock has a battery on your motherboard, like Jian's answer says. That keeps time when the machine is off.
To answer what I think your second question is, you can get the second from the millisecond value by doing an integer division by 1000, like so:
second = (int) (milliseconds / 1000);
If you're asking how we're able to get the time with that accuracy, look at Esteban's answer... the quartz crystal vibrates at a certain time period, say 0.00001 seconds. We just make a circuit that counts the vibrations. When we have reached 100000 vibrations, we declare that a second has passed and update the clock.
We can get any accuracy by counting the vibrations this way... any accuracy that's no finer than the period of vibration of the crystal we're using.
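As a toy model of that counting idea (purely illustrative, not how real hardware is built), using the 0.00001-second period mentioned above, i.e. a 100 kHz crystal:
#include <stdint.h>
#include <stdio.h>

#define TICKS_PER_SECOND 100000ULL   /* 100 kHz crystal -> 0.00001 s per tick */

int main(void)
{
    uint64_t ticks = 0, seconds = 0;

    /* Simulate 3.5 seconds' worth of oscillator ticks. */
    for (uint64_t i = 0; i < 350000ULL; i++) {
        if (++ticks == TICKS_PER_SECOND) {
            ticks = 0;
            seconds++;               /* a full second has elapsed */
        }
    }
    printf("%llu seconds and %llu ticks elapsed\n",
           (unsigned long long)seconds, (unsigned long long)ticks);
    return 0;
}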
The motherboard has a clock that ticks. Every tick represents a unit of time.
To be more precise, the clock is usually a quartz crystal that oscillates at a given frequency; some common CPU clock frequencies were historically 33.33 and 40 MHz.
Absolute time is archaically measured using a 32-bit counter of seconds since 1970. This can cause the "2038 problem," where it simply overflows. Hence the 64-bit time APIs used on modern Windows and Unix platforms (including BSD-based macOS).
Quite often a PC user is interested in time intervals rather than the absolute time since some reference event. A common implementation of a computer has things called timers that allow just that to happen. These timers might even run when the PC isn't powered up, for the purpose of polling hardware for wake-up status, switching sleep modes, or coming out of sleep. Intel's processor docs go into incredible detail about these.
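For example, a sketch of measuring an interval (rather than an absolute time) on Linux, using one of the clocks the OS builds on top of those hardware timers, might look like:
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    sleep(1);                                  /* stand-in for the real work */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Elapsed time between two events, independent of what the wall clock says. */
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.6f s\n", elapsed);
    return 0;
}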