Explain 'Idle Time' in TensorBoard (TF 2)

I am profiling using TensorBoard's new(ish) profiling tools. One option in "TensorFlow Stats" is a prompt to include IDLE time or not:
When I select Yes, Enable, IDLE time jumps to the top of the "Top Operations" list, which ranks the slowest operations. However, the time given is much larger than wall time measured on a watch. What is included, and how is this "IDLE" time determined?

Related

Java daemon process performance testing not getting consistent results

I am trying to test the CPU consumption of an agent/daemon process written in Java. To avoid results skewed by garbage collection, I keep trying longer periods for each profiling run. In the beginning I tried 15 minutes, then later arrived at 2 hours. Yet I just found out that, even with 2-hour runs, I can get very inconsistent results: one 2-hour run gave me 6% CPU, another gave me 12%.
Any suggestions to get consistent results?
Are you controlling for CPU frequency? If there isn't much work to do, the OS (or the CPU itself) might reduce the clock frequency to save power. With an aggressive power-management strategy that keeps the CPU at max frequency whenever it's running at all, looking at CPU% can be meaningful.
On Linux on a Skylake or later CPU, you might set the EPP for each core to performance, to get it to run at max speed whenever it's running at all.
sudo sh -c 'for i in /sys/devices/system/cpu/cpufreq/policy[0-9]*/energy_performance_preference;do echo performance > "$i";done'
Otherwise maybe measure in core clock cycles (like Linux perf stat java ...) instead of CPU %, or at least look at the average clock speed while it was running. (A lower clock speed relative to DRAM can skew things, since a cache miss stalls for fewer cycles.)

calculating burst time in operating system

How is burst time calculated for a process in an OS in a real-life scenario? Say a computer has 5 processes to execute and is using the shortest-job-first algorithm. How will the OS know the burst time of each process in advance?
Since an operating system cannot guess the burst time of a process a priori, usually one of the two following approaches is used:
The process is annotated by the developer (e.g., by using methods of critical-execution-path analysis).
The OS uses the execution time of a former process instance of the very same code to do an estimation.
However, in "real life", SJF is rarely used, at least not as a pure algorithm. It is more of theoretical interest.
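The second approach - estimating from previous bursts - is usually implemented as an exponential average. A minimal sketch in Python (the alpha value and the sample burst lengths are illustrative, not from any particular OS):

```python
def next_burst_estimate(prev_estimate, last_actual, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.

    prev_estimate: the scheduler's previous guess (tau_n)
    last_actual:   the burst time actually observed (t_n)
    alpha:         how heavily recent history is weighted (0..1)
    """
    return alpha * last_actual + (1 - alpha) * prev_estimate

# Example: initial guess 10 ms, observed bursts of 6, 4, 6, 4 ms
estimate = 10.0
for actual in [6, 4, 6, 4]:
    estimate = next_burst_estimate(estimate, actual)
print(estimate)  # 5.0 - the guess has converged toward recent bursts
```

With alpha = 0.5 each new observation and the accumulated history carry equal weight, which is the usual textbook default; a larger alpha reacts faster but is noisier.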

how does CLOCK control events order?

How does the clock control various events (operations) so that they occur in the desired sequence? What is the significance of the clock cycle time (I've heard that many operations can be issued in a single clock cycle)?
Or simply: how does the CPU control operation ordering?
CPUs have various processing units (float, vector, integer), and pipelines of different lengths for each unit.
The clock determines the speed at which operations move through a pipeline, each operation advancing one stage per tick. Once an operation reaches the end, its result is sent back to cache/memory.
Multiple pipelines can be active at the same time.
That's all I can tell you..
Ars Technica used to have great articles about this, such as this one:
Understanding the Microprocessor
The clock does not control the sequence of instructions. The clock controls the number of times per second that the CPU "ticks." Each tick is referred to as a cycle, and consequently each cycle takes some time to complete.
The sequence of instructions is dictated by the running program. Modern CPUs also include optimisations that influence the exact sequence.
These optimisations also make the clock speed (= number of cycles per second) less significant. For example, a dual-core CPU is able to execute two instructions in the same cycle.
Usually instructions complete in a couple of cycles, and compilers optimise programs to use costly instructions less.
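The relationship between clock frequency and cycle time mentioned above is simple arithmetic; a quick illustration in Python (the frequencies are arbitrary examples, not any specific CPU):

```python
def cycle_time_ns(frequency_hz):
    """Duration of one clock cycle, in nanoseconds."""
    return 1e9 / frequency_hz

# A 1 GHz clock ticks once per nanosecond; a 3 GHz clock, three times
print(cycle_time_ns(1e9))  # 1.0 ns per cycle
print(cycle_time_ns(3e9))  # ~0.333 ns per cycle
```

This is why clock speed alone understates throughput: with multiple pipelines and multiple cores, several instructions can complete inside that one cycle.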

How does the function time() tell the current time, even when the computer has been powered off earlier?

How can we work with a timer that deals with milliseconds (0.001 s)? How can we divide a second as finely as we want, and how do we deal with the second itself?
http://computer.howstuffworks.com/question319.htm
In your computer (as well as other gadgets), the battery powers a chip called the Real Time Clock (RTC) chip. The RTC is essentially a quartz watch that runs all the time, whether or not the computer has power. The battery powers this clock. When the computer boots up, part of the process is to query the RTC to get the correct time and date. A little quartz clock like this might run for five to seven years off of a small battery. Then it is time to replace the battery.
Your PC will have a hardware clock, powered by a battery so that it keeps ticking even while the computer is switched off. The PC knows how fast its clock runs, so it can determine when a second goes by.
Initially, the PC doesn't know what time it is (i.e. it just starts counting from zero), so it must be told what the current time is - this can be set in the BIOS settings and is stored in the CMOS, or can be obtained via the Internet (e.g. by synchronizing with the clocks at NIST).
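The scheme the answers above describe - read a battery-backed base time once at boot, then keep counting ticks - can be modelled roughly like this (the class and names are illustrative, not a real OS interface):

```python
import time

class SoftwareClock:
    """Toy model of how an OS keeps time: a base value read once from
    the RTC at boot, plus a monotonic counter that ticks afterwards."""

    def __init__(self, rtc_time_at_boot):
        self.base = rtc_time_at_boot          # read from the RTC once
        self.boot_counter = time.monotonic()  # stand-in for the tick counter

    def now(self):
        # current time = RTC base + seconds counted since boot
        return self.base + (time.monotonic() - self.boot_counter)

clock = SoftwareClock(rtc_time_at_boot=1_700_000_000.0)
t1 = clock.now()
time.sleep(0.01)
t2 = clock.now()
# t2 is slightly later than t1; both are anchored to the RTC base
```

The drift discussed below comes from the counting part: if the counter's idea of "one second" is slightly off, the error accumulates until the base is refreshed (e.g. from a time server).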
Some recap, and some more info:
1) The computer reads the Real-Time Clock during boot-up, and uses that to set its internal clock.
2) From then on, the computer uses its CPU clock only - it does not re-read the RTC (normally).
3) The computer's internal clock is subject to drift - due to thermal instability, power fluctuations, inaccuracies in finding an exact divisor for seconds, interrupt latency, cosmic rays, and the phase of the moon.
4) The magnitude of the clock drift could be in the order of seconds per day (tens or hundreds of seconds per month).
5) Most computers are capable of connecting to a time server (over the internet) to periodically reset their clock.
6) Using a time server can increase the accuracy to within tens of milliseconds (normally). My computer updates every 15 minutes.
Computers know the time because, like you, they have a digital watch they look at from time to time.
When you get a new computer or move to a new country you can set that watch, or your computer can ask the internet what the time is, which helps to stop it from running slow, or fast.
As a user of the computer, you can ask the current time, or you can ask the computer to act as an alarm clock. Some computers can even turn themselves on at a particular time, to back themselves up, or wake you up with a favourite tune.
Internally, the computer is able to tell the time in milliseconds, microseconds or sometimes even nanoseconds. However, this is not entirely accurate, and two computers next to each other would have different ideas about the time in nanoseconds. But it can still be useful.
The computer can set an alarm for a few milliseconds in the future, and commonly does this so it knows when to stop thinking about your e-mail program and spend some time thinking about your web browser. Then it sets another alarm so it knows to go back to your e-mail a few milliseconds later.
As a programmer you can use this facility too; for example you could set a time limit on a level in a game, using a 'timer'. Or you could use a timer to tell when you should put the next frame of the animation on the display - perhaps 25 times a second (i.e. every 40 milliseconds).
To answer the main question, the BIOS clock has a battery on your motherboard, like Jian's answer says. That keeps time when the machine is off.
To answer what I think your second question is, you can get the second from the millisecond value by doing an integer division by 1000, like so:
second = (int) (milliseconds / 1000);
If you're asking how we're able to get the time with that accuracy, look at Esteban's answer... the quartz crystal vibrates at a certain time period, say 0.00001 seconds. We just make a circuit that counts the vibrations. When we have reached 100000 vibrations, we declare that a second has passed and update the clock.
We can get any accuracy by counting the vibrations this way - any resolution that is no finer than the period of vibration of the crystal we're using.
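That vibration-counting circuit is easy to model in software; here is a toy version in Python (32,768 Hz is the standard watch-crystal frequency, though the counts fed in are made up):

```python
class QuartzCounter:
    """Count crystal oscillations; every `frequency` counts = 1 second."""

    def __init__(self, frequency=32_768):  # typical watch crystal: 32.768 kHz
        self.frequency = frequency
        self.count = 0      # oscillations within the current second
        self.seconds = 0    # whole seconds elapsed

    def tick(self, oscillations=1):
        self.count += oscillations
        # carry whole seconds out of the oscillation counter
        self.seconds += self.count // self.frequency
        self.count %= self.frequency

c = QuartzCounter()
c.tick(32_768 * 3 + 100)   # three full seconds and a bit
print(c.seconds)           # 3
print(c.count)             # 100 oscillations into the fourth second
```

Dividing the second works the same way: 32,768 oscillations per second means each oscillation marks roughly 30.5 microseconds, so sub-second timestamps just read the partial count.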
The motherboard has a clock that ticks. Every tick represents a unit of time.
To be more precise, the clock is usually a quartz crystal that oscillates at a given frequency; some common CPU clock frequencies are 33.33 and 40 MHz.
Absolute time is historically measured using a 32-bit counter of seconds since 1970. This can cause the "2038 problem," where it simply overflows. Hence the 64-bit time APIs used on modern Windows and Unix platforms (this includes BSD-based macOS).
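The overflow point is easy to compute: a signed 32-bit counter runs out 2^31 - 1 seconds after the 1970 epoch. A quick check in Python:

```python
from datetime import datetime, timezone

MAX_INT32 = 2**31 - 1  # last second a signed 32-bit time_t can represent

rollover = datetime.fromtimestamp(MAX_INT32, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00

# one second later, a 32-bit counter wraps to a large negative number
wrapped = (MAX_INT32 + 1) - 2**32
print(wrapped)   # -2147483648
```

A 64-bit counter of the same seconds pushes the overflow out by roughly 292 billion years, which is why moving to 64-bit time APIs settles the issue.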
Quite often a PC user is interested in time intervals rather than the absolute time since some profound event took place. A common implementation of a computer has things called timers that allow just that to happen. These timers might even run when the PC is powered off, for the purpose of polling hardware for wake-up status, switching sleep modes, or coming out of sleep. Intel's processor docs go into incredible detail about these.

Accurate Windows timer? System.Timers.Timer() is limited to 15 msec

I need an accurate timer to interface a Windows application to a piece of lab equipment.
I used System.Timers.Timer() to create a timer that ticks every 10 msec, but this clock runs slow. For example, 1000 ticks with an interval of 10 msec should take 10 wall-clock seconds, but it actually takes more like 20 wall-clock seconds (on my PC). I am guessing this is because System.Timers.Timer() is an interval timer that is reset every time it elapses. Since it will always take some time between when the timer elapses and when it is reset (to another 10 msec), the clock will run slow. This is probably fine if the interval is large (seconds or minutes) but unacceptable for very short intervals.
Is there a function on Windows that will trigger a procedure every time the system clock crosses a 10 msec (or whatever) boundary?
This is a simple console application.
Thanks
Norm
UPDATE: System.Timers.Timer() is extremely inaccurate for small intervals.
I wrote a simple program that counted 10 seconds several ways:
Interval=1, Count=10000, Run time = 160 sec, msec per interval=16
Interval=10, Count=1000, Run time = 16 sec, msec per interval=15
Interval=100, Count=100, Run time = 11 sec, msec per interval=110
Interval=1000, Count=10, Run time = 10 sec, msec per interval=1000
It seems like System.Timers.Timer() cannot tick faster than about 15 msec, regardless of the interval setting.
Note that none of these tests seemed to use any measurable CPU time, so the limit is not the CPU, just a .NET limitation (bug?).
For now I think I can live with an inaccurate timer that triggers a routine every 15 msec or so and the routine gets an accurate system time. Kinda strange, but...
I also found a shareware product ZylTimer.NET that claims to be a much more accurate .net timer (resolution of 1-2 msec). This may be what I need. If there is one product there are likely others.
Thanks again.
You need to use a high resolution timer such as QueryPerformanceCounter
On the surface of it, the answer is something like "high resolution timer"; however, this is incorrect. The answer requires regular tick generation, and the Windows high-resolution performance counter API does not generate such a tick.
I know this is not an answer in itself, but the popular answer to this question so far is wrong enough for me to feel that a simple comment on it is not enough.
The limitation is given by the system's heartbeat. This typically defaults to 64 beats/s, which is 15.625 ms. However, there are ways to modify these system-wide settings to achieve timer resolutions down to 1 ms, or even to 0.5 ms on newer platforms:
Going for 1 ms resolution by means of the multimedia timer interface (timeBeginPeriod()):
See Obtaining and Setting Timer Resolution.
Going to 0.5 ms resolution by means of NtSetTimerResolution():
See Inside Windows NT High Resolution Timers.
You may obtain 0.5 ms resolution by means of the hidden API NtSetTimerResolution().
I've given all the details in this SO answer.
In System.Diagnostics, you can use the Stopwatch class.
Off the top of my head, I could suggest running a thread that mostly sleeps, but when it wakes, it checks a running QueryPerformanceCounter and occasionally triggers your procedure.
There's a nice write-up on MSDN: Implement a Continuously Updating, High-Resolution Time Provider for Windows
Here's the sample source code for the article (C++).
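Combining the suggestions above - sleep for most of the interval, then check a high-resolution counter against an absolute deadline - avoids the cumulative drift the questioner measured. A sketch in Python rather than .NET (the principle carries over directly to a Stopwatch or QueryPerformanceCounter loop):

```python
import time

def run_ticks(interval_s, tick_count, callback):
    """Fire `callback` every `interval_s` seconds without cumulative drift.

    Each deadline is computed from the start time, so lateness on one
    tick is not carried into the next - unlike an interval timer that
    restarts its countdown after every tick."""
    start = time.perf_counter()
    for n in range(1, tick_count + 1):
        deadline = start + n * interval_s        # absolute, not relative
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)                # coarse sleep; OS-limited
        callback(n)
    return time.perf_counter() - start

elapsed = run_ticks(0.01, 100, lambda n: None)
# elapsed stays close to 1.0 s even though individual sleeps are imprecise
```

Individual ticks still jitter by the OS scheduling granularity (~15 ms on a default Windows heartbeat), but 1000 ticks at 10 ms will take about 10 s, not 20, because the error does not accumulate.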