Why does Performance Counter for System Uptime need 2 calls to NextValue? - performancecounter

This is how you obtain the Performance Counter for System Uptime:
public TimeSpan GetSystemUptime()
{
    PerformanceCounter upTime = new PerformanceCounter("System", "System Up Time");
    upTime.NextValue();
    return TimeSpan.FromSeconds(upTime.NextValue());
}
NextValue has to be called twice, because it is 0 on the first call.
But I don't understand WHY it is using a counter that has to be read twice.
I understand that something like CPU usage [new PerformanceCounter("Processor Information", "% Processor Time", "_Total")] would need two samples, because it is calculating an average over time.
But why would you need to measure twice when calculating system uptime?
Wouldn't you just have to measure the current time and compare it with the boot time?
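For comparison, here is a minimal sketch of that "compare with the boot time" idea, assuming the Windows GetTickCount64 API (kernel32) is an acceptable alternative; it returns the number of milliseconds since boot, so a single read is enough (the class name is only illustrative):

using System;
using System.Runtime.InteropServices;

static class UptimeSketch
{
    // GetTickCount64 reports milliseconds elapsed since the system was started
    [DllImport("kernel32.dll")]
    private static extern ulong GetTickCount64();

    public static TimeSpan GetSystemUptime()
    {
        return TimeSpan.FromMilliseconds(GetTickCount64());
    }
}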

Create a variable to count from 1 to n in AnyLogic

I am looking to add a variable to count from 1 to 217 every hour in AnyLogic, in order to use as a choice condition to set a parameters row reference.
I am assuming I either need to use an event or a state chart, however I am really struggling with the exact implementation and cannot find anything online.
If you have any tips please let me know, any help would be appreciated
Thank you,
Tash
A state machine isn't necessary in this case, as this can be achieved using a calculation or a timed event. AnyLogic has a time() function which returns the time since model start as a double, in the model's time units.
For example: if the model time unit is seconds and the model has been running for 2 hr 2 min 10 sec, then time(SECOND) will return 7330.0 (it is always a double value). 1/217th of an hour corresponds to 3600/217 ≈ 16.59 seconds. Also, Java has a handy function Math.floor() which rounds a double value down, so Math.floor(8.37) = 8.0.
Assembling it all together:
// how many full hours have elapsed from the start of the model
double fullHrsFromStart = Math.floor(time(HOUR));
// how many seconds have elapsed in the current model hour
double secondsInCurrentHour = time(SECOND) - fullHrsFromStart * 3600.0;
// how many full 1/217th-of-an-hour intervals (3600/217 ≈ 16.59 s) have elapsed in the current hour
int fullIntervals = (int)(secondsInCurrentHour / (3600.0 / 217.0));
This can be packaged into a function and called at any time, and it is pretty fast.
Alternatively: an Event can be created which increments some count by 1 every 3600/217 ≈ 16.59 seconds and then resets it back to 0 when the count reaches 217.

How to get the elapsed time after using the reset of the "Elapsed Time" component?

I am calculating the capacitance of a circuit using LabVIEW. I have tried to get the time after the voltage across it reaches 2.5 V. I am giving a supply of 5 V. I used a logic operator and connected it to the reset of the Elapsed Time component. But I get zero as the time gets reset. I want to get the actual elapsed time.
The block diagram of the circuit:
Try this: do not wire anything to the reset input and place a False constant on the Auto Reset input.
Block Diagram

How intensive is getting time?

Here's a low level question. How CPU intensive is getting system time?
What is the source of the time? I know there is a hardware clock on the BIOS chip, but I'm thinking that getting data from outside the CPU and RAM will need some hardware synchronization which may delay the read, so I'm guessing the CPU may have its own clock. Feel free to correct me if I'm wrong in any way.
Does getting time incur a heavy system function call or is it in any way dependent on the used programming language?
I have just tested it using a C++ program:
#include <ctime>   // clock(), CLOCKS_PER_SEC

clock_t started = clock();
clock_t endClock = started + CLOCKS_PER_SEC;   // run the loop for about one second
long itera = 0;
for (; clock() < endClock; itera++)
{
}
I get about 23 million iterations per second (Windows 7, 32bit, Visual Studio 2015, 2.6 GHz CPU). In terms of your question, I would not call this intensive.
In debug mode, I measured 18 million iterations per second.
In case the time is transformed into a localized timestamp, complicated calendar calculations (timezone, daylight saving time, ...) might significantly slow down the loop.
It is not easy to tell what happens inside the clock() call. For my system, it calls QueryPerformanceCounter, but that in turn calls other system functions, as explained here.
Tuning
To reduce the time measurement overhead even further, you can take the measurement only every 10th, 100th, ... iteration.
The following measures once in 1024 iterations:
for (; (itera & 0x03FF) || (clock() < endClock); itera++)
{
}
This brings the loop count up to some 500 million per second.
Tuning with Timer Thread
The following yields a further improvement of some 10% paid with additional complexity:
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> processing{ true };   // brace-init: std::atomic is not copyable
// launch a timer thread to clear the processing flag after 1 s
std::thread t([&processing]() {
    std::this_thread::sleep_for(std::chrono::seconds(1));
    processing = false;
});
for (; (itera & 0x03FF) || processing; itera++)
{
}
t.join();
An extra thread is started which sleeps for one second and then clears the control flag. The main thread executes the loop until the timer thread signals the end of processing.

Inherent time in JProfiler

Consider the below method template:
methodA()
{
    Print(abc);         // Instruction 1
    Calculate(a+b+c);   // Instruction 2
    Call methodB();     // Instruction 3
    Call methodC();     // Instruction 4
    Print(abcd);        // Instruction 5
    for(; ;)            // Instruction 6
    {
        ...
    }
}
Inherent time for methodA() in JProfiler shows the total time taken by methodA() alone. Is this inherent time the sum of CPU time + I/O wait time or is it just CPU time?
The time type depends on the thread state selector in the top-right corner of the call tree view. If it is set to "Runnable", the displayed times measure the time when the thread was in the runnable state. If it is set to "All states", they include I/O, waiting and blocking.
As per this page http://resources.ej-technologies.com/jprofiler/help/doc/index.html
The inherent time is defined as the total time of a method minus the
time of its child nodes.
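For example (hypothetical numbers): if methodA()'s total time is 100 ms and the calls to methodB() and methodC() together account for 70 ms of that, the inherent time of methodA() is 30 ms, i.e. the time spent in instructions 1, 2, 5 and the loop itself.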

How can I (reasonably) precisely perform an action every N milliseconds?

I have a machine which uses an NTP client to sync up to internet time, so its system clock should be fairly accurate.
I've got an application which I'm developing which logs data in real time, processes it and then passes it on. What I'd like to do now is output that data every N milliseconds, aligned with the system clock. So for example if I wanted to do 20 ms intervals, my outputs ought to be something like this:
13:15:05:000
13:15:05:020
13:15:05:040
13:15:05:060
I've seen suggestions for using the Stopwatch class, but that only measures time spans as opposed to looking for specific time stamps. The code to do this is running in its own thread, so it shouldn't be a problem if I need to make some relatively blocking calls.
Any suggestions on how to achieve this to a reasonable precision (close to or better than 1 ms would be nice) would be very gratefully received.
Don't know how well it plays with C++/CLR, but you probably want to look at multimedia timers.
Windows isn't really real-time, but this is as close as it gets.
You can get a pretty accurate time stamp out of timeGetTime() when you reduce the time period. You'll just need some work to get its return value converted to a clock time. This sample C# code shows the approach:
using System;
using System.Runtime.InteropServices;
class Program {
    static void Main(string[] args) {
        timeBeginPeriod(1);
        uint tick0 = timeGetTime();
        var startDate = DateTime.Now;
        uint tick1 = tick0;
        for (int ix = 0; ix < 20; ++ix) {
            uint tick2 = 0;
            do { // Burn 20 msec
                tick2 = timeGetTime();
            } while (tick2 - tick1 < 20);
            var currDate = startDate.Add(new TimeSpan((tick2 - tick0) * 10000));
            Console.WriteLine(currDate.ToString("HH:mm:ss:ffff"));
            tick1 = tick2;
        }
        timeEndPeriod(1);
        Console.ReadLine();
    }

    [DllImport("winmm.dll")]
    private static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern uint timeGetTime();
}
On second thought, this is just measurement. To get an action performed periodically, you'll have to use timeSetEvent(). As long as you use timeBeginPeriod(), you can get the callback period pretty close to 1 msec. One nicety is that it will automatically compensate when the previous callback was late for any reason.
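A minimal sketch of that timeSetEvent() approach could look like the following; the wrapper names and the 20 ms interval are only illustrative, so check the winmm documentation before relying on it:

using System;
using System.Runtime.InteropServices;

class MultimediaTimerSketch {
    // native callback signature used by timeSetEvent
    delegate void TimeProc(uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

    [DllImport("winmm.dll")] static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")] static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")] static extern uint timeSetEvent(uint delayMs, uint resolutionMs,
        TimeProc callback, UIntPtr user, uint flags);
    [DllImport("winmm.dll")] static extern uint timeKillEvent(uint timerId);

    const uint TIME_PERIODIC = 1;

    // keep a reference to the delegate so the GC cannot collect it while the
    // native timer is still calling back into it
    static readonly TimeProc Callback = (id, msg, user, dw1, dw2) =>
        Console.WriteLine(DateTime.Now.ToString("HH:mm:ss:fff"));

    static void Main() {
        timeBeginPeriod(1);
        uint timerId = timeSetEvent(20, 1, Callback, UIntPtr.Zero, TIME_PERIODIC);
        Console.ReadLine();          // let the periodic callbacks run until Enter is pressed
        timeKillEvent(timerId);
        timeEndPeriod(1);
    }
}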
Your best bet is using inline assembly and writing this chunk of code as a device driver.
That way:
You have control over instruction count
Your application will have execution priority
Ultimately you can't guarantee what you want because the operating system has to honour requests from other processes to run, meaning that something else can always be busy at exactly the moment you want your process to be running. But you can improve matters by using timeBeginPeriod to make it more likely that your process is switched to in a timely manner, and perhaps by being cunning with how you wait between iterations - e.g. sleeping for most but not all of the time and then using a busy-loop for the remainder.
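A rough sketch of that "sleep for most of the interval, then busy-wait" idea might look like this (the 20 ms interval, the 2 ms spin margin and all names are illustrative, and the cadence is relative to the loop start rather than to wall-clock boundaries):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

class HybridWaitSketch {
    [DllImport("winmm.dll")] static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")] static extern int timeEndPeriod(int period);

    static void Main() {
        const long intervalMs = 20;
        timeBeginPeriod(1);                     // ask for ~1 ms scheduler granularity
        try {
            var sw = Stopwatch.StartNew();
            long next = intervalMs;
            for (int i = 0; i < 10; i++) {
                long remaining = next - sw.ElapsedMilliseconds;
                if (remaining > 2)
                    Thread.Sleep((int)(remaining - 2));   // coarse wait, leave ~2 ms
                while (sw.ElapsedMilliseconds < next) { } // busy-loop for the remainder
                Console.WriteLine(DateTime.Now.ToString("HH:mm:ss:fff"));
                next += intervalMs;
            }
        }
        finally { timeEndPeriod(1); }
    }
}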
Try doing this in two threads. In one thread, use something like this to query a high-precision timer in a loop. When you detect a timestamp that aligns to (or is reasonably close to) a 20ms boundary, send a signal to your log output thread along with the timestamp to use. Your log output thread would simply wait for a signal, then grab the passed-in timestamp and output whatever is needed. Keeping the two in separate threads will make sure that your log output thread doesn't interfere with the timer (this is essentially emulating a hardware timer interrupt, which would be the way I would do it on an embedded platform).
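As an illustration of that two-thread arrangement (all names, the AutoResetEvent choice and the 20 ms boundary are assumptions, not taken from the linked code): a timer thread polls a Stopwatch and signals an event near each boundary, while the log output thread only waits for the signal:

using System;
using System.Diagnostics;
using System.Threading;

class TwoThreadSketch {
    static readonly AutoResetEvent Tick = new AutoResetEvent(false);

    static void Main() {
        var logger = new Thread(() => {
            while (true) {
                Tick.WaitOne();              // block until the timer thread signals a boundary
                Console.WriteLine(DateTime.Now.ToString("HH:mm:ss:fff"));
            }
        }) { IsBackground = true };
        logger.Start();

        var sw = Stopwatch.StartNew();       // high-resolution timer queried in a loop
        long next = 20;
        while (true) {
            if (sw.ElapsedMilliseconds >= next) {
                Tick.Set();                  // hand the boundary over to the log output thread
                next += 20;
            }
            Thread.SpinWait(100);            // brief pause between checks
        }
    }
}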
CreateWaitableTimer/SetWaitableTimer and a high-priority thread should be accurate to about 1ms. I don't know why the millisecond field in your example output has four digits, the max value is 999 (since 1000 ms = 1 second).
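A hedged P/Invoke sketch of that CreateWaitableTimer/SetWaitableTimer suggestion (parameter names and the 20 ms period are illustrative; error handling is omitted):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class WaitableTimerSketch {
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateWaitableTimer(IntPtr attrs, bool manualReset, string name);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetWaitableTimer(IntPtr timer, ref long dueTime, int periodMs,
        IntPtr completionRoutine, IntPtr arg, bool resume);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern uint WaitForSingleObject(IntPtr handle, uint timeoutMs);

    static void Main() {
        Thread.CurrentThread.Priority = ThreadPriority.Highest; // high-priority thread, as suggested
        IntPtr timer = CreateWaitableTimer(IntPtr.Zero, false, null);
        long dueTime = -200000;     // relative due time in 100 ns units: -200000 = 20 ms from now
        SetWaitableTimer(timer, ref dueTime, 20, IntPtr.Zero, IntPtr.Zero, false);
        for (int i = 0; i < 10; i++) {
            WaitForSingleObject(timer, 1000);                   // wakes once per 20 ms period
            Console.WriteLine(DateTime.Now.ToString("HH:mm:ss:fff"));
        }
    }
}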
Since, as you said, this doesn't have to be perfect, there are some things that can be done.
As far as I know, there isn't a timer that syncs with a specific time. So you will have to compute your next target time and schedule the timer for that specific time. If your timer only has delta support, then that is easily computed, but it adds more error, since you could easily be kicked off the CPU between the time you compute your delta and the time the timer is entered into the kernel.
As already pointed out, Windows is not a real-time OS. So you must assume that even if you schedule a timer to go off at ":0010", your code might not even execute until well after that time (for example, ":0540"). As long as you properly handle those issues, things will be "ok".
20 ms is approximately the length of a time slice on Windows. There is no way to hit 1 ms timings in Windows reliably without some sort of real-time add-on like INtime. In Windows proper, I think your options are WaitForSingleObject, SleepEx, and a busy loop.