Timer interrupts vs dummy loops [closed] - embedded

What are the advantages of using timer interrupts instead of dummy loops to implement time delays, in terms of architecture differences, programming issues and operating systems? Any help will be appreciated. Thanks in advance.

Once the hardware timer has been set up, it counts independently of whatever code the CPU is executing. The CPU can run another task, service interrupts, or maybe even go to sleep to conserve power while the timer is running. Then, when the timer interrupt occurs, the CPU wakes up and/or switches back to the waiting task to service the expiration of the timer. The duration of the timer is unaffected by whatever the CPU does while the timer is counting.
In a dummy loop the CPU is busy counting, so it can't switch to another task or go to sleep. And if the dummy loop is interrupted, its period will increase by the amount of time it takes to service the interrupt. In other words, the dummy loop is paused while the interrupt is being serviced.
The duration of the dummy loop can also be affected by compiler and/or linker options. For example, if you change the level of compiler optimization, the speed of the dummy loop could change. Or if the dummy loop function ends up in memory with a different number of wait states, the speed of the dummy loop could change. The hardware timer is immune to these changes.
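To make the contrast concrete, here is a minimal C sketch; the 1 ms tick, the ISR name and the wfi instruction are assumptions for an ARM-style microcontroller, not something given in the question:

#include <stdint.h>

static volatile uint32_t delay_ticks;    /* decremented by the timer ISR */

/* Busy-wait ("dummy loop") delay: the CPU does nothing useful, and the real
   duration shifts with clock speed, optimization level and any interrupts
   that pause the loop. */
void delay_busy(volatile uint32_t loops)
{
    while (loops--)
        ;                                 /* burn cycles */
}

/* Hypothetical ISR, assumed to be fired every 1 ms by a hardware timer. */
void timer_1ms_isr(void)
{
    if (delay_ticks)
        delay_ticks--;
}

/* Timer-based delay: the core can sleep between ticks, and the period is not
   stretched by other interrupts or changed by compiler settings. */
void delay_ms(uint32_t ms)
{
    delay_ticks = ms;
    while (delay_ticks)
        __asm__("wfi");                   /* assumed ARM "wait for interrupt" */
}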

Dummy loops keep the CPU constantly busy (incrementing a counter, or comparing the system timer against a threshold).
That CPU time is spent doing nothing, hence the name dummy loop.
On a multitasking OS that's bad, because it is time that could have been spent doing something else.
And if you are running a single task, or have nothing else to do, it is time that could have been spent in a low-power mode (besides being power friendly, this is very important on battery-powered devices).
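On a general-purpose OS the same point can be made with an ordinary blocking sleep; a small sketch using POSIX nanosleep (assuming a POSIX system), which parks the thread so the kernel can run other work or idle the core instead of letting it spin:

#include <time.h>

/* Delay without burning CPU: the scheduler suspends this thread until the
   requested time has elapsed, instead of letting it spin in a loop. */
static void sleep_ms(long ms)
{
    struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}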

Dummy loops depend on processor speed. When you use timer interrupts, you can respond to events faster and you reduce power consumption.

Related

Is it possible to set a Timer with condition in Mongoose OS?

I am familiar with using mgos_msleep(value) or mgos_usleep(value). However, using sleep is not good for the device.
Can someone suggest a better approach?
It depends on the architecture of the system and your use case.
The sleep call in Mongoose OS does not cause a busy wait. It tells the OS not to schedule the particular process until the sleep duration is over. The mgos_usleep function is a sleep with finer resolution (microseconds), which should have very little impact; again, this depends on your requirement / use case.
With respect to timers, Mongoose OS supports both software timers and hardware timers. Whether to use a software timer or a hardware timer again depends on your application requirements.
mgos_set_timer sets up a software timer with a millisecond timeout and the respective callback. The software timer frequency is specified in milliseconds and the number of software timers is not limited. The software timer callback is executed in the mongoose task context. This timer seems to have fairly low accuracy and high jitter.
mgos_set_hw_timer sets up a hardware timer with a microsecond timeout and the respective callback. The hardware timer callback is executed in ISR context, so what you can do in it is limited. Which hardware timers or counters are available depends on the processor you use, so you may need to check its datasheet. Accordingly, the number of hardware timers is limited, and the frequency is specified in microseconds.
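For illustration, a sketch of the software-timer path (the callback name is made up; check the mgos headers for the exact API on your Mongoose OS version):

#include "mgos.h"

/* Runs in the mongoose task context, not in an ISR. */
static void my_timer_cb(void *arg) {
  LOG(LL_INFO, ("software timer fired"));
  (void) arg;
}

enum mgos_app_init_result mgos_app_init(void) {
  /* Repeating 1000 ms software timer. */
  mgos_set_timer(1000 /* ms */, MGOS_TIMER_REPEAT, my_timer_cb, NULL);
  return MGOS_APP_INIT_SUCCESS;
}

mgos_set_hw_timer takes the same style of arguments with a microsecond timeout, but, as noted above, its callback runs in ISR context.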

What mechanism is used to account CPU usage for a process, particularly `sys` (time spent in kernel)

What is the mechanism used to account for cpu time, including that spent in-kernel (sys in the output of top)?
I'm thinking about limitations here because I remember reading about processes being able to avoid having their CPU usage show up if they yield before completing their time slice.
Context
Specifically, I'm working on some existing code in KVM virtualization.
if (guest_tsc < tsc_deadline)
__delay(tsc_deadline - guest_tsc);
The code is called with interrupts disabled. I want to know if Linux will correctly account for long busy-waits with interrupts disabled.
If it does, it would help me worry less about certain edge-case configurations which might cause long, but bounded, busy-waits. System administrators could at least notice if it was bad enough to degrade throughput (though not necessarily latency), and identify the specific process responsible (in this case QEMU, and the process ID would allow identifying the specific virtual machine).
In Linux 4.6, I believe process times are still accounted by sampling in the timer interrupt.
/*
* Called from the timer interrupt handler to charge one tick to current
* process. user_tick is 1 if the tick is user time, 0 for system.
*/
void update_process_times(int user_tick)
So it may indeed be possible for a process to game this approximation.
In answer to my specific query, it looks like CPU time spent with interrupts disabled will not be accounted to the specific process :(.

How does an OS work - multitasking [closed]

It's probably a really stupid question, but I'm learning about how operating systems work and I am confused. If the OS performs multitasking by switching from one process to another, then what about the OS itself? It's also a process, isn't it?
Thank You very much in advance!
The operating system kernel usually is not a process but rather is code that executes in kernel mode while running a process.
One sequence for switching processes might be:
Timer interrupt goes off while running process P.
The timer interrupt handler gets executed in kernel mode by P.
The interrupt handler invokes the scheduler that determines process Q should execute.
The scheduler executes a save-process-context instruction (or sequence), saving the state of P.
The scheduler executes a load process context instruction, loading the state of Q. As soon as that instruction finishes executing Q is the running process.
The interrupt handler exits, returning control to Q where it was last executing.
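A rough C sketch of that sequence; every name below is invented for illustration, and the real context save/restore is architecture-specific assembly:

/* Illustration only: struct task, save_context, load_context and
   scheduler_pick_next are stand-ins for what a real kernel provides. */
struct task { int id; /* saved registers, stack pointer, ... */ };

static struct task task_p = { 1 }, task_q = { 2 };
static struct task *current_task = &task_p;

static void save_context(struct task *t) { (void) t; /* push registers, save SP */ }
static void load_context(struct task *t) { (void) t; /* restore SP, pop registers */ }
static struct task *scheduler_pick_next(void) { return &task_q; }

/* Entered in kernel mode when the hardware timer fires while P is running. */
void timer_interrupt_handler(void)
{
    struct task *next = scheduler_pick_next();   /* step 3: scheduler decides Q runs */
    if (next != current_task) {
        save_context(current_task);              /* step 4: save P's state           */
        current_task = next;
        load_context(current_task);              /* step 5: load Q's state           */
    }
    /* step 6: returning from the interrupt resumes Q where it left off */
}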
An Operating System has a component called the Scheduler that performs the function of switching among the application and other system threads (tasks). The Scheduler is almost always part of the OS kernel image which typically runs on a dedicated hardware thread of the processor once the OS has been loaded into memory by the Bootloader.
After the Scheduler releases a task to execute, it waits for a signal from its interrupt-controller hardware to tell it when to preempt (stop) the running task and release another task for execution. The details of how this occurs depend on the scheduling algorithm (e.g. Round-Robin, Time-Slicing, Earliest-Deadline-First, etc.) that the OS designer chose to implement. An OS with a time-slicing kernel, for example, will use interrupts from a hardware timer as the wake-up call for its Scheduler.

What are some factors that could affect program runtime?

I'm doing some work on profiling the behavior of programs. One thing I would like to do is get the amount of time that a process has run on the CPU. I am accomplishing this by reading the sum_exec_runtime field in the Linux kernel's sched_entity data structure.
After testing this with some fairly simple programs which just execute a loop and then exit, I am running into a peculiar issue: the program does not finish with the same runtime each time it is executed. Since sum_exec_runtime is a value in nanoseconds, I would expect it to differ by only a few microseconds. However, I am seeing variations of several milliseconds.
My initial reaction was that this could be due to I/O waiting times, however it is my understanding that the process should give up the CPU while waiting for I/O. Furthermore, my test programs are simply executing loops, so there should be very little to no I/O.
I am seeking any advice on the following:
Is sum_exec_runtime not the actual time that a process has had control of the CPU?
Does the process not actually give up the CPU while waiting for I/O?
Are there other factors that could affect the actual runtime of a process (besides I/O)?
Keep in mind, I am only trying to find the actual time that the process spent executing on the CPU. I do not care about the total execution time including sleeping or waiting to run.
Edit: I also want to make clear that there are no branches in my test program aside from the loop, which simply loops for a constant number of iterations.
Thanks.
Your question is really broad, but you can incur context switches for various reasons. Calling most system calls involves at least one context switch. Page faults cause context switches. Exceeding your time slice causes a context switch.
sum_exec_runtime is equal to utime + stime from /proc/$PID/stat, but sum_exec_runtime is measured in nanoseconds. It sounds like you only care about utime which is the time your process has been scheduled in user mode. See proc(5) for more details.
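For example, a small program (assuming the proc(5) layout on Linux) that pulls utime and stime, fields 14 and 15 of /proc/self/stat, and converts them from clock ticks to seconds:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("/proc/self/stat", "r");
    if (!f)
        return 1;

    unsigned long utime, stime;
    /* Skip pid, comm, state and fields 4-13, then read utime (14) and stime (15).
       Note: this simple format string breaks if the command name contains spaces. */
    if (fscanf(f, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
               &utime, &stime) == 2) {
        long hz = sysconf(_SC_CLK_TCK);
        printf("utime=%.3fs stime=%.3fs\n",
               (double) utime / hz, (double) stime / hz);
    }
    fclose(f);
    return 0;
}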
You can look at nr_switches, both voluntary and involuntary, which are also part of sched_entity. That will probably account for most of the variation, but I would not expect successive runs to be identical. The exact time that you get for each run will be affected by all of the other processes running on the system.
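If the kernel was built with scheduler debugging enabled, /proc/$PID/sched exposes those counters directly; a quick sketch that prints the relevant lines (the exact field names can differ between kernel versions):

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/sched", "r");
    if (!f)
        return 1;                         /* file absent without sched debug info */

    char line[256];
    while (fgets(line, sizeof line, f)) {
        /* Print the runtime and context-switch accounting lines. */
        if (strstr(line, "sum_exec_runtime") || strstr(line, "switches"))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}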
You'll also be affected by the amount of file system cache used on your system and how many file system cache hits you get in successive runs if you are doing any IO at all.
To give a very concrete and obvious example of how other processes can affect the run time of the current process, think about what happens if you exceed your physical RAM constraints. If your program asks for more RAM, the kernel is going to spend more time swapping. That time spent swapping will be accounted in stime, but will vary depending on how much RAM you need and how much RAM is available. There are lots of other ways that other processes can affect your process's run time; this is just one example.
To answer your 3 points:
sum_exec_runtime is the actual time the scheduler ran the process including system time
If you count switching to the kernel as the process giving up the CPU, then yes; but that does not necessarily mean a different user process gets the CPU, as yours may get it back once the kernel is done.
I think I've already answered this: there are lots of factors.

An infinite loop executing in a single processor system

A process P1 is executing in infinite loop in a system which has only a single CPU. There are also other processes like P2, P3 which are waiting to gain the CPU, but are in wait queue as P1 is already executing.
The program is, something like:
int main(void)
{
    while (1)
        ;
}
So, what will be the end result? Will the system crash?
The probable answer is that the system won't crash and the other processes will still get to execute on the CPU, because every process has a specific time slice; after P1's time slice expires, the other waiting processes can gain the CPU.
But again, how will the kernel (OS) check that the time slice has expired, given that there is only one CPU and the process is running in an infinite loop? If that check has to happen, it needs the CPU to do it, and the CPU is already occupied by process P1 executing its infinite loop.
So what happens in this case?
It really depends on what operating system and hardware you are using. Interrupts can transfer the execution of code to another location (an interrupt handler). These interrupts can be software (code in the program can call these interrupt handlers) or hardware (the CPU receives a signal on one of its pins). On a motherboard you have a Programmable Interrupt Controller (PIC) that routes interrupts to the CPU, and a programmable hardware timer that can generate a steady stream of timer interrupts. The OS can use the timer interrupt handler to stop a running process and continue another one.
Again, it really depends on the OS, the hardware you are working with,... and without more specifics it's too general a question to give a definite answer.
The processor has something called interrupts. The OS (like Windows) tells the processor:
- Use this process for X time and then tell me
So the processor starts a timer and works on the process. When the time has passed, the processor raises an interrupt and tells the OS that the time is up. The OS then decides which process will run next.
Hope this answers your question.