How does an OS work - multitasking [closed] - process

It's probably a really stupid question, but I'm learning about how an Operating System works and I am confused. If the OS performs multitasking by switching from one process to another, then what about the OS itself? It's also a process, isn't it?
Thank You very much in advance!

The operating system kernel usually is not a process but rather is code that executes in kernel mode while running a process.
One sequence for switching processes might be:
Timer interrupt goes off while running process P.
The timer interrupt handler gets executed in kernel mode by P.
The interrupt handler invokes the scheduler that determines process Q should execute.
The scheduler executes a save-process-context instruction, saving the state of P.
The scheduler executes a load-process-context instruction, loading the state of Q. As soon as that instruction finishes executing, Q is the running process.
The interrupt handler exits, returning control to Q where it was last executing.
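To get a feel for what steps 5 and 6 do, here is a minimal user-space sketch using the POSIX ucontext API; swapcontext saves the current register state and loads another, which is roughly what the save/load process context steps do inside the kernel (a real kernel uses privileged instructions or hand-written assembly, so treat this only as an analogy):

#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_p, ctx_q;        /* saved "process contexts" */

/* Pretend this is process Q getting the CPU. */
static void run_q(void)
{
    puts("Q is now the running process");
    swapcontext(&ctx_q, &ctx_p);       /* save Q's state, reload P's */
}

int main(void)
{
    static char q_stack[64 * 1024];

    /* Build a context for Q: its own stack and entry point. */
    getcontext(&ctx_q);
    ctx_q.uc_stack.ss_sp = q_stack;
    ctx_q.uc_stack.ss_size = sizeof q_stack;
    ctx_q.uc_link = &ctx_p;            /* where to resume if Q returns */
    makecontext(&ctx_q, run_q, 0);

    puts("P is running; the 'scheduler' decides to switch to Q");
    swapcontext(&ctx_p, &ctx_q);       /* save P's state, load Q's state */

    puts("back in P after the switch back");
    return 0;
}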

An Operating System has a component called the Scheduler that performs the function of switching among the application and other system threads (tasks). The Scheduler is almost always part of the OS kernel image which typically runs on a dedicated hardware thread of the processor once the OS has been loaded into memory by the Bootloader.
After the Scheduler releases a task to execute, it waits for a signal from its interrupt-controller hardware to tell it when to preempt (stop) the running task and release another task for execution. The details of how this occurs depend on the scheduling algorithm (e.g. Round-Robin, Time-Slicing, Earliest-Deadline-First, etc.) that the OS designer chose to implement. An OS with a time-slicing kernel, for example, will use interrupts from a hardware timer as the wake-up call for its Scheduler.
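As a purely illustrative sketch of the time-slicing idea (not any particular kernel's code, and the names are made up), each timer interrupt could decrement the running task's remaining slice and, when it hits zero, hand the CPU to the next ready task in round-robin order:

#include <stdio.h>

#define NUM_TASKS   3
#define SLICE_TICKS 4           /* hypothetical time-slice length */

struct task {
    const char *name;
    int ticks_left;             /* ticks remaining in the current slice */
};

static struct task tasks[NUM_TASKS] = {
    { "A", SLICE_TICKS }, { "B", SLICE_TICKS }, { "C", SLICE_TICKS }
};
static int current = 0;

/* Called from the timer interrupt handler on every tick (simulated here). */
static void scheduler_tick(void)
{
    if (--tasks[current].ticks_left == 0) {
        tasks[current].ticks_left = SLICE_TICKS;  /* refill its slice */
        current = (current + 1) % NUM_TASKS;      /* round robin: next task */
        printf("preempt -> now running task %s\n", tasks[current].name);
    }
}

int main(void)
{
    /* Simulate 12 timer interrupts. */
    for (int tick = 0; tick < 12; tick++)
        scheduler_tick();
    return 0;
}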

Related

What is "Interrupt" for transition of a process from running to ready?

Here is the process state diagram from Modern Operating Systems. The transition from running to ready happens when the scheduler picks another process.
Here is the process state diagram from Operating System Concepts.
What does "Interrupt" mean for transition from running to ready? Is it the same as "the scheduler picks another process" in the above?
Thanks.
There are two ways for a process to transition from the running state to the ready state, depending on how the OS implements multitasking:
With preemptive multitasking, the OS uses timer interrupts (there is one timer for each core or processor in the system) to regularly interrupt whatever process is currently running. The interrupt handler then invokes the OS scheduler to determine whether to schedule another process or continue running the same process. If the scheduler decides to run another process, then the current process transitions from the running state to the ready state.
With cooperative multitasking, the OS does not use interrupts to schedule processes. Instead, a running process must voluntarily yield control to the scheduler to allow it to schedule another process. So processes do not transition between the running and ready states via interrupts, but only voluntarily.
It seems to me that the figure from the Modern Operating Systems book applies to both multitasking methods, while the figure from Operating System Concepts is specifically about preemptive multitasking. If the word "interrupt" were changed to something more inclusive like "yield," though, the latter figure would also apply to cooperative multitasking.
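A tiny sketch of that running-to-ready transition (names invented for illustration); whether switch_out is called from a timer interrupt handler (preemptive) or from an explicit yield path (cooperative), the state change itself is the same:

#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct process {
    const char *name;
    enum proc_state state;
};

/* The running -> ready transition from the state diagrams: the process
 * loses the CPU but is still runnable.  The caller may be a timer
 * interrupt handler (preemptive) or a yield path (cooperative). */
static void switch_out(struct process *p)
{
    if (p->state == RUNNING) {
        p->state = READY;
        printf("%s: RUNNING -> READY\n", p->name);
    }
}

int main(void)
{
    struct process p1 = { "P1", RUNNING };
    switch_out(&p1);   /* e.g. the scheduler picked another process */
    return 0;
}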

OS Context Switch in ISR

I am just eager to know how an OS actually does a context switch when some asynchronous event raises an ISR that makes a higher-priority task ready to run. As far as I know, when the CPU enters an ISR it puts some of the register values on the hardware stack, so how does the scheduler retrieve those values and put them on the task stack? Does it access the hardware stack in order to copy values that are already preserved? I hope I was clear.
Thanks in advance.
On a Cortex-M3 processor you have the MSP (Main Stack Pointer - which is your hardware stack) and the PSP (Process Stack Pointer - which is your task stack).
On entry to an exception the stack frame is stored on the current PSP stack (in normal, non-nested operation). The exception handler then switches to the MSP stack; however, it can still access the PSP stack, so it can store any remaining registers etc. on that same PSP stack, as well as any other task information it needs.
The exception handler can then select the new high-priority task, switch the PSP to that task's stack, and restore the registers it needs. It then leaves the PSP in exactly the same state as when the task was suspended, so that on return from the exception the rest of the stack is correctly restored.
It is more complex than this in certain situations, but that is the basic operation (on ARM Cortex-M). It will be different on other processors.
I would recommend downloading FreeRTOS and looking at the various different port layers. There is a port for pretty much everything there, and the low level task switching stuff in the "portable" directories is fairly small and straightforward.
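For reference, the part of the context that Cortex-M hardware pushes onto the active stack (normally the PSP for a task) on exception entry can be pictured as a C struct; the struct name is mine, but the layout matches the basic, non-FPU stack frame:

#include <stdint.h>
#include <stdio.h>

/* Basic Cortex-M exception stack frame, pushed by hardware on exception
 * entry.  An RTOS port then saves r4-r11 (and any FPU registers) on top
 * of this by software before switching the PSP to another task's stack. */
struct hw_stack_frame {
    uint32_t r0;
    uint32_t r1;
    uint32_t r2;
    uint32_t r3;
    uint32_t r12;
    uint32_t lr;    /* link register of the interrupted code */
    uint32_t pc;    /* return address */
    uint32_t xpsr;  /* program status register */
};

int main(void)
{
    /* 8 words = 32 bytes stacked by hardware on every exception entry */
    printf("hardware frame size: %zu bytes\n", sizeof(struct hw_stack_frame));
    return 0;
}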
As I'm not quite sure what the scope of your question is, I'll try and summarize some concepts of preemptive scheduling:
There's one stack per task. For each stack, there's a stack pointer pointing to it. So basically, for the task switch, the current stack pointer is saved and the next task's stack pointer is loaded. Interestingly, the return from OS to the task's code is then done via a RETURN instruction, and not a JUMP or CALL like one might expect.
When an ISR interrupts a running task, it will not run another task itself. As you correctly said, it only makes a task runnable (taking it out of the waiting state), so that, in the next scheduling cycle, the OS can consider the now-ready task for further execution. (If and when that task runs depends on its assigned priority; if it has a very high priority, the OS may try and make sure it runs before any other, lower-priority task gets switched to.)
The actual task switching only occurs after the ISR finished and returned, so there's no need to copy anything from one stack to another.
In 'simple' implementations, the ISR may just return to the task it interrupted, so that no early, 'out-of-order' context switch will occur.
Another, more complex implementation can have the ISR return to the OS instead of the interrupted task. A function like yield() would thus be called, giving the OS the chance to do a task switch immediately if necessary.
This, however, may require that affected ISRs get special exit instructions appended, replacing the normal compiler-generated ISR code.
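Here is a hedged sketch of that pattern with invented names: the ISR only marks the woken task ready and requests a reschedule; the actual switch happens afterwards, once the ISR has returned and the scheduler runs:

#include <stdbool.h>
#include <stdio.h>

enum state { READY, RUNNING, WAITING };

struct task {
    const char *name;
    enum state  state;
    int         priority;      /* higher number = higher priority */
};

static struct task uart_task = { "uart", WAITING, 5 };
static struct task idle_task = { "idle", RUNNING, 0 };
static struct task *current  = &idle_task;
static bool need_resched = false;

/* Hypothetical ISR: it does NOT switch tasks itself. */
static void uart_rx_isr(void)
{
    uart_task.state = READY;          /* wake the task waiting on the UART */
    if (uart_task.priority > current->priority)
        need_resched = true;          /* ask for a switch after ISR exit */
}

/* Runs after the ISR has returned (e.g. from a yield()-like hook). */
static void schedule(void)
{
    if (need_resched && uart_task.state == READY) {
        need_resched = false;
        current->state = READY;
        uart_task.state = RUNNING;
        current = &uart_task;
        printf("switched to %s\n", current->name);
    }
}

int main(void)
{
    uart_rx_isr();   /* asynchronous event arrives */
    schedule();      /* context switch happens only after the ISR */
    return 0;
}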

Timer interrupts vs Dummy loops [closed]

What are the advantages of using timer interrupts instead of dummy loops to implement time delays in terms of differentiating architectures, programming issues and operating systems? Any help will be appreciated. Thanks in advance.
Once the hardware timer has been setup, it counts independently from whatever code the CPU is executing. The CPU can run another task, service interrupts, or maybe even go to sleep to conserve power while the timer is running. Then when the timer interrupt occurs the CPU will wake up and/or switch back to the waiting task to service the expiration of the timer. The duration of the timer is unaffected by whatever the CPU does while the timer is counting.
In a dummy loop the CPU is busy counting so it can't switch to another task or go to sleep. And if the dummy loop is interrupted then the period will increase by the amount of time it takes to service the interrupt. In other words the dummy loop is paused while the interrupt is being serviced.
The duration of the dummy loop can be affected by compiler and/or linker options. For example if you change the level of compiler optimizations then the speed of the dummy loop could change. Or if the dummy loop function gets located in different memory with a different number of wait states then the speed of the dummy loop could change. The hardware timer would be immune to these changes.
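A user-space illustration of the difference (POSIX, with hypothetical constants): the first delay burns CPU in a loop whose length depends on optimization level and clock speed, while the second asks the OS, which programs a hardware timer underneath, to wake the thread later, leaving the CPU free to run other tasks or sleep:

#include <stdio.h>
#include <time.h>

/* Dummy loop: the duration depends on compiler optimization and CPU speed,
 * and the CPU is 100% busy for the whole delay. */
static void delay_busy(volatile unsigned long count)
{
    while (count--)
        ;   /* burn cycles */
}

/* Timer-based delay: the OS programs a hardware timer and can run other
 * tasks or idle the CPU until it expires. */
static void delay_ms(long ms)
{
    struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}

int main(void)
{
    delay_busy(50 * 1000 * 1000);   /* some delay -- how long, exactly? */
    delay_ms(100);                  /* 100 ms, independent of CPU speed */
    puts("done");
    return 0;
}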
Dummy loops require the CPU to work constantly (incrementing a counter, or comparing the system timer against a threshold).
That CPU time is spent doing nothing, hence the name dummy loop.
On a multitasking OS this is bad, because it is time that could have been spent doing something else.
And if you're running a single task or don't have anything else to do, it's time that could have been spent in a low-energy mode (besides being power friendly, this is very important on battery-powered devices).
Dummy loops depend on processor speed. When you use timer interrupts, you respond to events faster and save power.

An infinite loop executing in a single processor system

A process P1 is executing an infinite loop in a system which has only a single CPU. There are also other processes, like P2 and P3, which are waiting to gain the CPU but are in the wait queue because P1 is already executing.
The program is something like:
int main()
{
    while (1);
}
So, what will be the end result? Will the system crash?
The probable answer is that the system won't crash and the other processes can execute on the CPU, because every process has a specific time slice; after P1's time slice expires, the other waiting processes can gain the CPU.
But again, how will the kernel (OS) check that the time slice has expired, since there is only one CPU and the process is running an infinite loop? If checking has to happen, it needs the CPU to do it, and the CPU is already occupied by process P1, which is executing an infinite loop.
So what happens in this case?
It really depends on what operating system and hardware you are using. Interrupts can transfer the execution of code to another location (an interrupt handler). These interrupts can be software (code in the program can call these interrupt handlers) or hardware (the CPU receives a signal on one of its pins). On the motherboard there is something called a Programmable Interrupt Controller (PIC), through which a hardware timer can deliver a steady stream of interrupts (timer interrupts). The OS can use the timer interrupt handler to stop a running process and continue another one.
Again, it really depends on the OS, the hardware you are working with,... and without more specifics it's too general a question to give a definite answer.
The processor has something called interrupts. The OS (like Windows) tells the processor:
- run this process for X time and then tell me
So the processor starts a timer and works on the process. When the time passes, the processor raises an interrupt and tells the OS that the time has passed. The OS then decides which process will run next.
Hope this answers your question.
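You can watch the same mechanism from user space on a POSIX system: the process below spins in while(1), yet the kernel's timer still interrupts it and delivers SIGALRM on schedule, which is a (much simplified) analogue of the timer interrupt that lets the scheduler preempt P1:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_tick(int sig)
{
    (void)sig;
    ticks++;                        /* async-signal-safe bookkeeping only */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    /* Fire SIGALRM every 100 ms -- a stand-in for the scheduler's timer tick. */
    struct itimerval it = {
        .it_interval = { 0, 100 * 1000 },
        .it_value    = { 0, 100 * 1000 },
    };
    setitimer(ITIMER_REAL, &it, NULL);

    while (1) {                     /* the "infinite loop" from the question */
        if (ticks >= 10) {          /* ...still gets interrupted regularly */
            puts("the busy loop was interrupted 10 times");
            exit(0);
        }
    }
}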

Linux Kernel Code Execution Contexts

When a process executing in user space issues a system call or triggers an exception, it enters kernel space and the kernel starts executing on behalf of the process. The kernel is then said to be executing in process context. Similarly, when an interrupt occurs, the kernel executes in interrupt context. I have also read about kernel execution in kernel threads, where kernel tasks run in the background.
My Questions are :
Does the kernel execute in any other contexts?
Suppose a process in user space never executes a system call or triggers an exception, and no interrupt occurs: does kernel code ever execute?
The kernel runs periodically: it sets a timer to fire an interrupt at some predefined frequency (100 Hz on Linux 2.4/x86, 1000 Hz on early Linux 2.6/x86, 250 Hz on newer Linux 2.6/x86).
The kernel needs to do this in order to do preemptive multitasking. OTOH, OSes doing only cooperative multitasking (Windows 3.1, classic Mac OS) needn't do this, and only switch tasks in response to some call from the running task (which could lead to runaway tasks hanging the whole system).
Note that there is some effort to optimize the use of this timer: newer Linux is smarter when there are no runnable tasks; it sets the timer as far in the future as it can, to allow the CPU to sleep longer and deeper and preserve power (the CONFIG_NOHZ kernel config option). Running powertop will show the number of wakeups per second, which on an idle system can be much lower than the 250 wakeups per second you'd expect of a traditional implementation.
Suppose a process in user space never executes a system call or triggers an exception, and no interrupt occurs: does kernel code ever execute?
Assume you have a process p that is running the following code: while(1);. This code will never call into the kernel and won't cause any faults. (It might have set an alarm(3) earlier, causing a signal to be delivered in the future, or it might exceed the setrlimit(2) CPU limit, in which case the kernel will deliver a signal to the process.)
Or, if another process sends p a signal via kill(2), the kernel will deliver that signal to the process as well.
The signal delivery will either cause a signal handler to run, do nothing (if the signal is ignored or masked), or take the default signal action (which might be nothing or termination).
And, of course, the process execution can be interrupted so the processor can handle interrupts; or a higher-priority process can preempt it.
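As a small demonstration of the setrlimit(2) case mentioned above: the process below does nothing but spin in user space, yet the kernel still steps in once the 1-second CPU soft limit is exceeded and delivers SIGXCPU:

#include <signal.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

static void on_xcpu(int sig)
{
    (void)sig;
    const char msg[] = "CPU limit exceeded, kernel delivered SIGXCPU\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
    _exit(0);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_xcpu;
    sigaction(SIGXCPU, &sa, NULL);

    /* Soft limit of 1 CPU second; when exceeded the kernel sends SIGXCPU. */
    struct rlimit rl = { .rlim_cur = 1, .rlim_max = 2 };
    setrlimit(RLIMIT_CPU, &rl);

    while (1)
        ;   /* pure user-space loop: no system calls, no faults */
}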