Dilemma regarding switching time: process to process vs. process to kernel

T1 is the time taken to switch from user process p1 to user process p2, while T2 is the time taken to switch from process p1 to a kernel process. Which will be larger, T1 or T2? For me it should be T1. My logic is: when the CPU is taken from p1 and allocated to p2, the kernel first has to take control, save the PCB of p1, and load the PCB of p2. In the case of switching from p1 to the kernel, it just has to hand control to the kernel. Am I right or wrong?

I do not think you are right. A transition from a process to the kernel happens in response to some specific event, such as a system call. A transition from user mode to kernel mode within the same process does not require a context switch (the kernel code runs in the same process context), but a lot of other work still happens during the transition besides saving registers: table look-ups, retrieving the values passed from user mode, and checking the validity of parameters. I think the same or more work is performed when an interrupt occurs.
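If you want to get a feel for the numbers yourself, here is a rough sketch (Linux-specific, and only an illustration, not a rigorous benchmark) that compares a plain user-to-kernel mode switch (a trivial system call) with forced process-to-process context switches via a pipe ping-pong. The iteration count and helper names are my own choices, not something from the question:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/wait.h>

static double elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    enum { N = 100000 };
    struct timespec t0, t1;

    /* Mode switch only: a trivial system call, issued directly so the
       C library cannot serve it from a cached value. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        syscall(SYS_getpid);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("mode switch : ~%.0f ns per call\n", elapsed_ns(t0, t1) / N);

    /* Context switch: parent and child bounce one byte through two pipes,
       so every round trip forces at least two process switches. */
    int p2c[2], c2p[2];
    if (pipe(p2c) != 0 || pipe(c2p) != 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                  /* child: echo each byte back */
        char b;
        while (read(p2c[0], &b, 1) == 1)
            write(c2p[1], &b, 1);
        _exit(0);
    }

    char b = 'x';
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        write(p2c[1], &b, 1);
        read(c2p[0], &b, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("round trip  : ~%.0f ns (at least two context switches)\n",
           elapsed_ns(t0, t1) / N);

    close(p2c[1]);                      /* child's read() returns 0, it exits */
    wait(NULL);
    return 0;
}

On typical hardware the round trip is noticeably more expensive than the bare system call, which is consistent with the answer above.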


Process State Diagram

I learned that when an interrupt occurs, the process goes to the ready queue rather than to the blocked queue. However, in this picture the interrupted process has moved to the blocked queue (the circle with pink color). I'm confused about which case goes to the ready queue and which goes to the blocked queue.
Process management in general is much more complex than this. A task is often tied to one specific processor core, several tasks can be tied to the same core, and each of those tasks can be blocked waiting for I/O. This means that any task can be interrupted at any time by an interrupt triggered by a device controller, even if the task currently running on the core had nothing to do with that specific interrupt.
The diagram is thus incomplete: it doesn't take into account the complete process lifecycle. In your diagram, the process goes to the blocked queue if it is waiting for I/O (after a syscall like read()). It goes to the ready queue if it was preempted by the kernel so that another process could have some time on that core.
I think people often have the misconception that each process runs uninterrupted until completion. It cannot be that way; otherwise most processes would never get time on any core. Instead, if there are more processes than cores, the kernel uses the per-core local APIC timer (the local APIC is x86-64 specific, but every architecture has a similar mechanism) to give every process tied to that core a time slice. When a process is scheduled on a core, the kernel starts the timer with that process's time slice. When the time slice has elapsed, the local APIC triggers an interrupt letting the kernel know that another process should be scheduled on that core. This is why a process can be preempted in the middle of its execution: it is still considered ready to run, it is simply that its time slice was exhausted, so the kernel decides to give some time to another process. The preempted process will be given more time later. Since, in human terms, each time slice is very short, it gives the impression that every process runs continuously without interruption, when that is not really the case. (By the way, this diagram is very Linux kernel specific.)
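As a purely illustrative sketch of the time-slice bookkeeping described above (the struct and function names are invented; a real kernel such as Linux is far more involved):

/* A periodic timer interrupt charges the running task for one tick and
   requests a reschedule once its slice is used up. */
struct task {
    int slice_left;      /* ticks remaining in this time slice          */
    int need_resched;    /* set when the scheduler should run next      */
};

void timer_tick(struct task *current)
{
    if (current->slice_left > 0)
        current->slice_left--;
    if (current->slice_left == 0)
        current->need_resched = 1;   /* preempt on return from the interrupt */
}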

Blocked Processes

As far as I know, certain conditions must be satisfied for a process to continue running. If they are not satisfied, the processor blocks that process so as not to waste time. Once these conditions are satisfied, the process enters the ready state.
However, I came across a sentence like this in the book Modern Operating Systems by Andrew Tanenbaum: there are two types of processes, system processes and user processes. If the processor takes a disk interrupt while it is executing a user application, the system decides to stop running the current process and starts running the disk process. In this case, the application process is kept in the blocked state. After the disk has been read from or written to, the process waiting for it is unblocked.
I know that a process is blocked only when some requirement or condition is not satisfied. However, I suppose this sentence is trying to say that the disk process has higher precedence, and that is why the application process is blocked. Is precedence a factor in blocking a process?
What you are describing makes no sense. I have to wonder if this is an artifact of how you have quoted the book.
First of all, the processor does not block processes; the operating system does.
Second, I have not worked on an operating system that works anything like the way you describe here.
Usually, if a disk drive triggers an interrupt, the current process handles that interrupt. While in kernel mode, the operating system does whatever queuing is necessary for the disk operation. Only if the process's time slice is up does the process change; if not, after the interrupt is handled, the process picks up where it left off before the interrupt.
I cannot imagine a "modern" operating system that invokes a disk process to handle disk interrupts.
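For completeness, the usual way a process does end up in the blocked (waiting) state is by making a blocking system call itself. A minimal POSIX sketch (illustrative only): the parent blocks in read() until the child writes a byte, which is exactly the running -> waiting -> ready path from the earlier diagrams:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) != 0) { perror("pipe"); return 1; }

    if (fork() == 0) {          /* child */
        sleep(1);               /* the parent is blocked in read() meanwhile */
        write(fd[1], "x", 1);
        _exit(0);
    }

    char c;
    read(fd[0], &c, 1);         /* parent sleeps here (blocked) until data arrives */
    printf("unblocked, got '%c'\n", c);
    wait(NULL);
    return 0;
}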

Operating Systems - General Process Creation

Review Question
Consider the Program
#include <stdio.h>
#include <stdlib.h>   /* for exit() */

int main() {
    putchar('X');
    exit(0);
}
Suppose it is compiled and an a.out file is generated. Now suppose that a user in a local console window types a.out and hits the return key. What happens? Be sure to describe a plausible but detailed and comprehensive sequence of operating system actions and events, not just what the user sees.
My answer
First, the shell will create a process in User Space
Then it will perform the system call 'putchar', which simulates input, and the process will switch to kernel mode
It will then add the process (thread) to the long term scheduler where it will join the set of all processes that are ready to run
Once it is selected, it will move to the short term scheduler, where it will receive some processing time (ready -> running)
Since this process is an IO bound process, it will then head to the IO queue, where it will be stored in a buffer where it awaits execution (running -> waiting)
Once the IO is complete, the putchar call will print the X on the peripheral for which it is applied (the monitor) (waiting -> running)
Once the process returns to the short term scheduler it will again receive more processing time. Since there is nothing left to do but terminate, the process terminates (running -> terminated)
Is this a valid understanding? Am I missing some critical concepts of process creation? I know it is a relatively simple process, but please advise me of anything I am missing.
Thanks for reading, and thanks in advance for assistance.
First, the shell will create a process in User Space
// A lot of things happen before this!!
// The program will be loaded by the loader.
// VM areas will be created for this process.
// Linking of library files will be done.
// Then a series of page faults will occur to bring your program into physical memory and map it into virtual memory.
Then it will perform the system call 'putchar', which simulates input, and the process will switch to kernel mode
// putchar is not a system call at all!!!!
// putchar will call its C library implementation, which will in turn make a write() system call, and your program will trap into the kernel (see the sketch after this answer)
It will then add the process (thread) to the long term scheduler where it will join the set of all processes that are ready to run
// That totally depends on the scheduling algorithm.. it might be possible that your process is the first to run!!
Once it is selected, it will move to the short term scheduler, where it will receive some processing time (ready -> running)
// Right, waiting on the run queue (RunQ)
Since this process is an IO bound process, it will then head to the IO queue, where it will be stored in a buffer where it awaits execution (running -> waiting)
// Sort of: it will be waiting on the I/O queue, waiting for an interrupt, to write to the output device
Once the IO is complete, the putchar call will print the X on the peripheral for which it is applied (the monitor) (waiting -> running)
//Correct
Once the process returns to the short term scheduler it will again receive more processing time. Since there is nothing left to do but terminate, the process terminates (running -> terminated)
// Before this it will again trap into the kernel when your program executes the return statement.
// It will call back the startup function which was responsible for calling the main() function.
// Then the startup function will return 0 to the operating system, and hence the OS will kill this process and move it to the terminated state..
I still don't think this is a complete picture, as hundreds of machine instructions will be executed for this program and it's difficult to pinpoint each and every one..
But if you still have some doubt, post your comment!!
Hope this will help!!!
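To make the putchar-versus-write() point above concrete, here is a minimal sketch of roughly what the C library ends up doing for this program; it is an illustration, not the actual library implementation:

#include <unistd.h>

int main(void)
{
    /* The user -> kernel transition happens inside write():
       fd 1 is stdout, and one byte 'X' is handed to the kernel. */
    write(1, "X", 1);
    return 0;
}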

process states - new state & ready state

As the OS concepts book illustrates in the section "Process States":
A process has defined states: new, ready, running, waiting and terminated.
I am confused between the new and ready states. I know that in the ready state the process is allocated in memory, all resources needed at creation time have been allocated, and it is only waiting for CPU time (scheduling).
But what is the new state? What is the previous stage before it is allocated in memory?
Not all tasks submitted to the OS can be allocated memory immediately, so they have to remain in the new state. The decision as to when they move to the ready state is taken by the long-term scheduler. More info about the long-term scheduler here: http://en.wikipedia.org/wiki/Scheduling_(computing)#Long-term_scheduling
To be more precise, the new state is for processes which are just being created; they haven't been created fully and are still in their growing stage.
Whereas the ready state means that the process, whose information is stored in its PCB (Process Control Block), has got all the resources it requires for execution, but the CPU is not yet running that process's instructions.
I am giving you a simple example:
Say you have two processes. Process A is syncing your data over cloud storage and Process B is printing other data.
While Process B is still being created and entered into its PCB, the other process, Process A, has already been created and is simply not getting the chance to run because the CPU hasn't reached its instructions yet. Process B, on the other hand, still requires the printer to be found and other drivers to be checked; it must also verify the pages to be printed.
So here Process A has been created and is waiting for CPU time, hence it is in the ready state, whereas Process B is waiting for the printer to be initialised and the files to be examined for printing, hence it is in the new state (meaning it hasn't yet been successfully added to a PCB).
One more thing to guide you: for each process there is a Process Control Block (PCB), which stores the process-specific information (a simplified sketch follows at the end of this answer).
I hope this clears your doubt. Feel free to comment on whatever you don't understand...
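As promised above, here is a heavily simplified, purely hypothetical sketch of the kind of information a PCB might hold; the field names are invented for illustration, and a real kernel (for example Linux's task_struct) stores far more:

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int               pid;             /* process identifier              */
    enum proc_state   state;           /* new/ready/running/waiting/...   */
    void             *saved_regs;      /* CPU context saved on a switch   */
    void             *page_table;      /* address-space mapping           */
    int               open_files[16];  /* descriptors for open files      */
    struct pcb       *next;            /* link in a scheduler queue       */
};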

Who runs the scheduler in operating systems when CPU is given to user processes?

Suppose there are 10 processes P1, P2, ..., P10 that are scheduled to access the CPU using a round-robin policy.
Now, when process P1 is using the CPU and its current time slice has expired, P1 needs to be preempted and P2 needs to be scheduled. But since P1 is using the CPU, who preempts P1 and schedules P2?
We may say the scheduler does this, but how does the scheduler run when the CPU is held by P1?
It's exactly like jcoder said, but let me elaborate (and make it an answer instead of a comment).
Basically, when your OS boots, it initializes an interrupt vector table through which the CPU, upon a given interrupt, calls the appropriate interrupt handler.
Also during boot, the OS checks the available hardware and detects how many timers your board has.
Timers are simply hardware circuits that tick at a given clock speed and can be set to send an interrupt after a given time (each usually with a different precision, depending on its clock speed and other factors).
After the OS detects the timers, it sets one of them to, for example, send an interrupt every 50 ms. Now, every 50 ms the CPU will stop whatever it is doing and invoke that interrupt handler, which is usually the scheduler code; the scheduler in turn checks which process is currently running and decides whether or not to keep it, depending on the scheduling policy.
The scheduler, like most of the OS actually, is a passive thing that acts only when there's some event.
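To make the "who runs the scheduler" point concrete, here is a toy sketch of the mechanism described above; every name here (interrupt_vector, timer_isr, boot_init_interrupts, schedule) is invented for illustration and does not come from any real kernel:

#define TIMER_VECTOR 32

typedef void (*isr_t)(void);

static isr_t interrupt_vector[256];      /* filled in at boot time */

static void schedule(void)
{
    /* pick the next READY task and context-switch to it */
}

static void timer_isr(void)
{
    /* acknowledge the timer hardware here ... */
    schedule();                          /* may switch to another task */
}

void boot_init_interrupts(void)
{
    interrupt_vector[TIMER_VECTOR] = timer_isr;
    /* ... program the timer to fire every N milliseconds ... */
}

The point is that no user process ever "volunteers" the CPU: the timer interrupt forces control back into the kernel, and that is where the scheduler runs.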
Based on your question, where P1 needs to be preempted and P2 needs to be scheduled: this is the job of the CPU scheduler, the part of the operating system that continuously watches the running processes. Its responsibility is to select a process from among the processes in memory that are ready to execute, and to allocate the CPU to it.
CPU scheduling takes place when a process:
Switches from running to waiting state
Switches from running to ready state
Switches from waiting to ready
Terminates
The dispatcher module gives control of the CPU to the process selected by the CPU scheduler.
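As a small illustration of the round-robin policy from the question, here is a hypothetical sketch of how a scheduler might rotate a circular ready list when a time slice expires; the names and data structure are my own invention:

#include <stddef.h>

struct task {
    int          pid;
    struct task *next;          /* circular list of READY tasks */
};

static struct task *current_task;   /* task that owns the CPU right now */

/* Called (conceptually) from the timer interrupt when the time slice
   expires: rotate to the next task in the circular ready list. */
struct task *round_robin_pick_next(void)
{
    if (current_task != NULL)
        current_task = current_task->next;
    return current_task;             /* the dispatcher switches the CPU to this one */
}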