Process states: ready queue or blocked queue?

I learned that when an interrupt occurs, the process goes to the ready queue rather than to the blocked queue. However, in this picture, the interrupted process has moved to the blocked queue (the pink circle). I'm confused about which case goes to the ready queue and which goes to the blocked queue.

Process management in general is much more complex than this. A task is often tied to one specific processor core, several tasks can be tied to the same core, and each of those tasks can be blocked waiting for I/O. This means that any task can be interrupted at any time by an interrupt triggered by a device controller, even if the task currently running on the core had nothing to do with that specific interrupt.
The diagram is thus incomplete: it doesn't take the complete process lifecycle into account. In your diagram, the process goes on the blocked queue if it is waiting for I/O (after a syscall like read()). It goes to the ready queue if it was preempted by the kernel so that another process could get some time on that core.
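A tiny way to see those two transitions side by side, as a hedged toy model (the enum and helper names are mine, not any kernel's):

    /* Toy model of the two transitions described above: a read()-style
       wait sends the task to BLOCKED; kernel preemption sends it to
       READY. Names are illustrative. */
    #include <stdio.h>

    enum state { RUNNING, READY, BLOCKED };

    const char *name[] = { "RUNNING", "READY", "BLOCKED" };

    enum state on_io_wait(void) { return BLOCKED; } /* e.g. read() with no data yet */
    enum state on_preempt(void) { return READY;   } /* time slice used up */

    int main(void) {
        printf("read() while RUNNING    -> %s\n", name[on_io_wait()]);
        printf("preempted while RUNNING -> %s\n", name[on_preempt()]);
        return 0;
    }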
I think people often have the misconception that each process runs uninterrupted until completion. It cannot work that way, otherwise most processes would never get time on any core. Instead, if there are more processes than cores, the kernel uses the per-core local APIC timer (the local APIC is x86-64 specific, but every architecture has a similar mechanism) to give every process tied to that core a time slice.

When a process is scheduled on a core, the kernel starts the timer with that process's time slice. When the slice has elapsed, the local APIC triggers an interrupt letting the kernel know that another process should be scheduled on that core. This is why a process can be preempted in the middle of its execution: the process is still considered ready to run, it is simply that its time slice was exhausted, so the kernel decides to give some time to another process. The preempted process will be given more time later. Since, in human terms, each time slice is very short, it gives the impression that every process runs continuously without interruption, when that is not really the case. (By the way, this diagram is very Linux-kernel specific.)
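As a minimal sketch of that tick-driven preemption, assuming a toy three-task system (TIME_SLICE, timer_tick, and the round-robin pick are all illustrative, not any real kernel's API):

    /* Each timer interrupt burns one tick of the running task's slice;
       when the slice is gone, the task stays ready and the next task
       is picked. All names are illustrative. */
    #include <stdio.h>

    #define TIME_SLICE 5                /* ticks per slice, arbitrary */

    struct task { int pid; int remaining; };

    struct task tasks[3] = { {1, TIME_SLICE}, {2, TIME_SLICE}, {3, TIME_SLICE} };
    int current = 0;                    /* index of the running task */

    /* Called on every local APIC (or equivalent) timer interrupt. */
    void timer_tick(void) {
        if (--tasks[current].remaining > 0)
            return;                     /* slice not exhausted: resume the same task */
        tasks[current].remaining = TIME_SLICE;  /* task stays ready... */
        current = (current + 1) % 3;            /* ...but another one runs now */
        printf("preempt: now running pid %d\n", tasks[current].pid);
    }

    int main(void) {
        for (int t = 0; t < 20; t++)
            timer_tick();               /* simulate 20 timer interrupts */
        return 0;
    }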

Blocked Processes

As far as I know, certain conditions must be satisfied for a process to continue running. If they are not, the processor blocks that process so as not to waste time. Once those conditions are satisfied, the process enters the ready state.
However, I came across a sentence like this in the book Modern Operating Systems by Andrew Tanenbaum: there are two types of processes, system processes and user processes. If the processor takes a disk interrupt while executing a user application, the system decides to stop running the current process and starts running the disk process. In that case, the application process is kept in the blocked state. After the disk read or write finishes, the process waiting for it is unblocked.
I know that a process is blocked only when some requirement or condition is not satisfied. However, I suppose this sentence is trying to say that the disk process has higher precedence, and that is why the application process is blocked. Is precedence a factor in blocking a process?
What you are describing makes no sense. I have to wonder if this is the result of your quotation.
First of all, the processor does not block processes; the operating system does.
Second, I have not worked on an operating system that works anything like the way you describe here.
Usually, if a disk drive triggers an interrupt, the current process handles that interrupt: while in kernel mode, the operating system does whatever queuing is necessary for the disk operation. Only if the process's time slice is up does the running process change; otherwise, after interrupt handling, the process picks up where it left off before the interrupt.
I cannot imagine a "modern" operating system that invokes a disk process to handle disk interrupts.
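For contrast, here is a hedged sketch of the flow this answer describes: the completion interrupt borrows whatever task was on the CPU, moves the blocked task back to ready, and only forces a reschedule if the slice is up (io_request, need_resched, and the rest are illustrative assumptions, not a real kernel's API):

    #include <stdbool.h>
    #include <stddef.h>

    enum state { RUNNING, READY, BLOCKED };

    struct task { int pid; enum state st; };

    struct io_request { struct task *waiter; struct io_request *next; };

    struct io_request *disk_done_list;  /* completed requests, filled by hardware */
    bool need_resched;                  /* checked on return from the interrupt */

    /* Runs in kernel mode, in the context of the interrupted task. */
    void disk_interrupt(bool slice_expired) {
        for (struct io_request *r = disk_done_list; r; r = r->next)
            r->waiter->st = READY;      /* BLOCKED -> READY: back on the ready queue */
        disk_done_list = NULL;
        need_resched = slice_expired;   /* otherwise the interrupted task resumes */
    }

    int main(void) {
        struct task t = { 7, BLOCKED };
        struct io_request r = { &t, NULL };
        disk_done_list = &r;
        disk_interrupt(false);          /* simulate the disk IRQ arriving */
        return t.st == READY ? 0 : 1;   /* t is ready; no "disk process" involved */
    }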

Operating systems: how does a process move from a device (waiting) queue to the ready queue?

When a process is currently running on the CPU and suddenly has to wait for I/O, the scheduler saves its state (program counter, registers, ...) into its PCB, and then adds it to the queue of the device it is waiting on.
When does the process know to move from a waiting (device) queue to the ready queue?
And if my code calls Thread.Sleep(50000), does the process move to the waiting queue?
Thanks!
The terms you are using are all pedagogical. How this is done is entirely operating-system specific.
The process of going from unexecutable due to pending I/O to a ready-for-execution state varies among systems.
If you're doing blocking (synchronous) I/O, there can be only one blocking I/O call pending per process (or thread). When that call completes, the process should become executable. That transition would occur in the interrupt handler for the I/O request's completion.
On some systems, completion of I/O will boost the priority of the process (or thread). In such a system, the process will move ahead of other processes that are waiting because they used up their CPU quantum (as opposed to having yielded the CPU voluntarily).
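A toy model of that boost, assuming a simple additive bump applied at wakeup (the numbers and field names are assumptions, and real systems decay the boost as the task runs):

    #include <stdio.h>

    struct task { int pid; int base_prio; int boost; };

    /* Effective priority: higher number wins in this toy model. */
    int eff_prio(const struct task *t) { return t->base_prio + t->boost; }

    void on_io_complete(struct task *t) {
        t->boost = 2;                   /* temporary bump; a real OS decays it */
    }

    int main(void) {
        struct task woken   = { 1, 5, 0 };   /* just finished its I/O */
        struct task spinner = { 2, 5, 0 };   /* used up its CPU quantum */
        on_io_complete(&woken);
        printf("scheduler picks P%d first\n",
               eff_prio(&woken) > eff_prio(&spinner) ? woken.pid : spinner.pid);
        return 0;
    }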
Many process state changes occur during timer interrupt servicing. The OS schedules regular timer interrupts on the CPU. The timer interrupt handler typically looks for sleeping processes that need to be woken, I/O requests that have been queued for completion, and process switches that are due.
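An illustrative sketch of that sleep-queue scan (wake_at and jiffies are assumed names for this sketch, not a real OS API); it is roughly the mechanism that eventually ends a Thread.Sleep(50000):

    #include <stdio.h>

    enum state { READY, SLEEPING };

    struct task { int pid; enum state st; unsigned long wake_at; };

    struct task tasks[2] = { {1, SLEEPING, 3}, {2, SLEEPING, 7} };
    unsigned long jiffies;              /* tick counter */

    /* Called on each periodic timer interrupt. */
    void timer_interrupt(void) {
        jiffies++;
        for (int i = 0; i < 2; i++)
            if (tasks[i].st == SLEEPING && tasks[i].wake_at <= jiffies) {
                tasks[i].st = READY;    /* deadline passed: onto the ready queue */
                printf("tick %lu: pid %d woken\n", jiffies, tasks[i].pid);
            }
    }

    int main(void) {
        for (int t = 0; t < 10; t++)
            timer_interrupt();          /* pids 1 and 2 wake at ticks 3 and 7 */
        return 0;
    }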

Can a process terminate after I/O without returning to the CPU?

I have a question about the following diagram from Operating System Concepts: http://unboltingbinary.in/wp-content/uploads/2015/04/image028.jpg
This diagram seems to imply that after every I/O operation, the process is placed back on the ready queue before being sent to the CPU again. However, is it possible for a process to terminate after I/O but before being sent to the ready queue?
Suppose we have a program that computes a number and then writes it to storage. In this case, does the process really need to return to the CPU after the I/O operation? It seems to me that the process should be allowed to terminate right after I/O. That way, there would be no need for a context switch.
Once one process has successfully executed a termination request on another, the threads of the terminated process should never run again, no matter what state they were in (blocked on I/O, blocked on inter-thread comms, running on a core, sleeping, whatever): any that are running must be stopped immediately, and all must be put in a state from which they will never run again.
Anything else would be a security issue: terminated threads should not be given execution at all (otherwise it might not be possible to terminate the process).
Process termination requires the CPU. Changes to kernel-mode structures on process exit, returning memory resources, and so on all require the CPU.
A process does not simply evaporate. The term you want here is process rundown, I think.
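A hedged sketch of why rundown needs the CPU: the exit path itself is kernel-mode work done on behalf of the dying process (do_exit and the struct fields here are illustrative, not a real kernel's):

    #include <stdio.h>

    struct task { int pid; int open_files; long pages; };

    void do_exit(struct task *t) {
        /* All of this is CPU work done in kernel mode on behalf of t. */
        printf("pid %d: closing %d files\n", t->pid, t->open_files);
        t->open_files = 0;
        printf("pid %d: freeing %ld pages\n", t->pid, t->pages);
        t->pages = 0;
        /* Mark as a zombie until the parent reaps the exit status,
           then the scheduler picks another task; t never runs again. */
        printf("pid %d: zombie, scheduling away\n", t->pid);
    }

    int main(void) {
        struct task t = { 42, 3, 128 };
        do_exit(&t);
        return 0;
    }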

How does the processor know to switch to a higher-priority process?

I read that the process scheduler will replace the process currently being executed by the CPU with a higher-priority process. At any point, only one process is being executed by the processor; in that case, where is the scheduler running to notify the CPU about the high-priority process while the CPU is busy executing the low-priority one?
The process scheduler is the component of the operating system that is responsible for deciding whether the currently running process should continue running and, if not, which process should run next.
To help the scheduler monitor processes and the amount of CPU time that they use, a programmable interval timer interrupts the processor periodically (typically 50 or 60 times per second). This timer is programmed when the operating system initializes itself. At each interrupt, the operating system’s scheduler gets to run and decide whether the currently running process should be allowed to continue running or whether it should be suspended and another ready process allowed to run. This is the mechanism used for preemptive scheduling.
So, basically, the process scheduler sits in main memory like the rest of the kernel, but it only runs when it is invoked by an interrupt. Hence, it is not running all the time.
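A runnable user-space analogy of that periodic timer, using standard POSIX setitimer and SIGALRM (the 50 Hz rate mirrors the 50-60 interrupts per second mentioned above; in a kernel, the scheduler would run where the handler does):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks;

    static void on_tick(int sig) {
        (void)sig;
        ticks++;                        /* a kernel would run the scheduler here */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_tick;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval it = {
            .it_interval = { .tv_sec = 0, .tv_usec = 20000 },   /* 50 Hz */
            .it_value    = { .tv_sec = 0, .tv_usec = 20000 },
        };
        setitimer(ITIMER_REAL, &it, NULL);  /* program the "hardware" timer once */

        while (ticks < 10)
            pause();                    /* the "process" runs between ticks */
        printf("observed %d timer interrupts\n", (int)ticks);
        return 0;
    }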
BTW, that was a great conceptual question to answer. Best wishes for your topic.
The higher-priority thread/process will preempt the lower-priority thread when an interrupt causes the scheduler to run to decide which set of threads to run next, and the scheduling algorithm decides that the lower-priority thread should be replaced by the higher-priority one.
Interrupts come in two flavours:
Software interrupts (syscalls) from threads that are already running, which change the state of other threads, e.g. by signaling an event, mutex, or semaphore upon which another thread is waiting, so making it ready to run (see the sketch after this list).
Hardware interrupts that cause a driver to run; the driver may invoke the scheduler on exit because an I/O operation has completed or some timeout interval has expired and the set of running threads needs to change (e.g. disk, NIC, keyboard, mouse, timer).
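A runnable example of the first flavour: sem_post() is a syscall from a running thread that makes a thread blocked in sem_wait() ready to run again (standard POSIX semaphores and pthreads; compile with -pthread on Linux):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t sem;

    static void *waiter(void *arg) {
        (void)arg;
        sem_wait(&sem);     /* thread is BLOCKED here, off the ready queue */
        printf("waiter: signaled, running again\n");
        return NULL;
    }

    int main(void) {
        pthread_t t;
        sem_init(&sem, 0, 0);
        pthread_create(&t, NULL, waiter, NULL);
        sleep(1);           /* let the waiter block */
        sem_post(&sem);     /* the syscall that makes the waiter READY */
        pthread_join(&t, NULL);
        sem_destroy(&sem);
        return 0;
    }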

Who runs the scheduler in operating systems when CPU is given to user processes?

Suppose there are 10 processes P1, P2, ..., P10, scheduled to access the CPU by the scheduler using a round-robin policy.
Now, when process P1 is using the CPU and the current time slice expires, P1 needs to be preempted and P2 needs to be scheduled. But since P1 is using the CPU, who preempts P1 and schedules P2?
We may say the scheduler does this, but how does the scheduler run when the CPU is held by P1?
It's exactly like jcoder said, but let me elaborate (and make an answer instead of a comment).
Basically, when your OS boots, it initializes an interrupt vector through which the CPU, upon a given interrupt, calls the appropriate interrupt handler.
The OS, also during boot, will check the available hardware and detect that your board has some number of timers.
Timers are simply hardware circuits that tick at a given clock speed and can be set to send an interrupt after a given time (each usually with a different precision, depending on its clock speed and other factors).
After the OS detects the timers, it sets one of them to, for example, send an interrupt every 50 ms. Now, every 50 ms, the CPU will stop whatever it's doing and invoke that interrupt's handler, usually the scheduler code, which in turn checks which process is currently running and decides whether to keep it, depending on the scheduling policy.
The scheduler, like most of the OS actually, is a passive thing that acts only when there's some event.
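A bare-metal-flavoured sketch of that boot-time setup (vector_table, timer_isr, and the vector number are illustrative assumptions, not any real board's layout):

    #include <stdio.h>

    #define TIMER_VECTOR 0

    typedef void (*isr_t)(void);

    static isr_t vector_table[256];     /* CPU jumps through this on interrupts */

    static void schedule(void) {
        printf("scheduler: deciding who runs next\n");
    }

    static void timer_isr(void) {
        schedule();                     /* the 50 ms tick hands control to the scheduler */
    }

    static void init_interrupts(void) {
        vector_table[TIMER_VECTOR] = timer_isr;    /* done once at boot */
    }

    int main(void) {
        init_interrupts();
        for (int i = 0; i < 3; i++)
            vector_table[TIMER_VECTOR]();   /* simulate three timer interrupts */
        return 0;
    }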
Regarding your question (P1 needs to be preempted and P2 needs to be scheduled): there is a CPU scheduler, a part of the operating system that continuously watches the running processes, whose responsibility is to select a process from among the processes in memory that are ready to execute and to allocate the CPU to it.
CPU scheduling takes place when a process:
Switches from the running to the waiting state
Switches from the running to the ready state
Switches from the waiting to the ready state
Terminates
The dispatcher module then gives control of the CPU to the process selected by the CPU scheduler.
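Putting it together for the P1..P10 case, a minimal round-robin sketch: the timer interrupt ends P1's slice, the scheduler picks P2 from the ready queue, and the dispatcher switches to it (everything here is illustrative):

    #include <stdio.h>

    #define NPROC 10

    int ready[NPROC];       /* circular ready queue holding P1..P10 */
    int head;               /* index of the currently running process */

    /* Invoked from the timer interrupt when the slice expires. */
    void on_time_slice_expired(void) {
        int prev = ready[head];
        head = (head + 1) % NPROC;          /* scheduler: pick the next process */
        printf("dispatch: P%d -> P%d\n", prev, ready[head]);
    }

    int main(void) {
        for (int i = 0; i < NPROC; i++)
            ready[i] = i + 1;               /* P1..P10 */
        for (int t = 0; t < 12; t++)
            on_time_slice_expired();        /* simulate 12 slice expirations */
        return 0;
    }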