Which process is responsible for managing process management?

I was learning the process management concept in operating systems. Before that, I came to know that a processor can run only one process at a time, and that to handle multiple processes we have process management.
At the very basic level, a process is just an instance of a program (code), and the process management program is also code. Hence process management is itself a process.
So how does this process management program run along with the other processes?
Note: I am assuming that the CPU can only run a single process at a time.

Related

Process State

I learned that when an interrupt occurs, the process goes to the ready queue rather than to the blocked queue. However, in this picture, the interrupted process has moved to the blocked queue (the pink circle). I'm confused about which case goes to the ready queue and which goes to the blocked queue.
Process management in general is much more complex than this. A task is often tied to one specific processor core, several tasks are tied to the same core, and each of these tasks can be blocked waiting for I/O. This means that any task can be interrupted at any time by an interrupt triggered by a device controller, even if the task currently running on the core had nothing to do with that specific interrupt.
The diagram is thus incomplete: it doesn't take into account the complete process lifecycle. In your diagram, the process goes to the blocked queue if it is waiting for I/O (after a syscall like read()). It goes to the ready queue if it was preempted by the kernel so that another process could have some time on that core.
I think people often have the misconception that each process runs all the time until completion. It cannot be that way, otherwise most processes would never get time on any core. Instead, if there are more processes than cores, the kernel uses the per-core local APIC timer (the local APIC is x86-64-specific, but every architecture has a similar mechanism) to give every process tied to that core a time slice. When a process is scheduled on a core, the kernel starts the timer with that process's time slice. When the time slice has elapsed, the local APIC triggers an interrupt, letting the kernel know that another process should be scheduled on that core. This is why a process can be preempted in the middle of its execution.
The process is still considered ready to run; it is simply that its time slice was exhausted, so the kernel decided to give some time to another process. The preempted process will be given more time later. Since, in human terms, each time slice is very short, it gives the impression that every process runs continuously without interruption, when that is not really the case. (By the way, this diagram is very Linux-kernel specific.)
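To make the time-slice idea concrete, here is a minimal user-space simulation in C (the process names, work amounts and the 2-tick slice are invented for illustration; a real kernel is driven by the hardware timer interrupt rather than a loop, but the bookkeeping is analogous):

```c
/* Toy round-robin time slicing: each "process" gets TIME_SLICE ticks,
 * then is preempted and goes back to the ready queue until it finishes. */
#include <stdio.h>

#define NPROC      3
#define TIME_SLICE 2   /* ticks a process may run before being preempted */

struct proc {
    const char *name;
    int remaining;     /* total ticks of work left */
};

int main(void)
{
    struct proc procs[NPROC] = { { "A", 5 }, { "B", 3 }, { "C", 4 } };
    int done = 0;

    while (done < NPROC) {
        for (int i = 0; i < NPROC; i++) {
            if (procs[i].remaining == 0)
                continue;   /* already terminated, skip */

            /* "Schedule" this process for one time slice. */
            for (int tick = 0; tick < TIME_SLICE && procs[i].remaining > 0; tick++) {
                printf("core 0 runs %s (work left: %d)\n",
                       procs[i].name, procs[i].remaining);
                procs[i].remaining--;
            }

            if (procs[i].remaining == 0) {
                printf("%s terminates\n", procs[i].name);
                done++;
            } else {
                /* Slice exhausted: still ready to run, just preempted. */
                printf("%s preempted, back to ready queue\n", procs[i].name);
            }
        }
    }
    return 0;
}
```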

What is a crashloop?

I'm reading Google's Site Reliability Engineering book and ran across the word crashloop, which I've never heard before and have not been able to locate a definition for.
"If a task tries to use more resources than it requested, Borg kills the task and restarts it (as a slowly crashlooping task is usually preferable to a task that hasn't been restarted at all)."
What is a crashloop, and how does it compare to an infinite loop, if at all?
A crashloop is when a process crashes and is restarted by a watchdog daemon, indefinitely.
That is, the history is:
Process starts at time T.
Process crashes at time T+1.
Watchdog daemon restarts process.
Process starts at time T+2.
Process crashes at time T+3.
Watchdog daemon restarts process.
Process starts...etc.
Here, the watchdog daemon is Borg, and the process is encapsulated into a task.
In general, in distributed computing, if you want something to eventually succeed, you have to write down your intent for it to be completed, and you need a worker that loops continually to act on that intent. This is "at-least-once delivery" of a work item.
Here, the intent is that the task runs (written down into Borg), and Borg itself runs the loop that constantly tries to make sure the task runs. This is why a task is restarted when it crashes; when a task crashes repeatedly, you end up with a crashloop.
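As a rough sketch of the mechanics (not Borg's actual implementation), a watchdog that produces a crashloop can be as small as a loop around fork(), execvp() and waitpid(); the supervised binary ./mytask and the one-second back-off below are placeholders:

```c
/* Minimal sketch of a watchdog that produces a crashloop: it restarts its
 * child every time the child exits or crashes. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *argv[] = { "./mytask", NULL };    /* hypothetical task binary */

    for (;;) {                              /* the watchdog never gives up */
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            execvp(argv[0], argv);          /* child: become the task */
            perror("execvp");
            _exit(127);
        }

        int status;
        waitpid(pid, &status, 0);           /* parent: wait for the crash */
        fprintf(stderr, "task exited (status 0x%x), restarting\n", status);
        sleep(1);                           /* modest back-off between restarts */
    }
}
```

The sleep stands in for the back-off a real supervisor applies so that a repeatedly crashing task does not monopolize a machine; that back-off is what makes it a "slowly crashlooping" task rather than a tight infinite loop.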

Blocked Processes

As far as I know, some conditions must be satisfied for a process to continue running. If they are not, the process is blocked so as not to waste time. Once these conditions are satisfied, the process enters the ready state.
However, I came across a sentence like this in the book Modern Operating Systems by Andrew Tanenbaum: there are two types of processes, system processes and user processes. If the processor takes a disk interrupt while it is executing a user application, the system decides to stop running the current process and starts to run the disk process. In this case, the application process is kept in the blocked state. After the disk is read, or anything is written to the disk, the process waiting for it is unblocked.
I know that a process is blocked only when some requirement or condition is not satisfied. However, I suppose this sentence is trying to say that the disk process has higher precedence, and that is why the application process is blocked. Is precedence a factor in blocking a process?
What you are describing makes no sense. I have to wonder whether this is an artifact of how you have paraphrased the quotation.
First of all, the processor does not block processes; the operating system does.
Second, I have not worked on an operating system that works anything like the way you describe here.
Usually, if a disk drive triggers an interrupt, the current process handles that interrupt. While in kernel mode, the operating system does whatever queuing is necessary for the disk operation. Only if the process's time slice is up does the running process change; if not, after interrupt handling, the process picks up where it left off before the interrupt.
I cannot imagine a "modern" operating system that invokes a disk process to handle disk interrupts.
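A sketch of that control flow, with invented function names standing in for the real kernel work, might look like this:

```c
/* Sketch of the control flow described above: the disk interrupt runs on top
 * of whichever process happened to be on the CPU, and that process only loses
 * the CPU if its time slice is also up.  All names are invented. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the real kernel work. */
static void complete_disk_request(void)
{
    puts("kernel: disk I/O completed, blocked waiter moved to ready queue");
}

static void schedule_next_process(void)
{
    puts("kernel: time slice exhausted, switching to another process");
}

/* What a disk interrupt handler does, in outline. */
static void disk_interrupt(bool time_slice_expired)
{
    complete_disk_request();       /* queue/complete the I/O, wake the waiter */
    if (time_slice_expired)
        schedule_next_process();   /* only now is the current process switched out */
    else
        puts("kernel: return to the interrupted process where it left off");
}

int main(void)
{
    puts("user process running...");
    disk_interrupt(false);   /* interrupt arrives; the running process keeps the CPU */
    disk_interrupt(true);    /* interrupt arrives just as the slice runs out */
    return 0;
}
```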

Can a process terminate after I/O without returning to the CPU?

I have a question about the following diagram from Operating Systems Concepts: http://unboltingbinary.in/wp-content/uploads/2015/04/image028.jpg
This diagram seems to imply that after every I/O operation, the process is placed back on the ready queue before being sent to the CPU again. However, is it possible for a process to terminate after I/O but before being sent to the ready queue?
Suppose we have a program that computes a number and then writes it to storage. In this case, does the process really need to return to the CPU after the I/O operation? It seems to me that the process should be allowed to terminate right after I/O. That way, there would be no need for a context switch.
Once one process has successfully executed a termination request on another, the threads of the terminated process should never run again, no matter what state they were in (blocked on I/O, blocked on inter-thread comms, running on a core, sleeping, whatever): they must all be stopped immediately if running, and all be put in a state where they will never run again.
Anything else would be a security issue: terminated threads should not be given execution at all (otherwise it might not be possible to terminate the process).
Process termination requires the CPU. Changes to kernel-mode structures on process exit, returning memory resources, etc. all require the CPU.
A process does not simply evaporate. The term you want here is process rundown, I think.
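As a sketch of the kind of rundown work the kernel still has to execute on some core, with invented function names rather than any real kernel's API:

```c
/* Sketch of why exiting still needs CPU time: the kernel has to run teardown
 * code like this on some core on the dying process's behalf. */
#include <stdio.h>

static void stop_all_threads(void)   { puts("mark every thread unrunnable, never to run again"); }
static void release_open_files(void) { puts("close descriptors, flush or cancel pending I/O"); }
static void return_memory(void)      { puts("give the address space back to the kernel"); }
static void notify_parent(void)      { puts("record the exit status for the parent to reap"); }

static void process_rundown(void)
{
    stop_all_threads();
    release_open_files();
    return_memory();
    notify_parent();
}

int main(void)
{
    process_rundown();   /* this is CPU work, even after the last I/O has finished */
    return 0;
}
```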

Trying to understand process state differences

I was wondering what the difference is between Microsoft's process states and those of other operating systems. I've researched and seen that there is a basic model of process states comprising 5 states: New (just created, being added to the ready queue), Ready (in the list of processes ready to execute), Running (currently executing), Waiting or Blocked (put on hold waiting for an I/O event or a resource), and Terminated (all done).
All operating systems seem to have these 5 states. Is there really a difference between Microsoft's and the others'?
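For what it's worth, the generic five-state model from the question can be written down as a plain C enum; broadly speaking, specific systems (Windows included) subdivide and rename these states rather than replacing the model:

```c
/* The generic five-state model from the question, as a plain enum, plus one
 * typical lifecycle.  State names follow the textbook model, not any specific
 * operating system's scheduler. */
#include <stddef.h>
#include <stdio.h>

enum proc_state {
    STATE_NEW,         /* created, being admitted to the ready queue */
    STATE_READY,       /* runnable, waiting for a CPU                */
    STATE_RUNNING,     /* currently executing on a CPU               */
    STATE_BLOCKED,     /* waiting for an I/O event or a resource     */
    STATE_TERMINATED   /* finished, waiting to be cleaned up         */
};

static const char *state_name(enum proc_state s)
{
    switch (s) {
    case STATE_NEW:        return "new";
    case STATE_READY:      return "ready";
    case STATE_RUNNING:    return "running";
    case STATE_BLOCKED:    return "blocked";
    case STATE_TERMINATED: return "terminated";
    }
    return "?";
}

int main(void)
{
    /* new -> ready -> running -> blocked -> ready -> running -> terminated */
    enum proc_state path[] = {
        STATE_NEW, STATE_READY, STATE_RUNNING, STATE_BLOCKED,
        STATE_READY, STATE_RUNNING, STATE_TERMINATED
    };
    size_t n = sizeof path / sizeof path[0];

    for (size_t i = 0; i < n; i++)
        printf("%s%s", state_name(path[i]), i + 1 < n ? " -> " : "\n");
    return 0;
}
```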