Operating System Basics - process

I am reading about process management, and I have a few doubts.
What is meant by an I/O request? For example, a process that is executing
is in the running state, and it is in the waiting state if it is waiting
for the completion of an I/O request. I don't understand what an I/O
request is. Can you please give an example to elaborate?
Another doubt: let's say a process is executing and suddenly an interrupt
occurs, so the process stops its execution and is put in the ready state.
Is it possible that some other process begins its execution while the
interrupt is being processed?

Regarding the first question:
A simple way to think about it...
Your computer has lots of components: CPU, hard drive, network card, sound card, GPU, etc. They all work in parallel and independently of each other, and they are generally much slower than the CPU.
This means that whenever a process makes a call that, down the line (on the OS side), ends up communicating with an external device, there is no point in the OS sitting blocked waiting for the result, since the time that operation takes to complete is an eternity from the CPU's point of view.
So the OS starts whatever communication the process requested (call it an I/O request), flags the process as waiting for I/O, and switches execution to another process, so the CPU can do something useful instead of sitting around blocked waiting for the I/O request to complete.
When the external device finishes whatever operation was requested, it generates an interrupt, so the OS is informed the work is done and can then flag the blocked process as ready again.
This is all a very simplified view, of course, but that's the main idea: it allows the CPU to do useful work instead of waiting for I/O requests to complete.
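To make that lifecycle concrete, here is a minimal C sketch (assuming a POSIX system) of a blocking read from standard input, with comments tracing the steps described above; the real kernel behaviour is of course more involved.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    /* 1. The process asks the kernel for input: this is the I/O request. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);

    /* 2. While the device (keyboard, pipe, disk...) is busy, the kernel marks
     *    this process as waiting and runs some other process instead.
     * 3. When the device signals completion with an interrupt, the kernel
     *    marks the process ready again, and read() eventually returns here. */
    if (n > 0)
        printf("got %zd bytes of input\n", n);
    return 0;
}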
Regarding the second question:
It's tricky, even for single CPU machines, and depends on how the OS handles interrupts.
For code simplicity, a simple OS might, for example, process each interrupt in one go as soon as it happens, and only then resume whatever process it decides is appropriate once the interrupt handling is done. In that case, no other process would run until the interrupt handling is complete.
In practice, things get a bit more complicated for performance and latency reasons.
If you think about an interrupt's lifetime as just another task for the CPU (from when the interrupt starts to the point the OS considers its handling complete), you can effectively code the interrupt handling to run in parallel with other things.
Just think of the interrupt as a notification for the OS to start another task (the interrupt handling). The OS grabs whatever context it needs at the point the interrupt arrived, then keeps processing that task in parallel with other processes.

An I/O request generally just means a request to do input, output, or both. The exact meaning varies depending on your context: an HTTP request, network traffic, console operations, or perhaps something internal to the machine.
A process waiting for I/O: say, for example, you were writing a program in C that accepts the user's name on the command line and then prints 'Hello User' back. Your code goes into the waiting state until the user enters their name and hits Enter. This is a higher-level example, but even at a very low level, a process executing on your computer's processor works on the same basic principle.
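For instance, a minimal sketch of the program described above (assuming a POSIX terminal; the prompt and buffer size are just for illustration): the process sits blocked inside the input call until the line arrives.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[64];

    printf("Enter your name: ");
    fflush(stdout);

    /* The process is in the waiting state inside this call until the user
     * types a line and presses Enter; only then does it become ready again. */
    if (fgets(name, sizeof name, stdin) == NULL)
        return 1;
    name[strcspn(name, "\n")] = '\0';   /* strip the trailing newline */

    printf("Hello %s\n", name);
    return 0;
}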
Can the processor work on other processes when the current one is interrupted and waiting on something? Yes! You'd better hope it does; that's what scheduling algorithms and per-process stacks are for. The real answer, however, depends on what architecture you are on, whether it supports parallel or only serial processing, and so on.

exceptions vs interrupts in RISC-V?

I read that exceptions are raised by instructions and interrupts are raised by external events.
But I couldn't understand a few things:
In which type do we stop immediately, and in which do we let the current line of code finish?
In both cases, after doing (1), we run the next program. Is that right?
The terminology is used differently by different authors.
The hardware exception mechanism is used both by software-caused exceptions and by external interrupts.
In which type do we stop immediately, and in which do we let the current line of code finish?
The processor can do what its designers want up to a point.
If there is an external interrupt, the processor may choose to finish some of the work in progress before taking the interrupt.
If there is a software-caused exception, the processor simply cannot execute that instruction (or any instruction after it); it must abort execution and take the exception without completing the offending instruction.
In both cases, after doing (1), we run the next program. Is that right?
In the case of an external interrupt, the most efficient thing is to resume the program that was interrupted, as the caches and page tables are hot for that process. However, an external interrupt may change the runnable status of another, higher-priority process, or may end the timeslice of the interrupted process, indicating that another process should now get CPU time instead of the one interrupted.
In the case of a software-caused exception, it could be a non-resumable situation (a null pointer dereference), in which case, yes, another process gets the CPU afterwards. It could also be a resumable situation, like an I/O request or a page fault. If servicing the situation requires interfacing with peripherals, that may block the interrupted process, so another process would get some CPU time.
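As a user-space illustration of the non-resumable case (a hedged sketch, Linux/POSIX assumed): a null pointer dereference is a software-caused exception; the faulting instruction never completes, and the kernel turns the exception into a SIGSEGV for the process.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_segv(int sig)
{
    (void)sig;
    /* Non-resumable here: report and terminate rather than returning,
     * since returning would just re-execute the faulting instruction. */
    write(STDERR_FILENO, "caught SIGSEGV, exiting\n", 24);
    _exit(1);
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    volatile int *p = NULL;
    *p = 42;                 /* the offending instruction is never completed */

    puts("never reached");
    return 0;
}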

How does the operating system handle its responsibilities while a process is executing?

I have had this question in mind for a long time, and it may sound a little vacuous. We know that the operating system is responsible for handling memory allocation, process management, etc. The CPU can perform only one task at a time (assuming it is single-core). Suppose an operating system has allocated a CPU cycle to some user-initiated process and the CPU is executing it. Now, where is the operating system running? If some other process is using the CPU, then is the operating system not running at that moment? After all, the OS itself needs the CPU to run. If the OS is not running, then who is handling process management, device management, etc. during that period?
The question is mixing up who's in control of the memory and who's in control of the CPU. The wording “running” is imprecise: on a single CPU, a single task is running at any given time in the sense that the processor is executing its instructions; but many tasks are executing in the sense that their state is stored in memory and their execution can resume at any time.
While a process is executing on the CPU, the kernel is not executing. Its state is saved in memory. The execution of the kernel can resume:
if the process code makes a jump into kernel code — this is called a system call.
if an interrupt occurs.
If the operating system provides preemptive multitasking, it will schedule an interrupt to happen after an interval of time (called a time slice). On a non-preemptive operating system, the process will run forever if it doesn't yield the CPU. See What mechanisms prevent a process from taking over the processor forever? for an explanation of how preemption works.
Tasks such as process management and device management are triggered by some event. If the event is a request by the process, the request will take the form of a system call, which executes kernel code. If the event is triggered from hardware, it will take the form of an interrupt, which executes kernel code.
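As a small illustration of the system-call case (a Linux-specific sketch; the raw syscall() form is only there to make the kernel entry explicit): both calls below end up executing the same kernel code for write(2) on behalf of the process.

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* Through the libc wrapper... */
    write(STDOUT_FILENO, "via write()\n", 12);

    /* ...and through the raw system-call interface. Either way the CPU
     * switches to kernel mode, executes kernel code on behalf of this
     * process, and then returns to user mode. */
    syscall(SYS_write, STDOUT_FILENO, "via syscall()\n", 14);

    return 0;
}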
(Note: in this answer, I use “CPU” and “processor” synonymously, to mean a single execution thread: a single core, or whatever the hardware architecture is.)
The OS kernel does nothing at all until it is entered via an interrupt. It may be entered because a hardware interrupt causes a driver to run and the driver chooses to exit via the OS, or because a running thread makes a system call (a software interrupt).
Unless an interrupt occurs, the OS kernel does nothing at all. It does not need to do anything.
Edit:
DMA is usually used for bulk I/O and is handled by a hardware subsystem that services requests issued by a system call (a software interrupt). When a DMA operation is complete, the DMA hardware raises a hardware interrupt, which runs a driver that can then signal the OS of the completion, possibly changing the set of running threads. So DMA, too, is managed by interrupts.
A new process/thread can only be loaded by an existing thread that has issued a system call (a software interrupt), and so new processes are also initiated by interrupts.
It's interrupts, all the way down :)
It depends on which type of CPU scheduling you are using (in the single-core case):
With preemptive scheduling, the running process can be interrupted partway through and the CPU given to some other process or to the OS for a while. With non-preemptive scheduling, a process will not give up the CPU before completing its execution.
On a single core, if there is a single process it simply executes its instructions; if there are multiple processes, their states are stored in PCBs, which form the process queue, and the processes execute one after another if no interrupts occur.
The PCB (process control block) holds the state the OS needs to manage each process.
When a process needs an OS service, it calls a library function that issues a system call; kernel execution is also invoked if a process fails during execution or an interrupt occurs.
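As a small sketch of the cooperative side of this (POSIX assumed; the loop and message are just for illustration): a process that explicitly gives up the CPU between chunks of work. Under purely cooperative (non-preemptive) scheduling, this kind of voluntary yield, or blocking on I/O, is the only way other processes get to run.

#include <sched.h>
#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 5; i++) {
        printf("doing a chunk of work (%d)\n", i);
        sched_yield();   /* voluntarily let the scheduler run someone else */
    }
    return 0;
}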

OS Concepts Terminology

I'm doing some fill-in-the-blanks from a sample exam for my class, and I was hoping you could double-check my terminology.
1. The various scheduling queues used by the operating system would consist of lists of processes.
2. Interrupt handling is the technique of periodically checking to see if a condition (such as completion of some requested I/O operation) has been met.
3. When the CPU is in kernel mode, a running program has access to a restricted set of CPU functionality.
4. The job of the CPU scheduler is to select a process on the ready queue and change its state.
5. The CPU normally supports a vector of interrupts so the OS can respond appropriately when some event of interest occurs in the hardware.
6. Using traps, a device controller can use idle time on the bus to read from or write to main memory.
7. During a context switch, the state of one process is copied from the CPU and saved, and the state of a different process is restored.
8. An operating system consists of a kernel and a collection of application programs that run as user processes and either provide OS services to the user or work in the background to keep the computer running smoothly.
There are so many terms in our chapters that I am not quite sure I am using the correct ones.
My thoughts:
1. Processes and/or threads. Jobs and tasks aren't unheard of either. There can be other things. E.g. in MS Windows there are also Deferred Procedure Calls (DPCs) that can be queued.
2. This must be polling.
4. Why CPU scheduler? Why not just scheduler?
6. I'm not sure about traps in the hardware/bus context.

An infinite loop executing in a single processor system

A process P1 is executing in an infinite loop on a system which has only a single CPU. There are also other processes, like P2 and P3, which are waiting to gain the CPU but are sitting in the ready queue, as P1 is already executing.
The program is something like:
int main(void)
{
    while (1)
        ;   /* spin forever without ever yielding the CPU */
}
So, what will be the end result? Will the system crash?
The probable answer is that the system won't crash and the other processes can execute on the CPU, because every process has a specific time slice; after P1's time slice expires, the other waiting processes can gain the CPU.
But again, how will the kernel (OS) check that the time slice has expired, given that there is only one CPU and the process is running an infinite loop? If checking has to happen, it needs the CPU to do it, and the CPU is already occupied by process P1, which is executing the infinite loop.
So what happens in this case?
It really depends on what operating system and hardware you are using. Interrupts can transfer the execution of code to another location (an interrupt handler). These interrupts can be software (code in the program can invoke these interrupt handlers) or hardware (the CPU receives a signal on one of its pins). On the motherboard you have something called a Programmable Interrupt Controller (PIC) which, fed by a hardware timer, can deliver a steady stream of timer interrupts. The OS can use the timer interrupt handler to stop a running process and continue another one.
Again, it really depends on the OS, the hardware you are working with, and so on; without more specifics it's too general a question to give a definite answer.
The processor has something called interrupts. The OS (Windows, for example) tells the processor:
- Use this process for X time and then tell me.
So the processor starts a timer and works on the process. When the time has passed, the processor raises an interrupt and tells the OS that the time is up. The OS then decides which process will run next.
Hope this answers your question.
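A user-space analogy (a rough sketch only; the real mechanism is a hardware timer interrupt handled by the kernel, and the 100 ms period is arbitrary): a POSIX interval timer periodically breaks into a while(1) loop via SIGALRM, the same way the timer interrupt lets the OS regain control from P1.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_tick(int sig)
{
    (void)sig;
    ticks++;                          /* "the timer interrupt fired" */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    /* Fire SIGALRM every 100 ms, like a 10 Hz timer interrupt. */
    struct itimerval tv = { { 0, 100000 }, { 0, 100000 } };
    setitimer(ITIMER_REAL, &tv, NULL);

    while (1) {                       /* the "infinite loop" process */
        if (ticks >= 10) {
            puts("the timer broke in 10 times; a real kernel would have had "
                 "a chance to switch processes at each tick");
            exit(0);
        }
    }
}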

Linux Kernel Code Execution Contexts

When a process executing in user space issues a system call or triggers an exception, it enters kernel space and the kernel starts executing on behalf of the process; the kernel is then said to be executing in process context. Similarly, when an interrupt occurs, the kernel executes in interrupt context. I have also studied kernel execution in kernel threads, where kernel code runs in the background.
My Questions are :
Does the kernel execute in any other contexts?
Suppose a process in user space never executes a system call or triggers an exception, and no interrupt occurs; does kernel code ever execute?
The kernel runs periodically: it sets a timer to fire an interrupt at some predefined frequency (100 Hz on Linux 2.4/x86, 1000 Hz on early Linux 2.6/x86, 250 Hz on newer Linux 2.6/x86).
The kernel needs to do this in order to do preemptive multitasking. OTOH, OSes that only do cooperative multitasking (Windows 3.1, classic Mac OS) needn't do this, and only switch tasks in response to some call from the running task (which could lead to runaway tasks hanging the whole system).
Note that there is some effort to optimize the use of this timer: newer Linux is smarter when there are no runnable tasks, it sets the timer as far in the future as it can, to allow the CPU to sleep longer and deeper, and preserve power (the CONFIG_NOHZ kernel config option). Running powertop will show the number of wakeups per second, which on an idle system can be much lower than the 250 wakeups per second you'd expect of a traditional implementation.
Suppose a process in user space never executes a system call or triggers an exception, and no interrupt occurs; does kernel code ever execute?
Assume you have a process p that is running the following code: while(1);. This code will never call into the kernel and won't cause any faults. (It might have set an alarm(2) earlier, causing a signal to be delivered in the future, or it might exceed a setrlimit(2) CPU limit, in which case the kernel will deliver a signal to the process.)
Or, if another process sends p a signal via kill(2), the kernel will deliver that signal to the process as well.
The signal delivery will either cause a signal handler to run, do nothing (if the signal is ignored or masked), or take the default signal action (which might be nothing or termination).
And, of course, the process execution can be interrupted so the processor can handle interrupts; or a higher-priority process can preempt it.
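A sketch of the kill(2) case (POSIX assumed; fork is used here just to get a second process, and the one-second sleep is arbitrary): the child spins in while(1); and never enters the kernel on its own, yet the kernel still interrupts it to deliver the signal the parent asked for.

#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child < 0)
        return 1;                /* fork failed */

    if (child == 0) {
        while (1)
            ;                    /* pure user-space spin, no system calls */
    }

    sleep(1);                    /* let the child spin for a bit */
    kill(child, SIGTERM);        /* ask the kernel to deliver SIGTERM; the
                                    default action terminates the child */
    waitpid(child, NULL, 0);
    puts("the spinning child was terminated anyway");
    return 0;
}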