I have been researching process scheduling as part of my studies. In doing so I have been referring to the following information:
According to Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin in "Operating System Concepts, Ninth Edition", Chapter 3,
a process is in a ready state when:
The process has all the resources available that it needs to run, but the CPU is not currently working on this process's instructions.
However, I have also been informed by lecture notes that:
When the short-term scheduler selects the next process [from the ready state and before using the CPU], the Dispatcher Routine gives it control of the CPU. Before the process can actually be dispatched, it must go through a conflicts phase. (So far so good; however, it goes on...)
"An aspect of this conflicts phase is the acquisition of resources needed by the new process to execute".
If the process is selected from the ready state by the dispatcher routine, and the definition of the ready state is that "the process has all the resources available that it needs to run", then:
why is it necessary for an aspect of the conflicts phase to be "the acquisition of resources"?
At what point exactly does a process acquire the necessary resources?
All of this is system dependent. First of all, to understand the procedure, know that there is a SCHEDULER. Some OS books talk about long-, medium-, and short-term schedulers, but this division is system specific.
From a general perspective, a process has only three states: (1) running, (2) ready to run, and (3) not ready to run. Every operating system on the planet is going to have more states than these; I have conflated them all into #3. However, those additional states are entirely system specific.
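To make that three-state view concrete, here is a minimal, system-neutral sketch in C (the names are mine, not taken from any particular kernel):

    /* A minimal, system-neutral sketch of the three-state model.
       The names are illustrative, not from any real kernel. */
    enum proc_state {
        PROC_RUNNING,    /* currently executing on the CPU */
        PROC_READY,      /* could run; waiting only for the CPU */
        PROC_NOT_READY   /* waiting on an event; real systems split
                            this into many finer-grained states */
    };

    struct process {
        int             pid;
        enum proc_state state;
    };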
I find this definition confusing:
a process is in a ready state when:
The process has all the resources available that it needs to run, but the CPU is not currently working on this process's instructions.
The main resource a process needs to run is the CPU. Thus, if a process has all the resources it needs to run, it is running.
I also find confusing the use of the term "resources" to describe a waiting process. From a system neutral point of view, the only resources a process needs to run are (1) the CPU and (2) physical memory needed to execute the current instructions. Other than having these, processes normally are in a "not ready to run" state because they are waiting on events to occur; not resources to become available.
That is not to say that some systems don't have other resources needed to run, but those resources are system specific.
The problem you face is that the description you are giving (or being given) is a mishmash of theory and implementation. An operating system might not have the conflicts phase you describe at all. On the other hand, some specific operating system might be implemented exactly the way you describe.
In short, people seem to be making the high-level theory more complicated for you than necessary, and you are getting a confusing dose of operating-system specifics without regard to any specific operating system.
I am in my fourth year of Software Engineering and we are covering the topic of deadlocks.
The generalization goes that a deadlock occurs when two processes, A and B, use two resources, X and Y, and each waits for the other process to release its resource before releasing its own.
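As a concrete picture of that generalization, here is a minimal sketch in C using POSIX threads (the names are illustrative): A and B take the two locks in opposite order, so with unlucky timing each ends up holding one lock and waiting forever for the other.

    #include <pthread.h>

    /* Resources X and Y modeled as mutexes. */
    static pthread_mutex_t lock_x = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_y = PTHREAD_MUTEX_INITIALIZER;

    static void *process_a(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_x);   /* A holds X...        */
        pthread_mutex_lock(&lock_y);   /* ...and waits for Y  */
        pthread_mutex_unlock(&lock_y);
        pthread_mutex_unlock(&lock_x);
        return NULL;
    }

    static void *process_b(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_y);   /* B holds Y...        */
        pthread_mutex_lock(&lock_x);   /* ...and waits for X  */
        pthread_mutex_unlock(&lock_x);
        pthread_mutex_unlock(&lock_y);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, process_a, NULL);
        pthread_create(&b, NULL, process_b, NULL);
        pthread_join(a, NULL);   /* with unlucky timing, never returns */
        pthread_join(b, NULL);
        return 0;
    }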
My question would be, given that the CPU is a resource in itself, is there a scenario where there could be a deadlock involving CPU as a resource?
My first thought on this problem is that you would need a system where a process cannot be forced off the CPU by timer interrupts (it could just be an FCFS algorithm). You would also need there to be no waiting queues for resources, because getting into a queue would mean releasing the resource. But then I also ask: can there be deadlocks when there are queues?
A CPU scheduler can be implemented in any way; you could build one that uses an FCFS algorithm and allows processes to decide when to relinquish control of the CPU. But such implementations are neither practical nor reliable: the CPU is the single most important resource an operating system has, and allowing a process to hold it in such a way that it may never be preempted effectively makes that process the owner of the system, which contradicts the basic idea that the operating system should always be in control of the system.
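To make the danger concrete, here is a minimal sketch in C of such a cooperative FCFS loop (all names are hypothetical): the "scheduler" only regains the CPU when a task voluntarily returns, so a task that never returns owns the machine.

    #include <stddef.h>

    /* Hypothetical cooperative FCFS "scheduler": each task is a function
       that runs until it voluntarily returns. Nothing can preempt it. */
    typedef void (*task_fn)(void);

    static void well_behaved(void)
    {
        /* does some work, then gives the CPU back by returning */
    }

    static void greedy(void)
    {
        for (;;) {
            /* never returns: the loop in main never regains control,
               so this task now effectively owns the whole system */
        }
    }

    int main(void)
    {
        task_fn queue[] = { well_behaved, greedy, well_behaved };

        /* FCFS: run each task to completion, in arrival order. The
           "scheduler" only runs again if the current task cooperates. */
        for (size_t i = 0; i < sizeof queue / sizeof queue[0]; i++)
            queue[i]();

        return 0;
    }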
As far as contemporary operating systems (Linux, Windows, etc.) are concerned, this will never happen, because they don't allow such situations.
In a computer system, the ISA level is lower than the OS level. The OS level is built upon the ISA level.
At the OS level, different programs run in different processes.
A program can run before another program finishes running, by context switching.
Programs in different processes do not affect each other.
Assume there is no OS and there is only one CPU core in the computer system. At the ISA level, the concept of a process does not exist. What is it like to run different programs?
Must a program start to run only after the previous program finishes running?
Can the previously finished program affect the following program, in an unintended or intended way?
The question "what is it like" sounds like a question about how the processor feels about running the instructions it is given. It just runs them.
It is not entirely true that there would be no concept of a process at the ISA level. Processors may have hardware support for task switching, so they might actually know about tasks. Of course, the OS will still be running the show.
Simply put, at the ISA level the software meets the hardware. So the single-core CPU will just churn through instructions and command peripherals, from a preset memory location onwards, until it's turned off or otherwise halts. Is there some other specific question about "what is it like"?
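As a sketch of what that looks like (all names here are hypothetical, in C): a tiny bare-metal "monitor" loop runs each program to completion in turn, and because nothing resets memory between them, a finished program can leave state behind that the next one sees, intentionally or not.

    /* Hypothetical bare-metal "monitor": no OS, one core, programs run
       strictly one after another. */
    typedef void (*program_t)(void);

    static int scratch;   /* RAM that nothing clears between programs */

    static void program_a(void)
    {
        scratch = 42;            /* leaves state behind in shared RAM */
    }

    static void program_b(void)
    {
        if (scratch == 42) {
            /* ...and the next program can observe that state */
        }
    }

    void monitor(void)
    {
        program_t programs[] = { program_a, program_b };

        /* No preemption: program_b cannot start until program_a
           returns; there is no context switching at all. */
        for (unsigned i = 0; i < 2; i++)
            programs[i]();
    }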
I am just now learning about OSes and I stumbled upon this question from my class' lecture notes. In our class, we define a process as a program in execution and I know that an OS is itself a program. So by this definition, an OS is a process.
At the same time processes can be switched in or out via a context switch, which is something that the OS manages and handles. But what would handle the OS itself when it isn't running?
Also if it is a process, does the OS have a process control block associated with it?
There was an older question on this site that I looked at, but I felt as if the answers weren't clear enough to really outline WHY the OS is/isn't a process so I thought I'd ask again here.
First of all, an OS has multiple parts. The core piece is the kernel, which is not a process. It is a framework for running processes. In practice, a process is more than just a "program in execution". On a system with an MMU, a process usually runs in its own virtual address space. The kernel, however, is usually mapped into all processes. It's always there.
Other ancillary parts of the OS exist to make it usable. The OS may have processes that it runs as part of its management. For example, Linux has many kernel threads that are independently scheduled tasks. But these are often not crucial to the OS's operation.
Short answer: No.
Here's as good a definition of "Operating System" as any:
https://en.wikipedia.org/wiki/Operating_system
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. The operating system is a component of the system software in a computer system. Application programs usually require an operating system to function.
Even "system-level processes" (like "init" on Linux, or "svchost.exe" on Windows) rely on the "operating system" ... but are not themselves the operating system.
I agree with some of the comments above/below.
The OS is not a process. However, there are a few design variants that give the opposite illusion.
For example, if you are running FreeRTOS, there is no such thing as a separate OS address space and process address space; everything runs as a single process, and the FreeRTOS framework provides APIs that allow synchronization of different tasks.
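For instance, a minimal FreeRTOS sketch might look like this (the task name is mine): the task, the scheduler, and all kernel data share one flat address space, which is what gives the "single process" feel.

    #include "FreeRTOS.h"
    #include "task.h"

    /* One task among many: all tasks share a single address space with
       the FreeRTOS kernel itself; there is no process isolation. */
    static void vBlinkTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* do some work, then sleep for 500 ms of tick time */
            vTaskDelay(pdMS_TO_TICKS(500));
        }
    }

    int main(void)
    {
        xTaskCreate(vBlinkTask, "blink", configMINIMAL_STACK_SIZE,
                    NULL, tskIDLE_PRIORITY + 1, NULL);
        vTaskStartScheduler();   /* hands control to the scheduler */
        for (;;);                /* never reached if startup succeeds */
    }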
An operating system is just a set of APIs (system calls) and utilities that help achieve multiprocessing, resource sharing, etc. For example, schedule() is a core OS function that handles the multiprocessing capabilities of the OS.
In that sense, the OS is not a process, although it attaches to every process that runs on the CPU; otherwise, how would a process make use of the OS's APIs?
It is more like a soul for the body (the hardware), if you will.
It is not just one process but a set of (kernel) processes required to run the user processes in the system. PID 0 is the parent of all processes, providing scheduler/swapping functionality to the rest of the kernel/user processes, but it is not the only one. These kernel processes (with the help of kernel drivers) provide accessor functionality (through system calls) to the user processes.
It depends upon what you are calling the "operating system".
It depends upon what operating system you are talking about.
That said, and at the risk of gross oversimplification, most of what one calls "the operating system" is generally executed from user processes while in kernel mode. Entry into the kernel occurs through an interrupt, a trap, or a fault.
A context switch usually happens in one of two ways. Either a process traps into kernel mode to do something (like write to the disk); while in kernel mode, the process realizes it would have to wait, so it yields by switching the context to another process. Or, the other common way, a timer causes an interrupt that forces the process into kernel mode; the process then determines which process should execute next and switches the process context.
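Here is a minimal sketch in C of those two paths (the names are invented for illustration; no real kernel's API is being quoted):

    #include <stdbool.h>

    /* Stubs standing in for real kernel machinery. */
    enum state { RUNNING, READY, BLOCKED };
    struct process { enum state state; };

    static void start_disk_io(struct process *p) { (void)p; }
    static bool time_slice_expired(void) { return true; }
    static void schedule(void) { /* pick a READY process and switch */ }

    /* Path 1: the process traps into the kernel, must wait, and yields. */
    void sys_write_to_disk(struct process *curr)
    {
        start_disk_io(curr);     /* start the I/O                      */
        curr->state = BLOCKED;   /* cannot continue until it completes */
        schedule();              /* voluntary switch to another process */
    }

    /* Path 2: a timer interrupt forces the CPU into kernel mode. */
    void timer_interrupt(void)
    {
        if (time_slice_expired())
            schedule();          /* involuntary switch: preemption */
    }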
Some operating systems do have kernel processes of their own that perform such functions, but that is increasingly rare.
Most operating systems have components that have their own processes.
I have been confused about the issue of context switches between processes, given a round-robin scheduler with a certain time slice (which is what Unix/Windows both use, in a basic sense).
So, suppose we have 200 processes running on a single-core machine. If the scheduler uses even a 1 ms time slice, each process would get its share every 200 ms, which is probably not the case (imagine a Java high-frequency app; I would not assume it gets scheduled every 200 ms to serve requests). Having said that, what am I missing in the picture?
Furthermore, Java and other languages allow putting the running thread to sleep for, e.g., 100 ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
So, suppose we have 200 processes running on a single-core machine. If the scheduler uses even a 1 ms time slice, each process would get its share every 200 ms, which is probably not the case (imagine a Java high-frequency app; I would not assume it gets scheduled every 200 ms to serve requests). Having said that, what am I missing in the picture?
No, you aren't missing anything. The same holds in non-preemptive systems. Processes with preemption rights (meaning a higher priority than other processes) can easily displace a less important process, to the extent that a high-priority process might run, say, ten times as often as the lowest-priority process (actual results depend entirely on the situation and the implementation), even to the point of starving the lowest-priority process.
As for processes of similar priority, it depends entirely on the round-robin algorithm you mentioned, though which process gets picked first is again implementation-specific. And while Windows and traditional Unix both make use of round-robin scheduling, the Linux task scheduler is the Completely Fair Scheduler (CFS).
Furthermore, Java and other languages allow putting the running thread to sleep for, e.g., 100 ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
Programming languages and libraries implement "sleep" functionality with the aid of the kernel. Without kernel-level support, they'd have to busy-wait, spinning in a tight loop, until the requested sleep duration elapsed. This would wastefully consume the processor.
As for threads put to sleep (Thread.sleep(long millis)), most systems generally do the following (a small sketch of the user-space side follows the list):
Suspend execution of the process and mark it as not runnable.
Set a timer for the given wait time. Systems provide hardware timers that let the kernel register to receive an interrupt at a given point in the future.
When the timer hits, mark the process as runnable.
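On POSIX systems, for example, the user-space side of this bottoms out in a call such as nanosleep(), which asks the kernel to perform exactly the three steps above instead of burning the CPU in a loop:

    #include <time.h>

    int main(void)
    {
        /* Ask the kernel to block this thread for ~100 ms. The kernel
           marks it not runnable, arms a timer, and makes it runnable
           again when the timer fires -- no busy-waiting in user space. */
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 100L * 1000L * 1000L };
        nanosleep(&ts, NULL);
        return 0;
    }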
I hope you are aware of threading models like one-to-one, many-to-one, and many-to-many. So I am not getting into much detail; this is just a reference for you.
It might appear to you as if this increases the overhead/complexity. But that's how threads (user threads created in the JVM) are operated upon, and the selection is then based on the threading models I mentioned above. Check this Quora question and the answers to it, and please go through the best answer, given by Robert Love.
For further reading, I'd suggest the Scheduling Algorithms article on OSDev.org and the book Operating System Concepts by Silberschatz, Galvin, and Gagne.
I'm doing some fill in the blanks from a sample exam for my class and I was hoping you could double check my terminology.
1. The various scheduling queues used by the operating system would consist of lists of processes.
2. Interrupt handling is the technique of periodically checking to see if a condition (such as completion of some requested I/O operation) has been met.
3. When the CPU is in kernel mode, a running program has access to a restricted set of CPU functionality.
4. The job of the CPU scheduler is to select a process on the ready queue and change its state.
5. The CPU normally supports a vector of interrupts so the OS can respond appropriately when some event of interest occurs in the hardware.
6. Using traps, a device controller can use idle time on the bus to read from or write to main memory.
7. During a context switch, the state of one process is copied from the CPU and saved, and the state of a different process is restored.
8. An operating system consists of a kernel and a collection of application programs that run as user processes and either provide OS services to the user or work in the background to keep the computer running smoothly.
There are so many terms from our chapters, I am not quite sure if I am using the correct ones.
My thoughts:
1. Processes and/or threads. Jobs and tasks aren't unheard of either. There can be other things. E.g. in MS Windows there are also Deferred Procedure Calls (DPCs) that can be queued.
2. This must be polling.
4. Why CPU scheduler? Why not just scheduler?
6. I'm not sure about traps in the hardware/bus context.