Operating Systems: Process State Transition Diagram - process

What is the minimum number of processes that can be in each state (READY, RUNNING, BLOCKED, SUSPENDED) at the same time?

Zero. Whoever said an OS had to have processes (in the strict sense of the term) running all the time? For example, in the early stages of the Linux boot process, the idle process is not yet created, and so one could say that there are no processes in the system. That said, the answer might be different if you specified your question in more detail (what OS, what stage of the boot process, what do you consider a process). :)

Related

What is it like to run programs at ISA level?

In a computer system, the ISA level is lower than the OS level. The OS level is built upon the ISA level.
At the OS level, different programs run in different processes.
A program can run before another program finishes running, by context switching.
Programs in different processes do not affect each other.
Assume there is no OS and there is only one CPU core in the computer system. At the ISA level, the concept of a process does not exist. What is it like to run different programs?
Must a program start to run only after the previous program finishes running?
Can the previously finished program affect the following program, in an unintended or intended way?
The question "what is it like" sounds like a question about how the processor feels about running the instructions it is given. It just runs them.
It is not entirely true that there wouldn't be a concept of a process at the ISA level. Processors may have hardware support for task switching, so they might actually know about tasks. Of course the OS will still be running the show.
Simply put, at the ISA level the software meets the hardware. So the single-core CPU will just churn through instructions and command peripherals, starting from a preset memory location, until it's turned off or otherwise halts. Is there some other specific question about "what is it like"?
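As a rough illustration of the last two sub-questions, here is a hypothetical C sketch (not modeled on any real firmware) of back-to-back execution without an OS: programs run strictly one after another on the single core, and the second one can observe whatever the first left behind in memory, because nothing isolates or clears it.

```c
#include <stdio.h>

/* Hypothetical flat memory shared by every program on a machine with
 * no OS: there is no virtual memory, so nothing isolates or clears it. */
static int scratch[16];

/* "Program" A runs to completion and leaves results behind in memory. */
static void program_a(void) {
    for (int i = 0; i < 16; i++)
        scratch[i] = i * i;
}

/* "Program" B starts only after A has finished (single core, no
 * scheduler) and can read whatever A left behind, intended or not. */
static void program_b(void) {
    int sum = 0;
    for (int i = 0; i < 16; i++)
        sum += scratch[i];        /* leftover state from program A */
    printf("B sees leftovers summing to %d\n", sum);
}

int main(void) {
    /* A minimal "monitor loop": run each program in turn, strictly
     * sequentially - the only multiprogramming you get without an OS. */
    program_a();
    program_b();
    return 0;
}
```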

Is the Operating System a process?

I am just now learning about OSes and I stumbled upon this question from my class' lecture notes. In our class, we define a process as a program in execution and I know that an OS is itself a program. So by this definition, an OS is a process.
At the same time processes can be switched in or out via a context switch, which is something that the OS manages and handles. But what would handle the OS itself when it isn't running?
Also if it is a process, does the OS have a process control block associated with it?
There was an older question on this site that I looked at, but I felt as if the answers weren't clear enough to really outline WHY the OS is/isn't a process so I thought I'd ask again here.
First of all, an OS has multiple parts. The core piece is the kernel, which is not a process. It is a framework for running processes. In practice, a process is more than just a "program in execution". On a system with an MMU, a process usually runs in its own virtual address space. The kernel, however, is usually mapped into all processes. It's always there.
Other ancillary parts of the OS exist to make it usable. The OS may have processes that it runs as part of its management. For example, Linux has many kernel threads that are independently scheduled tasks. But these are often not crucial to the OS's operation.
Short answer: No.
Here's as good a definition of "Operating System" as any:
https://en.wikipedia.org/wiki/Operating_system
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. The operating system is a component of the system software in a computer system. Application programs usually require an operating system to function.
Even "system-level processes" (like "init" on Linux, or "svchost.exe" on Windows) rely on the "operating system" ... but are not themselves the operating system.
Agreeing with some of the comments above/below.
The OS is not a process. However, there are a few variants in design that give the opposite illusion.
For example, if you are running FreeRTOS, there is no such thing as a separate OS address space and process address space; everything runs as a single process, and the FreeRTOS framework provides APIs that allow synchronization of different tasks.
The operating system is just a set of APIs (system calls) and utilities that help to achieve multi-processing, resource sharing, etc. For example, schedule() is a core OS function that handles the multi-processing capabilities of the OS.
In that sense, the OS is not a process, although it attaches to every process that runs on the CPU; otherwise, how would a process make use of the OS's APIs?
It is more like a soul for the body (the hardware), if you will.
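To make the FreeRTOS point concrete, here is a minimal sketch, assuming a working port and FreeRTOSConfig.h; the stack depths and priorities are placeholders. Both tasks, the scheduler, and the API live in one image and one address space, which is exactly why the "OS" here does not look like a separate process.

```c
#include "FreeRTOS.h"
#include "task.h"

/* Two tasks sharing one address space with the scheduler itself.
 * Stack depths and priorities are illustrative placeholders. */
static void blink_task(void *params) {
    (void)params;
    for (;;) {
        /* toggle an LED here (board-specific, omitted) */
        vTaskDelay(pdMS_TO_TICKS(500));   /* block, letting other tasks run */
    }
}

static void worker_task(void *params) {
    (void)params;
    for (;;) {
        /* do some periodic work, then yield the CPU by blocking */
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

int main(void) {
    xTaskCreate(blink_task,  "blink",  configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(worker_task, "worker", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    vTaskStartScheduler();    /* hands control to the FreeRTOS scheduler */
    for (;;) {}               /* reached only if the scheduler could not start */
}
```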
It is not just one process but a set of (kernel) processes required to run user processes in the system. PID 0 is the parent of all processes, providing scheduler/swapping functionality to the rest of the kernel/user processes, but it is not the only one. These kernel processes (with the help of kernel drivers) provide accessor functionality (through system calls) to the user processes.
It depends upon what you are calling the "operating system".
It depends upon what operating system you are talking about.
That said, and at the risk of gross oversimplification, most of what one calls "the operating system" is generally executed from user processes while in kernel mode. Entry into the kernel occurs through an interrupt, trap, or fault.
A context switch usually happens in one of two ways. Either a process traps into kernel mode to do something (like write to the disk) and, while in kernel mode, realizes it would have to wait, so it yields by switching the context to another process. Or, the other common way, a timer causes an interrupt that forces the process into kernel mode; the kernel code then determines which process should execute next and switches the process context.
Some operating systems do have their own kernel processes performing these functions, but that is increasingly rare.
Most operating systems have components that have their own processes.
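A hedged pseudocode sketch of the two entry paths just described; every name here (struct task, pick_next, switch_context) is hypothetical, not any real kernel's API.

```c
/* Hedged sketch of the two context-switch paths just described; every
 * name here (struct task, pick_next, switch_context) is hypothetical. */
enum task_state { READY, RUNNING, BLOCKED };

struct task {
    enum task_state state;
    int ticks_left;        /* remaining time slice, in timer ticks */
    /* saved registers, address-space pointer, etc. would live here */
};

extern struct task *current;                    /* task on this CPU */
struct task *pick_next(void);                   /* scheduler policy decision */
void switch_context(struct task *from, struct task *to);  /* save/restore CPU state */

/* Path 1: a process traps into the kernel (say, to write to the disk),
 * realizes it must wait, and voluntarily yields the CPU. */
void sys_write_blocking(void) {
    /* ... start the I/O ... */
    current->state = BLOCKED;                   /* no longer runnable */
    switch_context(current, pick_next());       /* resumes here once woken */
}

/* Path 2: the timer interrupt forces entry into kernel mode; the kernel
 * preempts the task whose time slice has run out. */
void timer_interrupt(void) {
    if (--current->ticks_left <= 0) {
        current->state = READY;                 /* runnable, just preempted */
        switch_context(current, pick_next());
    }
}
```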

Is it useful to create many threads for a specific process to increase the chance of this process being executed?

This is an interview question I encountered today. I have some knowledge about OSes but am not really proficient at this. I think maybe there is a limit on how many threads each process can create?
Any ideas will help.
This question can be viewed [at least] in two ways:
Can your process get more CPU time by creating many threads that need to be scheduled?
or
Can your process get more CPU time by creating threads to allow processing to continue when another thread(s) is blocked?
The answer to #1 is largely system dependent. However, any rationally designed system is going to protect against rogue processes trying this. Generally, the answer here is NO. In fact, some older systems only schedule processes, not threads. In those cases, the answer is always NO.
The answer to #2 is generally YES. One of the reasons to use threads is to allow a process to continue processing while it has to wait on some external event.
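A small POSIX threads illustration of answer #2 (compile with -pthread): one thread sits blocked, standing in for slow I/O, while the other keeps computing, so the process as a whole keeps using CPU time it would otherwise have spent waiting.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *io_thread(void *arg) {
    (void)arg;
    sleep(2);                      /* stands in for a blocking read/write */
    printf("I/O finished\n");
    return NULL;
}

static void *compute_thread(void *arg) {
    long sum = 0;
    (void)arg;
    for (long i = 0; i < 100000000L; i++)
        sum += i;                  /* CPU work overlaps the blocked thread */
    printf("computed %ld\n", sum);
    return NULL;
}

int main(void) {
    pthread_t io, cpu;
    pthread_create(&io, NULL, io_thread, NULL);
    pthread_create(&cpu, NULL, compute_thread, NULL);
    pthread_join(io, NULL);
    pthread_join(cpu, NULL);
    return 0;
}
```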
The number of threads that can run in parallel depends on the number of CPUs on your machine.
It also depends on the characteristics of the processes you're running. If they're consuming CPU, it won't be efficient to run more threads than the number of CPUs on your machine; on the other hand, if they do a lot of I/O, or any other kind of task that blocks a lot, it makes sense to increase the number of threads.
As for the question "how many" - you'll have to tune your app, make measurements and decide based on actual data.
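As a starting point for that tuning, a common rule of thumb is to size a CPU-bound pool at roughly the core count and an I/O-bound pool somewhat larger. The sketch below queries the core count via POSIX sysconf; the multiplier for the I/O case is only a placeholder to be replaced by measurement.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Number of online CPUs, as reported by the POSIX sysconf interface. */
    long cpus = sysconf(_SC_NPROCESSORS_ONLN);
    if (cpus < 1)
        cpus = 1;                          /* fallback if the query fails */

    long cpu_bound_threads = cpus;         /* keep every core busy, no more */
    long io_bound_threads  = cpus * 4;     /* placeholder factor; tune by measuring */

    printf("cores: %ld, CPU-bound pool: %ld, I/O-bound pool: %ld\n",
           cpus, cpu_bound_threads, io_bound_threads);
    return 0;
}
```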
Short answer: Depends on the OS.
I'd say it depends on how the OS scheduler is implemented.
From personal experience with my hobby OS, it can certainly happen.
In my case, the scheduler is implemented with a round-robin algorithm, per thread, independent of which process they belong to.
So, if process A has 1 thread, and process B has 2 threads, and they are all busy, Process B would be getting 2/3 of the CPU time.
There are certainly a variety of approaches. Check Scheduling_(computing)
Throw in priority levels per process and per thread, and it really depends on the OS.
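A tiny simulation of the 1-thread-versus-2-thread example above, under the assumption of a pure per-thread round robin with every thread always runnable; it just counts which process each time slice gets charged to.

```c
#include <stdio.h>

int main(void) {
    /* Thread table: process A owns thread 0, process B owns threads 1 and 2. */
    char owner[3] = { 'A', 'B', 'B' };
    int  slices[2] = { 0, 0 };          /* time slices given to A and B */

    /* Round-robin over threads (not processes) for 300 time slices. */
    for (int tick = 0; tick < 300; tick++) {
        int thread = tick % 3;          /* next thread in the ring */
        slices[owner[thread] - 'A']++;  /* charge its owning process */
    }

    printf("A: %d slices (%.0f%%), B: %d slices (%.0f%%)\n",
           slices[0], 100.0 * slices[0] / 300,
           slices[1], 100.0 * slices[1] / 300);   /* prints 33% vs 67% */
    return 0;
}
```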

Operating system - context switches

I have been confused about the issue of context switches between processes, given a round-robin scheduler with a certain time slice (which is what Unix/Windows both use in a basic sense).
So, suppose we have 200 processes running on a single core machine. If the scheduler is using even 1ms time slice, each process would get its share every 200ms, which is probably not the case (imagine a Java high-frequency app, I would not assume it gets scheduled every 200ms to serve requests). Having said that, what am I missing in the picture?
Furthermore, Java and other languages allow putting the running thread to sleep for e.g. 100ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
So, suppose we have 200 processes running on a single core machine. If the scheduler is using even 1ms time slice, each process would get its share every 200ms, which is probably not the case (imagine a Java high-frequency app, I would not assume it gets scheduled every 200ms to serve requests). Having said that, what am I missing in the picture?
No, you aren't missing anything, and the same applies to non-pre-emptive systems. Processes with pre-emption rights (meaning higher priority compared to other processes) can easily displace a less important process, to the extent that a high-priority process might run, say, ten times as often as the lowest-priority process (actual results depend entirely on the situation and implementation), to the point where it starves the lowest-priority process.
For processes of similar priority, it comes down to the round-robin algorithm you mentioned, though which process gets picked first is again implementation dependent. Windows and Unix both use round-robin-style, priority-based scheduling in a basic sense, although the Linux task scheduler is the Completely Fair Scheduler (CFS).
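As a toy illustration of the starvation effect mentioned above, here is a sketch of a strict highest-priority-first policy (not any real scheduler): with both tasks always runnable, the low-priority one never gets a slice, which is exactly what round robin within a priority level, aging, or CFS-style fairness is meant to prevent.

```c
#include <stdio.h>

struct task { const char *name; int priority; int runs; };

int main(void) {
    struct task tasks[2] = { { "high", 10, 0 }, { "low", 1, 0 } };

    for (int tick = 0; tick < 1000; tick++) {
        /* strict policy: always pick the runnable task with the highest priority */
        struct task *next = tasks[0].priority >= tasks[1].priority
                          ? &tasks[0] : &tasks[1];
        next->runs++;
    }

    /* "low" never runs: this is the starvation a fair scheduler avoids */
    printf("high ran %d times, low ran %d times\n",
           tasks[0].runs, tasks[1].runs);
    return 0;
}
```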
Furthermore, Java and other languages allow putting the running thread to sleep for e.g. 100ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
Programming languages and libraries implement "sleep" functionality with the aid of the kernel. Without kernel-level support, they'd have to busy-wait, spinning in a tight loop, until the requested sleep duration elapsed. This would wastefully consume the processor.
For threads that are put to sleep (Thread.sleep(long millis)), most systems do roughly the following (a sketch follows the list):
Suspend execution of the process and mark it as not runnable.
Set a timer for the given wait time. Systems provide hardware timers that let the kernel register to receive an interrupt at a given point in the future.
When the timer hits, mark the process as runnable.
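A hedged sketch of those three steps in C; the names (sleep_ms, program_timer_interrupt_at, schedule, timer_wakeup) are hypothetical, not a real kernel's interface, and a real implementation would also need locking and a queue of sleepers.

```c
/* Hedged sketch of the three steps above; sleep_ms, schedule and
 * program_timer_interrupt_at are hypothetical, not a real kernel API. */
enum thread_state { RUNNABLE, SLEEPING };

struct thread {
    enum thread_state state;
    unsigned long wakeup_tick;          /* tick at which to wake this thread */
};

extern struct thread *current;          /* thread running on this CPU */
extern unsigned long now_ticks;         /* advanced by the timer interrupt */
void program_timer_interrupt_at(unsigned long tick);  /* arm the hardware timer */
void schedule(void);                    /* pick another runnable thread */

/* Steps 1 and 2: mark the caller not runnable, arm a timer, give up the CPU. */
void sleep_ms(unsigned long ms) {
    current->state = SLEEPING;
    current->wakeup_tick = now_ticks + ms;       /* assume 1 tick == 1 ms here */
    program_timer_interrupt_at(current->wakeup_tick);
    schedule();                                  /* no busy-waiting: the CPU runs others */
}

/* Step 3: when the timer interrupt fires, the sleeper becomes runnable again. */
void timer_wakeup(struct thread *sleeper) {
    if (now_ticks >= sleeper->wakeup_tick)
        sleeper->state = RUNNABLE;               /* the scheduler may pick it up later */
}
```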
I hope you are aware of threading models like one-to-one, many-to-one, and many-to-many. So, I am not getting into much detail; it's just a reference for yourself.
It might appear to you as if this increases the overhead/complexity. But that's how threads (user threads created in the JVM) are operated upon. The selection is then based upon those threading models which I mentioned above. Check this Quora question and the answers to it, and please go through the best answer, given by Robert Love.
For further reading, I'd suggest the Scheduling Algorithms explanation on OSDev.org and the Operating System Concepts book by Galvin, Gagne, and Silberschatz.

Are processes executed on the operating system?

I saw this question somewhere:
Four processes p1, p2, p3, p4 have sizes 1GB, 1.2GB, 2GB, and 1GB respectively, and each process is executed in a time-sharing fashion. Will they be executed on an operating system?
I think the answer should be no, they are not executed on the operating system, because the OS is itself a process and it runs alongside these processes; there is switching between processes from time to time with the help of the dispatcher.
But I have the doubt that the answer could also be yes, because every process uses memory, which is managed by the operating system.
Please help me figure out the right answer to the question.
This depends entirely on the OS in question.
As well as starting processes (and possibly consisting of processes), an operating system generally provides services to the processes that run on it, such as memory management, file systems, communications and so on.
In that context, these processes can be said to be running on top of the OS. In other words, processes generally are of little use unless they communicate outside of themselves.
In any event, the dispatcher (or scheduler) tends to be an integral part of the OS so having your processes scheduled means that you're running on top of that OS.
Modern operating systems also provide paging of memory, which means that you can use a lot more virtual memory than there is physical memory; the OS is then responsible for handling requests for memory that has been paged out.
If two processes coexist, they each have their own share of memory. What we assume the operating system does is scheduling: the operating system may ask one process to stop and another to begin.