CPU and I/O Burst - process

Say there are two processes p1 and p2.
Each process has 4 'actions' that follow this format.
CPU Burst - I/O Burst - CPU Burst - I/O Burst.
Say that p2 is currently loaded onto the CPU and is in a CPU burst, while p1, which was just unloaded, is in an I/O burst. Will the I/O burst of p1 run at the same time that p2 is running its CPU burst?

Related

Does low GPU utilization indicate bad fit for GPU acceleration?

I'm running some GPU-accelerated PyTorch code and training it against a custom dataset, but while monitoring the state of my workstation during the process, I see GPU usage along the following lines:
I have never written my own GPU primitives, but I have a long history of low-level optimization for CPU-intensive workloads, and that experience makes me concerned that, while pytorch/torchvision are offloading the work to the GPU, this may not be an ideal workload for GPU acceleration.
When optimizing CPU-bound code, the goal is to get the CPU to perform as much (meaningful) work as possible per unit of time: a supposedly CPU-bound task that shows only 20% CPU utilization (of a single core or of all cores, depending on whether the task is parallelizable) is a task that is not being performed efficiently, because the CPU is sitting idle when ideally it would be working towards your goal. Low CPU usage means that something other than number crunching is taking up your wall-clock time, whether it's inefficient locking, heavy context switching, pipeline flushes, or blocking I/O in the main loop, all of which prevent the workload from properly saturating the CPU.
When looking at the GPU utilization in the chart above, and again speaking as a complete novice when it comes to GPU utilization, it strikes me that the GPU usage is extremely low and appears to be limited by the rate at which data is being copied into the GPU memory. Is this assumption correct? I would expect to see a spike in copy (to GPU) followed by an extended period of calculations/transforms, followed by a brief copy (back from the GPU), repeated ad infinitum.
I notice that despite the low (albeit constant) copy utilization, the GPU memory is constantly peaking at the 8GB limit. Can I assume the workload is being limited by the low GPU memory available (i.e. not maxing out the copy bandwidth because there's only so much that can be copied)?
Does that mean this is a workload better suited for the CPU (in this particular case with this RTX 2080 and in general with any card)?
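One way to sanity-check the assumption that the workload is limited by the copy rate (a minimal sketch, not from the original post: the batch shape and the matmul standing in for the real model are made up) is to time a single host-to-device copy against a single batch of GPU compute using CUDA events:

```python
import torch

assert torch.cuda.is_available()
device = torch.device("cuda")

batch = torch.randn(256, 3, 224, 224)           # hypothetical host-side image batch
weight = torch.randn(224, 224, device=device)   # stand-in for the real model's work

copy_start = torch.cuda.Event(enable_timing=True)
copy_end = torch.cuda.Event(enable_timing=True)
comp_start = torch.cuda.Event(enable_timing=True)
comp_end = torch.cuda.Event(enable_timing=True)

copy_start.record()
batch_gpu = batch.to(device)                    # host -> device copy
copy_end.record()

comp_start.record()
out = batch_gpu @ weight                        # placeholder for the forward pass
comp_end.record()

torch.cuda.synchronize()
print(f"copy:    {copy_start.elapsed_time(copy_end):.2f} ms")
print(f"compute: {comp_start.elapsed_time(comp_end):.2f} ms")
```

If the copy time dwarfs the compute time, the job really is transfer-bound, and the usual first remedies are on the input side (e.g. a DataLoader with multiple workers and pinned memory) rather than in the model itself.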

CPU Burst and I/O Burst

If a process that is currently being executed reaches an I/O burst, will the next available process get the CPU, or will the processor wait until the I/O burst of the first process finishes and then continue executing the first process?
Also, is this affected by whether the scheduling algorithm is preemptive or non-preemptive?
Thanks!

Single-threaded process on multiple CPUs and cores

Let's say I have a single-threaded process and 2 CPUs, each with 2 cores.
How many processes can I run at any moment, 2 or 4? I couldn't find a clear answer for this.
Is the CPU bound to the process so that a core is wasted, meaning only 2 processes can run at the same time? Or are there optimizations that let us run 4 processes at the same time on the 4 cores, even though we only have 2 CPUs?
There is no limit. The number of cores or CPUs has no connection whatsoever to the number of processes you can run.
I'm typing this answer to you on a machine with 8 cores that's currently executing 218 processes with a total of 524 threads.
Is the CPU bound to the process so that a core is wasted, meaning only 2 processes can run at the same time? Or are there optimizations that let us run 4 processes at the same time on the 4 cores, even though we only have 2 CPUs?
A CPU has no idea what a process is and doesn't care whether a thread it's executing is associated with a process or not. Processes are OS concepts and CPUs don't know or care about them.
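A quick way to see this in practice is to deliberately oversubscribe the machine and watch every process still complete; the OS simply time-slices them across the available cores. This is a minimal sketch using Python's standard multiprocessing module, with an arbitrary 4x oversubscription factor and a busy-wait worker:

```python
import multiprocessing as mp
import os
import time

def spin(seconds):
    """Busy-wait so the worker keeps a core occupied (pure CPU burst)."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

if __name__ == "__main__":
    cores = os.cpu_count()
    workers = 4 * cores                    # deliberately more processes than cores
    start = time.monotonic()
    procs = [mp.Process(target=spin, args=(1,)) for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    elapsed = time.monotonic() - start
    print(f"{workers} CPU-bound processes ran on {cores} cores in {elapsed:.1f}s")
```

All of the processes run "at the same time" from the user's point of view; having more of them than cores just means each one gets a smaller share of CPU time.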

How does the CPU handle the sleep function?

What will the CPU do when a sleep(10) or equivalent statement is executed? How will it wait for exactly 10 seconds when the CPU also does context switching and brings this process into a wait state?
The sleep function usually asks the operating system to assign the CPU to another process. In fact, there is a special process (usually named 'idle') which gets the CPU when no other process is waiting for it. On Intel processors, the idle process executes a special halt instruction (HLT), which stops the processor from executing further instructions.
Whether the processor is assigned to the idle process or not, it still responds to hardware interrupts (signals produced by other computer hardware to inform the processor about some event). One source of interrupts is a timer, which sends an interrupt to the CPU every 10 ms or so (the exact period depends on the particular computer and operating system). In response to the timer interrupt, the operating system may assign the CPU back to the process that executed the sleep, if the specified time has elapsed.
Since the operating system can only measure time with a precision of one timer tick, your process will be ready after the specified time, give or take a couple of ticks. If the CPU is busy at that moment, your process gets the CPU with some additional delay.
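You can observe that tick-plus-scheduling overshoot directly. A minimal sketch (the requested durations are arbitrary):

```python
import time

# Request a few short sleeps and measure how long we actually waited.
# The overshoot reflects timer granularity plus scheduling delay.
for requested in (0.001, 0.010, 0.100):
    t0 = time.perf_counter()
    time.sleep(requested)
    actual = time.perf_counter() - t0
    print(f"requested {requested * 1000:6.1f} ms, "
          f"got {actual * 1000:7.2f} ms, "
          f"overshoot {(actual - requested) * 1000:.2f} ms")
```

On a lightly loaded machine the overshoot is usually small; under load or with a coarse timer tick it grows, exactly as described above.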

Why are I/O-bound processes faster?

Typically the CPU runs for a while without stopping, then a system call is made to read from a file or write to a file. When the system call completes, the CPU computes again until it needs more data or has to write more data, and so on.
Some processes spend most of their time computing, while others spend most of their time waiting for I/O. The former are called compute-bound; the latter are called I/O-bound. Compute-bound processes typically have long CPU bursts and thus infrequent I/O waits, whereas I/O-bound processes have short CPU bursts and thus frequent I/O waits.
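As an aside, that alternating pattern looks roughly like the following loop (a minimal sketch; the file names and the per-chunk transform are placeholders):

```python
# Alternating I/O bursts and CPU bursts: read a chunk (I/O), transform it (CPU),
# write it back out (I/O), and repeat until the input is exhausted.
with open("input.dat", "rb") as src, open("output.dat", "wb") as dst:
    while True:
        chunk = src.read(64 * 1024)               # I/O burst: wait for the disk
        if not chunk:
            break
        result = bytes(b ^ 0xFF for b in chunk)   # CPU burst: compute on the data
        dst.write(result)                         # I/O burst: write the result
```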
As CPU gets faster, processes tend to get more I/O-bound.
Why and how?
Edited:
This is not a homework question. I was studying the book (Modern Operating Systems by Tanenbaum) and came across this statement there. I didn't get the concept, which is why I am asking here. Please don't tag this question as homework.
With a faster CPU, the amount of time spent using the CPU will decrease (given the same code), but the amount of time spent doing I/O will stay the same (given the same I/O performance), so the percentage of time spent on I/O will increase, and I/O will become the bottleneck.
That does not mean that "I/O bound processes are faster".
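To put toy numbers on that (made up purely for illustration): a job that spends 9 s on the CPU and 1 s on I/O, after a 10x CPU speedup, spends 0.9 s on the CPU and still 1 s on I/O, so the I/O share of wall time jumps from 10% to over 50% and the overall speedup is only about 5x.

```python
# Toy arithmetic: the same job before and after a 10x CPU speedup.
cpu_time, io_time = 9.0, 1.0                  # seconds (illustrative numbers)
total_before = cpu_time + io_time             # 10.0 s, I/O is 10% of wall time

cpu_time_fast = cpu_time / 10                 # CPU part shrinks 10x
total_after = cpu_time_fast + io_time         # 1.9 s, I/O is now ~53% of wall time

print(f"before: {total_before:.1f}s total, I/O share {io_time / total_before:.0%}")
print(f"after:  {total_after:.1f}s total, I/O share {io_time / total_after:.0%}")
print(f"overall speedup: {total_before / total_after:.1f}x despite a 10x faster CPU")
```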
As CPU gets faster, processes tend to get more I/O-bound.
What it's trying to say is:
As CPU gets faster, processes tend to not increase in speed in proportion to CPU speed because they get more I/O-bound.
Which means that I/O bound processes are slower than non-I/O bound processes, not faster.
Why is this the case? Well, when only the CPU speed increases, the rest of your system hasn't gotten any faster. Your hard disk is still the same speed, your network card is still the same speed, even your RAM is still the same speed*. So as the CPU gets faster, the limiting factor for your program becomes less and less the CPU speed and more and more how slow your I/O is. Programs naturally shift to being more and more I/O-bound; in other words, as CPU gets faster, processes tend to get more I/O-bound.
*note: Historically everything else also improved in speed along with the CPU, just not as much. For example, CPUs went from 4 MHz to 2 GHz, a 500x speed increase, whereas hard disk speeds went from around 1 MB/s to 70 MB/s, a lame 70x increase.