Multilevel feedback queue scheduling - process

Can somebody please explain how to draw the Gantt chart for the following using multilevel feedback queue scheduling?
Consider a multilevel feedback queue scheduler with three queues, numbered Q1, Q2, Q3. The scheduler first executes processes in Q1, which is given a time quantum of 10 milliseconds. If a process does not finish within this time, it is moved to the tail of Q2. The scheduler executes processes in Q2 only when Q1 is empty. The process at the head of Q2 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into Q3. Processes in Q3 are run on an FCFS basis, only when Q1 and Q2 are empty.
Process   Arrival time   Burst time
P1        0              17
P2        12             25
P3        28             8
P4        36             32
P5        46             18

First of all, Q1 uses a time quantum of 10 ms, as given. Processes enter the ready queue (Q1) in order of arrival, so over time Q1 receives P1, P2, P3, P4, P5; whenever a process uses up its quantum without finishing, it is demoted to the next lower queue, where it completes its remaining execution.
In the intervals below, start times are inclusive and end times are exclusive. Q1 always has priority when the CPU picks its next process, but in this solution a running process is never preempted mid-slice by a new arrival:
0-->10 ms-------P1 // Q1 quantum expired; P1 (7 ms left) demoted to Q2
10-->17 ms------P1 // Q1 empty at t = 10, so Q2 runs; P1 finished execution
17-->27 ms------P2 // arrived at 12; Q1 quantum expired, P2 (15 ms left) demoted to Q2
27-->42 ms------P2 // Q1 empty at t = 27; 15 ms fits in the 16 ms quantum, P2 finished
42-->50 ms------P3 // P3 finished execution
50-->60 ms------P4 // Q1 quantum expired; P4 (22 ms left) demoted to Q2
60-->70 ms------P5 // Q1 quantum expired; P5 (8 ms left) demoted to Q2
70-->86 ms------P4 // Q2 quantum of 16 ms expired; P4 (6 ms left) demoted to Q3
86-->94 ms------P5 // P5 finished execution
94-->100 ms-----P4 // Q3 runs FCFS; finally, P4 finished execution
Hence, this would be the Gantt chart diagram for time-quantum = 10 ms.
If you're left over with any doubt, please leave a comment below!

A process that arrives for queue 1 preempts a process in queue 2 (Operating System Concepts, International Student Version, 9th Edition, page 216).
So I think P2 preempts P1 at t = 12 ms, and the suggestion above is not correct.

The execution order in this solution seems to be wrong, so I have corrected it. Please correct me if I am wrong.

Final answer: while Q1 is empty, Q2 executes; but at t = 12 ms P2 arrives in Q1, so the process running from Q2 is preempted and must wait until Q1 is empty again.
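The preemptive schedule can be checked with a short millisecond-tick simulation in Python. This is a sketch under stated assumptions, since the book does not pin down every detail: a process preempted from a lower queue rejoins the head of its own queue and receives a fresh quantum when it resumes.

```python
from collections import deque

# Workload from the question: (name, arrival, burst) in milliseconds
procs = [("P1", 0, 17), ("P2", 12, 25), ("P3", 28, 8),
         ("P4", 36, 32), ("P5", 46, 18)]

QUANTA = [10, 16, None]  # quanta for Q1 and Q2; Q3 is FCFS (no quantum)


def mlfq(procs):
    remaining = {name: burst for name, _, burst in procs}
    queues = [deque(), deque(), deque()]
    arrivals = sorted(procs, key=lambda p: p[1])
    t, i = 0, 0
    gantt = []      # merged (start, end, name) slices of CPU time
    current = None  # [name, queue level, ms used in current quantum]
    while i < len(arrivals) or any(queues) or current:
        # new arrivals always enter Q1
        while i < len(arrivals) and arrivals[i][1] <= t:
            queues[0].append(arrivals[i][0])
            i += 1
        # a Q1 arrival preempts a process running from a lower queue;
        # assumption: it rejoins the HEAD of its queue, quantum reset
        if current and current[1] > 0 and queues[0]:
            queues[current[1]].appendleft(current[0])
            current = None
        if current is None:  # dispatch from the highest non-empty queue
            for lvl in range(3):
                if queues[lvl]:
                    current = [queues[lvl].popleft(), lvl, 0]
                    break
        if current is None:  # nothing ready: CPU idle for this ms
            t += 1
            continue
        name, level, _ = current
        remaining[name] -= 1  # run the dispatched process for 1 ms
        current[2] += 1
        if gantt and gantt[-1][2] == name and gantt[-1][1] == t:
            gantt[-1] = (gantt[-1][0], t + 1, name)  # extend last slice
        else:
            gantt.append((t, t + 1, name))
        t += 1
        if remaining[name] == 0:
            current = None  # finished
        elif QUANTA[level] is not None and current[2] == QUANTA[level]:
            queues[level + 1].append(name)  # quantum used up: demote
            current = None
    return gantt
```

Whatever the exact preemption details, this workload keeps the CPU busy continuously, so the last process completes at t = 100 ms, the total burst time.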


How to add 2 minutes delay between jobs in a queue?

I am using Hangfire in ASP.NET Core with a server that has 20 workers, which means up to 20 jobs can run at the same time.
What I need is to enqueue the jobs one by one, with a 2 minute delay between each one and the next. Each job can take 1-45 minutes. I don't have a problem running jobs concurrently, but I do have a problem with 20 jobs starting at the same time. That's why changing the worker count to 1 is not practical for me (it would slow the process down a lot).
The idea is that I just don't want two jobs to start in the same second, since that may cause conflicts in my logic; if the second job starts 2 minutes after the first one, then I am good.
How can I achieve that?
You can use BackgroundJob.Schedule() to make your job run at a specific time:
BackgroundJob.Schedule(() => Console.WriteLine("Hello"), dateTimeToExecute);
Based on that set a date for the first job to execute, and then increase this date to 2 minutes for each new job.
Something like this:
var dateStartDate = DateTime.Now;
foreach (var j in listOfjobsToExecute)
{
    BackgroundJob.Schedule(() => j.Run(), dateStartDate);
    dateStartDate = dateStartDate.AddMinutes(2);
}
See more here:
https://docs.hangfire.io/en/latest/background-methods/calling-methods-with-delay.html?highlight=delay

What does "bw: SpinningDown" mean in a RedisTimeoutException?

What does "bw: SpinningDown" mean in this error -
Timeout performing GET (5000ms), next: GET foo!bar!baz, inst: 5, qu: 0, qs: 0, aw: False, bw: SpinningDown, ....
Does it mean that the Redis server instance is spinning down, or something else?
It means something else actually. The abbreviation bw stands for Backlog-Writer, which contains the status of what the backlog is doing in Redis.
For this particular status: SpinningDown, you actually left out the important bits that relate to it.
Four values are tracked for the backlog's worker threads: Busy, Free, Min and Max.
Let's take these hypothetical values: Busy=250,Free=750,Min=200,Max=1000
In this case there are 50 more existing (busy) threads than the minimum.
The cost of spinning up a new thread is high, especially once you hit the .NET-provided global thread pool limit, in which case only one new thread is created every 500 ms due to throttling.
So once the Backlog is done processing an item, instead of just exiting the thread, it will keep it in a waiting state (SpinningDown) for 5 seconds. If during that time there still is more Backlog to process, the same thread will process another item from the Backlog.
If no Backlog item needed to be processed in those 5 seconds, the thread will be exited, which will eventually lead to a decrease in Busy (existing) threads.
This only happens for threads above the Min count of course, as those will be kept alive even if there is no work to do.
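The lingering behaviour can be sketched in Python. This is a hypothetical analogue of the idea, not StackExchange.Redis code: after finishing an item, the worker blocks on the backlog for a grace period ("spinning down") instead of exiting immediately, so an arriving item can reuse the thread.

```python
import queue
import threading

SPIN_DOWN_TIMEOUT = 5.0  # seconds a finished worker lingers for more work


def backlog_worker(backlog, results, timeout=SPIN_DOWN_TIMEOUT):
    """Process backlog items, lingering briefly before exiting."""
    while True:
        try:
            # The "SpinningDown" state: stay alive for a while in case
            # more backlog arrives, avoiding the cost of a fresh thread.
            item = backlog.get(timeout=timeout)
        except queue.Empty:
            return  # no work during the grace period: let the thread exit
        results.append(item * 2)  # stand-in for real item processing
        backlog.task_done()
```

Run it with `threading.Thread(target=backlog_worker, args=(backlog, results))`; the number of live threads above the Min count shrinks only as these grace periods elapse without work.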

AnyLogic selectOutput condition

I'm simulating a queuing system where customers join a queue called RDQueue with a capacity of 5, and move to a different queue called TDQueue when RDQueue is full (has reached its capacity).
I used a selectOutput block with RDQueue on the true branch and TDQueue on the false branch with the condition: RDQueue.size()<5
There should be customers going to TDQueue, but when I run this simulation no customers ever go through the false branch.
(for some reason the image of what I've done won't upload)
I have a source with arrival rate of 0.361 per minute and a delay for RD with a delay time: exponential(8.76) minutes.
According to queuing theory, 68.5% of arrival customers should find RDQueue full and go to TDQueue.
TIA
If your delay time is exponential(8.76), the mean delay will always be far below the rate at which customers are coming.
A random sample from the exponential distribution is x = log(1−u)/(−λ).
With λ = 8.76 and u a uniform random number, the expected value of your delay time is 1/8.76 ≈ 0.114 minutes, so RDQueue has a probability of being full of nearly 0%.
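A quick back-of-the-envelope check in Python makes this concrete. It assumes AnyLogic's exponential(lambda) takes a rate parameter, i.e. the sample mean is 1/lambda; to get a mean delay of 8.76 minutes you would write exponential(1/8.76) instead.

```python
ARRIVAL_RATE = 0.361  # customers per minute, from the Source block
DELAY_ARG = 8.76      # exponential(8.76): rate parameter, mean = 1/8.76

mean_interarrival = 1 / ARRIVAL_RATE  # ~2.77 minutes between customers
mean_delay = 1 / DELAY_ARG            # ~0.114 minutes in the RD delay

# Utilization of the single RD server: fraction of time it is busy.
rho = mean_delay / mean_interarrival  # ~0.041

# At ~4% utilization, RDQueue (capacity 5) is essentially never full,
# so the condition RDQueue.size() < 5 is almost always true and no
# customer takes the false branch to TDQueue.
```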

Unable to exit while loop in UVM monitor

This might be a silly mistake on my side that I have overlooked, but I'm fairly new to UVM and I tried tinkering with my code for a while before asking. I'm trying to send a stream of 8-bit data within a packet, using a data/valid/stall protocol, from my UVM driver to the DUT. The issue is that my input monitor is not able to pick up the transactions that are driven.
I have a while loop with the condition that the valid bit must be high and the stall bit must be low. As long as this condition holds, the monitor picks up the data byte and pushes it into a queue. I know for a fact that the data is being picked up and pushed into the queue, as I used $display statements along the way. The problem arises once all the data bytes have been received and the valid bit goes low. Ideally this should cause an exit from the while loop, but it doesn't. Any help here would be appreciated. I have attached a snippet of the code below. Thanks in advance.
virtual task main_phase(uvm_phase phase);
  $display("Run phase of input monitor");
  collect_transfer();
endtask : main_phase

virtual task collect_transfer();
  fork
    forever begin
      wait_for_valid_transaction_cycle();
      create_and_populate_pkt();
      broadcast_pkt();
      #(iP0_vif.cb_iP0_MON);
    end
  join_none
endtask : collect_transfer

virtual task wait_for_valid_transaction_cycle();
  wait (iP0_vif.cb_iP0_MON.ip_valid && ~iP0_vif.cb_iP0_MON.ip_stall);
endtask : wait_for_valid_transaction_cycle

virtual task create_and_populate_pkt();
  pkt = Router_seq_item::type_id::create("pkt");
  pkt.valid = iP0_vif.cb_iP0_MON.ip_valid;
  pkt.sop   = iP0_vif.cb_iP0_MON.ip_sop;
  $display("before data collection");
  while (iP0_vif.cb_iP0_MON.ip_valid === `HIGH && iP0_vif.cb_iP0_MON.ip_stall === `LOW) begin
    $display("After checking for stall");
    pkt.data = iP0_vif.cb_iP0_MON.ip_data;
    $display(pkt.data);
    pkt.data_q.push_front(pkt.data);
    pkt.eop = iP0_vif.cb_iP0_MON.ip_eop;
    $display("print check in input monitor # time = %0t", $time);
    #(iP0_vif.cb_iP0_MON);
  end
  $display("before printing input packet from monitor");
  Check_for_port_route_and_populate_packet_field(pkt);
  print_packet(pkt);
endtask : create_and_populate_pkt
The $display statement "before printing input packet from monitor" is not being displayed.
HIGH is defined as a binary 1 and LOW is defined as a binary 0.
The output of the code in terms of display statements is as below.
before data collection
before checking for stall
After checking for stall
2
print check in input monitor # time = 105
before checking for stall
After checking for stall
1
print check in input monitor # time = 115
before checking for stall
After checking for stall
3
print check in input monitor # time = 125
It's possible that the main phase objection is being dropped elsewhere in your environment. UVM will automatically kill any threads that were spawned during a phase when it ends.
To fix this, do not object to the main phase in your monitor. Objecting to that phase is the responsibility of the threads creating the stimulus. Instead, you should be launching this monitor during the run_phase, which will ensure that your loop is not killed until the end of simulation.
Also, during the shutdown phase, you will want your monitor to object whenever it is currently seeing a packet. This will ensure that simulation doesn't end as soon as stimulus has been sent in, giving your other monitors time to collect responses from the DUT.

CPU scheduling algorithms and arrival time

I was looking at the examples found on this website:
http://www.tutorialspoint.com/operating_system/os_process_scheduling_algorithms.htm
And there's something that just doesn't make sense about those examples. Take shortest-job-first for example. The premise is that you take the process with the least execution time and run that first.
The example runs p1 first and then p0. But WHY? At t = 0 the only process that exists in the queue is p0. Wouldn't that start running at t = 0, and then p1 would start at t = 6?
I've got the same issue with priority based scheduling.
You are right: since process P0 arrived at the queue at 0 seconds, before P1, it will start executing before P1.
Their answer would be correct if there were no arrival times for the processes. In that case, all processes are considered to have reached the queue at the same time, and the process with the shortest execution time is executed by the CPU first.
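A minimal non-preemptive SJF sketch in Python illustrates the point: only processes that have already arrived are candidates at each decision. The arrival/burst values here are hypothetical, chosen just to show a longer process starting first because it is the only one present at t = 0 (they are not the tutorialspoint table).

```python
# Hypothetical workload: (name, arrival, burst)
procs = [("p0", 0, 6), ("p1", 2, 2), ("p2", 3, 8)]


def sjf_nonpreemptive(procs):
    pending = sorted(procs, key=lambda p: p[1])  # order by arrival time
    t, order = 0, []
    while pending:
        # only processes that have ARRIVED by time t are candidates
        ready = [p for p in pending if p[1] <= t]
        if not ready:              # CPU idle until the next arrival
            t = pending[0][1]
            continue
        # among the ready processes, pick the shortest burst
        name, arr, burst = min(ready, key=lambda p: p[2])
        order.append((name, t, t + burst))  # (process, start, finish)
        t += burst
        pending.remove((name, arr, burst))
    return order
```

Here p0 runs first at t = 0 even though p1 is shorter, because p1 has not arrived yet; p1 then runs at t = 6. Dropping the arrival check (scheduling from the full list) would reproduce the website's ordering.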