Can both processes block each other in Peterson's Algorithm?

Here is a picture of Peterson's algorithm from https://en.wikipedia.org/wiki/Peterson%27s_algorithm :
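Since the picture itself isn't reproduced here, a small runnable C++ transcription of the two processes from that page looks roughly like this (std::atomic is used so the busy-waits are well-defined in C++; the page itself uses plain shared variables):

#include <atomic>
#include <thread>
#include <cstdio>

std::atomic<bool> flag[2] = {false, false};
std::atomic<int>  turn{0};
int counter = 0;                          // shared data protected by the algorithm

void process0() {
    for (int i = 0; i < 100000; ++i) {
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1) { }  // busy wait
        ++counter;                        // critical section
        flag[0] = false;
    }
}

void process1() {
    for (int i = 0; i < 100000; ++i) {
        flag[1] = true;
        turn = 0;
        while (flag[0] && turn == 0) { }  // busy wait
        ++counter;                        // critical section
        flag[1] = false;
    }
}

int main() {
    std::thread t0(process0), t1(process1);
    t0.join();
    t1.join();
    std::printf("counter = %d (expected 200000)\n", counter);
}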
Is it possible for both processes to block each other?
For example:
Process 0 sets flag[0] = true;
Process 0 is interrupted and process 1 starts.
Process 1 sets flag[1] = true;
Process 1 is interrupted and then process 0 starts again.
Process 0 sets turn = 1; and so now the while loop condition is met, blocking process 0.
Process 0 is interrupted and process 1 starts again.
Process 1 sets turn = 0; and so now the while loop condition is met, blocking process 1.
Now both processes are blocked.

Related

vulkan-tutorial - how rendering and presentation are synced

https://vulkan-tutorial.com/Drawing_a_triangle/Drawing/Rendering_and_presentation
While reading the above tutorial, I found a scenario where multiple items can pile up in the presentation queue.
The tutorial has a loop that runs the code below repeatedly.
void drawFrame() {
    vkWaitForFences(device, 1, &inFlightFence, VK_TRUE, UINT64_MAX);
    vkResetFences(device, 1, &inFlightFence);

    uint32_t imageIndex;
    vkAcquireNextImageKHR(device, swapChain, UINT64_MAX, imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);

    vkResetCommandBuffer(commandBuffer, /*VkCommandBufferResetFlagBits*/ 0);
    recordCommandBuffer(commandBuffer, imageIndex);

    VkSubmitInfo submitInfo{};
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;

    VkSemaphore waitSemaphores[] = {imageAvailableSemaphore};
    VkPipelineStageFlags waitStages[] = {VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT};
    submitInfo.waitSemaphoreCount = 1;
    submitInfo.pWaitSemaphores = waitSemaphores;
    submitInfo.pWaitDstStageMask = waitStages;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &commandBuffer;

    VkSemaphore signalSemaphores[] = {renderFinishedSemaphore};
    submitInfo.signalSemaphoreCount = 1;
    submitInfo.pSignalSemaphores = signalSemaphores;

    if (vkQueueSubmit(graphicsQueue, 1, &submitInfo, inFlightFence) != VK_SUCCESS) {
        throw std::runtime_error("failed to submit draw command buffer!");
    }

    VkPresentInfoKHR presentInfo{};
    presentInfo.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
    presentInfo.waitSemaphoreCount = 1;
    presentInfo.pWaitSemaphores = signalSemaphores;

    VkSwapchainKHR swapChains[] = {swapChain};
    presentInfo.swapchainCount = 1;
    presentInfo.pSwapchains = swapChains;
    presentInfo.pImageIndices = &imageIndex;

    vkQueuePresentKHR(presentQueue, &presentInfo);
}
There are two semaphores: one for rendering and one for presentation.
Similarly, there are two queues, one for rendering and one for presentation.
Here is a scenario I found that can happen.
After the first iteration, each queue has one item to process.
At the second iteration, none of the items in the queues have been processed yet, so the loop blocks at vkWaitForFences.
The first item in the graphics queue is processed.
It signals the blocking fence and the rendering semaphore.
The second iteration continues from vkWaitForFences.
The graphics queue receives the second item. It now holds one item in total.
The present queue also receives the second item. It has not processed the first item yet, so it holds two items in total.
The graphics queue processes the second item.
It signals the rendering semaphore again. The rendering semaphore has now received two signals without being waited on in between.
Now the present queue will process only one item and do nothing more until the next iteration.
If this keeps happening in later iterations, unprocessed items will pile up in the present queue.
Hence, if the graphics queue happens to process items faster than the present queue, there will be a starvation problem.
The tutorial does not explain how this issue can be solved.
Is there something in Vulkan that prevents this issue from occurring, or have I actually found a flaw in the tutorial code?
vkAcquireNextImageKHR signals the image semaphore once the swapchain image whose index it returned is available again.
The image with the index returned by vkAcquireNextImageKHR only becomes available again once the item with that index has been processed in the present queue.
Hence, if the items in the present queue are not processed, vkAcquireNextImageKHR will either not signal the image semaphore or will block, stopping the next rendering.
So the number of items sitting in the present queue at the same time does not grow infinitely; it stops increasing once it equals the number of swapchain images.
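To make that bound concrete, here is a toy, self-contained C++20 sketch of the same mechanism (this is not Vulkan code; the image count and timings are invented for illustration): the render loop must acquire a free swapchain image before it can queue a present, and an image only becomes free again after it has been presented, so the backlog can never exceed the number of swapchain images.

#include <chrono>
#include <cstdio>
#include <mutex>
#include <queue>
#include <semaphore>
#include <stop_token>
#include <thread>

int main() {
    constexpr int kSwapchainImages = 3;                // assumed swapchain size
    std::counting_semaphore<kSwapchainImages> freeImages(kSwapchainImages);

    std::mutex m;
    std::queue<int> presentQueue;                      // frames queued but not yet presented

    std::jthread presenter([&](std::stop_token st) {
        while (!st.stop_requested()) {
            std::this_thread::sleep_for(std::chrono::milliseconds(20));   // slow presentation
            std::lock_guard lock(m);
            if (!presentQueue.empty()) {
                presentQueue.pop();
                freeImages.release();                  // the image becomes acquirable again
            }
        }
    });

    for (int frame = 0; frame < 100; ++frame) {
        freeImages.acquire();                          // the "acquire next image" step blocks here
        std::this_thread::sleep_for(std::chrono::milliseconds(5));        // fast rendering
        std::lock_guard lock(m);
        presentQueue.push(frame);
        std::printf("frame %d queued, backlog = %zu\n", frame, presentQueue.size());
        // the backlog never exceeds kSwapchainImages
    }
}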

DragonflyBSD: possible race-condition in lock manager (kern_lock.c) code?

Lately I've been reading the lock manager (kern_lock.c) code, and bumped into a scenario which I think could create a race condition.
Step 1:
In undo_shreq(): if there is an upgrade request pending, the code resets the LKC_UPREQ flag and calls wakeup(); this happens only if
if ((count & (LKC_EXREQ | LKC_UPREQ | LKC_CANCEL)) &&
    (count & (LKC_SMASK | LKC_XMASK)) == 0)
Step 2:
Now, in parallel, another thread T2 is trying to get an exclusive lock and reaches the trivial condition in lockmgr_exclusive():
if ((count & (LKC_UPREQ | LKC_EXREQ |
              LKC_XMASK)) == 0 &&
    ((count & LKC_SHARED) == 0 ||
     (count & LKC_SMASK) == 0))
So T2 increments the count by 1 and sets itself as the owner thread, meaning it has acquired exclusive ownership.
Step 3:
A thread (T1) that was sleeping on the LKC_UPREQ flag is woken by Step 1; here is the code after the sleep (after the LK_SLEEPFAIL and sleep-error sanity checks) in lockmgr_upgrade():
if ((count & LKC_UPREQ) == 0) {             // reset by Step 1
    KKASSERT((count & LKC_XMASK) == 1);     // true, by Step 2
    lkp->lk_lockholder = td;
    break;
}
What I see (please correct me if I'm wrong) is that at Step 3, thread T1 sets lk_lockholder to itself, meaning it got exclusiveness!
The way it works is that if one thread sets UPREQ and then sleeps, another thread will grant it the exclusive lock and wake it up. The granting thread clears UPREQ and increments the exclusive count, but does not know 'who' set the UPREQ so it is then up to the thread that set UPREQ to set the lockholder field.
Since the lockmgr code must deal with many-exclusive-to-single-shared starvation, many-shared-to-single-exclusive starvation, AND deadlock edge cases, it is fairly complex. Edge cases have been found over the last year or two, but this particular case doesn't look like a bug to me. It is just a bit confusing because the exclusive lock is granted (UPREQ cleared and the exclusive count incremented) by the second thread, while the first thread is still responsible for setting the lk_lockholder field.
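As an illustration only, here is a toy, self-contained C++ sketch of that handoff; the names and bit layout are invented and this is not the real kern_lock.c logic. The granting side clears the upgrade-request flag and bumps the exclusive count without touching the holder field, and the woken requester claims the holder field afterwards.

#include <atomic>
#include <cassert>
#include <thread>

constexpr unsigned TOY_UPREQ = 0x80000000u;   // "upgrade requested" flag (made up)
constexpr unsigned TOY_XMASK = 0x7fffffffu;   // exclusive-count field (made up)

struct ToyLock {
    std::atomic<unsigned> count{0};
    std::atomic<std::thread::id> holder{std::thread::id{}};
};

// Granting side (the undo_shreq() role in the description above): atomically
// clear UPREQ and bump the exclusive count, then wake the sleeper. It never
// touches 'holder' because it does not know which thread asked for the upgrade.
void toy_grant_upgrade(ToyLock &lk) {
    unsigned c = lk.count.load();
    while (!lk.count.compare_exchange_weak(c, (c & ~TOY_UPREQ) + 1)) { }
    // wakeup(&lk) would go here
}

// Requesting side (the lockmgr_upgrade() role after the sleep): seeing UPREQ
// cleared means the lock was already handed over, so the only remaining step
// is to record ourselves as the holder.
void toy_finish_upgrade(ToyLock &lk) {
    unsigned c = lk.count.load();
    if ((c & TOY_UPREQ) == 0) {
        assert((c & TOY_XMASK) == 1);         // the grantor already bumped the count for us
        lk.holder.store(std::this_thread::get_id());
    }
}

int main() {
    ToyLock lk;
    lk.count.store(TOY_UPREQ);                // pretend an upgrade request is already registered
    toy_grant_upgrade(lk);                    // another thread grants it...
    toy_finish_upgrade(lk);                   // ...and the requester claims ownership on wakeup
    assert(lk.holder.load() == std::this_thread::get_id());
}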

test plan is not getting executed while using 1 thread for one loop and using throughput controllers inside the thread group

I have a test plan in which
the distribution of throughput controllers is:
post and get 1 => 10%
post and get 2 => 40%
post and get 3 => 25%
post => 25%
If I run the test plan with loop count = forever, it works fine with a single thread or multiple threads,
but if I run it with loop count = 1 and threads = 1, it does not even start the test.
How to fix it?
1 thread with 1 loop means only one execution in total, which you then want to split across different percentages; that is not possible. But with the loop set to forever, 1 thread can run many iterations and execute the requests according to the defined percentages; that is possible.
So the options are to loop forever or to increase the number of threads. In your case a minimum of 6 threads will work for one iteration; increase the thread count for more executions according to the percentages.
Hope this helps.

Unable to exit while loop in UVM monitor

This might be a silly mistake on my side that I have overlooked, but I'm fairly new to UVM and I tried tinkering with my code for a while before asking this. I'm trying to send a stream of 8-bit data within a packet, using a data-valid-stall protocol, from my UVM driver to the DUT. I'm facing an issue with my input monitor not being able to pick up the transactions that are driven.
I have a while loop with a condition that the valid bit must be high and the stall bit must be low. As long as this condition holds, the monitor should pick up the data byte and push it into a queue. I know for a fact that the data is being picked up and pushed into the queue, as I used $display statements along the way. The problem arises once all the data bytes have been received and the valid bit goes low. Ideally, this should cause the exit from the while loop, but it isn't doing so. Any help here would be appreciated. I have attached a snippet of the code below. Thanks in advance.
virtual task main_phase (uvm_phase phase);
    $display("Run phase of input monitor");
    collect_transfer();
endtask: main_phase

virtual task collect_transfer();
    fork
        forever begin
            wait_for_valid_transaction_cycle();
            create_and_populate_pkt();
            broadcast_pkt();
            #(iP0_vif.cb_iP0_MON);
        end
    join_none
endtask: collect_transfer

virtual task wait_for_valid_transaction_cycle();
    wait(iP0_vif.cb_iP0_MON.ip_valid && ~iP0_vif.cb_iP0_MON.ip_stall);
endtask: wait_for_valid_transaction_cycle

virtual task create_and_populate_pkt();
    pkt = Router_seq_item :: type_id :: create("pkt");
    pkt.valid = iP0_vif.cb_iP0_MON.ip_valid;
    pkt.sop = iP0_vif.cb_iP0_MON.ip_sop;
    $display("before data collection");
    while(iP0_vif.cb_iP0_MON.ip_valid === `HIGH && iP0_vif.cb_iP0_MON.ip_stall === `LOW) begin
        $display("After checking for stall");
        pkt.data = iP0_vif.cb_iP0_MON.ip_data;
        $display(pkt.data);
        pkt.data_q.push_front(pkt.data);
        pkt.eop = iP0_vif.cb_iP0_MON.ip_eop;
        $display("print check in input monitor # time = %0t", $time);
        #(iP0_vif.cb_iP0_MON);
    end
    $display("before printing input packet from monitor");
    Check_for_port_route_and_populate_packet_field(pkt);
    print_packet(pkt);
endtask: create_and_populate_pkt
The $display statement "before printing input packet from monitor" is not being displayed.
HIGH is defined as a binary 1 and LOW is defined as a binary 0.
The output of the code in terms of display statements is as below.
before data collection
before checking for stall
After checking for stall
2
print check in input monitor # time = 105
before checking for stall
After checking for stall
1
print check in input monitor # time = 115
before checking for stall
After checking for stall
3
print check in input monitor # time = 125
It's possible that the main phase objection is being dropped elsewhere in your environment. UVM will automatically kill any threads that were spawned during a phase when that phase ends.
To fix this, do not object to the main phase in your monitor. Objecting to that phase is the responsibility of the threads creating the stimulus. Instead, you should be launching this monitor during the run_phase, which will ensure that your loop is not killed until the end of simulation.
Also, during the shutdown phase, you will want your monitor to object whenever it is currently seeing a packet. This will ensure that simulation doesn't end as soon as stimulus has been sent in, giving your other monitors time to collect responses from the DUT.

In celery, how to ensure tasks are retried when worker crashes

First of all, please don't consider this question a duplicate of this other question.
I have set up an environment which uses Celery with Redis as the broker and result backend. My question is: how can I make sure that when the Celery workers crash, all the scheduled tasks are retried once the Celery worker is back up?
I have seen advice on using CELERY_ACKS_LATE = True, so that the broker will re-drive the tasks until it gets an ACK, but in my case it's not working. Whenever I schedule a task, it immediately goes to the worker, which holds it until the scheduled time of execution. Let me give an example:
I am scheduling a task like this: res = test_task.apply_async(countdown=600), but immediately in the Celery worker logs I can see something like: Got task from broker: test_task[a137c44e-b08e-4569-8677-f84070873fc0] eta:[2013-01-...]. Now when I kill the Celery worker, these scheduled tasks are lost. My settings:
BROKER_URL = "redis://localhost:6379/0"
CELERY_ALWAYS_EAGER = False
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
CELERY_ACKS_LATE = True
Apparently this is how celery behaves.
When a worker is abruptly killed (but the dispatching process isn't), the message will be considered 'failed' even though you have acks_late=True.
The motivation (to my understanding) is that if the consumer was killed by the OS due to out-of-memory, there is no point in redelivering the same task.
You may see the exact issue here: https://github.com/celery/celery/issues/1628
I actually disagree with this behaviour. IMO it would make more sense not to acknowledge.
I've had this issue myself, where I was using some open-source C libraries that went totally amok and crashed my worker ungracefully without throwing an exception. Whatever the reason for the crash, one can simply wrap the content of a task in a child process and check its status in the parent.
import os

n = os.fork()
if n > 0:  # inside the parent process
    status = os.wait()  # wait until the child terminates
    print("Signal number that killed the child process:", status[1])
    if status[1] > 0:  # the child ended by something other than a graceful exit
        # here one can do whatever they want, like restart or raise an exception
        self.retry(exc=SomeException(), countdown=2 ** self.request.retries)
else:  # here comes the actual task content with its respective return
    return myResult  # make sure there are no returns in the child and the parent at the same time