Zombie vs orphan process with sleep

I have the following code and I need to check the child process' status, but the sleep() confuses me. I think the child becomes a zombie for a period of time (until the parent finishes sleeping and waits). If this is correct, then what happens if the parent sleeps for 1 second instead of 1000? Will the child become an orphan for a period of time? Or does the process finish correctly, since the parent waits?
pid_t pid = fork();
if (pid) {
    sleep(1000);
    wait(NULL);
}
else {
    sleep(5);
    printf("Hello!");
}


vulkan-tutorial - how rendering and presentation are synced

https://vulkan-tutorial.com/Drawing_a_triangle/Drawing/Rendering_and_presentation
While reading the above tutorial, I found a scenario where multiple items pile up in the presentation queue.
The tutorial has a loop that runs the code below repeatedly.
void drawFrame() {
    vkWaitForFences(device, 1, &inFlightFence, VK_TRUE, UINT64_MAX);
    vkResetFences(device, 1, &inFlightFence);

    uint32_t imageIndex;
    vkAcquireNextImageKHR(device, swapChain, UINT64_MAX, imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);

    vkResetCommandBuffer(commandBuffer, /*VkCommandBufferResetFlagBits*/ 0);
    recordCommandBuffer(commandBuffer, imageIndex);

    VkSubmitInfo submitInfo{};
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;

    VkSemaphore waitSemaphores[] = {imageAvailableSemaphore};
    VkPipelineStageFlags waitStages[] = {VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT};
    submitInfo.waitSemaphoreCount = 1;
    submitInfo.pWaitSemaphores = waitSemaphores;
    submitInfo.pWaitDstStageMask = waitStages;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &commandBuffer;

    VkSemaphore signalSemaphores[] = {renderFinishedSemaphore};
    submitInfo.signalSemaphoreCount = 1;
    submitInfo.pSignalSemaphores = signalSemaphores;

    if (vkQueueSubmit(graphicsQueue, 1, &submitInfo, inFlightFence) != VK_SUCCESS) {
        throw std::runtime_error("failed to submit draw command buffer!");
    }

    VkPresentInfoKHR presentInfo{};
    presentInfo.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
    presentInfo.waitSemaphoreCount = 1;
    presentInfo.pWaitSemaphores = signalSemaphores;

    VkSwapchainKHR swapChains[] = {swapChain};
    presentInfo.swapchainCount = 1;
    presentInfo.pSwapchains = swapChains;
    presentInfo.pImageIndices = &imageIndex;

    vkQueuePresentKHR(presentQueue, &presentInfo);
}
There are two semaphores: one for rendering and one for presentation. Similarly, there are two queues, for rendering and for presentation.
Here is a scenario I found that can happen:
1. After the first iteration, each queue has one item to process.
2. At the second iteration, none of the items in the queues have been processed yet, so execution blocks at vkWaitForFences.
3. The first item in the graphics queue is processed. It signals the blocking fence and the rendering semaphore.
4. The second iteration continues from vkWaitForFences.
5. The graphics queue receives a second item, so it again has one item in total. The present queue also receives a second item; it has not processed the first one yet, so it has two items in total.
6. The graphics queue processes the second item. It signals the rendering semaphore again, so the rendering semaphore has received two signals without being reset.
7. The present queue will only process one item, then do nothing until the next iteration.
If this keeps happening in later iterations, unprocessed items pile up in the present queue. Hence, if the processing speed of the graphics queue happens to be faster than that of the present queue, there will be a starvation problem.
The tutorial does not explain how this issue can be solved.
Is there something in Vulkan that prevents this issue from occurring, or have I actually found a flaw in the tutorial code?
vkAcquireNextImageKHR signals the image semaphore once the swapchain image at the index it returned is actually presentable.
The image at that index only becomes presentable again once the corresponding item has been processed in the present queue.
Hence, if the items in the present queue are not processed, vkAcquireNextImageKHR will not signal the image semaphore (or will block), stopping the next rendering.
The number of items that can sit in the present queue at the same time therefore does not grow infinitely; it stops increasing once it equals the number of swapchain images.

How can a child process update a variable in the parent process?

In "version: redis-3.0.2, file: rdb.c, method: int rdbSave(char *filename)", there are some updates to the global variable "server":
server.dirty = 0;
server.lastsave = time(NULL);
server.lastbgsave_status = REDIS_OK;
I wonder: how can a child process update a variable in the parent process? Theoretically, it can't.
rdbSave is run in the foreground, in the main event loop thread, hence the update isn't done by a child process.
Look at rdbSaveBackground for the fork implementation.

why are parent.getpid() and child.getppid() different

I am trying to understand the concept of a process, so I wrote a program like this:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    pid_t pid;
    pid = fork();
    if (pid == 0)
        printf("This is the child process. My pid is %d and my parent's id is %d.\n", getpid(), getppid());
    else
        printf("This is the parent process. My pid is %d and my child's id is %d.\n", getpid(), pid);
}
I expected this program would print something like
This is the parent process. My pid is 2283 and my child's id is 2284.
This is the child process. My pid is 2284 and my parent's id is 2283.
But instead, it prints this
This is the parent process. My pid is 2283 and my child's id is 2284.
This is the child process. My pid is 2284 and my parent's id is 1086.
At the end of the second line, the parent pid reported by the child is different from the parent process's pid.
Why is this happening? Is there something I am missing?
Thanks in advance.
The hint from Tony Tannous was correct: the child may live longer than the parent. When the parent exits, its child process is orphaned, i.e. it becomes a child of the init process.
I modified the OP's sample code to force the child process to live longer than the parent process.
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t pid;
    pid = fork();
    if (pid == 0) {
        sleep(1); /* 1 s */
        printf(
            "This is the child process."
            " My pid is %d and my parent's id is %d.\n", getpid(), getppid());
    } else {
        printf(
            "This is the parent process."
            " My pid is %d and my child's id is %d.\n", getpid(), pid);
    }
    return 0;
}
Compiled and tested with gcc on cygwin:
$ gcc -o test-pid-ppid test-pid-ppid.c
$ ./test-pid-ppid
This is the parent process. My pid is 10748 and my child's id is 10300.
$ This is the child process. My pid is 10300 and my parent's id is 1.
In my test this is obvious due to the specific PID 1 (the PID the init process usually gets). I'm a little surprised about the PID 1086 observed by the OP, but:
There is no specification (that I know of) requiring the init process to get PID 1 - it's only conventional.
The OP ran on a VM, where things may be done slightly differently than usual.
Concerning my belief that an exiting process would kill all of its children, I investigated further and found this: Is there any UNIX variant on which a child process dies with its parent?. In short: my belief was wrong. Thanks for that question, which enlightened me.

Kill Process When Another Opens

I've tried some variations of this without luck:
Process, Exist, Game.exe
Process, Close, GamePatcher.exe
Return
I'm playing a game where the launcher/patcher stays open even after the game launches.
Any ideas?
A While loop should help you out. Here is a solution using a little ProcExists function that can be reused.
Loop
{
    If ProcExists("Game.exe") and ProcExists("GamePatcher.exe")
        break
    Sleep 500
}
; Both procs exist, wait for Game to close.
While ProcExists("Game.exe")
    Sleep 500
Process, Close, GamePatcher.exe
Reload ; Reloads, waiting for both to exist again

ProcExists(p)
{
    Process, Exist, % p
    Return ErrorLevel
}
If you want this to run continuously (keep the script going at all times), it would be best to implement SetTimer like this:
#Persistent
SetTimer, CheckGame, 1000
Return

CheckGame:
    If !ProcExists("Game.exe")
        Process, Close, GamePatcher.exe
Return

ProcExists(p)
{
    Process, Exist, % p
    Return ErrorLevel
}

In celery, how to ensure tasks are retried when worker crashes

First of all, please don't consider this question a duplicate of this question.
I have set up an environment which uses celery with redis as the broker and result backend. My question is: how can I make sure that when the celery workers crash, all the scheduled tasks are retried once the celery worker is back up?
I have seen advice on using CELERY_ACKS_LATE = True, so that the broker will re-drive the tasks until it gets an ACK, but in my case it's not working. Whenever I schedule a task it immediately goes to the worker, which holds it until the scheduled time of execution. Let me give an example:
I am scheduling a task like this: res = test_task.apply_async(countdown=600), but immediately in the celery worker logs I can see something like: Got task from broker: test_task[a137c44e-b08e-4569-8677-f84070873fc0] eta:[2013-01-...]. Now when I kill the celery worker, these scheduled tasks are lost. My settings:
BROKER_URL = "redis://localhost:6379/0"
CELERY_ALWAYS_EAGER = False
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
CELERY_ACKS_LATE = True
Apparently this is how celery behaves.
When a worker is abruptly killed (but the dispatching process isn't), the message will be considered 'failed' even though you have acks_late=True.
The motivation (to my understanding) is that if the consumer was killed by the OS due to an out-of-memory condition, there is no point in redelivering the same task.
You may see the exact issue here: https://github.com/celery/celery/issues/1628
I actually disagree with this behaviour. IMO it would make more sense not to acknowledge.
I've had this issue, where I was using some open-source C libraries that went totally amok and crashed my worker ungracefully without throwing an exception. To guard against that, one can simply wrap the content of a task in a child process and check its status in the parent.
n = os.fork()
if n > 0:  # inside the parent process
    pid, status = os.wait()  # wait until the child terminates
    if os.WIFSIGNALED(status):  # the child was killed by a signal
        print("Signal number that killed the child process:", os.WTERMSIG(status))
        # do whatever is appropriate here, e.g. restart or retry:
        raise self.retry(exc=SomeException(), countdown=2 ** self.request.retries)
else:
    # the actual task content, with its usual return value; make sure
    # the child and the parent do not both return a result
    return myResult