In the book 'Operating System Concepts', 9th Edition, Chapter 3, pages 117-120, it says:
Both processes (the parent and the child) continue execution at the instruction after the fork(), with one difference: the return code for the fork() is zero for the new (child) process, whereas the (nonzero) process identifier of the child is returned to the parent.
The only difference is that the value of pid (the process identifier) for the child process is zero, while that for the parent is an integer value greater than zero (in fact, it is the actual pid of the child process).
Can someone please explain this concept to me?
Whenever a processor (CPU) runs a program, it keeps track of the instruction it is currently executing (more formally, the address pointing to that instruction). It stores this address in a register called the program counter (PC), also known as the instruction pointer. This is how the computer keeps track of which instruction should be executed next.
When a program runs, it is allocated some memory in the computer's main memory. This is where its code and data live, along with saved copies of the important registers (including the PC) that help the processor keep track of the program while it is running. In addition, each process is uniquely identified by a process ID (PID).
Now to your question. Whenever we call fork(), a copy of the process that called fork() is created. This copy is called the "child process", and the original process is known as the "parent process".
When the copy is created, all the memory your program has been allocated is copied to some other place in memory (which becomes the child process's memory). The result is an identical running program (process).
This copied state includes the parent's program counter, so when the processor runs the child, it continues directly from the same fork() call (since the program counter held that instruction's address when the process was created). Since our fork() call was successful, it has to return a non-negative value (denoting success of the fork system call), so it returns 0 to the child process and the child's process ID to the parent (whose execution also continues from the same point).
Returning the child's PID to the parent is useful because the parent can then keep track of the child processes it created. Returning 0 to the child also makes sense: the return value has to be non-negative, and any other positive number could be mistaken for some process's PID.
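As an illustration (a minimal sketch, assuming a POSIX system), both processes can print the value fork() returned together with getpid() and getppid(); the value the parent receives matches the PID the child reports for itself:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    pid_t ret = fork();   // 0 in the child, the child's PID in the parent, -1 on error
    if (ret == 0)
        printf("child:  fork returned %d, my pid = %d, parent pid = %d\n",
               (int)ret, (int)getpid(), (int)getppid());
    else if (ret > 0)
        printf("parent: fork returned %d (the child's pid), my pid = %d\n",
               (int)ret, (int)getpid());
    else
        perror("fork");
    return 0;
}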
Run cd /proc on your Linux system. All the directories with numeric names are the PIDs of processes (which may be active or inactive). Read more about /proc to make the concept clearer.
Hope that clears your doubt :).
When you call fork() in your code and the call succeeds, it returns in both processes: the child's PID in the parent (to the code run by the parent) and zero in the child (to the code run by the child).
pid_t childparent;

childparent = fork();
if (childparent >= 0)
{
    if (childparent == 0)
    {
        // child code
    }
    if (childparent > 0)
    {
        // parent code
    }
}
The PID 0 mentioned in your quote is not a process ID that will be shown by the shell command ps.
Edit: Yes, the code running in the parent (the childparent > 0 case) receives a return value that refers to the specific child that was just created, so the value returned to the parent is the child's actual process ID (PID). If you fork in a simple program and sleep long enough to run ps, you can match the PID shown by ps against the value printed in the parent for fork()'s return value (printf("%d", childparent)).
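A rough sketch of that experiment (the 60-second sleep is just an arbitrary choice): print fork()'s return value in the parent and keep both processes alive long enough to run ps in another terminal and compare the PIDs:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    pid_t childparent = fork();
    if (childparent > 0)
    {
        printf("%d\n", (int)childparent);  // the child's PID, as seen by the parent
        fflush(stdout);
        sleep(60);                         // run "ps" in another terminal and compare
    }
    else if (childparent == 0)
    {
        sleep(60);                         // keep the child alive so ps can list it
    }
    return 0;
}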
fork() makes a complete copy of the current process by creating a new process and then filling it with the current process. It chooses one to be the parent and the other to be the child. It indicates the difference only by adjusting the return value of fork(). If the process(es) ignore the return value, they behave identically. (There are some rarely significant other differences between the processes related to signal handling and delivery, file descriptors opened with special attributes, memory maps, etc.)
When you think you begin to understand it, look at this to see if you do.
I had this doubt when I was reading the book too. The answer is as follows:
When the main program (the parent) executes fork(), a copy of its address space, including the program and all data, is created. The fork() system call returns the child's process ID to the parent and returns 0 to the child process. Both the parent and the child process then continue their execution from the line immediately after the fork() system call.
Let me illustrate this with a simple example.
Consider this code:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

void ChildProcess(void);
void ParentProcess(void);

int main(void)
{
    pid_t pid;

    pid = fork();
    if (pid == 0)          // Condition to determine Parent/Child Process
        ChildProcess();
    else
        ParentProcess();

    return 0;
}

void ChildProcess(void)
{
    // Some Arbitrary Code
}

void ParentProcess(void)
{
    // Some Arbitrary Code
}
This snippet shows that, based on the value returned by fork(), the two processes (parent and child) can each execute their own pre-defined code.
In the above example, say the process ID of the child is 3456; then the parent gets that ID as the return value from fork(). The child, however, always gets 0 as the return value, and then execution continues in both.
fork() is designed this way because it lets the parent handle the complete administration of its child process(es): the parent can always keep track of which child terminates, normally or abnormally, when the child implicitly or explicitly invokes the exit() system call. A parent may also simply wait for a child by making a wait() system call, which returns the PID of the terminated child; in this way, the parent can note which child has terminated.
This is how child process creation and termination is handled.
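As a minimal sketch of that bookkeeping (assuming POSIX wait() and the usual W* macros), the parent can learn which child terminated and how:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
    {
        // child: do some work, then terminate
        exit(42);
    }
    else if (pid > 0)
    {
        int status;
        pid_t done = wait(&status);        // blocks until a child terminates
        if (WIFEXITED(status))
            printf("child %d exited normally with status %d\n",
                   (int)done, WEXITSTATUS(status));
        else
            printf("child %d terminated abnormally\n", (int)done);
    }
    return 0;
}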
There is one more thing I would like to add here which is not completely relevant to the question but I think would be helpful.
You would also have noticed the description of the exec() system call immediately after this discussion in the book. In short, both the discussions explain this:
Forking provides a way for an existing process to start a new one, but what about the case where the new process is not part of the same program as the parent process? This is the case in the shell: when a user starts a command, it needs to run in a new process, but it is unrelated to the shell.

This is where the exec system call comes into play. exec will replace the contents of the currently running process with the information from a program binary.

Thus the procedure the shell follows when launching a new program is to first fork, creating a new process, and then exec (i.e. load into memory and execute) the program binary it is supposed to run.
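A minimal sketch of that fork-then-exec pattern (using "ls -l" purely as a placeholder command), roughly what a shell does for each command line:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
    {
        // child: replace this process image with the requested program
        char *argv[] = { "ls", "-l", NULL };
        execvp(argv[0], argv);
        perror("execvp");                  // only reached if exec fails
        _exit(127);
    }
    else if (pid > 0)
    {
        int status;
        waitpid(pid, &status, 0);          // the "shell" waits for the command to finish
    }
    else
    {
        perror("fork");
    }
    return 0;
}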
If you would like to know more about the fork() system call then you should also know about its internal implementation, especially how the clone() system call works.
Reference Sites:
The fork() system call article by MTU
How Fork & Exec work together
I ran into some code that looks like this:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main(void) {
    pid_t pid;
    char sharedVariable = 'P';
    char *ptrSharedVariable = &sharedVariable;

    pid = fork();
    if (pid == 0) {
        sharedVariable = 'C';
        printf("Child Process\n");
        printf("Address is %p\n", (void *)ptrSharedVariable);
        printf("char value is %c\n", sharedVariable);
        sleep(5);
    } else {
        sleep(5);
        printf("Parent Process\n");
        printf("Address is %p\n", (void *)ptrSharedVariable);
        printf("char value is %c\n", sharedVariable);
    }
    return 0;
}
From what I learned on Stack Overflow, I can tell that the char values seen by the parent and the child process will be different: the child's value is 'C' and the parent's is 'P'. I can also tell that the address printed by both the parent and the child should be the same, namely the address of sharedVariable (&sharedVariable).
However, here are my questions.
1. What is the point of assigning different char values in the two processes? For one thing, since we can already identify each process by pid == 0 or pid > 0, isn't this step redundant? For another, I don't see the point of differentiating two processes that do the same job; can't they work without the programmer telling them apart?
2. Why do the addresses in the parent and the child stay the same? My guess is that since they are assumed to carry out similar tasks, it is convenient, because then we can just copy and paste code. But I'm hesitant and want to make sure.
3. If I replaced fork() with vfork(), would the parent's char value then be 'C'?
Thanks a million in advance.
This question has been answered several times. For example here.
Even though I may repeat what has already been written in several answers, here are some clarifications on your 3 points:
1. The naming of the char variable (sharedVariable) in the code that you shared is confusing, because the variable is not shared between the parent process and the child process. The child's address space is a copy of the parent's address space. So there are two processes (parent and child) running concurrently, each with its own stack where the above variable is located (one copy in the parent's stack and the other in the child's stack).
2. The address space of a process is virtual. In each process you will see the same virtual addresses, but they refer to each process's own code and data (i.e. they point to different physical memory locations). The kernel optimizes this to share as many resources as possible until one of the processes modifies them (e.g. the copy-on-write principle), but this is transparent from the user-space programmer's point of view.
3. If you use vfork(), the variables are shared because the address space is shared between the parent and the child; you don't get a copy as you do with fork(). The resulting child process is like a co-routine (it is lighter than a thread, as even the stack is shared!). This is why the parent process is suspended until the child either exits or executes a new program. The manual warns about the risks of such an operation. Its purpose is to launch a new program immediately (a fast fork()/exec() sequence); it is not meant for long-lived child processes, as any call into the GLIBC or any other library may either fail or corrupt the parent process. vfork() is a direct call to the system without any added value from the GLIBC. In the case of fork(), the user-space libraries do a lot of "housekeeping" to make the libraries usable in both the parent and the child processes (GLIBC's wrapper of fork() and the pthread_atfork() callbacks). In the case of vfork(), this housekeeping is not done because the child process is supposed to be immediately overwritten by another program through an execve() call. This is also the reason why a child spawned through vfork() must not call exit() but _exit(): otherwise the child would run any atexit() callbacks registered by the parent process, which could lead to unexpected crashes in both the child and the parent.
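To illustrate the intended use, here is a minimal sketch of the vfork()-then-exec pattern described above ("ls -l" is again just a placeholder); the child does nothing except exec or _exit(), and the parent stays suspended until one of those happens:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = vfork();                   // parent is suspended until the child execs or _exits
    if (pid == 0)
    {
        // child: immediately replace itself with another program
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                        // _exit(), not exit(): don't run the parent's atexit() handlers
    }
    else if (pid > 0)
    {
        int status;
        waitpid(pid, &status, 0);
    }
    else
    {
        perror("vfork");
    }
    return 0;
}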
I want to save some information within the Python code that is part of my snakefile, and have this information available to the Python code in every instance that Snakemake creates when it is running the workflow. But a separate run of the workflow should have its own separate instance of the information.
For example, say I were to create a UUID in my python code, and then later use it in the python code. But I want the UUID to be the same one in all running instances of the workflow. Instead, a new UUID gets created each time an instance is started.
If I start snakemake twice at the same time, I would want each of the two runs to create their own UUID, but within each run, all instances created by the run would use the same UUID. How to do this? Is there an identifier somewhere in the snakemake object that remains the same within one run across all instances, but changes from run to run?
Here's an example that fails with a 'No rule to produce' error:
import uuid

ID = str(uuid.uuid4())
print("ID:", ID)

rule all:
    output: ID
    run:
        print("Hello world")
If the rule uses 'shell' instead of 'run', it works fine, so I assume that Snakemake reruns the snakefile code when it executes the 'run' portion of the rule. How could this be modified so that it retains the first UUID value instead of generating a second one? Also, why isn't the ID specified for output in the rule captured when the rule is first processed, without requiring a second invocation of the Python code? Since it works with 'shell', the second invocation is not needed specifically for processing the 'output' statement.
Indeed, when you use a run block, Snakemake will invoke itself to execute that job, meaning that it also reparses the Snakefile, generating a new UUID. The same will happen on the cluster. There are good technical reasons for doing it like this (performance, the Python GIL, restrictions with pickling, simplicity and robustness of the implementation).
I am not sure what exactly you want to achieve, but it might help to look at this: http://snakemake.readthedocs.io/en/stable/project_info/faq.html#i-want-to-pass-variables-between-rules-is-that-possible
I've found a method that seems to work: use the process group ID:
import os
ID = str(os.getpgrp())
Multiple instances of the same pipeline have the same group ID. However, I'm not sure if this remains true on a cluster, probably not. In my case that didn't matter.
I am using lua coroutines (lua 5.1) to create a plugin system for an application. I was hoping to use coroutines so that the plugin could operate as if it were a separate application program which yields once per processing frame. The plugin programs generally follow a formula something like:
function Program(P)
    -- setup --
    NewDrawer(function()
        -- this gets rendered in a window for this plugin program --
        drawstuff(howeveryouwant)
    end)
    -- loop --
    local continue = true
    while continue do
        -- frame by frame stuff excluding rendering (handled by NewDrawer) --
        P = coroutine.yield()
    end
end
Each plugin is resumed in the main loop of the application once per frame. Then, when drawing begins, each plugin has an individual window it draws in; that is when the function passed to NewDrawer is executed.
Something like this:
while MainContinue do
    -- other stuff left out --
    ExecutePluginFrames() -- all plugin coroutines resumed once
    BeginRendering()
    -- other stuff left out --
    RenderPluginWindows() -- functions passed to NewDrawer called.
    EndRendering()
end
However, I found that this suddenly began acting strangely and messed up my otherwise robust error handling whenever an error occurred during rendering. It took me a while to wrap my head around what was happening, but it seems that the call to WIN:Draw(), which I expected to be on the main thread's call stack (because it is handled by the main application), was actually causing an implicit jump into the coroutine's call stack.
At first the issue was that the program closed suddenly with no useful error output. Then, after looking at a stack traceback from the rendering function defined in the plugin program, I saw that everything leading up to the window's Draw from the main thread was missing, and that yield was on the call stack.
It seems that because the window and the drawing function were created in the coroutine's thread, they are handled on that thread's call stack, which is a problem because it means they are outside the pcall set up in the main thread.
Is this supposed to happen? Is it the result of a bug or shortcut in the C source? Am I doing something wrong, or at least not quite correctly? Is there a way to handle this cleanly?
I can't reproduce the effect you are describing. This is the code I'm running:
local drawer = {}

function NewDrawer(func)
    table.insert(drawer, func)
end

function Program(P)
    NewDrawer(function()
        print("inside program", P)
    end)
    -- loop --
    local continue = true
    while continue do
        -- frame by frame stuff excluding rendering (handled by NewDrawer) --
        P = coroutine.yield()
    end
end

local coro = coroutine.create(Program)
local MainContinue = true

while MainContinue do
    -- other stuff left out --
    -- ExecutePluginFrames() -- all plugin coroutines resumed once
    coroutine.resume(coro, math.random(10))
    -- RenderPluginWindows() -- functions passed to NewDrawer called.
    for _, plugin in ipairs(drawer) do
        plugin()
    end
    MainContinue = false
end
When I step through the code and look at the stack, the callback that is set in NewDrawer is called in the "main" thread, as it should be. You can see this yourself by calling coroutine.running(), which returns the current thread, or nil if you are inside the main thread.
I have discovered why this was happening in my case. The render objects that call the function passed to NewDrawer are initialized on creation (by the C code) with a pointer to the lua_State that created them, and this state is used for accessing their associated Lua data and for calling the draw function. I had not seen the connection between lua_State and coroutines. So, as it turns out, it is possible for functions to be called on a coroutine's stack after yield if C code invokes them through that coroutine's lua_State.
As far as a solution goes, I've decided to break the program into two coroutines, one for rendering and one for processing. This fixes the problem by letting the thread that creates the render objects also be the thread that calls them, and it keeps the neat advantage of keeping the rendering loop and the processing loop independent.
I am trying to find out whether it is possible to start a gen_server with a given state.
I would like to be able to set up a monitor/supervisor that restarts the server with its last valid state when this server crashes.
Any suggestion on how to tackle this problem would be very welcome.
So far my only idea is to have a special handle_call/3 that changes the server state to the desired state when called, but I would like to avoid modifying the server module and handle this purely from my monitor/supervisor process if possible.
Thank you for your time.
The init callback (Module:init/1) takes an argument Args. You can pass whatever state you want and set it as the state of the server. You pass Args to gen_server:start_link and it will pass it to init for you.
http://www.erlang.org/doc/man/gen_server.html#Module:init-1
http://www.erlang.org/doc/man/gen_server.html#start_link-3
I think that in your case you might want to store the state in Mnesia. That way you don't have to take care of passing the last valid state to the gen_server. If you don't want to start Mnesia, you can use ETS: create a public ETS table in some process that won't die and use it from your gen_server (note that when the process that created the ETS table dies, the table is destroyed).
http://www.erlang.org/doc/man/ets.html
http://www.erlang.org/doc/man/mnesia.html
I have a problem with the sequence model in the diagram below, specifically where the System object creates a new Number. In this case there is no need for a return message, since the function SaveInput(n), both in System and in Number, is the end of the line for that portion of the program. But unless I include one, the modeller reshapes my diagram into the other one I've uploaded here, and I can't see how to arrange the messages so that my program works the way I intend without including the unnamed return message from Number to System, given that both SaveInput() functions return void.
How should void-returning functions be handled in sequence diagrams so that they behave correctly? I have opened the message properties and explicitly defined it as returning void, but that hasn't helped.
When A calls operation b in B, the "return" arrow from B to A indicates that operation b has finished its execution. This doesn't mean that the return message has to carry a value; it only means that the execution is done and you can continue with the next messages. Visually, most tools also use these return messages to manage the activation bar on the object's lifeline.