How to fork/clone a process in Erlang

How can I fork/clone a process in Erlang, like fork in Unix?
I have searched a lot but found nothing.
Maybe the usage would look like this:
case fork() of
    {parent, Pid} ->
        in_parent_process_now();
    {child, Pid} ->
        in_child_process_now();
    {error, Msg} ->
        report_fork_error(Msg)
end.
Any ideas?
EDIT:
In order to explain my point better, take the following C code as an example:
f();
fork();
g();
Here the return value of fork() is ignored, so the next step is the same for both the parent process and the child process: executing g().
Can I achieve this in Erlang?

(This question was also answered in the erlang-questions mailing list.)
Erlang does not have a 'fork' operation. It has a spawn operation however:
parent_process() ->
    will_be_executed_by_parent_process(),
    spawn(fun() -> will_be_executed_by_child_process() end),
    will_also_be_executed_by_parent_process().
... where the function names indicate in which context they are executed. Note that any data passed to the child process will be copied to the new process's heap.
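To mirror the C snippet from the question, where both the parent and the child go on to execute g(), a minimal sketch (f/0 and g/0 are just placeholder functions) would be:

f(),
spawn(fun() -> g() end),   %% the child executes g()
g().                       %% the parent executes g() as well; spawn's return value (a pid) is simply ignored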

As you know, there is a generic pattern for implementing processes in Erlang:
loop( State ) ->
    receive
        Message ->
            NewState = process( Message, State ),
            loop( NewState )
    end.
At any point in time the process has a State. So if you want to "fork" a new process from the current one, you have to send it a specific message. The process has to recognize that message and spawn a new process with a copy of its current state.
I've created an example to illustrate the text above:
-module( test ).
-export( [ fork/1, get_state/1, change_state/2 ] ).
-export( [ loop/1 ] ).

loop( State ) ->
    receive
        { fork, Sender } ->
            %%
            %% if you want to link with child process
            %% call spawn_link instead of spawn
            %%
            ClonePid = spawn( ?MODULE, loop, [ State ] ),
            responseTo( Sender, ClonePid ),
            loop( State );
        { get_state, Sender } ->
            responseTo( Sender, { curr_state, State } ),
            loop( State );
        { change_state, Data, Sender } ->
            { Response, NewState } = processData( Data, State ),
            responseTo( Sender, Response ),
            loop( NewState )
    end.

fork( Pid ) ->
    Ref = make_ref(),
    Pid ! { fork, { Ref, self() } },
    get_response( Ref ).

get_state( Pid ) ->
    Ref = make_ref(),
    Pid ! { get_state, { Ref, self() } },
    get_response( Ref ).

change_state( Pid, Data ) ->
    Ref = make_ref(),
    Pid ! { change_state, Data, { Ref, self() } },
    get_response( Ref ).

get_response( Ref ) ->
    receive
        { Ref, Message } -> Message
    end.

responseTo( { Ref, Pid }, Mes ) ->
    Pid ! { Ref, Mes }.

processData( Data, State ) ->
    %%
    %% here comes logic of processing data
    %% and changing process state
    %%
    NewState = Data,
    Response = { { old_state, State }, { new_state, NewState } },
    { Response, NewState }.
Let's test it in the Erlang shell:
1> c(test).
{ok,test}
Creating a parent process with initial state first_state:
2> ParentPid = spawn( test, loop, [ first_state ] ).
<0.38.0>
3> test:get_state( ParentPid ).
{curr_state,first_state}
Let's change the state of the parent process to second_state:
4> test:change_state( ParentPid, second_state ).
{{old_state,first_state},{new_state,second_state}}
Fork a new process from the parent process:
5> ChildPid = test:fork( ParentPid ).
<0.42.0>
Check the state of the forked process (it is the same as in the parent process):
6> test:get_state( ChildPid ).
{curr_state,second_state}
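To see that the fork is a real copy rather than shared state, the session could be continued along these lines (the results follow from the module above):
7> test:change_state( ParentPid, third_state ).
{{old_state,second_state},{new_state,third_state}}
8> test:get_state( ChildPid ).
{curr_state,second_state}
Changing the parent's state afterwards does not affect the already forked child.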

There is no fork in Erlang, but you can use one of spawn/1, spawn/2, spawn/3, spawn/4 (see also spawn_link), which are BIFs; see the erlang module documentation.
So, for example:
-module(mymodule).
-export([parent_fun/0]).
parent_fun() ->
    io:format("this is the parent with pid: ~p~n", [self()]),
    spawn(fun() -> child_fun() end),
    io:format("still in parent process: ~p~n", [self()]).

child_fun() ->
    io:format("this is child process with pid: ~p~n", [self()]).
Execute in the Erlang shell as:
mymodule:parent_fun().
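The output should look roughly like this (the pids are illustrative and the child's line may interleave differently):
this is the parent with pid: <0.84.0>
still in parent process: <0.84.0>
this is child process with pid: <0.86.0>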
Note that the parent process and the child process have different pids.
I strongly suggest you read: http://learnyousomeerlang.com/the-hitchhikers-guide-to-concurrency

Related

Need to start a code line after react subscribe() functions end

val totalNumInst = TotalNumObj()
devSupportService.sendAllTalktalkMessages(naverId)
devSupportService.sendAllAutoDepositTalktalkMessages(naverId, totalNum)
logger.info("${totalNumInst.totalNum}")
Mono<>
    .doOnSuccess { }
    .subscribe()
The first two lines execute several Mono<>.subscribe() calls. In each Mono<>'s .doOnSuccess { } the totalNum variable is incremented. On the last line I added a log that shows totalNum, but totalNum always shows its initial value, 0.
I need to leave a log that shows how many times the Mono<>.subscribe() calls were executed.
Thank you for reading my question.
There are two ways of solving your issue: blocking and non-blocking.
Blocking
Create a CountDownLatch, pass it to sendAllTalktalkMessages and sendAllAutoDepositTalktalkMessages, then wait for it to be counted down:
val totalNumInst = TotalNumObj()
val latch = CountDownLatch(2)
devSupportService.sendAllTalktalkMessages(naverId, totalNumInst, latch)
devSupportService.sendAllAutoDepositTalktalkMessages(naverId, totalNumInst, latch)
if (!latch.await(30, TimeUnit.SECONDS)) {
    throw TimeoutException("Waiting timed out")
}
logger.info("${totalNumInst.totalNum}")
And add latch.countDown() to each doOnSuccess (though I'd recommend counting down in doFinally instead, in case the chain emits an error signal; see the sketch after this block):
Mono<>
    .doOnSuccess { latch.countDown() }
    .subscribe()
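For the doFinally variant recommended above, a sketch (doFinally receives the terminating SignalType, which is ignored here):

Mono<>
    .doFinally { latch.countDown() }  // counts down on complete, error or cancel
    .subscribe()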
This is a blocking solution; it goes against the reactive non-blocking concept.
Non-blocking
Make sendAllTalktalkMessages and sendAllAutoDepositTalktalkMessages return a Mono and zip them (moreover, in that case you don't need to pass totalNumInst to them):
Mono.zip(
    devSupportService.sendAllTalktalkMessages(naverId)
        .map { 1 }
        .onErrorResume { Mono.just(0) }
        .defaultIfEmpty(0),
    devSupportService.sendAllAutoDepositTalktalkMessages(naverId)
        .map { 1 }
        .onErrorResume { Mono.just(0) } // onErrorResume must return a Publisher, not a bare value
        .defaultIfEmpty(0)
) { counter1, counter2 -> counter1 + counter2 }
    .subscribe { totalNum -> logger.info("$totalNum") }
In this realisation you count each success as 1 and each error or empty signal as 0.

Why is `export` not working properly in this module?

What I understand is that -export() makes it possible to expose some, but not all, functions in a module definition. Inside the module definition, however, all functions are available.
I have a module that looks like this:
-module(supervisor_test).
-export([start_listening/0, stop_listening/0, send_to_listener/1]).
listener() ->
    receive
        {Pid, Ref, x} ->
            Pid ! {Ref, o};
        {Pid, Ref, o} ->
            Pid ! {Ref, x}
    end.

supervisor() ->
    process_flag(trap_exit, true),
    Pid = spawn_link(?MODULE, listener, []),
    register(reg_listener, Pid),
    receive
        {'EXIT', Pid, normal} -> % received when listener() finishes executing
            ok;
        {'EXIT', Pid, shutdown} -> % received when stop_listening() is called
            ok;
        {'EXIT', Pid, _} ->
            supervisor()
    end.

start_listening() ->
    spawn(?MODULE, supervisor, []).

stop_listening() ->
    Pid = whereis(reg_listener),
    exit(Pid, shutdown).

send_to_listener(Value) ->
    Ref = make_ref(),
    reg_listener ! {self(), Ref, Value},
    receive
        {Ref, Reply} -> Reply
    after 5000 ->
        timeout
    end.
Whenever I compile and call supervisor_test:start_listening(), I get the following error:
=ERROR REPORT==== ... ===
Error in process ... with exit value:
{undef,[{supervisor_test,supervisor,[],[]}]}
It goes away if I export_all and expose everything.
I tried compiling
-module(test).
-export([f1/0]).
f1() ->
    f2().

f2() ->
    io:format("I am here!~n").
and calling test:f1() works fine.
In start_listening() you're calling the MFA version of spawn(). This will use apply(), and the apply docs state: "The applied function must be exported from Module."
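As a sketch (not from the original answer), either of these changes should make it work, applied consistently to both spawn calls:

%% 1) export the functions that are started via spawn(?MODULE, F, []):
-export([supervisor/0, listener/0]).

%% 2) or spawn funs instead, which do not need to be exported:
start_listening() ->
    spawn(fun supervisor/0).
%% ...and inside supervisor/0:
%%     Pid = spawn_link(fun listener/0),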

Wrong number of arguments when using mget with redis-rs

I'm trying to access Redis using Rust with the following:
extern crate redis;
use redis::{Client, Commands, Connection, RedisResult};
fn main() {
    let redis_client = Client::open("redis://127.0.0.1/").unwrap();
    let redis_conn = redis_client.get_connection().unwrap();

    let mut keys_to_get = vec![];
    keys_to_get.push("random_key_1".to_string());
    keys_to_get.push("random_key_2".to_string());

    let redis_result: String = redis_conn.get(keys_to_get).unwrap();
}
When I run cargo run I get:
Running `target/debug/test_resdis`
thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value: An error was signalled by the server: wrong number of arguments for 'get' command', ../src/libcore/result.rs:746
note: Run with `RUST_BACKTRACE=1` for a backtrace.
error: Process didn't exit successfully: `target/debug/test_resdis` (exit code: 101)
Am I doing something wrong, or is it a bug?
Running your program against a netcat server shows the following requests made:
*3
$3
GET
$12
random_key_1
$12
random_key_2
The GET command should be an MGET.
I believe this to be a bug in the implementation:
impl<T: ToRedisArgs> ToRedisArgs for Vec<T> {
    fn to_redis_args(&self) -> Vec<Vec<u8>> {
        ToRedisArgs::make_arg_vec(self)
    }
}

impl<'a, T: ToRedisArgs> ToRedisArgs for &'a [T] {
    fn to_redis_args(&self) -> Vec<Vec<u8>> {
        ToRedisArgs::make_arg_vec(*self)
    }

    fn is_single_arg(&self) -> bool {
        ToRedisArgs::is_single_vec_arg(*self)
    }
}
Under the hood, the library inspects the key type to know if it's multivalued or not, using ToRedisArgs::is_single_arg, which has a default implementation of true.
As you can see, the implementation for slices overrides ToRedisArgs::is_single_arg, but the one for Vec does not. This also suggests a workaround: treat the vector like a slice:
redis_conn.get(&*keys_to_get)
This issue has now been filed with the library.
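For completeness, a sketch of the workaround applied to the question's program; since MGET returns one value per key, the result should be collected into a Vec (this assumes both keys exist):

// Treating the Vec as a slice marks the key as multi-valued,
// so the library issues MGET instead of GET.
let redis_result: Vec<String> = redis_conn.get(&*keys_to_get).unwrap();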

How to receive message from 'any' channel in PROMELA/SPIN

I'm modeling an algorithm in Spin.
I have a process that has several channels, and at some point I know a message is going to come but I don't know from which channel. So I want to block the process until a message comes in on any of the channels. How can I do that?
I think you need Promela's if construct (see http://spinroot.com/spin/Man/if.html).
In the process you're referring to, you probably need the following:
byte var;
if
:: ch1?var -> skip
:: ch2?var -> skip
:: ch3?var -> skip
fi
If none of the channels have anything on them, then "the selection construct as a whole blocks" (quoting the manual), which is exactly the behaviour you want.
To quote the relevant part of the manual more fully:
"An option [each of the :: lines] can be selected for execution only when its guard statement is executable [the guard statement is the part before the ->]. If more than one guard statement is executable, one of them will be selected non-deterministically. If none of the guards are executable, the selection construct as a whole blocks."
By the way, I haven't syntax checked or simulated the above in Spin. Hopefully it's right. I'm quite new to Promela and Spin myself.
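For a self-contained version of the same idea, here is a sketch (the channel declarations and the sender process are made up for illustration):

chan ch1 = [1] of { byte };
chan ch2 = [1] of { byte };
chan ch3 = [1] of { byte };

active proctype receiver() {
    byte var;
    if
    :: ch1?var -> printf("got %d from ch1\n", var)
    :: ch2?var -> printf("got %d from ch2\n", var)
    :: ch3?var -> printf("got %d from ch3\n", var)
    fi
}

active proctype sender() {
    /* the receiver stays blocked until some channel has a message */
    ch2 ! 7
}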
If you want the number of channels to be variable without having to change the implementation of the send and receive parts, you might use the approach of the following producer-consumer example:
#define NUMCHAN 4

chan channels[NUMCHAN];

init {
    chan ch1 = [1] of { byte };
    chan ch2 = [1] of { byte };
    chan ch3 = [1] of { byte };
    chan ch4 = [1] of { byte };
    channels[0] = ch1;
    channels[1] = ch2;
    channels[2] = ch3;
    channels[3] = ch4;
    // Add further channels above, in
    // accordance with NUMCHAN

    // First let the producer write
    // something, then start the consumer
    run producer();
    atomic { _nr_pr == 1 ->
        run consumer();
    }
}

proctype consumer() {
    byte var, i;
    chan theChan;

    i = 0;
    do
    :: i == NUMCHAN -> break
    :: else ->
        theChan = channels[i];
        if
        :: skip // non-deterministic skip
        :: nempty(theChan) ->
            theChan ? var;
            printf("Read value %d from channel %d\n", var, i+1)
        fi;
        i++
    od
}

proctype producer() {
    byte var, i;
    chan theChan;

    i = 0;
    do
    :: i == NUMCHAN -> break
    :: else ->
        theChan = channels[i];
        if
        :: skip;
        :: theChan ! 1;
            printf("Write value 1 to channel %d\n", i+1)
        fi;
        i++
    od
}
The do loop in the consumer process walks over the indices 0 to NUMCHAN-1 and, for each one, non-deterministically decides whether to read from the respective channel if there is something to read; an empty channel is always skipped. Naturally, during a simulation with Spin the probability of reading from channel NUMCHAN is much smaller than that of channel 0, but this makes no difference in model checking, where every possible path is explored.

Problem with predicate in Alloy

So I have the following bit of code in Alloy:
sig Node { }
sig Queue { root : Node }
pred SomePred {
    no q, q' : Queue | q.root = q'.root
}
run SomePred for 3
but this won't yield any instance containing a Queue, and I wonder why. It only shows instances with Nodes. I've tried the equivalent predicate
pred SomePred' {
    all q, q' : Queue | q.root != q'.root
}
but the output is the same.
Am I missing something?
There's a logic flaw in there:
fact SomeFact {
    no q, q' : Queue | q.root = q'.root
}
Assume there is an instance with a single queue Q that has a given root R. When running SomeFact, it'd test the only queue available, Q, and it'd find that Q.root = Q.root, thus excluding the given instance from coming to life.
The same reasoning can be made for instances with an arbitrary number of queues.
Here is a working version:
sig Node {
}

sig Queue {
    root : Node
}

fact sss {
    all disj q, q' : Queue | q.root != q'.root
}

pred abc() {
}

run abc for 3
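For reference, the same fix also works in the question's original predicate form; adding disj rules out the case q = q' (a sketch using the question's own signatures):

pred SomePred {
    all disj q, q' : Queue | q.root != q'.root
}

run SomePred for 3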