Problem with predicate in Alloy

So I have the following bit of code in Alloy:
sig Node { }
sig Queue { root : Node }
pred SomePred {
no q, q' : Queue | q.root = q'.root
}
run SomePred for 3
but this won't yield any instance containing a Queue, and I wonder why. It only shows instances with Nodes. I've tried the equivalent predicate
pred SomePred' {
all q, q' : Queue | q.root != q'.root
}
but the output is the same.
Am I missing something?

There's a logic flaw in there:
fact SomeFact {
no q, q' : Queue | q.root = q'.root
}
Assume there is an instance with a single queue Q whose root is R. When evaluating SomeFact, the quantifier would bind both q and q' to the only queue available, Q (nothing requires q and q' to be distinct), and it would find that Q.root = Q.root, thus excluding that instance.
The same reasoning applies to instances with any number of queues.
Here is a working version, which uses disj so that only distinct queues q and q' are compared:
sig Node {
}

sig Queue {
    root : Node
}

fact sss {
    all disj q, q' : Queue | q.root != q'.root
}

pred abc() {
}

run abc for 3

Related

Kotlin: Is there a tool that allows me to control parallelism when executing suspend functions?

I'm trying to execute a certain suspend function multiple times, in such a way that no more than N of these are ever executing at the same time.
For those acquainted with Akka and Scala streaming libraries, something like mapAsync.
I did my own implementation using one input channel (as in Kotlin channels) and N output channels, but it seems cumbersome and not very efficient.
The code I'm currently using is somewhat like this:
val inChannel = Channel<T>()
val outChannels = (0 until n).map {
    Channel<T>()
}

launch {
    var i = 0
    for (t in inChannel) {
        outChannels[i].offer(t)
        i = (i + 1) % n
    }
}

outChannels.forEach { outChannel ->
    launch {
        for (t in outChannel) {
            fn(t)
        }
    }
}
Of course it has error management and everything, but still...
Edit: I did the following test, and it failed.
test("Parallelism is correctly capped") {
val scope = CoroutineScope(Dispatchers.Default.limitedParallelism(3))
var num = 0
(1..100).map {
scope.launch {
num ++
println("started $it")
delay(Long.MAX_VALUE)
}
}
delay(500)
assertEquals(3,num)
}
You can use the limitedParallelism function on a Dispatcher (experimental in kotlinx.coroutines 1.6.0) and use the returned dispatcher to run your asynchronous functions. The function returns a view over the original dispatcher that limits parallelism to the limit you provide. You can use it like this:
val limit = 2 // Or some other number
val dispatcher = Dispatchers.Default
val limitedDispatcher = dispatcher.limitedParallelism(limit)
for (n in 0..100) {
    scope.launch(limitedDispatcher) {
        executeTask(n)
    }
}
Your question, as asked, calls for #marstran's answer. If what you want is that no more than N coroutines are being actively executed at any given time (in parallel), then limitedParallelism is the way to go:
val maxThreads: Int = TODO("some max number of threads")
val limitedDispatcher = Dispatchers.Default.limitedParallelism(maxThreads)
elements.forEach { elt ->
    scope.launch(limitedDispatcher) {
        doSomething(elt)
    }
}
Now, if what you want is to limit concurrency itself, so that at most N coroutines run concurrently (potentially interleaved), regardless of threads, you could use a Semaphore instead:
val maxConcurrency: Int = TODO("some max number of concurrent coroutines")
val semaphore = Semaphore(maxConcurrency)

elements.forEach { elt ->
    scope.async {
        semaphore.withPermit {
            doSomething(elt)
        }
    }
}
You can also combine both approaches.
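For instance, here is a minimal, self-contained sketch of combining the two; the runBounded helper and its parameter names are illustrative, not part of the original answer:

import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Sketch: at most `maxThreads` coroutines run in parallel (on threads),
// and at most `maxConcurrency` coroutines are in flight at any moment.
// `doSomething` stands in for your own suspend function.
@OptIn(ExperimentalCoroutinesApi::class)
suspend fun <T> runBounded(
    elements: List<T>,
    maxThreads: Int,
    maxConcurrency: Int,
    doSomething: suspend (T) -> Unit
) = coroutineScope {
    val limitedDispatcher = Dispatchers.Default.limitedParallelism(maxThreads)
    val semaphore = Semaphore(maxConcurrency)
    elements.forEach { elt ->
        launch(limitedDispatcher) {
            semaphore.withPermit {
                doSomething(elt)
            }
        }
    }
}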
Other answers already explained that it depends on whether you need to limit parallelism or concurrency. If you need to limit concurrency, then you can do this similarly to your original solution, but with only a single channel:
val channel = Channel<T>()
repeat(n) {
    launch {
        for (t in channel) {
            fn(t)
        }
    }
}
Also note that offer() in your example does not guarantee that the task will ever be executed. If the next consumer in the round robin is still occupied with the previous task, the new task is simply dropped.
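To make the difference concrete, here is a small self-contained sketch (not from the original answer); it uses trySend(), which replaced the deprecated offer() in newer kotlinx.coroutines versions, while send() suspends until a consumer accepts the element:

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>() // rendezvous channel: no buffer

    // trySend (the non-deprecated form of offer) fails when no consumer
    // is ready, so this element is simply dropped.
    println(channel.trySend(1).isSuccess) // prints false

    launch {
        // send suspends until a receiver takes the element, so nothing is lost.
        channel.send(2)
    }
    println(channel.receive()) // prints 2
}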

"Mix" operator does not wait for upstream processes to finish

I have several upstream processes, say A, B and C, doing similar tasks.
Downstream of that, I have one process X that needs to treat all outputs of A, B and C in the same way.
I tried to use the "mix" operator to create a single channel from the output files of A, B and C, like so:
process A {
    output:
    file outA
}

process B {
    output:
    file outB
}

process C {
    output:
    file outC
}

inX = outA.mix(outB,outC)

process X {
    input:
    file inX

    "myscript.sh"
}
Process A often finishes before B and C, and somehow process X does not wait for processes B and C to finish, and only takes the outputs of A as input.
The following snippet works nicely:
process A {
    output:
    file outA

    """
    touch outA
    """
}

process B {
    output:
    file outB

    """
    touch outB
    """
}

process C {
    output:
    file outC

    """
    touch outC
    """
}

inX = outA.mix(outB,outC)

process X {
    input:
    file inX

    "echo myscript.sh"
}
If you continue to experience the same problem, feel free to open an issue including a reproducible test case.

Accessing local variable of one process from another in Promela

Is it possible to access the value of a local variable of one process from another process?
For example, in the program below, I want to read the value of my_id from within manager.
proctype user(byte id) {
    byte my_id = id;
}

proctype manager() {
    printf("my_id : %d \n", user:my_id);
}

init {
    run user(5);
    run manager();
}
You can refer to the current value of a local variable in another process by using the remote reference syntax "procname[pid]:var".
You can accomplish this using c_code{} and/or c_expr() syntax. Here is an example from the SPIN manual:
active proctype ex1()
{ int x;
do
:: c_expr { Pex1->x < 10 } ->
c_code { Pex1->x++; }
:: x < 10 -> x++
:: c_expr { fct() } -> x--
:: else -> break
od
}
The local 'x' of 'ex1' can be accessed using 'Pex1->x' from within c_expr{}.

How to fork/clone a process in Erlang

How can I fork/clone a process in Erlang, like fork in Unix?
I have searched a lot but found nothing.
Maybe the usage looks like this:
case fork() of
    {parent, Pid} ->
        in_parent_process_now();
    {child, Pid} ->
        in_child_process_now();
    {error, Msg} ->
        report_fork_error(Msg)
end.
Any ideas?
EDIT:
In order to explain my point better, take the following C code as an example:
f();
fork();
g();
Here the return value of fork() is ignored, so the next steps of both the parent process and the child process are the same, which is to execute g().
Can I achieve this in Erlang?
(This question was also answered in the erlang-questions mailing list.)
Erlang does not have a 'fork' operation. It has a spawn operation however:
parent_process() ->
    will_be_executed_by_parent_process(),
    spawn(fun() -> will_be_executed_by_child_process() end),
    will_also_be_executed_by_parent_process().
... where function names show in what context they will be executed. Note that any data passed to the child process will be copied to the new process' heap.
As you know, there is a generic pattern for implementing processes in Erlang:
loop( State ) ->
    receive
        Message ->
            NewState = process( Message, State ),
            loop( NewState )
    end.
At each point in time a process has a State. So if you want to "fork" a new process from the current one, you have to send it a specific message. The process has to recognize that message and spawn a new process initialized with a copy of its current state.
I've created an example to illustrate the text above:
-module( test ).
-export( [ fork/1, get_state/1, change_state/2 ] ).
-export( [ loop/1 ] ).
loop( State ) ->
    receive
        { fork, Sender } ->
            %%
            %% if you want to link with child process
            %% call spawn_link instead of spawn
            %%
            ClonePid = spawn( ?MODULE, loop, [ State ] ),
            responseTo( Sender, ClonePid ),
            loop( State );
        { get_state, Sender } ->
            responseTo( Sender, { curr_state, State } ),
            loop( State );
        { change_state, Data, Sender } ->
            { Response, NewState } = processData( Data, State ),
            responseTo( Sender, Response ),
            loop( NewState )
    end.

fork( Pid ) ->
    Ref = make_ref(),
    Pid ! { fork, { Ref, self() } },
    get_response( Ref ).

get_state( Pid ) ->
    Ref = make_ref(),
    Pid ! { get_state, { Ref, self() } },
    get_response( Ref ).

change_state( Pid, Data ) ->
    Ref = make_ref(),
    Pid ! { change_state, Data, { Ref, self() } },
    get_response( Ref ).

get_response( Ref ) ->
    receive
        { Ref, Message } -> Message
    end.

responseTo( { Ref, Pid }, Mes ) ->
    Pid ! { Ref, Mes }.

processData( Data, State ) ->
    %%
    %% here comes logic of processing data
    %% and changing process state
    %%
    NewState = Data,
    Response = { { old_state, State }, { new_state, NewState } },
    { Response, NewState }.
Let's test it in the Erlang shell:
1> c(test).
{ok,test}
Create a parent process with initial state first_state:
2> ParentPid = spawn( test, loop, [ first_state ] ).
<0.38.0>
3> test:get_state( ParentPid ).
{curr_state,first_state}
4>
Let's change the state of the parent process to second_state:
4> test:change_state( ParentPid, second_state ).
{{old_state,first_state},{new_state,second_state}}
Fork a new process from the parent process:
5> ChildPid = test:fork( ParentPid ).
<0.42.0>
Check the state of the forked process (it is the same as in the parent process):
6> test:get_state( ChildPid ).
{curr_state,second_state}
There is no fork in Erlang, but you can use one of spawn/1, spawn/2, spawn/3, spawn/4 (see also spawn_link), which are BIFs of the erlang module.
So, for example:
-module(mymodule).
-export([parent_fun/0]).
parent_fun() ->
    io:format("this is the parent with pid: ~p~n", [self()]),
    spawn(fun() -> child_fun() end),
    io:format("still in parent process: ~p~n", [self()]).

child_fun() ->
    io:format("this is child process with pid: ~p~n", [self()]).
Execute it in the Erlang shell as:
mymodule:parent_fun().
Note that parent process and child process have different pids.
I strongly suggest you read http://learnyousomeerlang.com/the-hitchhikers-guide-to-concurrency

How to receive message from 'any' channel in PROMELA/SPIN

I'm modeling an algorithm in Spin.
I have a process that has several channels, and at some point I know a message is going to arrive but I don't know on which channel. So I want to block the process until a message arrives on any of the channels. How can I do that?
I think you need Promela's if construct (see http://spinroot.com/spin/Man/if.html).
In the process you're referring to, you probably need the following:
byte var;
if
:: ch1?var -> skip
:: ch2?var -> skip
:: ch3?var -> skip
fi
If none of the channels have anything on them, then "the selection construct as a whole blocks" (quoting the manual), which is exactly the behaviour you want.
To quote the relevant part of the manual more fully:
"An option [each of the :: lines] can be selected for execution only when its guard statement is executable [the guard statement is the part before the ->]. If more than one guard statement is executable, one of them will be selected non-deterministically. If none of the guards are executable, the selection construct as a whole blocks."
By the way, I haven't syntax checked or simulated the above in Spin. Hopefully it's right. I'm quite new to Promela and Spin myself.
If you want to keep the number of channels variable without having to change the implementation of the send and receive parts, you might use the approach of the following producer-consumer example:
#define NUMCHAN 4
chan channels[NUMCHAN];
init {
    chan ch1 = [1] of { byte };
    chan ch2 = [1] of { byte };
    chan ch3 = [1] of { byte };
    chan ch4 = [1] of { byte };
    channels[0] = ch1;
    channels[1] = ch2;
    channels[2] = ch3;
    channels[3] = ch4;
    // Add further channels above, in
    // accordance with NUMCHAN

    // First let the producer write
    // something, then start the consumer
    run producer();
    atomic { _nr_pr == 1 ->
        run consumer();
    }
}

proctype consumer() {
    byte var, i;
    chan theChan;
    i = 0;
    do
    :: i == NUMCHAN -> break
    :: else ->
        theChan = channels[i];
        if
        :: skip // non-deterministic skip
        :: nempty(theChan) ->
            theChan ? var;
            printf("Read value %d from channel %d\n", var, i+1)
        fi;
        i++
    od
}

proctype producer() {
    byte var, i;
    chan theChan;
    i = 0;
    do
    :: i == NUMCHAN -> break
    :: else ->
        theChan = channels[i];
        if
        :: skip;
        :: theChan ! 1;
            printf("Write value 1 to channel %d\n", i+1)
        fi;
        i++
    od
}
The do loop in the consumer process walks over the indexes 0 to NUMCHAN-1 and, for each one, non-deterministically either reads from the respective channel (if there is something to read) or skips it. Naturally, during a simulation with Spin the probability of reading from channel NUMCHAN is much smaller than that of channel 0, but this makes no difference in model checking, where every possible path is explored.