What does ==> mean in ProVerif?

I found the following line of code in the ProVerif manual.
⋮
query event(evCocks) ==> event(evRSA).
⋮
What does ==> mean?

Correspondence properties:
P and Q are predicates.
P(x) ==> Q(x, y)
for all x such that P(x) is true, there exists a y
such that Q(x, y) is true
query event(e1) ==> event(e2).
the query above can be translated to the following question:
Is it true that, for every execution of the protocol in which
event e1 has been executed, event e2 has also been executed?
In the ProVerif manual it is stated that
query event(evCocks) ==> event(evRSA)
is true if and only if, for all executions of the protocol, if the event
evCocks has been executed, then
event evRSA has also [already] been executed
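To make the quantification concrete, here is a toy Python sketch (an illustration only; ProVerif itself proves this symbolically over all possible executions, not by checking recorded traces) of what the correspondence demands of a single execution trace:

```python
# Illustrative only: a toy checker for the correspondence
# event(evCocks) ==> event(evRSA) over one recorded event trace.
# ProVerif verifies this property over all protocol executions.

def holds(trace, e1="evCocks", e2="evRSA"):
    """True if every occurrence of e1 in the trace is preceded by e2."""
    seen_e2 = False
    for event in trace:
        if event == e2:
            seen_e2 = True
        elif event == e1 and not seen_e2:
            return False
    return True

print(holds(["evRSA", "evCocks"]))  # True: evRSA ran before evCocks
print(holds(["evCocks", "evRSA"]))  # False: evCocks ran with no prior evRSA
print(holds(["evRSA"]))             # True: e1 never executed, vacuously true
```

Note the vacuous case: if evCocks never executes, the correspondence is trivially satisfied.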
link to proverif manual


Why is a conditional channel source causing a downstream process to not execute an instance for each value in a different channel?

I have a Nextflow DSL2 pipeline where an early process generally takes a very long time (~24 hours) and has intermediate products that occupy a lot of storage (~1 TB). Because of the length and resources required for this process, it would be desirable to be able to set a "checkpoint", i.e. save the (relatively small) final output to a safe location, and on subsequent pipeline executions retrieve the output from that location. This means that the intermediate data can be safely deleted without preventing resumption of the pipeline later.
However, I've found that when I implement this and use the checkpoint, a process further downstream that is supposed to run an instance for every value in a list only runs a single instance. Minimal working example and example outputs below:
// foobarbaz.nf
nextflow.enable.dsl=2

params.publish_dir = "$baseDir/output"
params.nofoo = false

xy = ['x', 'y']
xy_chan = Channel.fromList(xy)

process foo {
    publishDir "${params.publish_dir}/", mode: "copy"

    output:
    path "foo.out"

    """
    touch foo.out
    """
}

process bar {
    input:
    path foo_out

    output:
    path "bar.out"

    script:
    """
    touch bar.out
    """
}

process baz {
    input:
    path bar_out
    val xy

    output:
    tuple val(xy), path("baz_${xy}.out")

    script:
    """
    touch baz_${xy}.out
    """
}

workflow {
    main:
    if( params.nofoo ) {
        foo_out = Channel.fromPath("${params.publish_dir}/foo.out")
    }
    else {
        foo_out = foo() // generally takes a long time and uses lots of storage
    }
    bar_out = bar(foo_out)
    baz_out = baz(bar_out, xy_chan)
    // ... continue to do things with baz_out ...
}
First execution with foo:
$ nextflow foobarbaz.nf
N E X T F L O W ~ version 21.10.6
Launching `foobarbaz.nf` [soggy_gautier] - revision: f4e70a5cd2
executor > local (4)
[77/c65a9a] process > foo [100%] 1 of 1 ✔
[23/846929] process > bar [100%] 1 of 1 ✔
[18/1c4bb1] process > baz (2) [100%] 2 of 2 ✔
(note that baz successfully executes two instances: one where xy==x and one where xy==y)
Later execution using the checkpoint:
$ nextflow foobarbaz.nf --nofoo
N E X T F L O W ~ version 21.10.6
Launching `foobarbaz.nf` [infallible_babbage] - revision: f4e70a5cd2
executor > local (2)
[40/b42ed3] process > bar (1) [100%] 1 of 1 ✔
[d9/76888e] process > baz (1) [100%] 1 of 1 ✔
The checkpointing is successful (bar executes without needing foo), but now baz only executes a single instance where xy==x.
Why is this happening, and how can I get the intended behaviour? I see no reason why whether foo_out comes from foo or is retrieved directly from a file should make any difference to how the xy channel is interpreted by baz.
The problem is that the Channel.fromPath factory method creates a queue channel to provide a single value, whereas the output of process 'foo' implicitly produces a value channel:
A value channel is implicitly created by a process when an input
specifies a simple value in the from clause. Moreover, a value channel
is also implicitly created as output for a process whose inputs are
only value channels.
So without --nofoo, 'foo_out' and 'bar_out' are both value channels. Since 'xy_chan' is a queue channel that provides two values, process 'baz' gets executed twice. With --nofoo, 'foo_out' and 'bar_out' are both queue channels which provide a single value. Since there's only one complete input configuration (i.e. one value from each input channel), process 'baz' gets executed only once. See also: Understand how multiple input channels work.
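As a rough illustration (a simplified Python model, not Nextflow's actual scheduler), the pairing behaviour can be sketched like this:

```python
# A simplified model (assumption: toy sketch, not Nextflow internals) of
# how multiple input channels are paired: a value channel's single value
# is reused for every task, while queue channels are consumed in lockstep
# and the process stops when any queue channel is exhausted.

def task_invocations(foo_out, xy, foo_is_value_channel):
    tasks = []
    if foo_is_value_channel:
        # value channel: its single value is paired with every xy item
        for x in xy:
            tasks.append((foo_out, x))
    else:
        # queue channel with one item: only one complete input set exists
        for f, x in zip(foo_out, xy):
            tasks.append((f, x))
    return tasks

# Without --nofoo: foo() output behaves like a value channel -> 2 tasks
print(task_invocations("foo.out", ["x", "y"], True))
# With --nofoo: Channel.fromPath is a queue channel of one item -> 1 task
print(task_invocations(["foo.out"], ["x", "y"], False))
```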
The solution is to ensure that 'foo_out' is either always a queue channel or always a value channel. Given your 'foo' process declaration, you probably want the latter:
if( params.nofoo ) {
    foo_out = file( "${params.publish_dir}/foo.out" )
}
else {
    foo_out = foo()
}
In my experience, a process runs as many times as the input channel with the fewest emissions allows (which here is the single path emission from bar).
So to my mind, the strange behaviour in this case is actually the example without --nofoo.
If you want it executed two times, you may try combining the channels with the combine operator, something like baz_input_ch = bar.out.combine(xy_chan)

Wrong Signature.ACCEPTS after .wrap sub

After I wrap a sub, the signature no longer ACCEPTS a Capture that it accepted before wrapping.
sub wr(:$a) {say $a};
my $sig-before = &wr.signature;
say $sig-before; # (:$a)
say %(:a(3)) ~~ $sig-before; # True
&wr.wrap(-> |c {callsame(c)});
my $sig-after = &wr.signature;
say $sig-after; # (:$a)
say %(:a(3)) ~~ $sig-after; # False
say %(:a(3)) ~~ $sig-before; # False
say $sig-before.WHICH, ' ', $sig-after.WHICH; # Signature|140466574255752 Signature|140466574255752
say $sig-before eq $sig-after; # True
say %(:a(3)).Capture ~~ $sig-after; # 'Cannot invoke object with invocation handler in this context'
say $sig-after.ACCEPTS(%(:a(3)).Capture); # 'Cannot invoke object with invocation handler in this context'
I see in Rakudo code:
multi method ACCEPTS(Signature:D: Capture $topic) {
    nqp::p6bool(nqp::p6isbindable(self, nqp::decont($topic)));
}
Is it probably a bug? Or, if this behavior is expected, how can I work around it, and how can I determine at run time that I need the workaround in a concrete case?
2022 update: works as @MikhailKhorkov expected in a recent Rakudo (v2022.02):
(:$a)
True
(:$a)
True
True
Signature|4343308333200 Signature|4343308333200
True
True
True
Original answer
Probably it is a bug?
I've been wrong before when I call something a bug but I'd say something in there is a bug, even if it's just a Less Than Awesome error message bug.
I think wrap has relatively few roast tests (many matches are false positives; search for wrap( or wrap: in the results). One key thing to do here, if you want to use wrap, is to add a roast test covering what we want it to do that it is not doing correctly (assuming it's not just a Less Than Awesome error message).
I think wrap is one of the most fragile official P6 features:
new/open/stalled bugs in RT Perl 6 queue matching 'wrap'.
new/open/stalled bugs in RT Perl 6 queue matching 'Cannot invoke object with invocation handler in this context'.
open rakudo repo issues matching 'wrap'.
open rakudo repo issues matching 'Cannot invoke object with invocation handler in this context'.

Twisted test - both success and error callback fire on deferred

Writing a unit test for a Twisted application. Trying to perform some asserts once a deferred is resolved with a new connection (an instance of Protocol); however, I am seeing that both the success and error callbacks are being fired (judging by both SUCCESS and FAIL being printed to the console).
def test_send_to_new_connection(self):
    # Given
    peerAddr = ('10.22.22.190', 5060)
    # If
    self.tcp_transport.send_to('test', peerAddr)
    # Then
    assert peerAddr in self.tcp_transport._connections
    assert True == isinstance(self.tcp_transport._connections[peerAddr], Deferred)
    connection = _string_transport_connection(self.hostAddr, peerAddr, None, self.tcp_transport.connectionMade)

    def assert_cache_updated_on_connection(connection):
        print('--------- SUCCESS ----------')
        peer = connection.transport.getPeer()
        peerAddr = (peer.host, peer.port)
        assert peerAddr in self.tcp_transport._connections
        assert True == isinstance(self.tcp_transport._connections[peerAddr], Protocol)

    def assert_fail(fail):
        print('--------- FAIL ----------')

    self.tcp_transport._connections[peerAddr].addCallback(assert_cache_updated_on_connection)
    self.tcp_transport._connections[peerAddr].addErrback(assert_fail)
    # Forcing deferred to fire with mock connection
    self.tcp_transport._connections[peerAddr].callback(connection)
I thought execution of callbacks and errbacks was mutually exclusive, i.e. only one or the other would run depending on how the deferred resolved. Why is assert_fail() also being called?
See the "railroad" diagram in the Deferred Reference:
Notice how there are diagonal arrows from the callback side to the errback side and vice versa. At any single position (where a "position" is a pair of boxes side by side, one green and one red, with position increasing as you go down the diagram) in the callback/errback chain, only one of the callback or errback will be called. However, since there are multiple positions in the chain, you can have many callbacks and many errbacks all called on a single Deferred.
In the case of your code:
....addCallback(assert_cache_updated_on_connection)
....addErrback(assert_fail)
There are two positions. The first has a callback and the second has an errback. If the callback signals failure, execution switches to the errback side for the next position - exactly where you have assert_fail.
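A minimal sketch of that two-column chain (plain Python, not Twisted's actual Deferred implementation) shows how a failing callback routes the result to the next errback, so both print statements in the question fire:

```python
# Minimal sketch (assumption: toy model, not Twisted's Deferred) of the
# two-column chain: each addCallback/addErrback adds a position; a result
# travels down the chain and switches to the errback column when a
# callback raises.

class MiniDeferred:
    def __init__(self):
        self.chain = []  # list of (callback, errback) positions

    def addCallback(self, cb):
        self.chain.append((cb, None))  # errback slot is a passthrough
        return self

    def addErrback(self, eb):
        self.chain.append((None, eb))  # callback slot is a passthrough
        return self

    def callback(self, result):
        failed = False
        for cb, eb in self.chain:
            if not failed and cb is not None:
                try:
                    result = cb(result)
                except Exception as e:
                    failed, result = True, e  # switch to errback column
            elif failed and eb is not None:
                failed, result = False, eb(result)  # errback handles it
        return result

log = []
d = MiniDeferred()
d.addCallback(lambda r: log.append("callback") or 1 / 0)  # raises
d.addErrback(lambda f: log.append("errback"))
d.callback("connection")
print(log)  # ['callback', 'errback'] -- both fire, as in the question
```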

What's the equivalent of moment-yielding (from Tornado) in Twisted?

Part of the implementation of inlineCallbacks is this:
if isinstance(result, Deferred):
    # a deferred was yielded, get the result.
    def gotResult(r):
        if waiting[0]:
            waiting[0] = False
            waiting[1] = r
        else:
            _inlineCallbacks(r, g, deferred)

    result.addBoth(gotResult)
    if waiting[0]:
        # Haven't called back yet, set flag so that we get reinvoked
        # and return from the loop
        waiting[0] = False
        return deferred

    result = waiting[1]
    # Reset waiting to initial values for next loop. gotResult uses
    # waiting, but this isn't a problem because gotResult is only
    # executed once, and if it hasn't been executed yet, the return
    # branch above would have been taken.
    waiting[0] = True
    waiting[1] = None
As it is shown, if in an inlineCallbacks-decorated function I make a call like this:
@inlineCallbacks
def myfunction(a, b):
    c = callsomething(a)
    yield twisted.internet.defer.succeed(None)
    print callsomething2(b, c)
This yield will return to the function immediately (that is, the generator will not be re-scheduled but will immediately continue from the yield). This contrasts with Tornado's tornado.gen.moment (which isn't more than an already-resolved Future with a result of None), which makes the yielder re-schedule itself, regardless of whether the future is already resolved or not.
How can I run a behavior like the one Tornado does when yielding a dummy future like moment?
The equivalent might be something like yielding a Deferred that doesn't fire until "soon". reactor.callLater(0, ...) is the generally accepted way to create a timed event that doesn't run now but will run pretty soon. You can easily get a Deferred that fires based on this using twisted.internet.task.deferLater(reactor, 0, lambda: None).
You may want to look at alternate scheduling tools instead, though (in both Twisted and Tornado). This kind of re-scheduling trick generally only works in small, simple applications. Its effectiveness diminishes the more tasks concurrently employ it.
Consider whether something like twisted.internet.task.cooperate might provide a better solution instead.
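For comparison, the same "yield to the event loop for one turn" trick exists in the standard library's asyncio, where sleep(0) plays the role of Tornado's gen.moment (an analogue for illustration, not Twisted code):

```python
# Analogue in stdlib asyncio (assumption: shown instead of Twisted so the
# example is self-contained): awaiting asyncio.sleep(0) reschedules the
# current task, letting other ready tasks run in between steps.

import asyncio

order = []

async def worker(name, steps):
    for i in range(steps):
        order.append((name, i))
        await asyncio.sleep(0)  # yield one turn of the event loop

async def main():
    await asyncio.gather(worker("a", 2), worker("b", 2))

asyncio.run(main())
print(order)  # tasks interleave: [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Without the sleep(0), each worker would run all its steps to completion before the other got a chance, which is exactly the cooperative-scheduling concern the answer raises.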

Erlang process event error

I'm basically following the tutorial on this site, Learn you some Erlang: Designing a concurrent application, and I tried to run the code below with the following commands and got an error on line 48. I did turn off my firewall just in case that was the problem, but no luck. I'm on Windows XP SP3.
9> c(event).
{ok,event}
10> f().
ok
11> event:start("Event",0).
=ERROR REPORT==== 9-Feb-2013::15:05:07 ===
Error in process <0.61.0> with exit value: {function_clause,[{event,time_to_go,[0],[{file,"event.erl"},{line,48}]},{event,init,3,[{file,"event.erl"},{line,31}]}]}
<0.61.0>
12>
-module(event).
-export([start/2, start_link/2, cancel/1]).
-export([init/3, loop/1]).

-record(state, {server,
                name="",
                to_go=0}).

%%% Public interface
start(EventName, DateTime) ->
    spawn(?MODULE, init, [self(), EventName, DateTime]).

start_link(EventName, DateTime) ->
    spawn_link(?MODULE, init, [self(), EventName, DateTime]).

cancel(Pid) ->
    %% Monitor in case the process is already dead
    Ref = erlang:monitor(process, Pid),
    Pid ! {self(), Ref, cancel},
    receive
        {Ref, ok} ->
            erlang:demonitor(Ref, [flush]),
            ok;
        {'DOWN', Ref, process, Pid, _Reason} ->
            ok
    end.

%%% Event's innards
init(Server, EventName, DateTime) ->
    loop(#state{server=Server,
                name=EventName,
                to_go=time_to_go(DateTime)}).

%% Loop uses a list for times in order to go around the ~49 days limit
%% on timeouts.
loop(S = #state{server=Server, to_go=[T|Next]}) ->
    receive
        {Server, Ref, cancel} ->
            Server ! {Ref, ok}
    after T*1000 ->
        if Next =:= [] ->
               Server ! {done, S#state.name};
           Next =/= [] ->
               loop(S#state{to_go=Next})
        end
    end.

%%% private functions
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
    Now = calendar:local_time(),
    ToGo = calendar:datetime_to_gregorian_seconds(TimeOut) -
           calendar:datetime_to_gregorian_seconds(Now),
    Secs = if ToGo > 0  -> ToGo;
              ToGo =< 0 -> 0
           end,
    normalize(Secs).

%% Because Erlang is limited to about 49 days (49*24*60*60*1000) in
%% milliseconds, the following function is used
normalize(N) ->
    Limit = 49*24*60*60,
    [N rem Limit | lists:duplicate(N div Limit, Limit)].
It's running purely locally on your machine so the firewall will not affect it.
The problem is the second argument you gave when you started it: event:start("Event",0).
The error reason:
{function_clause,[{event,time_to_go,[0],[{file,"event.erl"},{line,48}]},{event,init,3,[{file,"event.erl"},{line,31}]}]}
says that it is a function_clause error which means that there was no clause in the function definition which matched the arguments. It also tells you that it was the function event:time_to_go/1 on line 48 which failed and that it was called with the argument 0.
If you look at the function time_to_go/1 you will see that it expects its argument to be a tuple of 2 elements where each element is a tuple of 3 elements:
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
The structure of this argument is {{Year,Month,Day},{Hour,Minute,Second}}. If you follow this argument backwards you see that time_to_go/1 is called from init/3, where the argument to time_to_go/1, DateTime, is the 3rd argument to init/3. Almost there now. Now init/3 is the function run by the process spawned in start/2 (and start_link/2), and the 3rd argument to init/3 is the second argument to start/2.
So when you call event:start("Event",0). it is the 0 here which is passed into the time_to_go/1 call in the new process. And the format is wrong. You should be calling it with something like event:start("Event", {{2013,3,24},{17,53,2}}).
To add background to rvirding's answer: you get the error because, as far as I know, the example works up until the final code snippet. The normalize function is used first, which deals with the problem. Then, in the paragraph right after the example in the question above, the text says:
And it works! The last thing annoying with the event module is that we
have to input the time left in seconds. It would be much better if we
could use a standard format such as Erlang's datetime ({{Year, Month,
Day}, {Hour, Minute, Second}}). Just add the following function that
will calculate the difference between the current time on your
computer and the delay you inserted:
The next snippet introduces the code bit that takes only a date/time and
changes it to the final time left.
I could not easily link to all transitional versions of the file, which
is why trying the linked file directly with the example doesn't work
super easily in this case. If the code is followed step by step, snippet
by snippet, everything should work fine. Sorry for the confusion.
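For reference, the 49-day chunking performed by normalize/1 in the question's code can be sketched in Python:

```python
# Python sketch of Erlang's normalize/1: split N seconds into 49-day
# pieces so no single receive timeout exceeds Erlang's roughly 49-day
# millisecond limit. Head is the remainder, followed by full chunks.

LIMIT = 49 * 24 * 60 * 60  # 49 days in seconds

def normalize(n):
    return [n % LIMIT] + [LIMIT] * (n // LIMIT)

print(normalize(98 * 24 * 60 * 60 + 5))          # [5, 4233600, 4233600]
print(sum(normalize(10_000_000)) == 10_000_000)  # chunks sum back to N
```

The loop then waits on each list element in turn, so the total wait equals the original number of seconds.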