Can we have terminal states in NuSMV?

Is it possible to have a state in NuSMV that does not have transitions to any other states? For example, is it valid that in my code l3 does not have any transitions? When I run this, NuSMV gives me the error "case conditions are not exhaustive". Thanks!
MODULE main
VAR
  location: {l1,l2,l3};
ASSIGN
  init(location) := l1;
  next(location) := case
    (location = l1): l2;
    (location = l2): l1;
    (location = l2): l3;
  esac;

By construction, in the so-called assignment-style [used in your model]:
there is always at least one initial state
all states have at least one next state
non-determinism is apparent
Alternatively, one can use the so-called constraint-style, which allows for:
inconsistent INIT constraints, which result in no initial state
inconsistent TRANS constraints, which result in deadlock states (i.e. a state without any outgoing transition to some next state)
non-determinism is hidden
For more info, see the second part of this course, which applies also to NuSMV for the most part.
An example of an FSM in which some state has no future state is:
MODULE main
VAR b : boolean;
TRANS b -> FALSE;
The state in which b is true has no future state.
Conversely, the state in which b is false can loop in itself or go to the state in which b = true.
You can use the command check_fsm to detect deadlock states.
Please note that the presence of deadlock states can hinder the correctness of model checking in some situations. For instance:
FAQ #007: CTL specification with top level existential path quantifier
is wrongly reported as being violated. For example, for the model
below both specifications are reported to be false though one is just
the negation of the other! I know such problems can arise with
deadlock states, but running it with -ctt says that everything is
fine.
MODULE main
VAR b : boolean;
TRANS next(b) = b;
CTLSPEC EF b
CTLSPEC !(EF b)
For other critical consequences of deadlock states in the transition relation, see this page.
Usually, when NuSMV says that "the case conditions are not exhaustive" one adds a default action with a TRUE condition at the end of the case construct, which is triggered when none of the previous conditions applies. The most common choice for the default action is a self-loop, which is encoded as follows:
MODULE main
VAR
  location: {l1,l2,l3};
ASSIGN
  init(location) := l1;
  next(location) :=
    case
      (location = l1): l2;
      (location = l2): {l1, l3};
      TRUE           : location;
    esac;
NOTE: if there are multiple conditions with the same guard, only the first of them is ever used. For this reason, when location = l2 in your model, the next value of location can only be l1 and never l3. To fix this and allow a non-deterministic choice between l1 and l3, one must list all possible future values under a single condition as a set of values (i.e. {l1, l3}).


Shadow variable re-calculation when tail leaves the chain

In our chain-based (VRP-like) implementation, on specific entities we have to use some data from entity n+1 in the chain to calculate the shadow variables of entity n.
With some careful calculation this works fine, until the tail leaves the chain, leaving such an entity at the tail location.
Re-calculation would solve the issue, but it is never triggered, because the n+1 entity is the one that is changed and re-calculated, so the old "left-over" chain is left in an inconsistent state.
Is there a way to manually trigger re-calculation for the involved chains, even if previousStandstill did not change in any of the chain's current entities, but only in the ones that used to be there?
The suggestion from @Lukas was on point.
I changed the @PlanningVariableReference from previousStandstill to nextVisit, and now the listener also gets called when something moves away from the tail.
@CustomShadowVariable(
    variableListenerClass = ArrivalTimeVariableListener.class,
    sources = {
        @PlanningVariableReference(
            entityClass = Standstill.class,
            variableName = "nextVisit"
        )
    }
)
This allowed me to properly re-calculate the chain on these occasions.

How are "special" return values that signal a condition called?

Suppose I have a function which calculates a length and returns it as a positive integer, but could also return -1 for timeout, -2 for "cannot compute", and -3 for invalid arguments.
Notwithstanding any discussion on best practices, proper exceptions, and whatnot, this occurs regularly in legacy codebases. What is the name for this practice or for return values which are outside of the normal output value range, -1 being the most common?
The article Exceptions vs. status returns refers to them as return status codes:
Broadly speaking, there are two ways to handle errors as they pass
from layer to layer in software: throwing exceptions and returning
status codes...
With status returns, a valuable channel of communication (the return
value of the function) has been taken over for error handling.
Personally I would also call them status codes, similarly to HTTP status codes (if we pretend HTTP response is like a function return).
As a side note, besides exceptions and return status codes, there also exists a monadic approach to error handling, which in some sense combines the two. For example, in Scala, the Either monad may be used to specify a return value that can express both an error status and a regular happy value, without having to block out part of the domain for status codes:
def divide(a: Double, b: Double): Either[String, Double] =
  if (b == 0.0) Left("Division by zero") else Right(a / b)

divide(4, 0)
divide(4, 2)
which outputs
res0: Either[String,Double] = Left(Division by zero)
res1: Either[String,Double] = Right(2.0)
That's an example of a "magic" value. I don't know of any more specific term for when this idea is applied to a function's return value.
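For illustration, the pattern itself looks something like this in Python; the function name and the particular status constants are invented here, mirroring the -1/-2/-3 example from the question:

```python
# Illustrative sketch of the "magic value" / status-code pattern:
# negative return values are reserved for error conditions, so part of
# the normal output range (non-negative lengths) is blocked out.

TIMEOUT = -1
CANNOT_COMPUTE = -2
INVALID_ARGS = -3

def compute_length(data):
    """Return a non-negative length, or a negative status code."""
    if data is None:
        return INVALID_ARGS
    if not isinstance(data, (list, str)):
        return CANNOT_COMPUTE
    return len(data)

# Callers must remember to check for the magic values themselves:
result = compute_length([1, 2, 3])
if result < 0:
    print("error, status code:", result)
else:
    print("length:", result)
```

The weakness the quoted article points at is visible in the sketch: nothing forces the caller to perform the `result < 0` check, and a forgotten check silently treats a status code as a length.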

Can final and non-final states having identical configurations be merged together in a minimal DFA?

I have been practicing some questions on automata theory, and I came across a question on minimal DFAs where I can't figure out what I have done wrong. I am getting 4 states in the minimal DFA, but my book says the answer is 3. The question asks to convert a given NFA into a minimal DFA and count the number of states in the latter. The given NFA (p and r are the initial and final states, respectively) is:
{p}---b-->{q}
{q}---a--->{r}
{q}---b--->{s}
{r}---a--->{r}
{r}---b--->{s}
{s}---a--->{r}
{s}---b--->{s}
I am getting 4 states: [r], [p], [q,s], [dead]. Can the final state [r] and the non-final state [q,s] be merged here, since they lead to similar configurations on receiving inputs a and b? I have learned that final and non-final states cannot be in the same equivalence class...
OK, let's start with all possible states for our DFA; there will be 16 (the 2^4 subsets of the 4 NFA states, including the empty set). These will be:
{}
{p}
{q}
{r}
{s}
{p,q}
{p,r}
{p,s}
{q,r}
{q,s}
{r,s}
{p,q,r}
{p,q,s}
{p,r,s}
{q,r,s}
{p,q,r,s}
Now that we have all the possible sets, let's work out which ones are reachable from the start state {p}, following each input symbol separately:
{p} --- start state. On b we go to {q}, as defined by the transition {p}---b-->{q}; on a we go to {}, since p has no a-transition.
{q} --- on a we go to {r} ({q}---a-->{r}); on b we go to {s} ({q}---b-->{s}).
{r} --- on a we stay in {r}; on b we go to {s}.
{s} --- on a we go to {r}; on b we stay in {s}.
{} --- the dead state; once here, we stay here on every input.
No other subset is reachable.
The reachable states are thus {p}, {q}, {r}, {s}, and the dead state {}. Minimization then merges {q} and {s}, which are both non-final and have identical transitions ({r} on a, {s} on b), giving exactly the four states you found: [p], [q,s], [r], and [dead]. The final state [r] cannot be merged with the non-final [q,s]: [r] accepts the empty string and [q,s] does not, so they are distinguishable. Your book's answer of 3 almost certainly counts a partial DFA, i.e. it simply omits the dead state.
Hope this helps!
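If you want to double-check the construction mechanically, here is a small Python sketch of the subset construction for this NFA (the transition table is transcribed from the question):

```python
# Subset construction for the NFA from the question.
# Transition table: state -> {symbol: set of next states}.
nfa = {
    "p": {"b": {"q"}},
    "q": {"a": {"r"}, "b": {"s"}},
    "r": {"a": {"r"}, "b": {"s"}},
    "s": {"a": {"r"}, "b": {"s"}},
}
symbols = ["a", "b"]

def step(subset, sym):
    """DFA transition: union of NFA moves from every state in the subset."""
    out = set()
    for state in subset:
        out |= nfa[state].get(sym, set())
    return frozenset(out)

start = frozenset({"p"})
reachable = {start}
frontier = [start]
while frontier:
    current = frontier.pop()
    for sym in symbols:
        nxt = step(current, sym)
        if nxt not in reachable:
            reachable.add(nxt)
            frontier.append(nxt)

for subset in sorted(reachable, key=sorted):
    print(set(subset) or "{} (dead)")
```

It enumerates the subsets actually reachable from {p}; the dead state shows up because p has no a-transition.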

Some questions on DLR call site caching

Kindly correct me if my understanding of the DLR's operation-resolution methodology is wrong.
The DLR has something called call-site caching. Given "a+b" (where a and b are both integers), the first time it will not be able to understand the + (plus) operator or the operands. So it will resolve the operands and then the operator, and will cache the result in level 1 after building a proper rule.
From the next time onwards, if a similar kind of match is found (where the operands are of type integer and the operator is +), it will look into the cache and return the result, and hence the performance will be enhanced.
If this understanding of mine is correct (if you feel that I am incorrect in the basic understanding, kindly rectify that), I have the below questions for which I am looking for answers:
a) What kind of operations happen in level 1, level 2, and level 3 call-site caching in general (I mean plus, minus... what kind)?
b) The DLR caches those results. Where does it store them, i.e. what is the cache location?
c) How does it make the rules? Is there an algorithm? If so, could you please specify (a link or your own answer)?
d) Some time back I read somewhere that in level 1 caching the DLR has 10 rules, level 2 has 20 or so (maybe), while level 3 has 100. If a new operation comes, it makes a new rule. Are these rules (level 1, 2, 3) predefined?
e) Where are these rules kept?
f) Instead of passing two integers (a and b in the example), if we pass two strings, or one string and one integer, will it form a new rule?
Thanks
a) If I understand this correctly, currently all of the levels work effectively the same: they run a delegate which tests the arguments that matter for the rule and then either performs the operation or notes the failure (the failure is noted either by setting a value on the call site or by doing a tail call to the update method). Therefore every rule works like:
public object Rule(CallSite site, object x, object y) {
    if (x is int && y is int) {
        return (int)x + (int)y;
    }
    CallSiteOps.SetNotMatched(site);
    return null;
}
And a delegate to this method is used in the L0, L1, and L2 caches. But the behavior here could change (and did change many times during development). For example at one point in time the L2 cache was a tree based upon the type of the arguments.
b) The L0 cache and the L1 caches are stored on the CallSite object. The L0 cache is a single delegate which is always the 1st thing to run. Initially this is set to a delegate which just updates the site for the 1st time. Additional calls attempt to perform the last operation the call site saw.
The L1 cache includes the last 10 actions the call site saw. If the L0 cache fails all the delegates in the L1 cache will be tried.
The L2 cache lives on the CallSiteBinder. The CallSiteBinder should be shared across multiple call sites; for example, there should generally be one and only one addition call-site binder for the language, assuming all additions behave the same. If the L0 and L1 caches fail, all of the available rules in the L2 cache will be searched. Currently the upper limit for the L2 cache is 128.
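As a rough, language-neutral sketch (not the actual DLR implementation, whose details are described above), the interplay of the per-site L0/L1 caches and the shared L2 cache can be modeled like this; all names and the cache limits are illustrative:

```python
class CallSite:
    """Toy model of DLR-style call-site caching (illustrative only)."""
    L1_LIMIT = 10     # last rules this particular site saw
    L2_LIMIT = 128    # shared across all sites using the same binder

    def __init__(self, binder):
        self.binder = binder
        self.l0 = None    # single most-recent rule
        self.l1 = []      # per-site rule history

    def invoke(self, x, y):
        # L0: try the most recent rule first.
        if self.l0 is not None:
            hit = self.l0(x, y)
            if hit is not None:
                return hit
        # L1: try the rules this site has seen before.
        for rule in self.l1:
            hit = rule(x, y)
            if hit is not None:
                self._promote(rule)
                return hit
        # L2: try rules produced for other sites sharing this binder.
        for rule in self.binder.l2:
            hit = rule(x, y)
            if hit is not None:
                self._promote(rule)
                return hit
        # Miss everywhere: ask the binder to build a brand-new rule.
        rule = self.binder.make_rule(x, y)
        self.binder.l2.append(rule)
        self._promote(rule)
        return rule(x, y)

    def _promote(self, rule):
        self.l0 = rule
        if rule not in self.l1:
            self.l1 = ([rule] + self.l1)[: self.L1_LIMIT]

class AddBinder:
    def __init__(self):
        self.l2 = []

    def make_rule(self, x, y):
        # The "restriction" is the type test; the rule signals a miss
        # by returning None (analogous to CallSiteOps.SetNotMatched).
        tx, ty = type(x), type(y)
        def rule(a, b):
            if type(a) is tx and type(b) is ty:
                return a + b
            return None
        return rule

binder = AddBinder()
site = CallSite(binder)
print(site.invoke(1, 2))      # miss everywhere: builds an int+int rule
print(site.invoke(3, 4))      # L0 hit on the cached int+int rule
print(site.invoke("a", "b"))  # restriction fails: builds a str+str rule
```

In the real DLR the rules are compiled delegates rather than closures, and a miss is signaled via the call site rather than a sentinel return value, but the probing order (L0, then L1, then L2, then the binder) is the same idea.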
c) Rules can ultimately be produced in 2 ways. The general solution is to create a DynamicMetaObject which includes both the expression tree of the operation to be performed as well as the restrictions (the test) which determines if it's applicable. This is something like:
public DynamicMetaObject FallbackBinaryOperation(DynamicMetaObject target, DynamicMetaObject arg) {
    return new DynamicMetaObject(
        Expression.Add(
            Expression.Convert(target.Expression, typeof(int)),
            Expression.Convert(arg.Expression, typeof(int))),
        BindingRestrictions.GetTypeRestriction(target.Expression, typeof(int)).Merge(
            BindingRestrictions.GetTypeRestriction(arg.Expression, typeof(int))
        )
    );
}
This creates the rule which adds two integers. It isn't really correct, because if you get something other than integers it'll loop infinitely; so usually you do a bunch of type checks, produce the addition for the things you can handle, and produce an error for the things you can't.
The other way to make a rule is to provide the delegate directly. This is more advanced and enables some optimizations that avoid compiling code all of the time. To do this you override BindDelegate on CallSiteBinder, inspect the arguments, and provide a delegate which looks like the Rule method above.
d) Answered in b)
e) I believe this is the same question as b)
f) If you put the appropriate restrictions on, then yes (which you're forced to do for the standard binders). Because the restrictions will fail (you don't have two ints), the DLR will probe the caches, and when it doesn't find a matching rule it'll call the binder to produce a new one. That new rule will come back with a new set of restrictions, will be installed in the L0, L1, and L2 caches, and will perform the operation for strings from then on.

'if' vs 'when' for making multiplexer

I have been told to use the 'when' statement to make a multiplexer, and not to use the 'if' statement, as it will cause timing errors...
I don't understand this...
So what is the difference between 'if' and 'when'? And do they map to the same thing in hardware?
OK, let's first discuss some points on the difference between the if and when statements:
Both are called dataflow design elements.
when statement
a concurrent statement
not used inside a process; used only in the architecture body, as a process is sequential execution
if statement
a sequential statement
used inside a process, as it is a sequential statement, and not used outside a process
And as you know, a multiplexer is a component that doesn't need a process block, as its output depends only on its current inputs, so it will be outside a process; hence you have to write it using the when statement, as it is a concurrent statement. If you wrote it with an if statement, timing errors may occur. Also, the references and the Xilinx documentation (if you are using Xilinx) write the multiplexer block using the when statement, not the if statement.
Reference: Digital Design Principles & Practices, John F. Wakerly, 3rd Edition
See these:
VHDL concurrent statements, which includes when.
VHDL sequential statements, which includes if.
Basically, if is sequential, and when is concurrent. They do not map to the same thing in hardware... This page describes, at the bottom, some of the special considerations needed to synthesize an if statement.
Both coding styles are totally valid.
Let's recall a few things. Starting from HDL, synthesis is done in two main steps:
First, the VHDL is analyzed in order to detect RTL templates (consisting of RTL elements: flip-flops, arithmetic expressions, multiplexers, control logic). We say that these elements are "inferred" (i.e. you must code using the right template to get what you wanted initially; you must imagine how these elements are connected before coding).
The second step is actual logic synthesis, which takes the parameters of a particular target technology into account (types of gates available, timing, area, power).
These two steps clearly separate RTL functional needs (steering logic, computations) from technology contingencies (timing, etc.).
Let's come back to the first step (RTL) :
Concerning multiplexers, several coding styles are possible :
using a concurrent assignment:
y <= a1 when cond1 else a2 when cond2 else a3;
using an if statement within a process:
process(a1, a2, a3, cond1, cond2)
begin
  if (cond1) then
    y <= a1;
  elsif (cond2) then
    y <= a2;
  else
    y <= a3;
  end if;
end process;
using another concurrent assignment form, suitable for generic descriptions: if sel is an integer and muxin an array of signals, then:
muxout <= muxin(sel); -- will infer a mux
Note that these three coding styles always work. Note also that they are "a bit more" than a simple multiplexer, as the coding style forces the presence of a priority encoding (if/elsif, when/else), which is not the case for a simple equation-based multiplexer, which is truly symmetric.
using a case statement:
process(a1, a2, a3, cond1, cond2)
  variable cond : std_logic_vector(1 downto 0);
begin
  cond := cond2 & cond1;
  case cond is
    when "01"   => y <= a1;
    when "10"   => y <= a2;
    when others => y <= a3;
  end case;
end process;
using a select statement (in our example, two concurrent assignments are needed):
sel <= cond2 & cond1;
WITH sel SELECT
  y <= a1 WHEN "01",
       a2 WHEN "10",
       a3 WHEN OTHERS;
A final remark is about the rise of abstraction, even for RTL design: synthesizers are now really mature. Have a look at Jiri Gaisler's coding style for the LEON2 open-source processor, for example (see here). He advocates an approach that is very different from the classical books, yet perfectly valid.
You should always understand what the RTL synthesizer will infer.
By contrast, behavioral synthesis allows you to (partially) forget what the synthesizer will infer. But that's another story.