e HVL (IEEE 1647): expect expression fails unexpectedly - verification

I'm trying to verify a fairly simple handshake between two modules. One module runs on a slow clock and raises "req"; the faster module should raise "ack" on the next fast clock and hold it until the next slow clock posedge.
This is how I wrote the expect:
expect expect_ack_when_req_go is
(#req_rise_e) => #ack_rise_e
else dut_error("ERROR: ack expected to be asserted when req rises!");
*Both #req_rise_e and #ack_rise_e are sampled on the slow clock.
Running the simulation raises the error: the first expression seems to succeed, but the second one does not. This is despite the fact that, when tracing the events to the waveform, I can see both events occur together (event_req and event_ack in the wave).

You're trying to do overlapped implication, i.e. both your events happen in the same cycle. What the => operator does is check that the consequent happens on the next sampling event, in this case the next slow clock edge. This is called non-overlapped implication in SystemVerilog assertion parlance.
You can get your desired behavior by writing the following:
expect expect_ack_when_req_go is
(#req_rise_e) => detach({#ack_rise_e; ~[1]})
else dut_error("ERROR: ack expected to be asserted when req rises!");
On a methodological note, I would recommend writing the temporal expression in a different way. I'm assuming that you are verifying the module that drives ack and that this module works on both clocks. I would also assume that it samples req with the fast clock. It would be clearer to formulate your check as:
expect expect_ack_when_req_go is
(#req_rise_at_fast_clock_e) => #ack_rise_at_slow_clock_e
else dut_error("ERROR: ack expected to be asserted when req rises!");
This way you don't have to mess around with detach(...), and the expect more closely matches the natural-language description of your desired behavior.


Without unwinding, translate a simple while loop iteration into SMT-LIB formula to prove correctness

Consider proving the correctness of the following while loop, i.e., I want to show that given that the loop condition holds to start with, the loop will eventually terminate and result in the final assertion being true.
int x = 0;
while (x >= 0 && x < 10) {
    x = x + 1;
}
assert x == 10;
What would be the correct translation into SMT-LIB for checking the correctness, without using loop unwinding?
Hoare logic and loop-invariants
A typical proof of such a statement would be done via classic Hoare logic, which I assume you're already familiar with. If not, see: https://en.wikipedia.org/wiki/Hoare_logic
The idea is to come up with an invariant for your loop. This invariant must be true before the loop starts, it must be maintained by the loop body, and it must imply the final result when the loop condition is no longer true. Additionally, you also need to prove that the loop will eventually terminate, by means of a measure function. (More on that later.)
You can convince yourself why this would be sufficient: An invariant is something that's "always" true. And if it implies your final result, then your proof is complete. The proof steps I outlined above ensure that the invariant is indeed an invariant, i.e., its truth is always maintained by your program.
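Spelled out schematically (a sketch; C here is the loop condition x >= 0 && x < 10, the loop body is x = x + 1, and I is the invariant we have yet to pick), the partial-correctness obligations are:
x == 0           implies  I(x)       -- the invariant holds on entry
I(x) && C(x)     implies  I(x+1)     -- the invariant is preserved by the body
I(x) && !C(x)    implies  x == 10    -- the invariant plus the exit condition give the assertion
Termination is handled separately with a measure function, as described in step (4) below.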
Coming up with the invariant
What would be a good invariant for your loop here? Let's give this invariant the name I. A moment of thought reveals that a good choice for I is:
I = x >= 0 && x <= 10
Note how similar (but not exactly the same!) this is to your loop-condition, and this is not by accident. Loop-invariants are not unique, and coming up with a good one can be really difficult. Synthesizing loop-invariants automatically has been an active area of research since the 60's, and there is a plethora of work out there. https://en.wikipedia.org/wiki/Loop_invariant is a good starting point.
Proof using SMT
Now that we "magically" came up with the loop invariant, let's use SMT to prove that it is indeed correct. Instead of writing SMTLib (which is verbose and mostly intended for machines only), I'll use z3-python interface as a close enough substitute. To finish the proof, I need to show 4 things:
The invariant holds before the loop starts
The invariant is maintained by the loop body
The invariant and the negation of the loop-condition implies the desired post-condition
The loop terminates
Let's look at each in turn.
(0) Preliminaries
Since we'll use z3's python interface, we'll have to do a little bit of leg-work to get us started. Here's the skeleton we need:
from z3 import *

def C(p):
    return And(p >= 0, p < 10)

def I(p):
    return And(p >= 0, p <= 10)

x = Int('x')
Note that we wrote the loop-condition (C) and the invariant (I) as functions of a parameter, so it's easy to call them with different arguments. This is a common trick in programming, abstracting the control away from the data, and it will simplify our life later on.
(1) The invariant holds before the loop starts
This one is easy. Right before the loop, we know that x = 0. So we need to ask the SMT solver if x == 0 implies our invariant:
>>> prove (Implies(x == 0, I(x)))
proved
Voila! If you want to see the SMTLib for the proof obligation, you can ask z3 to print it for you:
>>> print(Implies(x == 0, I(x)).sexpr())
(=> (= x 0) (and (>= x 0) (<= x 10)))
(2) The invariant is maintained by the loop-body
The loop body is only run when the loop condition (C) is true. The body increments x by one. So, what we need to show is that if our invariant (I) is true, if the loop condition (C) is true, and if I increment x by one, then I remains true. Let's ask z3 exactly that:
>>> prove(Implies(And(I(x), C(x)), I(x+1)))
proved
Almost too easy!
(3) The invariant implies the result when loop condition is false
This time, all we need to ask the solver is to prove the required conclusion when I holds, but C doesn't:
>>> prove(Implies(And(I(x), Not(C(x))), x == 10))
proved
And we have now completed what's known as the partial-correctness claim. That is, if the loop terminates, then x will indeed be 10 at the end. This is what you were trying to prove to start with.
(4) The loop terminates
What we've done so far is known as partial-correctness. It says if the loop terminates, then your post-condition (i.e., x == 10) holds. But it does not make any guarantees that the loop will always terminate.
To get a full proof, we have to prove termination. This is done by coming up with a measure function: a function that assigns a (typically numeric) value to the set of program variables and is bounded from below. We then show that it starts above its lower bound and goes down in each iteration. Then we know that the loop cannot continue forever: the measure has to go down in each iteration, but it cannot do so indefinitely, since it's bounded from below.
Termination proofs are usually harder, and coming up with a good measure can be tricky. But in this case, it's easy to come up with it:
def M(x):
    return 10 - x
The claim is that the measure is always non-negative in this case. Let's prove that before the loop starts, i.e., when x == 0:
>>> prove (Implies(x == 0, M(x) >= 0))
proved
It goes down in each iteration:
>>> prove (Implies(C(x), M(x) > M(x+1)))
proved
And finally, it's always non-negative whenever the loop body executes:
>>> prove (Implies(C(x), M(x) >= 0))
proved
Now we know that the loop will terminate, so our proof is complete.
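For convenience, here is the whole argument collected into one self-contained z3-python script (a sketch that just repeats the obligations proved step by step above; each prove call should print "proved"):
from z3 import Int, And, Not, Implies, prove

x = Int('x')

def C(p):                      # loop condition
    return And(p >= 0, p < 10)

def I(p):                      # loop invariant
    return And(p >= 0, p <= 10)

def M(p):                      # termination measure
    return 10 - p

# (1) invariant holds before the loop (x == 0 initially)
prove(Implies(x == 0, I(x)))
# (2) invariant is maintained by the body (x := x + 1)
prove(Implies(And(I(x), C(x)), I(x + 1)))
# (3) invariant plus exit condition gives the post-condition
prove(Implies(And(I(x), Not(C(x))), x == 10))
# (4) termination: measure starts non-negative, decreases, stays non-negative
prove(Implies(x == 0, M(x) >= 0))
prove(Implies(C(x), M(x) > M(x + 1)))
prove(Implies(C(x), M(x) >= 0))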
But wait!
You might wonder if I pulled a rabbit out of a hat here. How do we know that the above steps are sufficient? Or that I didn't make a mistake in my coding as I waved my hand over your program and magically translated it to z3-python?
For the first question: There's established research that for traditional imperative program semantics, Hoare-logic style reasoning is sound. Here's a good slide deck to start with: https://www.cl.cam.ac.uk/teaching/1617/HLog+ModC/slides/lecture2.pdf
For the second question: This is where the rubber hits the road. You have to put my argument to peer review, possibly using an established theorem prover to code the whole thing up, and trust that the mechanization is correct. Why3 (https://why3.lri.fr) is a good platform to get started with for this style of reasoning.
Picking the invariant
The trickiest part of this proof is coming up with the right invariant. A "good" invariant is one that's not only true, but one that allows you to prove the result you want. For instance, consider the following invariant:
def I(p):
return True
This invariant is manifestly true for all programs! But if you attempt to run the proofs we had with this version of I, you'll see that the proof won't go through and you'll get a counter-example. (It's quite instructive to do so; see the sketch after this list.) In general, you can:
Pick an "invariant" that's not really enforced by your program, i.e., it doesn't stay true at all times as described above. Hopefully the counter-example you get from the solver will be helpful to identify what goes wrong.
Or, and this is way more likely, the invariant you picked is indeed an invariant of the program, but it is not strong enough to prove the result you want. In this case the counter-example will be less useful, and for complicated programs it can be hard to track down the reason why.
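To see the second failure mode concretely, here is a small sketch using the trivially-true invariant from above (the exact counter-example values z3 prints may differ):
from z3 import Int, And, Not, Implies, BoolVal, prove

x = Int('x')

def C(p):
    return And(p >= 0, p < 10)

def I(p):                          # the trivially-true "invariant"
    return BoolVal(True)

# Obligations (1) and (2) still go through...
prove(Implies(x == 0, I(x)))
prove(Implies(And(I(x), C(x)), I(x + 1)))
# ...but (3) now fails: z3 prints a counter-example (e.g. x = -1 or x = 11),
# a state where the invariant holds and the loop has exited, yet x != 10.
prove(Implies(And(I(x), Not(C(x))), x == 10))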
An invariant that allows you to prove the final result is called an "inductive invariant." The process of "improving" the invariant to get to a proof is known as "strengthening the invariant." There's a plethora of research on all of these topics, especially in the realm of model-checking. A good paper to read on these topics is Bradley's "Understanding IC3": https://theory.stanford.edu/~arbrad/papers/Understanding_IC3.pdf
Summary
The strategy outlined here is a "meta"-level proof: It's equivalent to a paper-proof which identified the proof goals, and shipped them to an SMT solver (z3 in this case), to finish the job. This is common practice in modern day proofs, i.e., coming up with sub-goals and using an automated-solver to discharge them. Theorem-provers like ACL2, Isabelle, Coq, etc. mechanize the "coming up with subgoals" part to a large extent, making sure the whole proof is sound with respect to a trusted (but typically very small) set of core-axioms. (This is the so called LCF methodology, see https://www.cl.cam.ac.uk/~jrh13/slides/manchester-12sep01/slides.pdf for a nice slide-deck on it.)
Hopefully this answer is detailed enough to get you started in program verification with SMT solvers. Perhaps it's more than what you asked for, but the rule of thumb is that there is no free lunch in verification. It is a lot of work! However, these days you can get pretty close to push-button reasoning (at least for certain kinds of programs) with the advances in automated theorem provers, SMT solvers, and other frameworks that many people have built over the years. Best of luck, but be warned that program verification remains a holy grail of computer science after almost seven decades of work on it. Things keep getting better and easier, but there's much more work to be done in the field.

How to break the gem5 executable in GDB at the Nth instruction?

Using --debug-flags ExecAll tracing, I found that there is a bug at the Nth instruction, which happens at the Nth line of the log.
Is there an easy way to break specifically at that instruction to debug it in GDB and view gem5's internal state?
The simplest approach is to use --debug-break as shown at: schedBreak(<tick>) gdb debugging function not working
That makes gem5 raise a signal at a given simulation tick, which GDB stops at by default. You can determine which simulation time corresponds to your instruction by looking at an --debug-flags ExecAll trace beforehand.
You will usually want to break on a tick rather than on the Nth instruction, in particular because gem5 simulates the instruction pipeline, and therefore there can be multiple instructions in flight at the same time.
Alternatively, if your point of interest in GDB can see the ExecutionContext object, which is often called xc, you can just add a conditional breakpoint like:
b MyClass::myFunction if xc->numInsts.data()->value() == <n> - 2
The -2 is needed because this index is zero-based, and because the count increments after instruction execution.
You can also find the tick time rather than instruction count with:
p xc->cpu->tick
or from the other commonly available ThreadContext object with:
p tc->baseCpu->tick
You generally want to do this from the ::tick() function of your CPU model of interest.
For AtomicSimpleCPU::tick() you could also break just before the second instruction with:
b AtomicSimpleCPU::tick if (*threadInfo[curThread]).numInst == 1
Or to break at a given tick, say 1000 (500 is the one before it):
b AtomicSimpleCPU::tick if tick == 500
Two other important break locations are at the main event loop when an event is executed:
b EventQueue::serviceOne() if head->when() == 1000
and the event scheduling target point:
b EventQueue::schedule if when == <target-time>
b EventQueue::reschedule if when == <target-time>
or for the time of schedule itself:
b EventQueue::schedule if _curTick == 1000
b EventQueue::reschedule if _curTick == 1000
Together with reverse debugging and:
--debug-flags Event
these event breakpoints will actually allow you to understand what gem5 is doing.
Note however that conditional breakpoints significantly slow down simulation unfortunately... arghh.
Another useful technique to have in mind is that you can do a run that stops shortly after the point of interest with:
-m <tick>
and then reverse debug back to the exact point of interest, possibly conditionally, since now you will be close to the point of interest, so the performance loss will not be a huge problem. You can then just keep going back to the root cause.
Tested in gem5 9f247403e558977738b5911a45e5776afff87b1a.

Unable to interrupt psychopy script, event.getKeys() always empty

I'm new to psychopy and python. I'm trying to program a way to quit a script (that I didn't write), by pressing a key for example. I've added this to the while loop:
while n < total:
    start = time.clock()
    if len(event.getKeys()) > 0:
        break
    # Another while loop here that ends when time is past a certain duration after 'start'.
And it's not working: it doesn't register any key presses. So I'm guessing key presses are only registered during specific times. What are those times? What is required to register key presses? That loop is extremely fast, sending signals every few milliseconds, so I can't just add wait commands in the loop.
If I could just have a parallel thread checking for a key press that would be good too, but that sounds complicated to learn.
Thanks!
Edits: The code runs as expected otherwise (in particular, no errors). "core" and "event" are imported. There aren't any other "event" commands of any kind that would affect the key-press log.
Changing the rest of the loop content to something that includes core.wait statements makes it work. So for anybody else having this difficulty, my original guess was correct: key presses are not registered during busy times (i.e. in my case a while statement that constantly checks the time), or possibly only during specific busy times... Perhaps someone with more knowledge can clarify.
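For reference, here is a minimal sketch of the kind of loop that ended up working for me (the trial count and the one-second inner duration are just placeholders, not my actual experiment code):
from psychopy import core, event, visual

win = visual.Window()              # the window showing the stimuli
n, total = 0, 10                   # placeholder trial counter
while n < total:
    timer = core.Clock()
    if event.getKeys():            # quit if any key was pressed
        break
    # instead of busy-waiting on the clock, sleep in small slices:
    while timer.getTime() < 1.0:
        core.wait(0.005)           # core.wait() pumps the event loop, so presses register
    n += 1
win.close()
core.quit()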
....So I'm guessing key presses are only registered during specific
times. What are those times? What is required to register key
presses?....
To try and answer your specific question, the psychopy API functions/methods that cause keyboard events to be registered are (now updated to list literally every psychopy 1.81 API function that does this):
event.waitKeys()[1]
event.clearEvents()[1]
event.getKeys()[2]
event.Mouse.getPressed()
win.flip()
core.wait()
visual.Window.dispatchAllWindowEvents()
1: These functions also remove all existing keyboard events from the event list. This means that any future call to a function like getKeys() will only return a keyboard event if it occurred after the last time one of these functions was called.
2: If keyList=None, this does the same as 1; otherwise it removes from the key event list only the keys that are within the keyList kwarg.
Note that one of the times keyboard events are 'dispatched' is in the event.getKeys() call itself. By default, this function also removes any existing key events.
So, without seeing the full source of the inner loop that you mention, it seems highly likely that event.getKeys() never returns a key event because key events are being consumed by some other call within the inner loop, so the chance that an event is still in the key list when the outer getKeys() is called is very, very low.
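As an illustration of that consumption effect, consider this small sketch (the inner loop is hypothetical and only stands in for whatever the real script does; any of the buffer-clearing calls listed above would cause the same problem):
from psychopy import core, event, visual

win = visual.Window()
while True:
    if event.getKeys():            # the outer check from the question...
        break                      # ...never fires, because the key events are gone
    timer = core.Clock()
    while timer.getTime() < 1.0:   # hypothetical inner loop
        win.flip()                 # dispatches pending keyboard events into the buffer
        event.clearEvents()        # ...and then throws them all away again
win.close()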
Update in response to OP's comment on Jonas' test script (I do not have enough rep to add comments to answers yet):
... Strange that you say this ..[jonas example code].. works
and from Sol's answer it would seem it shouldn't. – zorgkang
Perhaps my answer gave the wrong impression; it is intended to provide information that shows exactly why Jonas' example should, and does, work. Jonas' example code works because the only time key events are removed from the event buffer is when getKeys() is called, and any events that are removed are also returned by that call, causing the loop to break.
This is not really an answer. Here's an attempt to minimally reproduce the error. If the window closes on keypress, it's a success. It works for me, so I failed to reproduce it. Does it work for you?
from psychopy import event, visual, core

win = visual.Window()
clock = core.Clock()
while True:
    clock.reset()
    if event.getKeys():
        break
    while clock.getTime() < 1:
        pass
I don't have the time module installed, so I used psychopy.core.Clock() instead but it shouldn't make a difference, unless your time-code ends up in an infinite loop, thus only running event.getKeys() once after a few microseconds.

How to handle GSM buffer on the Microcontroller?

I have a GSM module hooked up to a PIC18F87J11 and they communicate just fine. I can send an AT command from the microcontroller and read the response back. However, I have to know how many characters are in the response so I can have the PIC wait for that many characters. But if an error occurs, the response length might change. What is the best way to handle such a scenario?
For Example:
AT+CMGF=1
Will result in the following response.
\r\nOK\r\n
So I have to tell the PIC to wait for 6 characters. However, if the response is an error message, it would be something like this:
\r\nERROR\r\n
And if I already told the PIC to wait for only 6 characters, then it will miss the rest of the characters; as a result, they might appear the next time I tell the PIC to read the response to a new AT command.
What is the best way to find the end of the line automatically and handle any error messages?
Thanks!
In a single line
There is no single best way, only trade-offs.
In detail
The problem can be divided into two related subproblems.
1. Receiving messages of arbitrary finite length
The trade-offs:
available memory vs implementation complexity;
bandwidth overhead vs implementation complexity.
In the simplest case, the amount of available RAM is not restricted. We just use a buffer wide enough to hold the longest possible message and keep receiving the messages bytewise. Then, we have to determine somehow that a complete message has been received and can be passed to further processing. That essentially means analyzing the received data.
2. Parsing the received messages
Analyzing the data in search of its syntactic structure is, by definition, parsing. And that is where the subtasks are related. Parsing in general is a very complex topic; dealing with it is expensive, both computationally and in implementation effort. It's often possible to reduce the costs if we limit the genericity of the data: the simpler the data structure, the easier it is to parse. And that limitation is called a "transport layer protocol".
Thus, we have to read the data to parse it, and parse the data to read it. This kind of interlocked problem is generally solved with coroutines.
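To make the coroutine idea concrete, here is a minimal sketch in Python rather than PIC C (the framing is simplified to CR/LF-delimited lines, which is an assumption; real AT traffic has more cases): the reader pushes bytes in as they arrive, and the parser hands back a complete line whenever it has assembled one.
def at_line_parser():
    # Coroutine: receives one byte at a time (as an int), yields a complete
    # line (without the CR/LF framing) whenever one has been assembled.
    line = bytearray()
    result = None
    while True:
        byte = yield result        # suspend until the reader sends the next byte
        result = None
        if byte == ord('\n'):
            if line:               # ignore the blank line between CR/LF pairs
                result = line.decode()
                line = bytearray()
        elif byte != ord('\r'):
            line.append(byte)

# The "reader" side drives the parser byte by byte:
parser = at_line_parser()
next(parser)                       # prime the coroutine
for b in b'\r\nOK\r\n':
    completed = parser.send(b)
    if completed is not None:
        print(completed)           # prints: OK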
In your case we have to deal with the AT protocol. It is old and it is human-oriented by design. That's bad news, because parsing it correctly can be challenging despite how simple it can look sometimes. It has some terribly inconvenient features, such as the '+++' escape timing!
Things become worse when you're short of memory. In such a situation we can't defer parsing until the end of the message, because it very well might not even fit in the available RAM -- we have to parse it chunkwise.
...And we are not even close to opening TCP connections or making calls! You'll meet some unexpected troubles there as well, such as the dreaded "unsolicited result codes". The matter is broad enough for a whole book. Please have a look at least here:
http://en.wikibooks.org/wiki/Serial_Programming/Modems_and_AT_Commands. The wikibook discusses many more problems with the Hayes protocol and describes some approaches to solving them.
Let's break the problem down into some layers of abstraction.
At the top layer is your application. The application layer deals with the response message as a whole and understands the meaning of a message. It shouldn't be mired down with details such as how many characters it should expect to receive.
The next layer is responsible for framing a message from the stream of characters. Framing means extracting the message from a stream by identifying the beginning and end of a message.
The bottom layer is responsible for reading individual characters from the port.
Your application could call a function such as GetResponse(), which implements the framing layer. And GetResponse() could call GetChar(), which implements the bottom layer. It sounds like you've got the bottom layer under control and your question is about the framing layer.
A good pattern for framing a stream of characters into a message is to use a state machine. In your case the state machine includes states such as BEGIN_DELIM, MESSAGE_BODY, and END_DELIM. For more complex serial protocols other states might include MESSAGE_HEADER and MESSAGE_CHECKSUM, for example.
Here is some very basic code to give you an idea of how to implement the state machine in GetResponse(). You should add various types of error checking to prevent a buffer overflow and to handle dropped characters and such.
#include <stdbool.h>

/* Framing states for the receive state machine. */
enum { BEGIN_DELIM1, BEGIN_DELIM2, MESSAGE_BODY, END_DELIM };

char GetChar(void);   /* bottom layer: blocking read of one character from the port */

void GetResponse(char *message_buffer)
{
    unsigned int state = BEGIN_DELIM1;
    bool is_message_complete = false;

    while (!is_message_complete)
    {
        char c = GetChar();
        switch (state)
        {
        case BEGIN_DELIM1:            /* waiting for the leading '\r' */
            if (c == '\r')
                state = BEGIN_DELIM2;
            break;
        case BEGIN_DELIM2:            /* waiting for the leading '\n' */
            if (c == '\n')
                state = MESSAGE_BODY;
            break;
        case MESSAGE_BODY:            /* accumulate the body until the trailing '\r' */
            if (c == '\r')
                state = END_DELIM;
            else
                *message_buffer++ = c;
            break;
        case END_DELIM:               /* waiting for the trailing '\n' */
            if (c == '\n')
                is_message_complete = true;
            break;
        }
    }
    *message_buffer = '\0';           /* terminate the message string */
}

Elm - How to modify the parameterisation of one signal based on another signal

How can I parameterise one signal based on another signal?
e.g. Suppose I wanted to modify the fps based on the x-position of the mouse. The types are:
Mouse.x : Signal Int
fps : number -> Signal Time
How could I make Elm understand something along the lines of this pseudocode:
fps (Mouse.x) : Signal Time
Obviously, lift doesn't work in this case. I think the result would be Signal (Signal Time) (but I'm still quite new to Elm).
Thanks!
Preamble
fps Mouse.x
This results in a type error: fps requires a number, not a Signal Int.
lift fps Mouse.x : Signal (Signal Time)
You are correct there. As CheatX's answer mentions, you cannot use these "nested signals" in Elm.
Answer to your question
It seems like you're asking for something that doesn't exist yet in the standard libraries. If I understand your question correctly, you would like a time (or fps) signal whose timing can be changed dynamically. Something like:
dynamicFps : Signal Int -> Signal Time
Using the built-in functions like lift does not give you the ability to construct such a function yourself from a function of type Int -> Signal Time.
I think you have three options here:
Ask to have this function added to the Time library on the mailing-list. (The feature request instructions are a little bloated for a request of such a function so you can skip stuff that's not applicable)
Work around the problem, either from within Elm or in JavaScript, using Ports to connect to Elm.
Find a way to not need a dynamically changing Time signal.
I advise option 1. Option 3 is sad: you should be able to do what you asked in Elm. Option 2 is perhaps not a good idea if you're new to Elm. Option 1 is not a lot of work, and the folks on the mailing-list don't bite ;)
To elaborate on option 2, should you want to go for that:
If you specify an outgoing port for Signal Int and an incoming port for Signal Time you can write your own dynamic time function in JavaScript. See http://elm-lang.org/learn/Ports.elm
If you want to do this from within Elm, it'll take an uglier hack:
dynamicFps frames =
  let start = (0,0)
      time = every millisecond -- this strains your program enormously
      input = (,) <~ frames ~ time
      step (frameTime,now) (oldDelta,old) =
        let delta = now - old
        in  if (oldDelta,old) == (0,0)
              then (frameTime,now) -- this is to skip the (0,0) start
              else if delta * frameTime >= second
                then (delta,now)
                else (0,old)
  in  dropIf ((==) 0) 0 <| fst <~ foldp step start input
Basically, you remember an absolute timestamp, ask for the current time as fast as you can, and check whether the time between the remembered timestamp and now is big enough to fit the timeframe you want. If so, you send out that time delta (fps gives time deltas) and remember now as the new timestamp. Because foldp sends out everything it remembers, you get both the new delta and the new time, so using fst <~ you keep only the delta. But the input time is (likely) much faster than the timeframe you want, so you also get a lot of (0,old) from foldp. That's why there is a dropIf ((==) 0).
Nested signals are explicitly forbidden by Elm's type system [part 3.2 of this paper].
As far as I understand FRP, nested signals are only useful when some kind of flattening is provided (a monadic 'join' function, for example). And that operation is hard to implement without keeping an entire signal history.