SPIN: how can I interact with this model checker by passing parameters from outside?

I'm using the SPIN Model Checker, and I'd like to know whether SPIN is a closed box or not.
Can it be driven by passing parameters from an upper level (e.g. Python or a batch script)?
I'm interested in the Simulation mode of SPIN, not the Verification mode.
Finally, I'd like to know whether (still in simulation mode) there is any way to write the simulation output to a physical file, without using embedded C code.
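For what it's worth, here is a minimal sketch of the kind of upper-level control the question asks about, assuming `spin` is on the PATH and the model lives in `model.pml` (the macro name LIMIT is just an example):

```python
# Drive a SPIN random simulation from Python and capture its output.
# -nN seeds the random-number generator; -DNAME=value is handed to the
# C preprocessor, so the model can be parameterized from outside.
import subprocess

result = subprocess.run(
    ["spin", "-n123", "-DLIMIT=5", "model.pml"],
    capture_output=True, text=True, check=True,
)

# Write the simulation trace to a physical file -- no embedded C needed.
with open("simulation_output.txt", "w") as f:
    f.write(result.stdout)
```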


Is Cmd.map the right way to split an Elm SPA into modules?

I'm building a single-page application in Elm and was having difficulty deciding how to split my code into files.
I ended up using one module per page, with Main.elm converting the Html and Cmd emitted by each page using Cmd.map and Html.map.
My issue is that the documentation for both Cmd.map and Html.map says:
This is very rarely useful in well-structured Elm code, so definitely read the section on structure in the guide before reaching for this!
I checked the only two large apps I'm aware of:
elm-spa-example uses Cmd.map (https://github.com/rtfeldman/elm-spa-example/blob/cb32acd73c3d346d0064e7923049867d8ce67193/src/Main.elm#L279)
I was not able to figure out how https://github.com/elm/elm-lang.org deals with the issue.
Also, both answers to this Stack Overflow question suggest using Cmd.map without second thoughts.
Is Cmd.map the "right" way to split a single-page application into modules?
I think sometimes you just have to do what's right for you. I used the Cmd.map/Sub.map/Html.map approach for an application I wrote that had three "pages": Initializing, Editing, and Reporting.
I wanted to make each of these pages its own module: they were relatively complicated, each had a fair number of messages that are relevant only to that page, and it's easier to reason about each page independently in its own context.
The downside is that the compiler won't prevent you from receiving the wrong message for a given page, leading to a runtime error. For example, if the application receives an Editing.Save while it is on the Reporting page, what is the correct behavior? For my specific implementation, I just log it to the console and move on; this was good enough for me (and it never happened anyway). Other options I considered include displaying a nasty error page to indicate that something horrible has happened (a BSOD, if you will), or simply resetting/reinitializing the entire application.
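For reference, the wiring looks roughly like this; a sketch only, where Editing and Reporting stand in for your page modules (each assumed to expose its own Model, Msg, and update):

```elm
import Editing
import Reporting

type Page
    = EditingPage Editing.Model
    | ReportingPage Reporting.Model

type Msg
    = EditingMsg Editing.Msg
    | ReportingMsg Reporting.Msg

update : Msg -> Page -> ( Page, Cmd Msg )
update msg page =
    case ( msg, page ) of
        ( EditingMsg subMsg, EditingPage subModel ) ->
            let
                ( newSubModel, subCmd ) =
                    Editing.update subMsg subModel
            in
            ( EditingPage newSubModel, Cmd.map EditingMsg subCmd )

        ( ReportingMsg subMsg, ReportingPage subModel ) ->
            let
                ( newSubModel, subCmd ) =
                    Reporting.update subMsg subModel
            in
            ( ReportingPage newSubModel, Cmd.map ReportingMsg subCmd )

        _ ->
            -- Wrong message for the current page: ignore and move on.
            ( page, Cmd.none )
```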
An alternative is to use the effect pattern as described extensively in this discourse post.
The core of this approach is that:
The extended Effect pattern used in this application consists in defining an Effect custom type that can represent all the effects that init and update functions want to produce.
And the main benefits:
All the effects are defined in a single Effect module, which acts as an internal API for the whole application and is guaranteed to list every possible effect.
Effects can be inspected and tested, unlike Cmd values. This makes it possible to test all of the application's effects, including simulated HTTP requests.
Effects can represent a modification of top-level model data, like the Session when logging in, or the current page when a URL change is requested by a subpage update function.
All the update functions keep a clean and concise Msg -> Model -> ( Model, Effect Msg ) signature.
Because Effect values carry the minimum information required, some parameters like the Browser.Navigation.key are needed only in the effects' perform function, which frees the developer from passing them to functions all over the application.
A single NoOp or Ignored String can be used for the whole application.
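As a rough illustration, an Effect module in this style might look like the following sketch (the constructors here are made up for the example, not taken from the post):

```elm
module Effect exposing (Effect(..), perform)

import Browser.Navigation as Nav

-- One custom type enumerates every effect the app can produce.
type Effect msg
    = None
    | Batch (List (Effect msg))
    | PushUrl String

-- Only perform ever sees the Nav.Key; update functions never need it.
perform : Nav.Key -> Effect msg -> Cmd msg
perform key effect =
    case effect of
        None ->
            Cmd.none

        Batch effects ->
            Cmd.batch (List.map (perform key) effects)

        PushUrl url ->
            Nav.pushUrl key url
```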

Verilog: is assigning to a module input from within the module itself okay?

I just encountered a case where Verilog module inputs were being assigned to from within the module itself!
I thought for sure this would error out any Verilog simulator, but no, one (at least) lets this pass!
How can this be?!
Isn't this just inviting an "X" tragedy, as soon as something outside the module assigns a different value to the input?
Am I REALLY missing something here?
In case it matters, the module in question came as part of a behavioral simulation library provided to us by our foundry.
The Verilog language does not have any rules about the flow of data based on port direction. The SystemVerilog LRM has a section, 23.3.3.1 Port coercion, that explicitly describes places where inputs can be coerced to outputs and vice versa. However, synthesis tools have coding requirements that prevent multiple drivers on the same signal, so if there are drivers from both inside and outside the instantiated module, you will get synthesis errors.
SystemVerilog has a number of coding styles that can catch multiple drivers on a signal as part of the simulation flow, so you don't have to wait until you get to synthesis, or use a separate linting tool.
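A minimal sketch of the situation (module names are made up): a plain Verilog simulator may accept the internal driver, but once something outside drives the same net, the two drivers resolve to x.

```verilog
module child (input wire a);
  assign a = 1'b0;   // internal driver on the input port --
                     // accepted by at least some simulators
endmodule

module top;
  wire a;
  assign a = 1'b1;   // external driver on the same net
  child u0 (.a(a));  // two drivers: 'a' resolves to 1'bx in simulation,
endmodule            // and synthesis flags the multiple drivers
```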

GNU Radio Companion: how can I convert a float stream so it can be printed to the console?

I have an input to a Threshold block, which I have verified is working with a QT GUI number sink.
I want to print the output of the Threshold block to console, preferably using the Message Debug block.
However, the output of the Threshold block is a float stream, which doesn't match the input of the Message Debug block.
Is there a way to convert a float stream into a message, or am I going down the wrong rabbit hole?
My general aim is: when the input exceeds a certain threshold, print to console. Another program will monitor the console and when there is a print out, this triggers another action. I'm also not sure how to output ONLY when the threshold is exceeded, but one problem at a time.
I'm also not sure how to output ONLY when the threshold is exceeded, but one problem at a time.
Yes, but that problem is a blocker: standard output is shared across everything in the GNU Radio process, so you can't generally guarantee exclusive access to it.
Let's not go down that route!
Instead, use something designed for exactly this kind of thing decades ago in UNIX:
Named pipes. These are FIFOs that you can handle like files.
So, use a File Sink to write to a FIFO, and pipe that FIFO into your other program.
It's really simple:
mkfifo /path/to/where/you/want/to/named/pipe
add a file sink that writes to /path/to/where/you/want/to/named/pipe
either make your other software open that /path/to/where/you/want/to/named/pipe, or do something like other_program < /path/to/where/you/want/to/named/pipe (which makes the standard input, i.e. the console-equivalent input, of your other program whatever you write to that file sink)
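Putting the steps together (the path and program name below are placeholders):

```sh
# 1. Create the named pipe once:
mkfifo /tmp/threshold.fifo

# 2. In GRC, point a File Sink at /tmp/threshold.fifo
#    (enabling the File Sink's "Unbuffered" option, if present,
#    makes samples appear immediately).

# 3. Start the consumer; the FIFO becomes its standard input:
other_program < /tmp/threshold.fifo
```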

Simulating multiple instances of an embedded processor

I'm working on a project which will entail multiple devices, each with an embedded (ARM) processor, communicating. One development approach which I have found useful in the past with projects that entailed only a single embedded processor was to develop the code using Visual Studio, divided into three portions:
Main application code (in unmanaged C/C++ [see note])
I/O-simulating code (C/C++) that runs under Visual Studio
Embedded I/O code (C), which Visual Studio is instructed not to build, runs on the target system. Previously this code was for the PIC; for most future projects I'm migrating to the ARM.
Feeding the embedded compiler/linker the code from parts 1 and 3 yields a hex file that can run on the target system. Running parts 1 and 2 together yields code which can run on the PC, with the benefit of better debugging tools and more precise control over I/O behavior (e.g. I can make the simulation code introduce certain types of random hiccups more easily than I can induce controlled hiccups on real hardware).
Target code is written in C, but the simulation environment uses C++ so as to simulate I/O registers. For example, I have a PortArray data structure; the header file for the embedded compiler includes a line like `unsigned char LATA @ 0xF89;` and my header file for simulation includes `#define LATA _IOBIT(f89,1)`, which in turn invokes a macro that accesses a suitable property of an I/O object, so a statement like `LATA |= 4;` will read the simulated latch, "or" the read value with 4, and write the new value. To make this work, the target code has to compile under C++ as well as under C, but this mostly isn't a problem. The biggest annoyance is probably with enum types (which behave as integers in C, but have to be coaxed to do so in C++).
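To illustrate the trick, here is one way the simulation-side header could work. This is a hypothetical sketch (SimPort, io_space and friends are invented for the example), not the asker's actual PortArray code:

```cpp
#include <cstdint>
#include <cstdio>
#include <map>

// Simulated I/O space: register address -> current value.
static std::map<std::uint16_t, std::uint8_t> io_space;

// Proxy object: reads and writes forward to the simulated I/O space,
// so target-code statements like `LATA |= 4;` compile unchanged.
struct SimPort {
    std::uint16_t addr;
    operator std::uint8_t() const { return io_space[addr]; }          // read
    const SimPort& operator=(std::uint8_t v) const {                  // write
        io_space[addr] = v;
        return *this;
    }
    const SimPort& operator|=(std::uint8_t v) const {                 // RMW
        io_space[addr] |= v;
        return *this;
    }
};

#define LATA (SimPort{0xF89})

int main() {
    LATA = 1;
    LATA |= 4;   // read the simulated latch, OR with 4, write back
    std::printf("LATA = 0x%02X\n", (unsigned)(std::uint8_t)LATA);  // 0x05
}
```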
Previously, I've used two approaches to making the simulation interactive:
Compile and link a DLL with target-application and simulation code, and have VB code in the same project which interacts with it.
Compile the target-application code and some simulation code to an EXE with one instance of Visual Studio, and use a second instance of Visual Studio for the simulation UI. Have the two programs communicate via TCP, so nearly all "real" I/O logic is in the simulation program. For example, the aforementioned `LATA |= 4;` would send a "read port 0xF89" command over the TCP connection, get the response, process the received value, and send a "write port 0xF89" command with the result.
I've found the latter approach to run a tiny bit slower than the former in some cases, but it seems much more convenient for debugging, since I can suspend execution of the unmanaged simulation code while the simulation UI remains responsive. Indeed, for simulating a single target device at a time, I think the latter approach works extremely well. My question is how I should best go about simulating a plurality of target devices (e.g. 16 of them).
The difficulty I have is figuring out how to make each simulated instance get its own set of global variables. If I were to compile to an EXE and run one instance of the EXE for each simulated target device, that would work, but I don't know any practical way to maintain debugger support while doing that. Another approach would be to arrange the target code so that everything would compile as one module joined together via #include. For simulation purposes, everything could then be wrapped into a single C++ class, with global variables turning into class-instance variables. That would be a bit more object-oriented, but I really don't like the idea of forcing all the application code to live in one compiled and linked module.
What would perhaps be ideal would be if the code could load multiple instances of the DLL, each with its own set of global variables. I have no idea how to do that, however, nor do I know how to make things interact with the debugger. I don't think it's really necessary that all simulated target devices actually execute code simultaneously; it would be perfectly acceptable for simulation instances to use cooperative multitasking. If there were some way of finding out what range of memory holds the global variables, it might be possible to have the 'task-switch' method swap out all of the global variables used by the previously-running instance and swap in the contents applicable to the instance being switched in. Although I'd know how to do that in an embedded context, I have no idea how to do it on the PC.
Edit
My questions would be:
Is there any nicer way to allow simulation logic to be paused and examined in VS2010 debugger, while keeping a responsive UI for the simulator front-end, than running the simulator front end and the simulator logic in separate instances of VS2010, if the simulation logic must be written in C and the simulation front end in managed code? For example, is there a way to tell the debugger that when a breakpoint is hit, some or all other threads should be allowed to keep running while the thread that had hit the breakpoint sits paused?
If the bulk of the simulation logic must be source-code compatible with an embedded system written in C (so that the same source files can be compiled and run for simulation purposes under VS2010, and then compiled by the embedded-systems compiler for use in real hardware), is there any way to have the VS2010 debugger interact with multiple simulated instances of the embedded device? Assume performance is not likely to be an issue, but the number of instances will be large enough that creating a separate project for each instance would likely be annoying in the absence of any way to automate the process. I can think of three somewhat-workable approaches, but don't know how to make any of them work really nicely. There's also an approach which would be better if it's possible, but I don't know how to make it work.
Wrap all the simulation code within a single C++ class, such that what would be global variables in the target system become class members. I'm leaning toward this approach, but it would seem to require everything to be compiled as a single module, which would annoyingly affect the design of the target system code. Is there any nice way to have code access class instance members as though they were globals, without requiring all functions using such instances to be members of the same module?
Compile a separate DLL for each simulated instance (so that e.g. if I want to run up to 16 instances, I would include 16 DLL's in the project, all sharing the same source files). This could work, but every change to the project configuration would have to be repeated 16 times. Really ugly.
Compile the simulation logic to an EXE, and run an appropriate number of instances of that EXE. This could work, but I don't know of any convenient way to do things like set a breakpoint common to all instances. Is it possible to have multiple running instances of an EXE attached to a single debugger instance?
Load multiple instances of a DLL in such a way that each instance gets its own global variables, while still being accessible in the debugger. This would be nicest if it were possible, but I don't know any way to do so. Is it possible? How? I've never used AppDomains, but my intuition would suggest that might be useful here.
If I use one VS2010 instance for the front-end, and another for the simulation logic, is there any way to arrange things so that starting code in one will automatically launch the code in the other?
I'm not particularly committed to any single simulation approach; while it might be nice to know if there's some way of slightly improving the above, I'd also like to know of any other alternative approaches that could work even better.
I would think that you'd still have to run 16 copies of your main application code, but that your TCP-based I/O simulator could keep a different set of registers/state for each TCP connection that comes in.
Instead of a bunch of global variables, put them into a single structure that encompasses the I/O state of a single device. Either spawn off a new thread for each socket, or just keep a list of active sockets and dedicate a single instance of the state structure for each socket.
The simulators I have seen that handle multiple instances of the instruction set/processor are designed that way: there is usually a structure that contains a complete set of registers, and a pointer to (or an array of) these structures is used to multiply them into multiple instances of the processor.
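Concretely, that might look like the following sketch (the names are hypothetical; the register address comes from the question):

```c
#include <stdint.h>

#define MAX_DEVICES 16

/* One struct holds everything that would otherwise be a global:
 * the complete register/state set of a single simulated device. */
typedef struct {
    uint8_t lata;    /* simulated latch at 0xF89, as in the question */
    /* ... the rest of one device's registers and state ... */
} device_state_t;

static device_state_t devices[MAX_DEVICES];

/* Called by the TCP server with the index of whichever connection
 * sent the "read port" / "write port" command. */
uint8_t sim_read_port(int dev, uint16_t addr)
{
    if (addr == 0xF89)
        return devices[dev].lata;
    return 0;
}

void sim_write_port(int dev, uint16_t addr, uint8_t value)
{
    if (addr == 0xF89)
        devices[dev].lata = value;
}
```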

Restricting Valgrind to a specific function

I have a big program to run. Under Valgrind it takes hours and hours. I've heard there is a way to run Valgrind on a specific function in the program, while the rest of the program executes normally (outside the Valgrind environment).
Can anybody help me with this? I tried searching the internet; maybe I'm missing the right term to search for.
It all depends on what tool you're wanting to use. For callgrind (the profiler in valgrind) there is an option --toggle-collect=function to allow you to collect information inside a particular function and all its children.
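For example (the binary and function names here are hypothetical):

```sh
# Collect callgrind events only inside my_function and its children:
valgrind --tool=callgrind --toggle-collect=my_function ./my_program

# Inspect the collected data (the PID suffix varies per run):
callgrind_annotate callgrind.out.<pid>
```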
However if the tool you're interested in is memcheck (for capturing leaks / memory errors) then there is no available command line option.
Googling "valgrind profile specific function only" and go "I feel lucky"
In addition to enabling instrumentation, you must also enable event collection for the parts of your program you are interested in. By default, event collection is enabled everywhere. You can limit collection to a specific function by using --toggle-collect=function. This will toggle the collection state on entering and leaving the specified functions. When this option is in effect, the default collection state at program start is "off". Only events happening while running inside of the given function will be collected. Recursive calls of the given function do not trigger any action.
More here