I'm new to Verilog. Is there a difference between writing a test bench for a pipelined module and writing one for an ordinary module?
I just need a simple example clarifying the expected difference between the test bench code for a pipelined module and for a non-pipelined module, please. Note that the module I'm testing is pipelined, not the test bench.
If you only want to verify the behaviour of the pipelined module as a whole, you could just build a simple UVM-based testbench architecture, like the example in the link: Simple UVM Testbench Example.
If you want to verify the connections between the internal components of the pipeline structure, you could build a Universal Verification Component (UVC) for each pipeline stage and a UVM verification environment that will include all UVCs.
In any case, if you want to verify the pipelined module as a black box, knowing only the expected responses for the desired inputs, it is about the same as verifying a non-pipelined module.
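If a concrete, non-UVM sketch helps to see that difference, the test bench below checks a hypothetical 3-stage pipelined adder; the module name pipe_add, its ports, and the LATENCY value are assumptions for illustration, not from the question.

    // Sketch only: pipe_add, its ports, and LATENCY are assumed.
    module tb_pipe_add;
      localparam LATENCY = 3;

      logic       clk = 0;
      logic [7:0] a, b;
      logic [8:0] sum;
      logic [8:0] expected[$];   // scoreboard: one entry per in-flight input

      pipe_add dut (.clk(clk), .a(a), .b(b), .sum(sum));

      always #5 clk = ~clk;

      initial begin
        repeat (50) begin
          @(negedge clk);
          // Check first: the result of the input driven LATENCY cycles ago
          // should be on the output now. For a non-pipelined DUT the compare
          // would use the inputs driven in the same cycle instead.
          if (expected.size() == LATENCY)
            if (sum !== expected.pop_front())
              $error("sum mismatch: got %0d", sum);
          // Then drive a new random input and remember what to expect.
          a = $urandom_range(0, 255);
          b = $urandom_range(0, 255);
          expected.push_back(9'(a) + b);
        end
        $finish;
      end
    endmodule

The only real difference from a non-pipelined test bench is the LATENCY-deep queue: stimulus generation and black-box checking are otherwise identical.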
I'm working on developing some Behavior Driven Development (BDD) style tests using pytest-bdd. We want to reuse the same feature files and more or less the same step definitions for both stubbed and live calls to a third-party API, i.e. we want to reuse test code for integration and end-to-end testing.
I'm wondering whether there is a convention for alternating between mocked and real calls in pytest-bdd or pytest.
This question is similar: Running pytest tests against multiple backends? Its answer suggests adding a parser option with a pytest_addoption hook placed in the top-level conftest.py.
It looks like a good approach to selecting a stubbed or live API call is to add a parser option with a pytest_addoption hook. Conditional logic then needs to check that option in the relevant tests or fixtures, as sketched below.
This answer to a similar question is the source for this approach and has more detail: https://stackoverflow.com/a/50686439/961659
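A minimal sketch of that approach, assuming a hypothetical --live-api flag, an api fixture, and StubApi/LiveApi placeholder classes (none of these names come from pytest, pytest-bdd, or the linked answer):

    # conftest.py -- sketch only
    import pytest


    class StubApi:
        """Canned responses so scenarios run without network access."""
        def get_user(self, user_id):
            return {"id": user_id, "name": "stubbed user"}


    class LiveApi:
        """Thin wrapper around the real third-party client (details omitted)."""
        def get_user(self, user_id):
            raise NotImplementedError("call the real API here")


    def pytest_addoption(parser):
        parser.addoption(
            "--live-api",
            action="store_true",
            default=False,
            help="run the BDD scenarios against the real third-party API",
        )


    @pytest.fixture
    def api(request):
        """Return the live client when --live-api is given, otherwise the stub."""
        if request.config.getoption("--live-api"):
            return LiveApi()
        return StubApi()

Step definitions that take the api fixture then work unchanged in both modes: plain pytest runs against the stub, while pytest --live-api exercises the real service.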
In Rust, is there any way to execute a teardown function after all tests have been run (i.e. at the end of cargo test) using the standard testing library?
I'm not looking to run a teardown function after each test; that has already been discussed in these related posts:
How to run setup code before any tests run in Rust?
How to initialize the logger for integration tests?
These discuss ideas to run:
setup before each test
teardown after each test (using std::panic::catch_unwind)
setup before all tests (using std::sync::Once)
One workaround is a shell script that wraps around the cargo test call, but I'm still curious if the above is possible.
I'm not sure there's a way to have a global ("session") teardown with Rust's built-in testing features; previous inquiries seem to have yielded little aside from "maybe a build script". Third-party testing frameworks (e.g. shiny or stainless) might have that option, though; it might be worth looking into their exact capabilities.
Alternatively, if nightly is suitable, there's a custom test frameworks feature being implemented, which you might be able to use for that purpose.
That aside, you may want to look at macro_rules! to clean up some boilerplate; that's what folks like burntsushi do, e.g. in the regex crate.
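For the macro_rules! idea, here is a minimal sketch that stamps out one #[test] per case so the per-test setup/teardown boilerplate lives in a single place; setup, teardown, and double are hypothetical helpers, not taken from regex or any other crate:

    macro_rules! tests_with_teardown {
        ($($name:ident: $input:expr => $expected:expr,)*) => {
            $(
                #[test]
                fn $name() {
                    setup();
                    // Catch a failing assertion so teardown still runs,
                    // then re-raise the panic for the test harness.
                    let result = std::panic::catch_unwind(|| {
                        assert_eq!(double($input), $expected);
                    });
                    teardown();
                    if let Err(cause) = result {
                        std::panic::resume_unwind(cause);
                    }
                }
            )*
        };
    }

    fn setup() { /* e.g. create a temp dir */ }
    fn teardown() { /* e.g. remove it again */ }
    fn double(x: i32) -> i32 { x * 2 }

    tests_with_teardown! {
        doubles_zero: 0 => 0,
        doubles_three: 3 => 6,
    }

This keeps each test tidy, but it still does not give you a single teardown after the whole suite.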
I have two gen_servers that communicate using gen_tcp sockets.
The first gen_server exports a function that, when called, builds an RFC 791 packet (by calling another function), connects to a socket where the other gen_server is listening for incoming connections, and sends the packet to it.
I tested this in the shell and it is working, but what would be the right tool/way to test such code? Should I use EUnit, or is there a more suitable tool?
Moreover, I would like to know what I should actually test: only the sending part, or also the packet-construction function?
You can definitely write some EUnit tests for each gen_server:
http://www.erlang.org/doc/apps/eunit/chapter.html
http://learnyousomeerlang.com/eunit#the-need-for-tests
You can also have a look at Common Test to test the interaction:
http://www.erlang.org/doc/apps/common_test/basics_chapter.html
http://learnyousomeerlang.com/common-test-for-uncommon-tests#what-is-common-test
Since your implementation strongly depends on the data passed, I would also have a look at the generators provided by QuickCheck Mini or PropEr:
http://www.quviq.com/
http://proper.softlab.ntua.gr/
A brief explanation of how you can improve your unit tests with something like QuickCheck Mini is available here:
http://www.erlang-solutions.com/upload/docs/85/EUG-London-Apr2011.pdf
As a start, I would focus on testing the functions you export (the module interface). You can still add more tests later.
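As an illustration, a minimal EUnit test for the packet-construction side might look like the sketch below; the module name my_packet and its build/1 function are placeholders for whatever your code actually exports:

    %% my_packet_tests.erl -- sketch only
    -module(my_packet_tests).
    -include_lib("eunit/include/eunit.hrl").

    build_returns_ipv4_header_test() ->
        Packet = my_packet:build(<<"payload">>),
        %% An RFC 791 header starts with version 4 in the high nibble
        %% and is at least 20 bytes long.
        <<4:4, _IHL:4, _Rest/binary>> = Packet,
        ?assert(byte_size(Packet) >= 20).

You can run it with rebar3 eunit or with eunit:test(my_packet_tests). from the shell; the socket interaction between the two gen_servers is better covered by a Common Test suite.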
A command-line utility/software could potentially accept many different switches and arguments.
Let's say your software is called CLI and let's say CLI has the following features:
The general syntax of CLI is:
CLI <data structures> <operation> <required arguments> [optional arguments]
<data structures> could be 'matrix', 'complex numbers', 'int', 'floating point', 'log'
<operation> could be 'add', 'subtract', 'multiply', 'divide'
I can't think of any required and optional arguments right now, but let's say your software does support them.
Now you want to test this software, and you wish to test the interface itself, not the logic. Essentially, the interface must return the correct success codes and error codes.
A lot of real-world software still presents a command-line interface with several options. I am curious whether there is any established formal testing methodology for this. One idea I had was to construct a grammar (like EBNF) describing the 'language' of the interface, but I failed to push this idea further. What good is a grammar in this case? How does it enable the generation of many, many combinations?
I am curious to learn about any theoretical models that could be applied to such a problem, or whether anyone here has actually done such testing with satisfying coverage.
There is a command-line tool as part of a product I maintain, and I have a situation that's very similar to what you describe. What I did was employ a unit testing framework and encode each combination of arguments as a test method.
The program is implemented in C#/.NET, so I use Microsoft's testing framework that's built into Visual Studio, but the approach would work with any unit testing framework.
Each test invokes a utility function that starts the process, sends in the input, and collects the output. Then each test is responsible for verifying that the output from the CLI matches what was expected. In some cases there's a family of test cases that can be performed by a single test method with a for loop in it; the loop runs the CLI and checks the output for each iteration.
The set of tests I have does not cover every permutation of arguments, but it covers the 80% case, and I can add new tests if there are ever any defects.
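Here is a sketch of the same pattern using Python's unittest and subprocess instead of C#/MSTest; the executable name CLI, the arguments, and the expected exit codes are placeholders:

    import subprocess
    import unittest


    def run_cli(*args):
        """Start the CLI as a subprocess and return (exit code, stdout, stderr)."""
        proc = subprocess.run(["CLI", *args], capture_output=True, text=True)
        return proc.returncode, proc.stdout, proc.stderr


    class CliInterfaceTests(unittest.TestCase):
        def test_matrix_add_returns_success_code(self):
            code, _out, _err = run_cli("matrix", "add", "a.txt", "b.txt")
            self.assertEqual(code, 0)

        def test_unknown_operation_returns_error_code(self):
            code, _out, _err = run_cli("matrix", "frobnicate")
            self.assertNotEqual(code, 0)


    if __name__ == "__main__":
        unittest.main()

Each test method documents one argument combination and the exit code the interface is expected to return.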
Using a recursive grammar to generate switches is an interesting idea. If you were to try this, then you would need to first write the grammar in such a way that all switches could be used, and then do a random walk of the grammar, as in the sketch below.
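The toy grammar here is made up for the hypothetical CLI described above; the rule names and productions are illustrative only:

    import random

    # Non-terminals are the dictionary keys; everything else is a terminal.
    GRAMMAR = {
        "<command>": [["CLI", "<data structure>", "<operation>", "<optional>"]],
        "<data structure>": [["matrix"], ["complex"], ["int"], ["float"], ["log"]],
        "<operation>": [["add"], ["subtract"], ["multiply"], ["divide"]],
        "<optional>": [[], ["--verbose"], ["--output", "out.txt"]],
    }


    def walk(symbol):
        """Recursively expand a symbol until only terminals remain."""
        if symbol not in GRAMMAR:
            return [symbol]  # terminal token, emit as-is
        tokens = []
        for part in random.choice(GRAMMAR[symbol]):
            tokens.extend(walk(part))
        return tokens


    if __name__ == "__main__":
        for _ in range(5):
            print(" ".join(walk("<command>")))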
This provides an easy method of randomly walking a grammar and outputting the result.
For one of my testing projects, I am working on SQLite testing. I see that they have a regression test suite. Is it an open-source test suite I can use? I am planning either to use an available test suite or to design one myself. My aim is to design a huge number of tests (so using random values and assigning them is the best call).
Any help and views will be appreciated :-)
Saying they have a regression test suite is a bit of an understatement. The SQLite engine itself has 73,000 lines of code; the test suite has 91,378,600 lines of test code. On top of that, the library is C and the tests are Tcl, so you get a lot more bang for your buck from each line of test code.
You can read about SQLite's regression test suite here:
http://www.sqlite.org/testing.html
And you can browse and download the source from the public repository (requires anonymous login):
http://www.sqlite.org/cgi/src/dir?name=test