I am building a UART verification environment.
I have a uart_tx_agent and a uart_rx_agent.
The uart_tx_agent has a dummy BFM which controls the CTS port, and it has no driver.
The uart_rx_agent has a BFM and a driver that runs a uart_sequence.
uart_env.e:
In the uart_env I instantiate the agents as follows:
unit uart_env_u like uvm_env {
    uart_tx_agent: uart_tx_agent_u is instance;
    uart_rx_agent: uart_rx_agent_u is instance;
};

unit uart_tx_agent_u like uvm_agent {
    keep soft active_passive == PASSIVE;
};

unit uart_rx_agent_u like uvm_agent {
    keep soft active_passive == PASSIVE;
};
uart_rx_agent.e:
extend uart_rx_agent_u {
    uart_rx_monitor: RX uart_monitor_u is instance;

    when ACTIVE uart_rx_agent_u {
        uart_bfm: uart_rx_bfm_u is instance;
        driver: uart_driver_u is instance;
    };
};

unit uart_rx_bfm_u like uvm_bfm {
};

sequence uart_sequence using
    item = uart_frame_s,
    created_driver = uart_driver_u;
uart_tx_agent.e:
extend uart_tx_agent_u {
    uart_tx_monitor: TX uart_monitor_u is instance;
    uart_tx_scb: uart_tx_scoreboard_u is instance;

    when ACTIVE uart_tx_agent_u {
        uart_bfm: uart_tx_bfm_u is instance;
    };
};

unit uart_tx_bfm_u like uvm_bfm {
};
In the tx_test I have only one MAIN sequence, vr_ad_sequence, and I do the following:
extend MAIN vr_ad_sequence {
    .....
    .....
    keep uart_env.uart_tx_agent.active_passive == ACTIVE;
    ...
};
In the rx_test I have two MAIN sequences:
extend MAIN uart_sequence {
    ....
    ....
    body() @driver.clock is only {
    };
};

extend MAIN vr_ad_sequence {
    .....
    .....
    keep uart_env.uart_tx_agent.active_passive == ACTIVE;
    ...
};
But it does not work as I expected.
In both tests the agents stay PASSIVE (no BFM or driver is instantiated).
The sequences are generated only after the test starts, so it is too late to set the agent to ACTIVE at that point.
Put this constraint at some higher level in the hierarchy, for example in the env containing the agent.
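In the tx_test file, for example, you can override the agent's soft constraint at the env level (a minimal sketch using the names from the question):

extend uart_env_u {
    -- Hard constraint: overrides the agent's "keep soft active_passive == PASSIVE"
    keep uart_tx_agent.active_passive == ACTIVE;
};

Since active_passive is constrained with keep soft inside the agent, this hard constraint takes precedence, and the when ACTIVE subtype (with its BFM and driver) is created together with the rest of the unit tree.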
You need to distinguish between the unit hierarchy, which is generated at the beginning of the simulation, and structs, which can be generated dynamically at any time.
Hence, constraints in e.g. sequences generated at a later point in the simulation cannot change the unit hierarchy. It is also bad style to try to do so.
My scenario: I have an existing unit test framework with ~3000 individual test cases, built from TEST, TEST_F and TEST_P macros.
Internally, the tested modules use a logger library, and my goal is now to create an individual log file for each test case. To do so, I would like to call a function as a SetUp step for each test case.
Is there a way to register such a function with the framework and have it called automatically?
The obvious solution would be to do the work in a test fixture constructor or SetUp(), but then I'd have to touch every single test case.
I do like the idea of registering a global setup with AddGlobalTestEnvironment(), but as I understand it, that is handled only once per executable.
By the way: I have acceptance tests implemented in Robot Framework, and guess what? I want to repeat the task there...
Thanks for any inspiration!
Christoph
You mentioned:

The obvious solution would be to do the work in a test fixture constructor or SetUp(), but then I'd have to touch every single test case.

If the reason you think you would need to touch every single test case is to set the file name differently, you can combine the SetUp() function with the current_test_info provided by GTest to get the name of each test, and then use that name to create a separate file for each test.
Here is an example:
#include <gtest/gtest.h>

#include <iostream>
#include <string>

// Class for test fixture
class MyTestFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    test_name_ = std::string(
        ::testing::UnitTest::GetInstance()->current_test_info()->name());
    std::cout << "test_name_: " << test_name_ << std::endl;
    // CreateYourLogFileUsingTestName(test_name_);
  }

  std::string test_name_;
};

TEST_F(MyTestFixture, Test1) {
  EXPECT_EQ(this->test_name_, std::string("Test1"));
}

TEST_F(MyTestFixture, Test2) {
  EXPECT_EQ(this->test_name_, std::string("Test2"));
}
Live example here: https://godbolt.org/z/YjzEG3G77
The solution I found in the gtest docs:
#include <gtest/gtest.h>

class TraceHandler : public testing::EmptyTestEventListener {
  // Called before a test starts.
  void OnTestStart(const testing::TestInfo& test_info) override {
    // set the log file name here
  }

  // Called after a test ends.
  void OnTestEnd(const testing::TestInfo& test_info) override {
    // close the log here
  }
};

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  testing::TestEventListeners& listeners =
      testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end. googletest takes the ownership.
  listeners.Append(new TraceHandler);
  return RUN_ALL_TESTS();
}
This way it automatically applies to all tests linked against this main function.
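For the file name itself, OnTestStart can derive a unique name from the TestInfo argument. A minimal sketch of a possible body for TraceHandler::OnTestStart above (SetLogTarget() is a hypothetical stand-in for whatever your logger library actually exposes):

void OnTestStart(const testing::TestInfo& test_info) override {
  // test_suite_name() plus name() identify the test uniquely within a run
  // (older googletest releases call the first accessor test_case_name()).
  const std::string file_name = std::string(test_info.test_suite_name())
      + "." + test_info.name() + ".log";
  // SetLogTarget(file_name);  // hypothetical call into your logging library
}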
Maybe I should mention: my logger is a collection of static functions that send UDP packets to a receiver that takes care of the actual logging. I can control the file name through one of those functions. That is why I don't need to insert code into every single TEST, TEST_F or TEST_P.
I have a Kotlin kotest (formerly known as KotlinTest) BehaviorSpec
with one Given("...") and many When("...") / Then("...") blocks under it.
I want to execute a cleanup after the whole spec (respectively after every Given clause) has finished.
@MicronautTest
class StructurePersistSpec(
    private val iC: InstancesC
) : BehaviorSpec({
    // afterSpec {
    finalizeSpec {
        cleanup()
    }

    Given("...") {
        When("...") {
            Then("...") {
                ...
            }
            Then("...") {
                ...
            }
        }
        When("...") {
            Then("...") {
                ...
            }
            Then("...") {
                ...
            }
        }
    }
    ...
})
On using afterSpec { } I get multiple calls to the afterSpec { } clause (one per When?), and NOT just one after the spec has finished (or after the/each Given clause has finished).
On using finalizeSpec { } it does NOT get called at all (a breakpoint inside it is never hit).
What am I doing wrong?
Or did I miss some fancy characteristic of BehaviorSpecs?
The reason you are getting multiple calls is probably that you have set a non-default IsolationMode for your test.
That would mean your spec is recreated (and then cleaned up) for every test. In order to have a single afterSpec call from the framework, your IsolationMode must be set to SingleInstance.
Bear in mind that this might affect the way your tests are executed, and hence their validity or ability to pass.
Documentation: https://kotest.io/isolation_mode/
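For example, with recent kotest versions the isolation mode can be set directly in the spec body. A minimal sketch based on the code from the question (cleanup() stands in for your own teardown function):

import io.kotest.core.spec.IsolationMode
import io.kotest.core.spec.style.BehaviorSpec

fun cleanup() { /* release resources shared by the whole spec */ }

class StructurePersistSpec : BehaviorSpec({
    // One spec instance for all tests, so afterSpec fires exactly once.
    isolationMode = IsolationMode.SingleInstance

    afterSpec {
        cleanup()
    }

    Given("...") {
        When("...") {
            Then("...") {
                // test body
            }
        }
    }
})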
Here is my test code:
test('should set correct constant', () {
  expect(Stores.CurrentContext, 'currentContext');
});
but my coverage report shows the static constant's line as not tested. Why is that?
version infos:
Flutter 1.2.2-pre.3 • channel master • https://github.com/flutter/flutter.git
Framework • revision 67cf21577f (4 days ago) • 2019-02-14 23:17:16 -0800
Engine • revision 3757390fa4
Tools • Dart 2.1.2 (build 2.1.2-dev.0.0 0a7dcf17eb)
A coverage tool records which code instructions were executed by the running program. Think of it as a recording of the memory addresses of the "code sections" visited by the program counter register of the processor as it steps through the program's functions.
A static variable is reached through a data memory access; there are no code instructions involved: a variable lives on the stack, on the heap, or in a data section if it is a constant.
Consider this code:
import 'package:rxdart/rxdart.dart';

class Stores {
  static const String Login = 'login';
  static const String CurrentContext = 'currentContext';
}

class Store {
  final name;
  static var eMap = Map();

  Store._internal(this.name); // DA:13

  factory Store(String name) { // DA:15
    if (eMap.containsKey(name)) { // DA:16
      return eMap[name]; // DA:17
    } else {
      final store = Store._internal(name); // DA:19
      eMap[name] = store; // DA:20
      return store;
    }
  }
}
and this code run:
test('should set correct constant', () {
  Store('currentContext');
  Store('currentContext');
  expect(Stores.CurrentContext, 'currentContext');
});
If you look at the raw output of lcov (DA:<line>,<hit count>; LF is the number of lines found, LH the number of lines hit), you will notice that the lines of the static variables are never reached, which confirms the model described above:
SF:lib/stores.dart
DA:13,1
DA:15,1
DA:16,2
DA:17,2
DA:19,1
DA:20,2
LF:6
LH:6
The visual reporting tool shows 100% coverage.
If your reporting tool shows red lines over static variables, it has to be considered a "false positive": live with it or change the reporting tool.
I have a state machine with a relatively small set of states and inputs, and I want to test the transitions exhaustively.
Transitions are coded using a Map<State, Map<Input, State>>; the code is something like this:
enum State {
  S1,
  S2,
  // ...
}

enum Input {
  I1,
  // ...
}

class StateMachine {
  State current;

  Map<State, Map<Input, State>> transitions = {
    State.S1: {
      Input.I1: State.S2,
      // ...
    },
    // ...
  };

  State changeState(Input x) {
    if (transitions[current] == null)
      throw StateError('Unknown state $current');
    if (transitions[current][x] == null)
      throw StateError('Unknown transition from state $current with input $x');
    current = transitions[current][x];
    return current;
  }

  void execute() {
    // ...
  }
}
To test it I see 3 approaches:
1) Write a lot of boilerplate code to check every single combination.
2) Automate the test creation: this seems a better approach to me, but it would end up using a structure that is identical to the Map used in the StateMachine. What should I do? Copy the Map into the test file, or import it from the implementation file? The latter would make the test depend on the implementation, which doesn't seem like a good idea. (A sketch of this approach follows the list.)
3) Test the Map for equality; same problem as before: equality with itself or with a copy? This approach is essentially the same as the other two, but doesn't seem like a canonical test.
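A minimal sketch of approach 2, keeping an independent, hand-maintained copy of the expected table in the test file (the table contents here are illustrative; StateMachine, State and Input come from your implementation file):

import 'package:test/test.dart';
// import 'state_machine.dart'; // wherever StateMachine, State, Input live

// Expected transitions, duplicated by hand so the test does not
// depend on the implementation's Map.
final expected = <State, Map<Input, State>>{
  State.S1: {Input.I1: State.S2},
  // ...
};

void main() {
  expected.forEach((from, moves) {
    moves.forEach((input, to) {
      test('$from + $input -> $to', () {
        final sm = StateMachine();
        sm.current = from;
        expect(sm.changeState(input), to);
      });
    });
  });
}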
Maybe you want to have a look at this: https://www.itemis.com/en/yakindu/state-machine/documentation/user-guide/sctunit_test-driven_statechart_development_with_sctunit
It shows how you can do model-based, test-driven development of state machines, including the option to generate unit test code and measure test coverage.
I have a net/http server set up with several middleware functions in a chain, and I can't find examples of how to test these...
I am using basic net/http with the gorilla/mux router, and one Handle looks somewhat like this:
r.Handle("/documents", addCors(checkAPIKey(getDocuments(sendJSON)))).Methods("GET")
In these middleware functions I aggregate some data and supply it via Gorilla Context's context.Set method.
Usually I test my HTTP functions with httptest, and I hope to do the same with these, but I can't figure out how, and I am curious what the best way is. Should I test each middleware separately? Should I prefill the appropriate context values where they are needed? Can I test this entire chain at once, so I can just check desired states for given inputs?
I would not test anything involving Gorilla or any other 3rd-party package. If you want to make sure it works, I'd set up an external test runner or integration suite for the endpoints of a running version of your app (e.g. on a CI server).
Instead, test your middleware and handlers individually, as those are what you have control over.
But if you are set on testing the stack (mux -> handler -> handler -> handler -> MyHandler), this is where defining the middleware globally, using functions assigned to vars, can help:
var addCors = func(h http.Handler) http.Handler {
...
}
var checkAPIKey = func(h http.Handler) http.Handler {
...
}
During normal use, their implementation remains the same, unchanged:
r.Handle("/documents", addCors(checkAPIKey(getDocuments(sendJSON)))).Methods("GET")
But for unit testing, you can override them:
// important to keep the same package name for
// your test file, so you can get to the private
// vars.
package main

import (
	"net/http"
	"testing"
)

func TestXYZHandler(t *testing.T) {
	// save the state, so you can restore it at the end
	addCorsBefore := addCors
	checkAPIKeyBefore := checkAPIKey

	// override with whatever customization you want
	addCors = func(h http.Handler) http.Handler {
		return h
	}
	checkAPIKey = func(h http.Handler) http.Handler {
		return h
	}

	// when done, be a good dev and restore the global state;
	// defer ensures this happens even if an assertion fails early
	defer func() {
		addCors = addCorsBefore
		checkAPIKey = checkAPIKeyBefore
	}()

	// arrange, test, assert, etc.
}
If you find yourself copy-and-pasting this boilerplate code often, move it into helpers shared by your unit tests:
package main

import (
	"net/http"
	"testing"
)

var (
	addCorsBefore     = addCors
	checkAPIKeyBefore = checkAPIKey
)

func clearMiddleware() {
	addCors = func(h http.Handler) http.Handler {
		return h
	}
	checkAPIKey = func(h http.Handler) http.Handler {
		return h
	}
}

func restoreMiddleware() {
	addCors = addCorsBefore
	checkAPIKey = checkAPIKeyBefore
}

func TestXYZHandler(t *testing.T) {
	clearMiddleware()
	defer restoreMiddleware()

	// arrange, test, assert, etc.
}
A side note on unit testing endpoints...
Since middleware should operate with sensible defaults (expected to pass normally and not mutate the state of the underlying stream of data you want to test in the handler), I advise unit testing the middleware outside the context of your actual main handler function.
That way you have one set of unit tests strictly for your middleware, and another set of tests focusing purely on the primary handler of the URL you are calling. It makes discovering the code much easier for newcomers.