Specman - BFM is created though it shouldn't be

I have a BFM in my tx agent (without a sequence driver).
extend uart_tx_agent_u {
    uart_tx_monitor : TX uart_monitor_u is instance;
    uart_tx_scb : uart_tx_scoreboard_u is instance;
    when ACTIVE uart_tx_agent_u {
        uart_bfm : uart_tx_bfm_u is instance;
    };
};
When I run the test (I don't change the active_passive field), I can see that the uart_bfm was created (according to the messages that are printed).

Are you sure you want to change the constraint from test to test? Either the agent is active and sends data to the DUT, or it is passive and only collects data sent from the DUT.
What people usually do is to define the agent as ACTIVE, so that it has a sequence driver and a BFM.
In the test files, you define the scenarios for the tests, by defining the sequences for this agent.
So in tests where you do not want to send any data to the DUT from this agent, set the sequence to do nothing -
extend MAIN my_seq {
    body() @driver.clock is only {
        // do nothing, this agent should not send data in this test
    };
};
If you really want to change the environment topology, and in some tests the agent is to be PASSIVE and in some to be ACTIVE, you can do this in the test file, by constraining the agent.
And this constraint has to be in the pre-run generation.
Throughout the test, the agent is either ACTIVE or not. This cannot be changed during the run. You cannot say "I start the test with a PASSIVE agent, but during the test I want to make it ACTIVE".
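If, for example, you want this agent to be PASSIVE in a particular test, the test file would contain something like the following sketch (assuming active_passive is the determinant field, as in your code above) -
extend uart_tx_agent_u {
    keep active_passive == PASSIVE;
};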

If the bfm is instantiated, it must be that the uart_tx_agent_u is ACTIVE.
When you say you do not change it - maybe it was randomly set to ACTIVE?

A sequence struct is generated after the test starts, a sequence is generated only if there is a driver, and a driver is instantiated only if the agent is ACTIVE.
Environment topology should be constrained top down; do not constrain a unit from a struct that is contained in it.
If you know you want the agent to be ACTIVE, then constrain it -
extend rx_uart_agent_u {
    keep active_passive == ACTIVE;
};
or from its parent -
extend env {
    keep me.rx_agent.active_passive == ACTIVE;
};


Can we actually send out mails during semi-automatic testing?

We are using unit / integration tests during Shopware 6 development.
One technique we use is to disable the database transaction behaviour in order to see the results (for example, of fixtures) in the admin panel, for easier debugging / understanding:
trait IntegrationTestBehaviour
{
    use KernelTestBehaviour;
    // use DatabaseTransactionBehaviour;
    use FilesystemBehaviour;
    use CacheTestBehaviour;
    use BasicTestDataBehaviour;
    use SessionTestBehaviour;
    use RequestStackTestBehaviour;
}
Similarly, it would be helpful to send out actual emails during some tests (only for development, not in CI and so on).
It is already possible to automatically test emails like this:
$eventDidRun = false;
$listenerClosure = function (MailSentEvent $event) use (&$eventDidRun): void {
    $eventDidRun = true;
};
$this->addEventListener($dispatcher, MailSentEvent::class, $listenerClosure);
// do something that sends an email
static::assertTrue($eventDidRun, 'The mail.sent Event did not run');
But sometimes we want to manually see the actual email.
The .env.test already contains a valid mailer URL:
MAILER_URL=smtp://x:y@smtp.mailtrap.io:2525?encryption=tls&auth_mode=login
But still no mails get sent during the test.
While I guess that this is fully intentional, is there some way to work around the blocking of mails being sent during testing?
The reason is that the MAILER_URL variable is pre-set to null://localhost in the phpunit.xml.dist of the platform repository:
<server name="MAILER_URL" value="null://localhost"/>
You could set the MAILER_URL environment variable yourself before the tests of the class are executed:
/**
 * @beforeClass
 */
public static function setMailerUrl(): void
{
    $_SERVER['MAILER_URL'] = 'smtp://x:y@smtp.mailtrap.io:2525?encryption=tls&auth_mode=login';
}
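If you want to be sure that later test classes stay silent again, you could also restore the null transport afterwards; a minimal sketch (the method name is arbitrary):
/**
 * @afterClass
 */
public static function resetMailerUrl(): void
{
    // restore the null transport preset by phpunit.xml.dist
    $_SERVER['MAILER_URL'] = 'null://localhost';
}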

SOAP UI - Set a node value in all test steps' requests of all test cases in a test suite

I'm trying to set a node value in all test steps' request XML in all test cases of a test suite.
The Groovy script is in the first test case, and I get an error (XmlException: Unexpected Element: CDATA) as soon as the script tries to edit the same tag in the second test case.
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def AlltestCases = testRunner.testCase.testSuite.project.testSuites[testRunner.testCase.testSuite.name]
0.upto(AlltestCases.getTestCaseCount()) {
    AlltestCases.getTestCaseList().each {
        it.getTestStepList().each {
            if (it.getClass() == com.eviware.soapui.impl.wsdl.teststeps.WsdlTestRequestStep) {
                if (it.getName().toLowerCase().contains("verify")) {
                    step = groovyUtils.getXmlHolder("${it.getName()}" + "#Request")
                    step.setNodeValue("//*:Name/text()", "\$" + "{#TestSuite#NAME_ID}")
                    step.updateProperty()
                }
            }
        }
    }
}
If I understand your question correctly, you want to "inject" a value into a number of requests?
I would advise against that. I would rather set some project property, and then let each of the requests simply use that particular variable.
The most important reason I prefer this approach is that it makes it more transparent what is happening in your test case, should someone else at some point - for instance if you get a different job - need to take over your SoapUI projects. Currently you have requests which hold values that appear to come out of nowhere. I would advise making it clear that the request contains some sort of variable, and where that variable comes from.
Besides, you will then also get more flexibility. If a few requests at some point change the path or name of the entity you want to modify, you will need to make your code above handle that kind of situation. Not so if you are merely using a variable in each of your requests.
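As a sketch of that approach (reusing the NAME_ID property name from your script, here at project level): set the value once, for example from a setup Groovy script or via the project's Custom Properties tab -
testRunner.testCase.testSuite.project.setPropertyValue("NAME_ID", "someValue")
and inside every request simply reference it as ${#Project#NAME_ID} in the Name element, instead of injecting the value from code.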

Running a main-like in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test then would only start if I send a signal via HTTP from the frontend. Basically WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line - I don't care whether with go run or go test or whatever.
I've also thought of ExampleTestFunc() but there's so much output produced by the test it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
Export all these initialization variables and code (upper-case names), so that I could use them from a sub-package main
Duplicate the whole code.
Move the test into a sub-package main and then have a func main() for the test with Frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... And the first is better, but it isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
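A minimal sketch of that variant, assuming an environment variable named DEBUG_TEST (the name is arbitrary):
package hello_test

import (
    "log"
    "os"
    "testing"
)

func TestHelloWorld(t *testing.T) {
    // opt in with: DEBUG_TEST=1 go test -run '^TestHelloWorld$' ./
    if os.Getenv("DEBUG_TEST") != "" {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}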
I finally found an acceptable option. This answer
Skip some tests with go test
put me on the right track.
Essentially: use build tags which are not present in normal builds, but which I can provide when executing manually - see the sketch below.
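A rough sketch of that approach (the tag name debugfrontend is arbitrary; the package clause must match the other test files): put the interactive variant in its own file that is only compiled when the tag is supplied.
//go:build debugfrontend

package mypackage

import "testing"

func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
    _ = result // evaluate the success conditions here
}
Run it manually with go test -tags debugfrontend -run '^TestFuncWithFrontend$' . - regular go test invocations without the tag never compile this file, so the automated suite is unaffected.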

Laravel Reset Database after Test

I have just started using Laravel Dusk to test my project and need some guidance. After I run all the available tests, I want to be able to reset my database back to the state it was in before I ran them. (If there were any entries in my database before I ran the tests, I would still like to see them afterwards. However, any entries created during the tests should be gone after the tests finish running.) Any pointers on how I would achieve this? Thank you!
Update:
<?php

namespace Tests\Browser;

use Tests\DuskTestCase;
use Laravel\Dusk\Browser;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class UserRegisterTest extends DuskTestCase
{
    use DatabaseTransactions;

    /**
     * A test for user registration.
     * @group register
     * @return void
     */
    public function testRegisterUser()
    {
        // Register with all info filled out correctly
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLogin')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLogin@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '123456')
                ->click('.btn-primary')
                ->assertPathIs('/home')
                ->click('.dropdown-toggle')
                ->click('.dropdown-menu li:last-child');
        });
        $this->assertDatabaseHas('users', ['firstName' => 'JenLogin', 'lastName' => 'Zhou', 'email' => 'testLogin@gmail.com']);
    }

    /**
     * Register with duplicate user
     * @group register
     * @return void
     */
    public function testRegisterDuplicateUser()
    {
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLoginDup')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLogin@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '123456')
                ->click('.btn-primary')
                ->assertPathIs('/register')
                ->assertSee('The email has already been taken.');
        });
        $this->assertDatabaseMissing('users', ['firstName' => 'JenLoginDup', 'lastName' => 'Zhou', 'email' => 'testLogin@gmail.com']);
    }

    /**
     * Register with incorrect password confirmation
     * @group register
     * @return void
     */
    public function testRegisterUserNoPassConfirm()
    {
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLoginPass')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLoginPass@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '888888')
                ->click('.btn-primary')
                ->assertPathIs('/register')
                ->assertSee('The password confirmation does not match.');
        });
        $this->assertDatabaseMissing('users', ['firstName' => 'JenLoginPass', 'lastName' => 'Zhou', 'email' => 'testLoginPass@gmail.com']);
    }
}
You are looking for the DatabaseTransactions trait. Use it in your test class like this and it will automatically roll back all database transactions made during your tests.
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseTransactions;

    // test methods here
}
This will keep track of all transactions made during your test and undo them upon completion.
note: this trait only works on default database connections
First of all, when you are running tests you should use a completely different database than your live (or dev) database. For this you should create .env.dusk and set the database connection there:
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=testing_database
DB_USERNAME=root
DB_PASSWORD=pass
so that it points to a database used for tests only.
The second thing is that for Laravel Dusk you cannot just use DatabaseTransactions. You should in fact use DatabaseMigrations for Dusk tests, otherwise you will get unexpected results.
There is no sane workflow for running tests on a live/dev DB with data and then reverting the changes made by the tests.
Therefore your approach fails here; instead you should:
Create a separate test schema/DB for tests
Switch to the test DB before running tests - this can be automated to some degree through your phpunit and .env.dusk configuration, but it depends on your local setup.
Then in your tests you create everything from scratch on a clean DB (run migrations, seeds, factories)
Run the tests against this test DB
For development, switch back to your base DB with its current data, which will not be affected by the tests.
The next time you run your tests, everything starts again from zero - a clean database. This is done in your tests by:
use CreatesApplication;
use DatabaseMigrations;
parent::setUp(); etc.
Read more about these methods...
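A minimal sketch of a Dusk test case set up this way (the class name is just an example):
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Tests\DuskTestCase;

class UserRegisterTest extends DuskTestCase
{
    // re-runs the migrations before each test and rolls them back afterwards
    use DatabaseMigrations;

    // test methods here
}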
Side Notes:
With this approach, it will also be easy to test your app in CI environments.
Never write tests which depend on data in your dev/live DB. For tests, all required data should be provided by seeds or, alternatively, factories!
You can use the RefreshDatabase trait in your test classes. After each test the database will be in the same state as before the test.
In fact it will drop all tables and migrate again.
If you do not want to lose your data, you can use a separate schema for testing.
use Illuminate\Foundation\Testing\RefreshDatabase;
use Illuminate\Foundation\Testing\WithoutMiddleware;
use Tests\TestCase;

class ExampleTest extends TestCase
{
    use RefreshDatabase;
}
For multiple databases, this helped me
class MyTest extends TestCase
{
    // Reset the DB between tests
    use DatabaseTransactions;

    // Setting this allows both DB connections to be reset between tests
    protected $connectionsToTransact = ['mysql', 'myOtherConnection'];
}
I think this is a great question. I found an Artisan tool that may be what you are looking for. You can use it to take a snapshot of the database before you run the tests and then load that snapshot afterwards, restoring your database to the previous state. I gave it a run (using MySQL) and it worked great. Hope this is what you are looking for. Here is a link...
https://github.com/spatie/laravel-db-snapshots
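Roughly, the workflow with that package would look like this (the snapshot name is arbitrary; see the package README for the exact commands):
php artisan snapshot:create before-tests
php artisan dusk
php artisan snapshot:load before-tests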
The phpunit.xml file is your solution here; you can set .env variables in this file like so:
<env name="DB_CONNECTION" value="testing_mysql"/>
<env name="DB_DATABASE_TEST" value="test"/>
Now you can run your tests on a separate database.
Plus, you can run a .php file every time before the tests in automation; you just need to point PHPUnit to it:
<phpunit
...
bootstrap="tests/autoload.php"
>
You can put any cleaners or seeders there, or something like:
echo 'Migration -begin-' . "\n";
echo shell_exec('php artisan migrate:fresh --seed');
echo 'Migration -end-' . "\n";

How to skip teststep in QAF using TestStepListener?

I am using QAF as my Test Automation Framework.
I want to skip a specific test step in the production environment. How can I skip execution of a BDD test step using TestStepListener?
Here is an example use case:
For a shopping cart application I have developed 200+ scenarios. I was executing all scenarios on the test environment. Now I want to execute all scenarios on the production environment, but skip the last steps of payment and order review there. How can I do that?
Will you please provide details of the use case? If my understanding is correct, you don't want to execute a specific step in the production environment. You can use a step listener to jump to a specific step index, but not to skip the current step. One way is to group steps into a high-level step. For example, instead of writing detailed steps in BDD
Given some situation
When performing some action
Then step-1
And step-2 not for production
And step-3
You can have a high-level step
Given some situation
When performing some action
Then generic step for all environments
Here your "generic step for all environments" step can have implementations for different environments in different packages; configure the step provider package at runtime, for example as sketched below.
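A rough sketch of that setup (package names are placeholders; step.provider.pkg is the QAF property that tells it which packages to scan for step definitions):
# properties loaded for the production run (e.g. an environment-specific properties file)
step.provider.pkg=com.example.steps.prod

// com.example.steps.prod.OrderSteps - production variant of the step (hypothetical class)
@QAFTestStep(description = "generic step for all environments")
public static void genericStepForAllEnvironments() {
    // production: stop before payment and order review
}
The test-environment package then provides its own implementation of the same step description with the full flow.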
Another trick is to set and reset dry-run mode in the step listener. For example, in your step definition you can provide additional meta-data. In the step listener, depending on that meta-data, set dry-run mode in the before method and reset it in the after method.
Step definition:
@MetaData("{'skip_prod':true}")
@QAFTestStep(description = "do payment")
public static void doPayment() {
    // TODO: write your code here
}
Step listener code may look like:
public void beforExecute(StepExecutionTracker stepExecutionTracker) {
    Map<String, Object> metadata = stepExecutionTracker.getStep().getMetaData();
    if (null != metadata && metadata.containsKey("skip_prod") && "prod".equalsIgnoreCase(getBundle().getString("env"))) {
        // do not run this step
        getBundle().setProperty(ApplicationProperties.DRY_RUN_MODE.key, true);
    }
}

public void afterExecute(StepExecutionTracker stepExecutionTracker) {
    Map<String, Object> metadata = stepExecutionTracker.getStep().getMetaData();
    if (null != metadata && metadata.containsKey("skip_prod") && "prod".equalsIgnoreCase(getBundle().getString("env"))) {
        // reset dry-run mode after the skipped step
        getBundle().setProperty(ApplicationProperties.DRY_RUN_MODE.key, false);
    }
}