I'm trying to build a test hierarchy where other test suites are executed so that each new test suite picks up where the last one left off. Is there some way I can run a test suite execution in the setup of my test suite?
@SetUp(skipped = false)
def setUp() {
    // execute test suite here
}
You can write to a file in a teardown method of the first test suite and read from that same file in the setup method of the next test suite.
My first idea was to use Global Variables for this, but they are only visible inside a single test suite.
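A minimal sketch of that file hand-off in plain Java, assuming the suite setup/teardown scripts can call arbitrary Java/Groovy code; the class name, file name and keys below are made up for illustration:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

public class SuiteState {

    // Hypothetical hand-off file shared by both test suites.
    private static final File STATE_FILE = new File("suite-state.properties");

    // Call from the teardown of the first suite.
    public static void save(String key, String value) throws IOException {
        Properties props = new Properties();
        if (STATE_FILE.exists()) {
            try (InputStream in = new FileInputStream(STATE_FILE)) {
                props.load(in);
            }
        }
        props.setProperty(key, value);
        try (OutputStream out = new FileOutputStream(STATE_FILE)) {
            props.store(out, "state handed over between test suites");
        }
    }

    // Call from the setup of the next suite.
    public static String load(String key) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(STATE_FILE)) {
            props.load(in);
        }
        return props.getProperty(key);
    }
}

The first suite's teardown would then call something like SuiteState.save("lastCreatedUserId", id), and the next suite's setup would call SuiteState.load("lastCreatedUserId").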
I've started to use a Google Test fixture, and in my test suite I need to open/close a file only once: open the file before the first test starts and close it after the last test has been executed.
I wonder if there is a method of the fixture that allows an action to be performed only once at the beginning/end of the test suite.
Declare a static member variable to hold the file object and define it outside the fixture class.
After that, define static void SetUpTestSuite() and use it to open your file, and static void TearDownTestSuite() to close it.
Google Test will call SetUpTestSuite() before the first test and TearDownTestSuite() after the last test.
You can also check Sharing Resources Between Tests in the Same Test Suite in the official documentation.
As per Cucumber 4 with TestNG:
When using TestNG in parallel mode, scenarios can be executed in
separate threads irrespective of which feature file they belong to.
Different rows in a scenario outline can also be executed in separate
threads. The two scenarios in the feature1.feature file will be executed
by two threads in two browsers. The single feature2.feature will be
executed by another thread in a separate browser.
Now suppose I have scenarios like the below in feature1:
1st scenario: Create a user with some details.
2nd scenario: Edit that user with some details.
Now, in TestNG, if both scenarios are invoked at the same time, my 2nd scenario will fail for sure because the user has not been created yet.
Should I just switch to JUnit, given that:
When using JUnit in parallel mode, all scenarios in a feature file
will be executed in the same thread. The two scenarios in
feature1.feature file will be executed in one browser. The single
feature2.feature will be executed by another thread in a separate
browser.
The function below just has the parameter that makes the scenarios run in parallel.
@Override
@DataProvider(parallel = true)
public Object[][] scenarios() {
    return super.scenarios();
}
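For reference, that override normally sits in the TestNG runner class. A rough sketch, assuming a recent Cucumber 4 release where the TestNG classes live in io.cucumber.testng (early 4.x versions use cucumber.api.testng instead); the class name and features path are placeholders:

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;
import org.testng.annotations.DataProvider;

@CucumberOptions(features = "src/test/resources/features")
public class ParallelRunner extends AbstractTestNGCucumberTests {

    // parallel = true lets TestNG hand scenarios from the same feature file
    // to different threads; dropping the override keeps them sequential.
    @Override
    @DataProvider(parallel = true)
    public Object[][] scenarios() {
        return super.scenarios();
    }
}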
So my main question is how to configure my tests to run in parallel in a controlled way, i.e. execute in parallel per feature file, or use some tag that can mark a scenario as dependent on another, like TestNG's @Test(dependsOnMethods = { "testTwo" }).
Kindly suggest any Cucumber configuration setting or strategy that can be used for this.
I'm currently trying to execute a specific helperFunction after every test case.
The problem with the beforeEach function is that by the time it runs, the previous test has already been flagged as successful/passed (its test lifecycle has already finished).
Is there any configuration option to execute a helper function after every test case, without pasting it into every single test case?
I'm using the Intern test framework with the BDD test interface.
The docs for the BDD interface used for Intern tests are here:
https://theintern.io/docs.html#Intern/4/api/lib%2Finterfaces%2Fbdd
You can use the afterEach() method to execute whatever you like once each test in your suite has finished.
I have a selenium test suite using TestNG and Reporter to log results on Jenkins. I use Reporter in all methods to log activity to the console, and this in turn appears for each test listed in the Reporter html output on Jenkins. What I'd like to do, is to only see the Reporter log output in the reports for the tests that fail. If a test passes, I'd like to see just the test case name in the report with no logging.
I thought I could do this in my TestNGWatcher class where I override the onTestSuccess(ITestResult result) method and added the following line:
Reporter.clear();
That single line did what I wanted for the passing tests, but it also disabled Reporter output for the failed tests. It seems to have turned off Reporter output entirely.
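For context, the listener described above looks roughly like this; this is a sketch only, assuming the class extends TestNG's TestListenerAdapter, which may differ from your actual setup:

import org.testng.ITestResult;
import org.testng.Reporter;
import org.testng.TestListenerAdapter;

public class TestNGWatcher extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult result) {
        // Intended effect: drop the buffered Reporter output for a passing test
        // so that only failing tests keep their log lines in the HTML report.
        Reporter.clear();
        super.onTestSuccess(result);
    }
}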
Is there a way to 'turn it on' when a test fails and turn it off when a test passes?
Huge thanks in advance!
In our test environment, there are some tests that fail irregularly under certain circumstances.
So my question is: what can be done to rerun only the failed NUnit tests?
My idea is to implement some steps in the NUnit TearDown to re-run the failed test, as below:
[TearDown]
public void TearDownTest()
{
    // NUnit 3: the outcome of the test that just finished
    TestStatus state = TestContext.CurrentContext.Result.Outcome.Status;
    if (state == TestStatus.Failed)
    {
        // if so, is it possible to rerun the test ??
    }
}
My requirement is: I want to retry my failed test up to three times, i.e. run it again if it fails the first and second time.
Can anybody share their thoughts on this?
Thanks in advance,
Anil
Instead of using the teardown, I'd rather use the XML report: run some XSLT over it to figure out the failing fixtures and feed them back into a build step that runs the tests again.
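A sketch of that idea as a small build-step helper, written here in Java and using XPath instead of XSLT for brevity. It assumes the NUnit 3 result format, where failing fixtures appear as test-suite elements with type="TestFixture", result="Failed" and a fullname attribute, and that a later build step can pass the collected names back to the console runner (for example via its --test option); the report path is a placeholder:

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class FailedFixtureCollector {

    public static void main(String[] args) throws Exception {
        // Placeholder path to the result file produced by the first test run.
        Document report = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("TestResult.xml"));

        // Collect the full names of fixtures that contain failures.
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList failed = (NodeList) xpath.evaluate(
                "//test-suite[@type='TestFixture' and @result='Failed']/@fullname",
                report, XPathConstants.NODESET);

        List<String> names = new ArrayList<>();
        for (int i = 0; i < failed.getLength(); i++) {
            names.add(failed.item(i).getNodeValue());
        }

        // Print a comma-separated list that a later build step can feed back
        // to the runner, e.g. nunit3-console --test=<names> MyTests.dll
        System.out.println(String.join(",", names));
    }
}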