Programmatic way to run a JUnit5 test repeatedly - junit5

I would like to run a JUnit5 test programmatically, and I would like to run it multiple times in a loop. I'd like to run class-wide setup and teardown (like BeforeAll methods) just once, and then run per-method actions (like BeforeEach and the test itself) repeatedly. I'd like the test to repeat until I choose to stop it.
Is this supported by the JUnit5 API? How would I go about doing it?
I'm aware of the launcher API, but that seems to be oriented toward running the tests just once. If I call this API repeatedly, it does all setup and teardown repeatedly, which is unnecessary and expensive for the tests I'm automating.
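For reference, this is roughly what a launcher-based loop looks like. Everything here uses the standard JUnit Platform launcher API, but RepeatedLauncherRunner, the nested MyTest class and the fixed loop count are made-up placeholders for illustration. It also shows the problem: every execute() call is a complete run, so @BeforeAll/@AfterAll are repeated each time.

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;

import java.io.PrintWriter;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;

public class RepeatedLauncherRunner {

    // Stand-in for the real test class with the expensive class-wide setup.
    static class MyTest {
        @BeforeAll
        static void expensiveClassSetup() {
            System.out.println("@BeforeAll (expensive)");
        }

        @Test
        void someTest() {
            System.out.println("test body");
        }
    }

    public static void main(String[] args) {
        Launcher launcher = LauncherFactory.create();
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                .selectors(selectClass(MyTest.class))
                .build();
        SummaryGeneratingListener listener = new SummaryGeneratingListener();

        // Replace the fixed count with whatever stop condition you need.
        for (int i = 0; i < 3; i++) {
            // Each execute() is a full run: @BeforeAll/@AfterAll are executed again
            // every time, which is exactly the overhead described above.
            launcher.execute(request, listener);
        }
        listener.getSummary().printTo(new PrintWriter(System.out));
    }
}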

Related

Apache POI - Read from Excel Exceptions in Selenium

If an exception occurs while fetching the data from the Excel sheet, will the execution stop? Only the current test case, or all the test cases?
TestNG behaves differently for exceptions thrown at different stages, so it depends.
Basically, no matter which exception is thrown (except TestNG's SkipException, which is an edge case I'll skip here), you can expect the following:
Before configurations
In this case all dependent test and configuration methods will be skipped (unless some of them have the alwaysRun=true annotation attribute; see the sketch after this list).
Test method
The test will be marked as failed. All tests that depend on this method will also be skipped.
After configurations
Usually this does not affect your test results, but it may fail the build even if all the tests passed. After-configuration failures may also affect subsequent tests if they expect some cleanup to have happened (but that is not related to TestNG functionality).
DataProvider
All the related tests will be skipped; everything else will not be affected.
Test Class constructor
This will break your run; no tests will be executed.
Factory method (need to recheck)
I don't remember the exact behaviour; this might fail the whole launch or just some test classes. Either way, an exception here is a serious issue, so try to avoid it.
TestNG Listeners
This will break your whole test launch. Try to implement listeners error-free, wrapping risky code in try/catch blocks.
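Coming back to the alwaysRun note under "Before configurations", here is a minimal sketch of that behaviour (class and method names are made up): the @BeforeMethod failure causes the test to be skipped, while the @AfterMethod still runs because it is marked alwaysRun = true.

import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ConfigFailureDemo {

    @BeforeMethod
    public void setUp() {
        // Simulated "before configuration" failure.
        throw new RuntimeException("setup failed");
    }

    @Test
    public void regularTest() {
        // Skipped, because its before configuration failed.
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        // Still executed despite the earlier failure, thanks to alwaysRun = true.
    }
}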

How do I call a function when all tests are finished running? [duplicate]

In Rust, is there any way to execute a teardown function after all tests have been run (i.e. at the end of cargo test) using the standard testing library?
I'm not looking to run a teardown function after each test, as that has already been discussed in these related posts:
How to run setup code before any tests run in Rust?
How to initialize the logger for integration tests?
These discuss ideas to run:
setup before each test
teardown after each test (using std::panic::catch_unwind)
setup before all tests (using std::sync::Once)
One workaround is a shell script that wraps around the cargo test call, but I'm still curious if the above is possible.
I'm not sure there's a way to have a global ("session") teardown with Rust's built-in testing features; previous inquiries seem to have yielded little aside from "maybe a build script". Third-party testing frameworks (e.g. shiny or stainless) might have that option, though, so it may be worth looking into their exact capabilities.
Alternatively, if nightly is an option, there's a custom test frameworks feature being implemented, which you might be able to use for that purpose.
That aside, you may want to look at macro_rules! to clean up some boilerplate; that's what folks like burntsushi do, e.g. in the regex package.

Is there a way a test can have its TestCaseSource read data from an outside source (like Excel)?

I am writing new tests in NUnit. I would like the tests to get their TestCaseSource values from an Excel sheet (data-driven tests).
However, I noticed that the [SetUp] method is actually reached AFTER the [Test] method is entered, so I cannot initialize the data I read from my Excel sheet in the TestCaseSource.
How do I initialize my TestCaseSource from an Excel file BEFORE each test runs?
Thanks
I have tried using a separate class like MyFactoryClass and then used
[Test, TestCaseSource(typeof(MyFactoryClass), "TestCases")]
However, this is reached BEFORE the [SetUp] method and does not recognize the name of the Excel file, which is named after each test's name.
It's important, when using NUnit, to understand the stages that a test goes through as it is loaded and then run. Because I don't know what you are doing at each stage, I'll start by outlining those stages. I'll add to this answer after you post some code that shows what your factory class, your [SetUp] method and your actual tests are doing.
In brief, NUnit loads tests before it runs them. It may actually run tests multiple times for each load, depending on the type of runner being used. Examples:
NUnit-console loads tests once and runs them once, then exits.
TestCentric GUI loads tests once and then runs them each time you select tests and click run. It can reload them using a menu option as well.
TestExplorer, using the NUnit 3 Test Adapter, loads tests and then runs them each time you click run.
Ideally, you should write your tests so that they will work under any runner. To do that, assume that they will be run multiple times for each load, and don't put code at load time that you want repeated for each run. If you follow this rule, you'll have more robust tests.
So... what does NUnit do at each stage? Here it is...
Loading...
  All the code in your [TestCaseSource] executes.
Running...
  For each TestFixture (I'll ignore SetUpFixtures for simplicity):
    Run any [OneTimeSetUp] method
    For each Test or TestCase:
      Run any [SetUp] method
      Run the test itself
      Run any [TearDown] method
    Run any [OneTimeTearDown] method
As you noticed, the code you write for any step can only depend on steps that have already executed. In particular, the action taken when loading the test can't depend on actions that are part of running it. This makes sense if you consider that "loading" really means creating the test that will be run.
In your [TestCaseSource] you should only call a factory that creates objects if you know in advance what objects to create. Usually, the best approach is to initialize those parameters that will be used to create objects. Those are then used to actually create the objects in the [OneTimeSetUp] or [SetUp] depending on the object lifetime you are aiming for.
That's enough (maybe too much) generalization! If you post some code, I'll add more specific suggestions to this answer.

TestNG & Selenium: Separate tests into "groups", run ordered inside each group

We use TestNG and Selenium WebDriver to test our web application.
Now our problem is that we often have several tests that need to run in a certain order, e.g.:
login to application
enter some data
edit the data
check that it's displayed correctly
Now obviously these tests need to run in that precise order.
At the same time, we have many other tests which are totally independent from the list of tests above.
So we'd like to be able to somehow put tests into "groups" (not necessarily groups in the TestNG sense), and then run them such that:
tests inside one "group" always run together and in the same order
but different test "groups" as a whole can run in any order
The second point is important, because we want to avoid dependencies between tests in different groups (so different test "groups" can be used and developed independently).
Is there a way to achieve this using TestNG?
Solutions we tried
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We considered writing a method interceptor. However, this has the disadvantage that running tests from inside an IDE becomes more difficult (because directly invoking a test on a class would not use the interceptor). Also, tests using dependsOnMethods cannot be ordered by the interceptor, so we'd have to stop using that. We'd probably have to create our own annotation to specify ordering, and we'd like to use standard TestNG features as far as possible.
The TestNG docs propose using preserve-order to order tests. That looks promising, but only works if you list every test method separately, which seems redundant and hard to maintain.
Is there a better way to achieve this?
I am also open for any other suggestions on how to handle tests that build on each other, without having to impose a total order on all tests.
PS
alanning's answer points out that we could simply keep all tests independent by doing the necessary setup inside each test. That is in principle a good idea (and some tests do this); however, sometimes we need to test a complete workflow, with each step depending on all previous steps (as in my example). To do that with "independent" tests would mean running the same multi-step setup over and over, and that would make our already slow tests even slower. Instead of three tests doing:
Test 1: login to application
Test 2: enter some data
Test 3: edit the data
we would get
Test 1: login to application
Test 2: login to application, enter some data
Test 3: login to application, enter some data, edit the data
etc.
In addition to needlessly increasing testing time, this also feels unnatural - it should be possible to model a workflow as a series of tests.
If there's no other way, this is probably how we'll do it, but we are looking for a better solution, without repeating the same setup calls.
You are mixing "functionality" and "test". Separating them will solve your problem.
For example, create a helper class/method that executes the steps to log in, then call that class/method in your Login test and all other tests that require the user to be logged in.
Your other tests do not actually need to rely on your Login "Test", just the login class/method.
If later back-end modifications introduce a bug in the login process, all of the tests which rely on the Login helper class/method will still fail as expected.
Update:
Turns out this already has a name, the Page Object pattern. Here is a page with Java examples of using this pattern:
http://code.google.com/p/selenium/wiki/PageObjects
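As a rough sketch of what such a reusable helper might look like with WebDriver (the class name, URL and element locators are invented for illustration, not taken from the application above):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Encapsulates the login steps so every test can reuse them
    // instead of depending on a separate "login test".
    public void loginAs(String user, String password) {
        driver.get("https://example.com/login");
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}

A test that needs a logged-in user then simply calls new LoginPage(driver).loginAs(...) in its own setup, while a dedicated login test keeps verifying the login flow itself.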
Try using dependsOnGroups together with dependsOnMethods. Add all the methods in the same class to one group.
For example:
#Test(groups={"cls1","other"})
public void cls1test1(){
}
#Test(groups={"cls1","other"}, dependsOnMethods="cls1test1", alwaysrun=true)
public void cls1test2(){
}
In class 2
#Test(groups={"cls2","other"}, dependsOnGroups="cls1", alwaysrun=true)
public void cls2test1(){
}
#Test(groups={"cls2","other"}, dependsOnMethods="cls2test1", dependsOnGroups="cls1", alwaysrun=true)
public void cls2test2(){
}
There is an easy (whilst hacky) workaround for this if you are comfortable with your first approach:
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We had a similar problem: we need our tests to be run class-wise because we couldn't guarantee the test classes not interfering with each other.
This is what we did:
Put a
@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
annotation on an abstract test class or interface that all your tests inherit from.
This will put all your methods in the "first group" (group as described in this paragraph, not TestNG-groups). Inside the groups the ordering is class-wise.
Thanks to Cedric Beust, who provided a very quick answer for this.
Edit:
The group dummyGroupToMakeTestNGTreatThisAsDependentClass actually has to exist, but you can just add a dummy test case for that purpose.
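Putting the two pieces together, a minimal sketch (the file names and class names other than the group are just for illustration):

// AbstractDependentTest.java - every test class in the suite extends this.
import org.testng.annotations.Test;

@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
public abstract class AbstractDependentTest {
}

// DummyGroupProvider.java - provides the group so the dependency can be satisfied.
import org.testng.annotations.Test;

public class DummyGroupProvider {

    @Test(groups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
    public void dummy() {
        // Intentionally empty; exists only so that the dummy group is present.
    }
}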

How to stop further execution of Tests within a TestFixture if one of them fails in NUnit?

I want to stop further execution of Tests within a TestFixture if one of them fails in NUnit.
Of course the common and advised practice is to make tests independent of each other. However, the case I would like to use NUnit for requires that all tests and test fixtures following one that failed are not executed. In other words, a test failure causes the whole NUnit execution to stop (or proceed with the next [TestFixture]; both scenarios should be configurable).
The simple, yet not acceptable solution, would be to force NUnit termination by sending a signal of some kind to the NUnit process.
Is there a way to do this in an elegant way?
I believe you can use NAnt to do this. Specifically, the nunit or nunit2 tasks have a haltonfailure parameter that allows the test run to stop if a test fails.