Abort/ignore parameterized test in JUnit 5

I have some parameterized tests:
@ParameterizedTest
@CsvFileSource(resources = "testData.csv", numLinesToSkip = 1)
public void testExample(String parameter, String anotherParameter) {
    // testing here
}
In case one execution fails, I want to ignore all following executions.

AFAIK there is no built-in mechanism to do this. The following does work, but is a bit hackish:
@TestInstance(Lifecycle.PER_CLASS)
class Test {

    boolean skipRemaining = false;

    @ParameterizedTest
    @CsvFileSource(resources = "testData.csv", numLinesToSkip = 1)
    void test(String parameter, String anotherParameter) {
        Assumptions.assumeFalse(skipRemaining);
        try {
            // testing here
        } catch (AssertionError e) {
            skipRemaining = true;
            throw e;
        }
    }
}
In contrast to a failed assertion, which marks a test as failed, a failed assumption causes the test to be aborted (skipped). In addition, the lifecycle is switched from per-method to per-class:
When using this mode, a new test instance will be created once per test class. Thus, if your test methods rely on state stored in instance variables, you may need to reset that state in @BeforeEach or @AfterEach methods.
Depending on how often you need that feature, I would rather go with a custom extension.
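If you go the extension route, a minimal sketch could look like the following (assuming JUnit Jupiter 5.4+ for TestWatcher; the class name SkipAfterFailureExtension is made up for illustration). Register it on the test class with @ExtendWith(SkipAfterFailureExtension.class):

import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.ExecutionCondition;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ExtensionContext.Namespace;
import org.junit.jupiter.api.extension.TestWatcher;

// Sketch: once any invocation fails, all remaining invocations are disabled.
public class SkipAfterFailureExtension implements ExecutionCondition, TestWatcher {

    private static final Namespace NAMESPACE = Namespace.create(SkipAfterFailureExtension.class);
    private static final String FAILED_KEY = "failed";

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) {
        boolean failed = context.getRoot().getStore(NAMESPACE)
                .getOrDefault(FAILED_KEY, Boolean.class, false);
        return failed
                ? ConditionEvaluationResult.disabled("A previous invocation failed; skipping the rest")
                : ConditionEvaluationResult.enabled("No failure so far");
    }

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        // Remember the failure in the root store so later invocations see it.
        context.getRoot().getStore(NAMESPACE).put(FAILED_KEY, Boolean.TRUE);
    }
}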

Related

How to Take Screenshot when TestNG Assert fails?

String actualValue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(actualValue, "jumlga");
captureScreen(d, "Fail");
The assert should not be placed before your screen capture, because a failed assertion immediately aborts the test method, so your code
captureScreen(d, "Fail");
will never be reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception ex) { // catch the exception type your test code can throw
    // handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then assert
Assert.assertEquals(result, true);
1.
A good solution would be to use a reporting framework like allure-reports.
Read here: allure-reports
2.
We don't want our tests to become ugly by adding try/catch to every test, so we will use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {

    @Override
    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only used the method relevant to your question, and I suggest you read here:
TestNG Listeners
Now we want to take a screenshot every time a test fails, using allure-reports' built-in attachment mechanism, so we will add this method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example
@Listeners(listeners.class)
public class myTest extends commonOps {

    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        Assert.assertEquals(result, true);
    }
}
Note that at the beginning of the class we're using the @Listeners(listeners.class) annotation, which allows our listener to listen to our test; please mind that (listeners.class) can be whatever class you named your listener.
The @Description annotation is related to allure-reports and, as the code snippet suggests, lets you add additional information about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we registered our listeners class for the test.
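For completeness, the answer assumes a commonOps base class that exposes a shared WebDriver and a platform string; its contents are not shown in the original, but a hypothetical sketch might be:

import org.openqa.selenium.WebDriver;

// Hypothetical base class: the listener and test above assume these shared fields exist.
public abstract class commonOps {
    protected static WebDriver driver;   // initialized in a setup hook, e.g. @BeforeClass
    protected static String platform;    // e.g. "web" or "mobile", read from configuration
}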

TestNG Retry Analyzer not retrying

I was following several different websites explaining how to use RetryAnalyzer (they all say basically the same thing, but I checked several to see whether there was any difference). I implemented it as they did in the samples, and deliberately caused a failure on the first run (which ended up being the only run). Even though it failed, the test was not repeated. I even put a breakpoint inside the analyzer at the first line (res = false), which never got hit. I tell it to retry 3 times but it did not retry at all. Am I missing something?
My sample is below. Is it something to do with setting counter = 0? But the res = false line should at least get hit?
public class RetryAnalyzer implements IRetryAnalyzer {

    int counter = 0;

    @Override
    public boolean retry(ITestResult result) {
        boolean res = false;
        if (!result.isSuccess() && counter < 3) {
            counter++;
            res = true;
        }
        return res;
    }
}
and
@Test(dataProvider = "dp", retryAnalyzer = RetryAnalyzer.class)
public void testA(TestContext tContext) throws IOException {
    genericTest("A", "83701");
}
The test usually passes. I deliberately caused it to fail but it did not do a retry. Am I missing something?
===============================================
Default suite
Total tests run: 1, Failures: 1, Skips: 0
Try adding alwaysRun = true to your test method annotation.
@Test(dataProvider = "dp", retryAnalyzer = RetryAnalyzer.class, alwaysRun = true)
public void testA(TestContext tContext) throws IOException {
    genericTest("A", "83701");
}
Also, before retrying you may want to restart your driver instance so that the retry starts clean; otherwise the second run will execute in the same browser instance. Just do a driver.quit() followed by re-instantiating the browser driver.
The RetryAnalyzer class has to be public. Also, if it is an inner class, it should be static. TestNG silently ignores the retryAnalyzer otherwise.
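A minimal sketch of the nested-class case (the class name RetryTest and the MAX_RETRIES constant are made up for illustration):

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.Test;

public class RetryTest {

    // A nested retry analyzer must be public and static so TestNG can instantiate it.
    public static class Retry implements IRetryAnalyzer {

        private static final int MAX_RETRIES = 3;
        private int counter = 0;

        @Override
        public boolean retry(ITestResult result) {
            // Returning true tells TestNG to run the failed test again.
            return !result.isSuccess() && counter++ < MAX_RETRIES;
        }
    }

    @Test(retryAnalyzer = Retry.class)
    public void flakyTest() {
        // test body that may fail intermittently
    }
}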

Can collections / iterables / streams be passed into @ParameterizedTests?

In JUnit 5.0.0 M4 I could do this:
@ParameterizedTest
@MethodSource("generateCollections")
void testCollections(Collection<Object> collection) {
    assertOnCollection(collection);
}

private static Iterator<Collection<Object>> generateCollections() {
    Random generator = new Random();
    // We'll run as many tests as possible in 500 milliseconds.
    final Instant endTime = Instant.now().plusNanos(500000000);
    return new Iterator<Collection<Object>>() {
        @Override
        public boolean hasNext() {
            return Instant.now().isBefore(endTime);
        }

        @Override
        public Collection<Object> next() {
            // Dummy code
            return Arrays.asList("this", "that", Instant.now());
        }
    };
}
Or any number of other things that ended up with collections of one type or another being passed into my @ParameterizedTest. This no longer works: I now get the error
org.junit.jupiter.api.extension.ParameterResolutionException:
Error resolving parameter at index 0
I've been looking through the recent commits to SNAPSHOT and there are a few changes in that area, but I can't see anything that definitely changes this.
Is this a deliberate change? I'd ask this on a JUnit5 developer channel but I can't find one. And it's not a bug per se: passing a collection is not a documented feature.
If this is a deliberate change, then this is a definite use-case for @TestFactory...
See https://github.com/junit-team/junit5/issues/872
The next snapshot build should fix the regression.
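Independently of that fix, one way to make the intent unambiguous is to wrap each collection in Arguments, so that Jupiter treats the whole collection as a single parameter. A minimal sketch (class and method names are made up for illustration):

import java.util.Arrays;
import java.util.Collection;
import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

class CollectionArgumentsTest {

    @ParameterizedTest
    @MethodSource("collections")
    void testCollections(Collection<Object> collection) {
        // assertions on the collection go here
    }

    // Wrapping each collection in Arguments.of(...) makes it explicit that the
    // collection itself is the single argument for one invocation.
    private static Stream<Arguments> collections() {
        return Stream.of(
                Arguments.of(Arrays.asList("this", "that")),
                Arguments.of(Arrays.asList(1, 2, 3)));
    }
}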

How to run a simulation case using CaseRunner function?

I'm currently working on a Petrel plug-in in which I need to run a simulation case (inside a for loop). I create my case runner, export it, and then run it... but after the simulation finishes and the console closes, I check the CaseRunner.IsRunning property and it still shows true! As a result, the results are not loaded into the Petrel system.
I tried to load the results manually after the run of my case finished (using the case runner and also using a batch file in my code), but I can't see any results in the programming environment.
Does anybody have a solution for this situation?
This is the related part of my code:
Case theCase = arguments.TheCase;
Case Test2 = simroots.CreateCase(theCase, "FinalCase");
CaseRunner cRunners = SimulationSystem.GetCaseRunner(Test2);
cRunners.Export();
cRunners.Run();
bool b = cRunners.IsRunning;
Actually, I did check when the process finishes; after cRunners.Run() the code waits for the process to exit using:
System.Diagnostics.Process[] parray = System.Diagnostics.Process.GetProcesses();
foreach (System.Diagnostics.Process pr in parray)
{
    if (pr.ProcessName == "cmd")
    {
        pr.WaitForExit(); // just wait
    }
}
and when the console closes itself, I check cRunners.IsRunning.
However, I'm not an expert... can you show me an example of using CaseRunnerMonitor? Both the definition of the derived class and its use.
All I need is to run a simulation case n times via a for loop and, after each run, access the summary results it produced.
I tried several different scenarios to get the results I want; here are some of them.
First I create my CaseRunnerMonitor class:
public class MyMonitor : CaseRunnerMonitor
{
    //…
    public override void RunCompleted()
    {
        // define arguments
        foreach (Slb.Ocean.Petrel.DomainObject.Simulation.SummaryResult sr in simroot.SummaryResults)
        {
            IEnumerable ….
            List ….
            // some code to change the input arguments according to the current step's simulation summary results
        }
        PetrelLogger.InfoOutputWindow("MyMonitor is completed!");
    }
    //…
}
And then use it:
private void button1_Click(object sender, EventArgs e)
{
    // Some code that defines some arguments…
    for (int j = 0; j < 8; j++)
    {
        // some changes to the arguments
        Case MyTest;
        MyMonitor monit4 = new MyMonitor();
        SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
        using (ITransaction trans = DataManager.NewTransaction())
        {
            trans.Lock(simroot);
            MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
            trans.Commit();
        }
        CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
        cRun.Export();
        cRun.Run(monit4);
        //Wait(); // waits for the current process to close
    }
}
But the thing is that the MyTest case's results are empty after my run is completed; in this case all the results are loaded into Petrel only when the 8th (last) simulation completes. If I don't activate the Wait() function, all 8 runs are launched almost simultaneously…
So I changed my scenario: the callback after each run reads the simulation results, changes something, and starts the next run.
I create my CaseRunnerMonitor class:
public class MyMonitor2 : CaseRunnerMonitor
{
    //…
    public override void RunCompleted()
    {
        // define arguments
        index++;
        if (index <= 8)
        {
            foreach (Slb.Ocean.Petrel.DomainObject.Simulation.SummaryResult sr in simroot.SummaryResults)
            {
                IEnumerable ….
                List ….
                // some code to change the input arguments according to the current step's simulation summary results
            }
            Case MyTest;
            MyMonitor2 monit4 = new MyMonitor2();
            SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
            using (ITransaction trans = DataManager.NewTransaction())
            {
                trans.Lock(simroot);
                MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
                trans.Commit();
            }
            CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
            cRun.Export();
            cRun.Run(monit4);
        }
        PetrelLogger.InfoOutputWindow("MyMonitor2 is completed!");
    }
    //…
}
And then use it:
private void button1_Click(object sender, EventArgs e)
{
    index = 0;
    // Some code that defines some arguments…
    // some changes to the arguments
    Case MyTest;
    MyMonitor2 monit5 = new MyMonitor2();
    SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
    using (ITransaction trans = DataManager.NewTransaction())
    {
        trans.Lock(simroot);
        MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
        trans.Commit();
    }
    CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
    cRun.Export();
    cRun.Run(monit5);
}
In this situation no Wait() function is required. But the problem is that I can only access the MyTest case results one run behind, i.e. I can view the step 5 results via MyTest.Results once run 6 has completed, while the step 6 results are still empty even though its run has finished.
I check the CaseRunner.IsRunning property and it shows true
This is because CaseRunner.Run() is non-blocking; that is, it starts another thread to launch the run. Control flow then passes immediately to your cRunners.IsRunning check, which is true because the simulation is still in progress.
cRunners.Run(); //non-blocking
bool b = cRunners.IsRunning;
You should look at CaseRunnerMonitor if you want a call-back when the simulation is complete.
Edit:
can you show me an example of using CaseRunnerMonitor? both definition of the derived class and its implementation.
Create your monitor class:
public class CustomCaseRunnerMonitor : CaseRunnerMonitor
{
    //...
    public override void RunCompleted()
    {
        // This is probably the callback you want
    }
}
Use it:
Case myCase = WellKnownSimulators.ECLIPSE100.CreateSimulationCase(...);
CaseRunner runner = SimulationSystem.GetCaseRunner(myCase);
var myMonitor = new CustomCaseRunnerMonitor(...);
runner.Run(myMonitor);
//Your callbacks defined in your CustomCaseRunnerMonitor will now be called
See also "Running and monitoring a Simulation" in SimulationSystem API documentation.
Ah, OK. I didn't realise you were trying to load results with the CaseMonitor.
I'm afraid the short answer is "No, you can't know when Petrel has loaded results".
The long answer is that Petrel will automatically load results if the option is set in the Case arguments (Define Simulation Case -> Advanced -> Automatically load results).
In API:
EclipseFormatSimulator.Arguments args = EclipseFormatSimulator.GetEclipseFormatSimulatorArguments(myCase);
EclipseFormatSimulator.Arguments.RuntimeArguments runtimeArgs = args.Runtime;
runtimeArgs.AutoLoadResults = true;
runtimeArgs.AutoLoadResultsInterval = 120; //How frequently in seconds Petrel polls sim dir.
You will have to poll SimulationRoot.SummaryResults (using the same API you are already using) after the case has finished.
You should use the CaseRunnerMonitor we discussed to determine when to start doing this, rather than the System.Diagnostics.Process[] parray = System.Diagnostics.Process.GetProcesses(); code you currently have.

Using Rhino Mocks, why does invoking a mocked method via a property during test initialization return Expected #1, Actual #0?

I currently have a test for the presenter in my MVP model. On my presenter I have a property whose setter calls into my View, which in my test is mocked out. In the initialization of my test, after I set my View on the Presenter to be the mocked View, I set this property on the Presenter, which calls the mocked method.
In my test I do not have an Expect.Call for the method I invoke, yet when I run it I get this Rhino Mocks exception:
Rhino.Mocks.Exceptions.ExpectationViolationException: IView.MethodToInvoke(); Expected #1, Actual #0.
From what I understand of Rhino Mocks, as long as I invoke the mock outside the Expecting block it should not be recording this. I would expect the test to pass. Is there a reason it is not passing?
Below is some code to show my setup.
public class Presenter
{
    public IView View;

    public Presenter(IView view)
    {
        View = view;
    }

    private int _property;

    public int Property
    {
        get { return _property; }
        set
        {
            _property = value;
            View.MethodToInvoke();
        }
    }
}
... Test Code Below ...
[TestInitialize]
public void Initilize()
{
    _mocks = new MockRepository();
    _view = _mocks.StrictMock<IView>();
    _presenter = new Presenter(_view);
    _presenter.Property = 1;
}

[TestMethod]
public void Test()
{
    Rhino.Mocks.With.Mocks(_mocks).Expecting(delegate
    {
    }).Verify(delegate
    {
        _presenter.SomeOtherMethod();
    });
}
Why in the world would you want to test the same thing each time a test is run? If you want to test that a specific thing happens, you should check it in a single test.
The pattern you are using now implies that you need to:
- set up the prerequisites for the test
- perform the behavior
- check that the behavior is correct
and then repeat that several times in one test.
You should start testing one thing per test; that helps make the tests clearer and makes it easier to use the AAA syntax.
There are several things to discuss here, but it would certainly be clearer if you did it something like this:
[TestMethod]
public void ShouldCallInvokedMethodWhenSettingProperty()
{
    var viewMock = MockRepository.GenerateMock<IView>();
    var presenter = new Presenter(viewMock);

    presenter.Property = 1;

    viewMock.AssertWasCalled(view => view.InvokedMethod());
}
Read up more on Rhino Mocks 3.5 syntax here: http://ayende.com/Wiki/Rhino+Mocks+3.5.ashx
What exactly are you trying to test in the Test method?
You should try to avoid using strict mocks.
I suggest using the Rhino's AAA syntax (Arrange, Act, Assert).
The problem lay in me not understanding the record/verify cycle that strict mocks use. To fix the issue I was having, this is how I changed my TestInitilize function. It basically does a quick check of the initial state I'm setting up for all my tests.
[TestInitialize]
public void Initilize()
{
    _mocks = new MockRepository();
    _view = _mocks.StrictMock<IView>();
    _presenter = new Presenter(_view);
    Expect.Call(delegate { _presenter.View.InvokedMethod(); });
    _mocks.ReplayAll();
    _mocks.VerifyAll();
    _mocks.BackToRecordAll();
    _presenter.Property = 1;
}