In our test environment, some tests fail intermittently under certain circumstances.
So my question is: what can be done to re-run only the failed NUnit tests?
My idea is to implement some steps in the NUnit TearDown to re-run the failed test, as below:
[TearDown]
public void TearDownTest()
{
    // NUnit 3 syntax; on NUnit 2.6 use TestContext.CurrentContext.Result.Status
    TestStatus state = TestContext.CurrentContext.Result.Outcome.Status;
    if (state == TestStatus.Failed)
    {
        // if so, is it possible to rerun the test here??
    }
}
My requirement is: I want to try running a failed test up to three times, i.e. retry it if it fails on the first and second attempts.
Can anybody share their thoughts on this?
Thanks in advance
Anil
Instead of using the teardown, I'd rather use the XML report: run some XSLT over it to figure out the failing fixtures and feed them back into a build step that re-runs the tests.
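Instead of XSLT, here is a rough sketch of the same filtering done with LINQ to XML, assuming the NUnit 2.x TestResult.xml schema in which a failed test-case element carries success="False" and a fully qualified name. The file names and the overall wiring are illustrative, not a definitive implementation.

using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class CollectFailedTests
{
    static void Main(string[] args)
    {
        string reportPath = args.Length > 0 ? args[0] : "TestResult.xml";
        XDocument report = XDocument.Load(reportPath);

        // Fully qualified names of failed test cases; the owning fixture is
        // everything before the last '.' if you prefer to re-run whole fixtures.
        var failed = report.Descendants("test-case")
                           .Where(tc => (string)tc.Attribute("success") == "False")
                           .Select(tc => (string)tc.Attribute("name"))
                           .ToList();

        File.WriteAllLines("failed-tests.txt", failed);
        Console.WriteLine("Collected {0} failed test(s).", failed.Count);
    }
}

A second build step can then pass that list back to the console runner (for example via its option for running specific tests by name). If moving to NUnit 3 is an option, the built-in [Retry(3)] attribute covers the "retry a failing test up to three times" requirement directly.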
I am trying to add a RunListener to the Karate parallel runner. I have done this for the Karate runner by using Karate.class with a custom runner. I am writing the data to InfluxDB and generating reports in Grafana, and I was able to achieve this successfully with the Karate runner. Below is the code snippet; I run my Karate tests through this custom runner, where I have added the listener. I want to achieve the same for the parallel runner.
@Override
public void run(RunNotifier notifier) {
    notifier.addListener(new ExecutionListener());
    super.run(notifier);
}
This is not directly possible; the parallel runner is a very specific implementation and, by design, has nothing to do with JUnit.
Since you seem to be experienced in adding JUnit listeners and the like, you can refer to this code in case it gives you any ideas.
CliExecutionHook.java.
For more details about the "ExecutionHook", refer to this: https://github.com/intuit/karate/issues/970#issuecomment-557443551
But let me say I think you are unnecessarily putting effort into reports that will not really get you any benefit in the long run except for "looking good" :) And if you feel something needs to change in Karate, please contribute; it is open-source.
Thanks for your suggestion, Peter. I was able to send scenario and test-run details to InfluxDB in order to generate reports in Grafana. I just made use of the Karate results, extracted all the values required, and called this from a JUnit @After method.
public void writeScenarioResults(List<ScenarioResult> results) {
    String status;
    for (ScenarioResult result : results) {
        status = result.isFailed() ? "FAIL" : "PASS";
        gb.sendTestMethodStatus(result.getScenario().getName(), status, build);
    }
}
So, in TearDown I have the info about the test outcome and the test result message, but I would like to handle things differently depending on whether the test was run solo (a single test in the test session) or as part of a whole set of tests (e.g. "Run all tests"/"All tests from Solution").
The goal is to detect whether the developer started the test individually (manually, from within Visual Studio) or whether it was started by a Continuous Integration system.
This is what I have so far:
/// <summary>
/// A helper function for resolving problems when string comparison fails.
/// </summary>
/// <remarks>
/// Intended to be used to analyze the detected differences.
/// </remarks>
[TearDown]
public void CompareNonMatchingStringsOnFailure() {
if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed) {
string outputMessage = TestContext.CurrentContext.Result.Message;
if (outputMessage.StartsWith("Expected string to be ")) {
// do extended comparison
// This should only run on single runs, which were initiated manually from visual studio
//...
}
}
}
How to get info about the test run/session in the TearDown method?
You can't do this in the code of a teardown because (1) TearDown is still part of a test and (2) tests are not supposed to know anything about who ran them, why they are running etc. The execution environment knows about the test, but the test does not know the execution environment. In fact, NUnit goes to a lot of trouble to make sure things work the same in each environment. While there are ways to trick NUnit, they are generally bad ideas and version-dependent.
Here's what you can do...
1. Create a fixture that inherits from your fixture.
2. Put the logic you want in the new fixture's TearDown method.
3. Mark the new fixture as [Explicit].
4. Do not add any categories to the new fixture.
Because of (3) the new fixture will not run as part of CI or even from the IDE when you run all tests.
It can only be run explicitly. Since it has no categories, that means it can only be run by name... i.e. by selecting the entire fixture or a single test.
That isn't quite what you asked for. If you run the entire fixture, you will get the full comparison for all the inherited test methods. However, it may be sufficient for what you are trying to accomplish.
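A minimal sketch of that layout follows; the class and member names are illustrative, and only the [Explicit] inheritance trick comes from the answer above.

using NUnit.Framework;
using NUnit.Framework.Interfaces;

[TestFixture]
public class StringComparisonTests              // the existing fixture: runs in CI and in "run all"
{
    [Test]
    public void GeneratedTextMatchesTemplate()
    {
        Assert.AreEqual(Expected(), Actual());
    }

    protected static string Expected() { return "expected text"; }
    protected static string Actual()   { return "actual text"; }
}

[TestFixture, Explicit]                          // only runs when selected by name
public class StringComparisonTestsWithDiagnostics : StringComparisonTests
{
    [TearDown]
    public void CompareNonMatchingStringsOnFailure()
    {
        if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
        {
            string message = TestContext.CurrentContext.Result.Message;
            if (message != null && message.StartsWith("Expected string to be "))
            {
                // do the extended, expensive comparison here
            }
        }
    }
}

Running the derived fixture by name gives you the inherited tests plus the extra teardown; the plain fixture stays fast everywhere else.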
I've just started exploring automated testing, specifically Codeception, as part of my QA work at a web design studio. The biggest issue I'm experiencing is having Codeception fail a test as soon as an assert fails, no matter where it's placed in the code. If my internet connection hiccups or is too slow, things can become difficult. I was wondering if there were methods to provide more control over when Codeception will fail and terminate a test session, or even better, a way to retry or execute a different block or loop of commands when an assert does fail. For example, I would like to do something similar to the following:
if ( $I->see('Foo') )
{
echo 'Pass';
}
else
{
echo 'Fail';
}
Does anyone have any suggestions that could help accomplish this?
You can use a conditional assertion:
$I->canSeeInCurrentUrl('/user/miles');
$I->canSeeCheckboxIsChecked('#agree');
$I->cantSeeInField('user[name]', 'Miles');
The Codeception documentation says:
Sometimes you don't want the test to be stopped when an assertion fails. Maybe you have a long-running test and you want it to run to the end. In this case you can use conditional assertions. Each see method has a corresponding canSee method, and dontSee has a cantSee method.
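Applied to the example in the question, the conditional form would be something like the line below: canSee logs a failed step but lets the rest of the scenario keep running.

$I->canSee('Foo');
// later steps still execute even if 'Foo' was not found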
I'm not sure if I understand it correctly, but I think you should try to use a Cest.
$ php codecept.phar generate:cest suitename CestName
So you could write each test in its own test function. If a test fails, only that test aborts. You can also configure Codeception so that it does not abort and only shows the failing tests in a summary at the end of the run.
See here in the documentation: https://github.com/Codeception/Codeception/blob/2.0/docs/07-AdvancedUsage.md
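A minimal Cest sketch under those assumptions (an acceptance suite whose actor class is AcceptanceTester; the page and texts are made up): each public method is a separate test, so one failing check does not stop the others.

<?php
class FooCest
{
    public function seeFoo(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        $I->see('Foo');
    }

    public function seeBar(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        $I->see('Bar');
    }
}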
Maybe it's better to use:
$I->dontSee('Foo');
Regards
We have a build process that includes unit tests launched via mstest.exe. Sometimes a unit test gets stuck (a message box or an error-report dialog blocks it), or the entire process crashes. I don't know how to find which of the tests is the bad one.
Is there any way to find the name of the currently running test when a unit test is stuck?
Is there a way to find the name of the unit test that ran last?
I don't want to set up a timeout for every single test, because I am not sure what a suitable timeout would be.
A nice solution for me would be to log when each unit test starts and finishes; then I could find the last logged test. Is there a way to log that?
You can use TestContext.TestName for it:
/// MSTest injects the current test context into this property.
public TestContext TestContext { get; set; }

/// Use this method to run code before running each test.
[TestInitialize()]
public void TestInitialize()
{
    YourLogger.LogFormat("Run test {0}", this.TestContext.TestName);
}

/// Use TestCleanup to run code after each test has run.
[TestCleanup()]
public void MyTestCleanup()
{
    YourLogger.LogFormat("Test {0} is done", this.TestContext.TestName);
}
I'm automating functional tests using JUnit, and I ran into a problem: if I follow the rule "one (significant) assert per test method", I end up with a bunch of 6-line test methods for a single test case (17 is the biggest count so far). If I put them all into one test method, I have to comment out failing asserts or leave half of the test never executed.
I don't like the first way because it launches the browser too many times, and browser launch plus login/logout turn out to be more expensive and time-consuming than the test run itself.
The second way is no better, because it introduces a lot of manual work to manage either way.
So, my questions are:
1. What are the best practices for such cases?
2. Is there some way to postpone a test failure until the end of the test? I mean a less important assert that doesn't stop the test run but still causes the test to fail in the end.
UPD: Yes, I'm using Selenium. And I have a parent class for every test class to unify their settings.
You can use @BeforeClass and @AfterClass to launch and shut down the browser once per test class, or you can create a Rule that launches your browser and use it with @ClassRule.
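A minimal sketch of the @ClassRule variant (JUnit 4); the test class name is illustrative, and launchBrowser()/killBrowser() stand for your own helpers.

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class CheckoutFlowTest {

    @ClassRule
    public static ExternalResource browser = new ExternalResource() {
        @Override
        protected void before() {
            launchBrowser();   // runs once before the first test in the class
        }

        @Override
        protected void after() {
            killBrowser();     // runs once after the last test in the class
        }
    };

    @Test
    public void someTest() {
        // test steps against the already-running browser
    }

    static void launchBrowser() { /* start Selenium / the browser */ }
    static void killBrowser()   { /* shut it down */ }
}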
ErrorCollector may be helpful for your second question; there's a sketch at the end of this answer.
@BeforeClass
public static void beforeClass() {
    launchBrowser();
}

@Before
public void before() {
    login();
}

@AfterClass
public static void afterClass() {
    killBrowser();
}
That could be the answer to your problem
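And for postponing failures until the end of a test, here is a minimal ErrorCollector sketch (JUnit 4); the matchers and helper methods are illustrative.

import static org.hamcrest.CoreMatchers.is;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class SoftAssertExampleTest {

    // Collects failures instead of stopping at the first one; the test is
    // reported as failed at the end if anything was collected.
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void severalChecksInOneTest() {
        collector.checkThat("page title", pageTitle(), is("Home"));
        collector.checkThat("user name", userName(), is("Miles"));
        // execution continues past failing checks; all problems are reported together
    }

    private String pageTitle() { return "Home"; }
    private String userName()  { return "Miles"; }
}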