I wrote a script in Selenium with the TestNG framework such that a failed test method will be rerun. This is configured at the test level by specifying, in @Test(), the class that implements IRetryAnalyzer.
TestNG successfully reruns the failed test method, but the output contains the following: on the first run the failed test method is marked as failed, and after the rerun it is marked as skipped. What does this mean? Could anyone help me interpret the result?
Here is the screenshot of the result:
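For context, the kind of test-level retry wiring described above looks roughly like this (a minimal sketch; class and method names are illustrative):

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.Test;

// Minimal retry analyzer: reruns a failed method up to MAX_RETRIES times.
public class SimpleRetry implements IRetryAnalyzer {

    private static final int MAX_RETRIES = 1;
    private int attempts = 0;

    @Override
    public boolean retry(ITestResult result) {
        if (attempts < MAX_RETRIES) {
            attempts++;
            return true; // ask TestNG to run the failed method again
        }
        return false;
    }
}

// Wiring it up at the test level, as described in the question:
class LoginTest {
    @Test(retryAnalyzer = SimpleRetry.class)
    public void loginShouldSucceed() {
        // test body
    }
}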
There is an ongoing issue on the official project.
You're welcome to follow:
https://github.com/cbeust/testng/issues/878
When method GET
Then status 200
ERROR
Undefined step: When method GET
Undefined step: Then status 200
even though the Karate StepDefs file is being picked up.
I am not able to execute the test case through Karate.
From your comment, it is clear you are trying something that Karate does not support:
"I am integrating with our existing Cucumber Framework."
This may be possible in a future version, but please read this ticket very carefully, and if needed continue the discussion there:
https://github.com/intuit/karate/issues/444
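For comparison, a Karate feature is normally executed through Karate's own runner rather than through a separate Cucumber framework. A minimal sketch using the JUnit 4 runner bundled with pre-1.0 Karate versions (package and class names may differ in your Karate version):

import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;

// Runs the *.feature files found in the same package as this class,
// using Karate's own step engine instead of Cucumber step definitions.
@RunWith(Karate.class)
public class SampleTestRunner {
}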
I have an existing project that uses SpecFlow and SpecRun to run some tests against Sauce Labs. I have a BeforeScenario hook that creates a RemoteWebDriver and an AfterScenario hook that closes it down.
I've now moved this into another project (copied the files over, just changed the namespace), and the first test runs fine, but then I get the following error:
An exception of type 'OpenQA.Selenium.WebDriverException' occurred in WebDriver.dll but was not handled in user code
Additional information: Unexpected error. The command you just sent (POST element) has no session ID.
This is generally caused by testing frameworks trying to run commands after the conclusion of a test.
For example, you may be trying to capture a screenshot or retrieve server logs
after selenium.stop() or driver.quit() was called in a tearDown method.
Please make sure this process happens before the session is ended.
I've compared the projects and they're using the same version of SpecFlow and the same .NET version. I can't see any difference between the two projects.
In my steps I have the following line:
public static IWebDriver driver = (IWebDriver)ScenarioContext.Current["driver"];
which I think is the issue: instead of getting a new instance from the ScenarioContext, it's using the previous test's instance, which has now been disposed.
But I can't see why this works in the other project.
I am using the SpecFlow example on GitHub here.
UPDATE
Looks like I've found the issue. In the Default.srprofile the testThreadCount was 1 whereas the value in the working solution was 10. I've now updated this to match and it works.
I'm not sure what this value should be, though. I assume it shouldn't have to equal the number of tests, but then how do I get around my original issue of the shared driver context?
TestThreadCount specifies the number of threads used by SpecFlow+Runner (aka SpecRun) to execute the tests.
Each of the threads is isolated. The default is AppDomain isolation, so every thread runs in a separate AppDomain.
In the Sauce Labs example there are 7 scenarios and the runner is configured to use 10 threads. This means every scenario is executed in a different thread with its own AppDomain. Because no thread executes a second scenario, you do not get this error in the example.
With only one thread, that single thread executes more than one scenario, and you get this issue.
The easiest fix would be to remove the static from the field. You get a new instance of the binding class for every scenario, so you do not need to keep the driver in a static field.
For a better example of how to use Selenium with SpecFlow and SpecFlow+, have a look here: https://github.com/techtalk/SpecFlow.Plus.Examples/tree/master/SeleniumWebTest
You have to adjust the WebDriver class to use Sauce Labs via RemoteWebDriver.
I have a Selenium test suite using TestNG and Reporter to log results on Jenkins. I use Reporter in all methods to log activity to the console, and this in turn appears for each test listed in the Reporter HTML output on Jenkins. What I'd like to do is only see the Reporter log output in the reports for the tests that fail. If a test passes, I'd like to see just the test case name in the report with no logging.
I thought I could do this in my TestNGWatcher class, where I override the onTestSuccess(ITestResult result) method and add the following line:
Reporter.clear();
That single line did what I wanted for the passing tests, but it also disabled Reporter output for the failed tests. It seems to have turned off Reporter output entirely.
Is there a way to 'turn it on' when a test fails and turn it off when a test passes?
Huge thanks in advance!
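For reference, the listener wiring described above looks roughly like this (a minimal sketch; the listener still has to be registered, for example via @Listeners or the suite XML):

import org.testng.ITestResult;
import org.testng.Reporter;
import org.testng.TestListenerAdapter;

public class TestNGWatcher extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult result) {
        // Intended to drop Reporter output for passing tests only,
        // but Reporter.clear() empties the shared output buffer,
        // which also removes the output logged by tests that failed.
        Reporter.clear();
        super.onTestSuccess(result);
    }
}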
In our test environment, there are some tests that fail intermittently under certain circumstances.
So my question is: what can be done to rerun only the failed NUnit tests?
My idea is to implement some steps in the NUnit TearDown to rerun the failed test, as below:
[TearDown]
public void TearDownTest()
{
    // NUnit 3: TestContext.CurrentContext.Result.Outcome.Status
    // (NUnit 2.6.x exposes TestContext.CurrentContext.Result.Status instead)
    TestStatus state = TestContext.CurrentContext.Result.Outcome.Status;
    if (state == TestStatus.Failed)
    {
        // if so, is it possible to rerun the test ??
    }
}
My requirement is: I want to try running a failed test up to three times, i.e. rerun it if it fails the first and second time.
Can anybody suggest their thoughts on this?
Thanks in advance
Anil
Instead of using the teardown, I'd rather use the XML report: use some XSLT to figure out the failing fixtures and feed them back into a build step that runs the tests again.
The PHPUnit Selenium base class has an option to take a screenshot on failure, which is a huge help in finding out why a test failed. The Selenium server, however, returns an error instead of a failure on any error condition other than explicit assert* calls (such as trying to do something with a non-existent element). If I try to take a screenshot after the server reports the error, I get another error saying that the server has already discarded the session. Is there any way to change that behavior?
Update: this is because PHPUnit breaks the connection when it receives an error. I was able to change it by some (rather ugly) manipulation of the PHPUnit code.
Make those interactions into test cases.
For example, in Perl: if it is written as below and fails due to a non-existent element, the script will error out:
$sel->type("email-id","trial\#trial.com");
Whereas if the above step is made into a test case by writing it as follows:
$sel->type_ok("email-id","trial\#trial.com");
then if there is a non-existent element, only the test case will fail, and the script will continue.
So, using TAP (Test Anything Protocol) via the Test::More module: if _ok is appended to a function name, the function's return value is used to determine the fate of the test case,
i.e. a return of '0' means the test failed
and a return of '1' means the test passed.
It is not the Selenium server but the SeleniumTestCase class in PHPUnit 3.4 that automatically sends a stop command when it detects an error (Driver.php, line 921). PHPUnit 3.6 seems to handle errors better.
I think you can override the onNotSuccessfulTest() method and do something like this:
public function onNotSuccessfulTest(Exception $e)
{
    // save a screenshot before the failed session is discarded
    file_put_contents('/xxx/xxx.jpg', $this->currentScreenshot());
    parent::onNotSuccessfulTest($e);
}