Create screenshot after failed selenium command

The PHPUnit Selenium base class has an option to take a screenshot on failure, which is a huge help in finding out why a test failed. The Selenium server, however, returns an error instead of a failure on any error condition other than explicit assert* calls (such as trying to do something with a non-existent element). If I try to take a screenshot after the server reports the error, I get another error saying that the server has already discarded the session. Is there any way to change that behavior?
Update: this is because PHPUnit breaks the connection when it receives an error. I was able to change it by some (rather ugly) manipulation of the PHPUnit code.

Write those interactions as test cases.
For example, in Perl:
If the step is written as below and fails because the element does not exist, the script will error out:
$sel->type("email-id", "trial\@trial.com");
Whereas if the above step is made into a test case by writing it as follows:
$sel->type_ok("email-id", "trial\@trial.com");
then a non-existent element only fails that test case, and the script continues.
So with TAP (the Test Anything Protocol), via the Test::More module (use Test::More;), appending _ok to a function makes its return value determine the fate of the test case:
a return of '0' means the test failed, and
a return of '1' means the test passed.

It is not the Selenium server but the SeleniumTestCase class for PHPUnit 3.4 that automatically sends a stop command when it detects an error (Driver.php line 921). PHPUnit 3.6 seems to handle errors better.

I think you can override the onNotSuccessfulTest() method and do something like this:
public function onNotSuccessfulTest(Exception $e)
{
    // save the screenshot before the session is torn down
    file_put_contents('/xxx/xxx.jpg', $this->currentScreenshot());
}
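For context, a fuller version of that override might look like the sketch below. This is a minimal sketch, assuming currentScreenshot() is available on the test class as in the snippet above; the class name and file path are illustrative. Note that the parent method re-throws the exception, so the failure is still reported:
class AccountTest extends PHPUnit_Extensions_SeleniumTestCase
{
    // PHPUnit invokes this hook for both errors and failures,
    // before the Selenium session is torn down
    protected function onNotSuccessfulTest(Exception $e)
    {
        // illustrative path; currentScreenshot() is assumed, as above
        file_put_contents('/tmp/' . $this->getName() . '.png', $this->currentScreenshot());
        parent::onNotSuccessfulTest($e); // re-throws the exception
    }
}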


Browser not quitting when called from another scenario using Karate

I have a scenario which is a series of rest api calls but in the middle is a section that executes a few steps within a chrome browser. The browser steps are common to another scenario so I tried to extract the browser steps into a separate feature that could then be called from multiple scenarios.
When the main scenario executes, it runs the browser feature but fails to auto-close the browser after execution. I read in the documentation that "Karate will close the browser automatically after a Scenario unless the driver instance was created before entering the Scenario". The 'configure driver' code is in the callable scenario.
I also tried calling quit(), but this resulted in the error: "The forked VM terminated without properly saying goodbye. VM crash or System.exit called?"
Does anyone know how I can ensure the browser closes in this circumstance?
UPDATE: As suggested by @PeterThomas, I started to craft a full example to replicate this, and discovered that it is actually quite simple to replicate.
If the UI feature is called like this then the browser is closed after execution:
* call read('classpath:/ui/callable/GoogleSearch.feature')
If called like this then the browser remains open:
* def result = call read('classpath:/ui/callable/GoogleSearch.feature')
My UI scenario scrapes a value from a web page, which I then store with '* def ticket' inside the called feature. I was hoping to access it via result.ticket. Since I am unable to do this, I am successfully using the following workaround:
* def extractedTicket = { value: '' }
* call read('classpath:/ui/callable/GoogleSearch.feature')
* def ticket = extractedTicket.value
And within the called feature:
* set extractedTicket.value = karate.extract(val, '.ticket=(.*?)&', 1)
First, I think you should provide a way to replicate this, so that we can investigate and fix this for everyone. Please follow this process: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue
That said, for just getting Chrome to do a few steps, you could use the Java API instead - you can call it from wherever you want, even within a feature file using Java interop: https://github.com/karatelabs/karate#java-api
Also see if this answer gives you any pointers: https://stackoverflow.com/a/60387907/143475

Error running multiple tests in Specflow/Selenium

I have an existing project that uses SpecFlow and SpecRun to run some tests against Sauce Labs. I have a BeforeScenario hook that creates a RemoteWebDriver and an AfterScenario hook that closes it down.
I've now moved this into another project (copied the files over, just changed the namespace), and the first test runs fine, but then I get the following error:
An exception of type 'OpenQA.Selenium.WebDriverException' occurred in WebDriver.dll but was not handled in user code
Additional information: Unexpected error. The command you just sent (POST element) has no session ID.
This is generally caused by testing frameworks trying to run commands after the conclusion of a test.
For example, you may be trying to capture a screenshot or retrieve server logs
after selenium.stop() or driver.quit() was called in a tearDown method.
Please make sure this process happens before the session is ended.
I've compared the projects and they use the same version of SpecFlow and the same .NET version. I can't see any difference between the two projects.
In my steps I have the following line:
public static IWebDriver driver = (IWebDriver)ScenarioContext.Current["driver"];
which I think is the issue: instead of getting a new instance from the ScenarioContext, it is using the previous test's instance, which has now been disposed.
But I can't see why this works in the other project.
I am using the SpecFlow example on GitHub here
UPDATE
Looks like I've found the issue. In the Default.srprofile the testThreadCount was 1 whereas the value in the working solution was 10. I've now updated this to match and it works.
Not sure what this value should be though. I assume it shouldn't be the same as the number of tests, but then how do I get around my original issue of the shared driver context?
TestThreadCount specifies the number of threads used by SpecFlow+Runner (aka SpecRun) to execute the tests.
The threads are isolated from one another. The default is AppDomain isolation, so every thread runs in a separate AppDomain.
In the Sauce Labs example there are 7 scenarios and the runner is configured to use 10 threads. This means every scenario is executed in a different thread with its own AppDomain. Because no thread ever executes a second scenario, the example does not hit this error.
With only one thread, that thread executes more than one scenario, and you get this issue.
The easiest fix is to remove the static from the field. You get a new instance of the binding class for every scenario, so there is no need to keep the driver in a static field.
For a better example of how to use Selenium with SpecFlow and SpecFlow+, have a look here: https://github.com/techtalk/SpecFlow.Plus.Examples/tree/master/SeleniumWebTest
You have to adjust the WebDriver class to use Sauce Labs via RemoteWebDriver.

Preventing asserts from failing tests in Codeception

I've just started exploring automated testing, specifically Codeception, as part of my QA work at a web design studio. The biggest issue I'm experiencing is that Codeception fails a test as soon as an assert fails, no matter where the assert is placed in the code. If my internet connection hiccups or is too slow, things can become difficult. I was wondering if there are methods to provide more control over when Codeception fails and terminates a test session, or, even better, a way to retry or execute a different block or loop of commands when an assert fails. For example, I would like to do something similar to the following:
if ($I->see('Foo')) {
    echo 'Pass';
} else {
    echo 'Fail';
}
Does anyone have any suggestions that could help accomplish this?
You can use a conditional assertion:
$I->canSeeInCurrentUrl('/user/miles');
$I->canSeeCheckboxIsChecked('#agree');
$I->cantSeeInField('user[name]', 'Miles');
The Codeception documentation says:
Sometimes you don't want the test to be stopped when an assertion fails. Maybe you have a long-running test and you want it to run to the end. In this case you can use conditional assertions. Each see method has a corresponding canSee method, and dontSee has a cantSee method.
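If you really need branching like the if/else in the question, one workaround is to catch the assertion exception yourself. This is a sketch of a common pattern, not an official Codeception API, and it relies on acceptance-test steps executing immediately; reloadPage() is from the WebDriver module:
try {
    $I->see('Foo');
    // happy path: carry on with the rest of the steps
} catch (PHPUnit_Framework_AssertionFailedError $e) {
    // fallback: for example, reload once and check again
    $I->reloadPage();
    $I->see('Foo');
}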
I'm not sure if I understand it correctly, but I think you should try to use a Cest:
$ php codecept.phar generate:cest suitename CestName
That way you write each test in its own test function. If a test fails, only that test aborts. You can also configure Codeception so that it does not abort, and instead shows only the failing tests in a summary at the end of the run.
See here in the documentation: https://github.com/Codeception/Codeception/blob/2.0/docs/07-AdvancedUsage.md
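For illustration, a minimal Cest built that way might look like this; it assumes a standard acceptance suite whose actor class is AcceptanceTester, and the page and texts are made up:
class FooCest
{
    public function seeFoo(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        $I->canSee('Foo'); // conditional: records a failure but keeps running
    }

    public function seeBar(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        $I->canSee('Bar'); // executes even if seeFoo failed
    }
}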
Maybe it's better to use:
$I->dontSee('Foo');
Regards

SoapUI with Groovy Script calling multiple APIs

I am using SoapUI with a Groovy script and running into an issue when calling multiple APIs. In the system I am testing, one WSDL/API handles the account registration and returns an authenticator. I then use that returned authenticator to call a different WSDL/API and verify some information. I am able to call each of these WSDLs/APIs separately, but when I put them together in a Groovy script it doesn't work.
// "props" is assumed to be defined earlier, e.g. from a Properties test step:
// def props = testRunner.testCase.getTestStepByName("Properties")
testRunner.runTestStepByName("RegisterUser");
testRunner.runTestStepByName("Property Transfer");
if (props.getPropertyValue("userCreated") == "success") {
    testRunner.runTestStepByName("AuthenticateStoreUser");
}
To explain: the first line runs the test step "RegisterUser". I then run a "Property Transfer" step, which takes a few response values from "RegisterUser": the first is "Status", to see if it succeeded or failed; the second is the "Authenticator". I then use an if statement to check that "RegisterUser" succeeded before calling "AuthenticateStoreUser". Up to this point everything looks fine. But when it calls "AuthenticateStoreUser" it shows the thinking bar, then fails as if it timed out, and if I check the "raw" tab for the request it says
<missing xml data>.
Note that if I try "AuthenticateStoreUser" by itself, the call works fine. It is only after calling "RegisterUser" in the Groovy script that it behaves strangely. I have tried this with a few different calls and believe the issue lies in calling two different APIs.
Has anyone dealt with this scenario, or can anyone provide further direction on what may be happening?
(I would have preferred to simply comment on the question, but I don't have enough rep yet)
Have you checked the Error log tab at the bottom when this occurs? If so, what does it say and is there a stacktrace you could share?

Getting errors in Selenium test cases

I'm running Selenium test cases written by somebody else on my system. It is showing a few errors like:
> [error] Actual value 'null' did not match '[object Object]'
> [error] Threw an exception: this.browserbot.getUserWindow().map is undefined
> [error] Threw an exception: this.browserbot.getUserWindow().map is undefined
Is the problem with the Selenium IDE version I'm using, or something else? I'm using Selenium IDE 1.6.0.
This issue occurs because you are trying to get hold of a window that does not match the value you are passing.
So you need to investigate your locators further. You could also start using Selenium Builder, a tool that can find locators for you automatically. I hope the following links help:
http://khyatisehgal.wordpress.com/2014/05/26/selenium-builder-exporting-and-execution/
http://khyatisehgal.wordpress.com/2014/05/25/selenium-builder/
The errors you're getting are not caused by the Selenium version, but by the logic of your application having changed, so the tests now expect different results from various actions.
The only thing you can do is go through your tests, discover what they were trying to assert / achieve / test (you do have your test cases documented, right?) and, if their current behaviour is wrong, fix them.
The other possibility is that your tests are fine and the application began to behave differently (against the spec) and needs to be fixed. But from the context, I'd say that your unmaintained tests broke.