markTestSkipped() not working with sausage-based Selenium tests via Sauce Labs

I am using the sausage framework to run parallelized, PHPUnit-based Selenium WebDriver tests through Sauce Labs. Everything works well until I want to mark a test as skipped via markTestSkipped(). I have tried this in two ways.
The first is calling markTestSkipped() in the test method itself:
class MyTest extends WebDriverTestCase // assuming sausage's base test case, since parent::setUp() is called
{
    public function setUp()
    {
        // Some set up
        parent::setUp();
    }

    public function testMyTest()
    {
        $this->markTestSkipped('Skipping test');
    }
}
In this case the test does get skipped, but only after setUp() has run, which performs a lot of unnecessary work for a skipped test. To top it off, PHPUnit does not report the test as skipped -- in fact it doesn't report the test at all. I get the following output:
Running phpunit in 4 processes with <PATH_TO>/vendor/bin/phpunit
Time: <num> seconds, Memory: <mem used>
OK (0 tests, 0 assertions)
The second is calling markTestSkipped() in the setUp() method:
class MyTest extends WebDriverTestCase // assuming sausage's base test case, as above
{
    public function setUp()
    {
        if (!$this->shouldRunTest()) {
            $this->markTestSkipped('Skipping test');
        } else {
            parent::setUp();
        }
    }

    protected function shouldRunTest()
    {
        $shouldrun = true; // placeholder: some checks to see if the test should be run
        return $shouldrun;
    }

    public function testMyTest()
    {
        // run the test
    }
}
In this case setUp() is skipped, but PHPUnit still fails to track the test as skipped and still returns the output above. Any ideas why PHPUnit is not tracking my skipped tests when they are executed in this fashion?

It looks like, at the moment, there is no support for logging markTestSkipped() and markTestIncomplete() results in PHPUnit when using paratest. More precisely, PHPUnit won't log tests that call markTestSkipped() or markTestIncomplete() when it is invoked with arguments['junitLogfile'] set -- and paratest always calls PHPUnit with a JUnit log file.
For more info, see: https://github.com/brianium/paratest/issues/60
I suppose I can hack away at either PHPUnit or paratest...
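One possible workaround (not from the original thread, just a sketch) is to record skips yourself with a PHPUnit test listener, independently of the JUnit log file. This assumes a PHPUnit version from that era that ships PHPUnit_Framework_BaseTestListener and that the listener is registered in phpunit.xml; the class name, file name, and log file name are made up for the example:

<?php
// SkipLoggingListener.php -- hypothetical workaround sketch, not part of sausage or paratest.
class SkipLoggingListener extends PHPUnit_Framework_BaseTestListener
{
    private $logFile;

    public function __construct($logFile = 'skipped-tests.log')
    {
        $this->logFile = $logFile;
    }

    // Called by PHPUnit whenever a test calls markTestSkipped()
    public function addSkippedTest(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        $this->record('SKIPPED', $test, $e);
    }

    // Called by PHPUnit whenever a test calls markTestIncomplete()
    public function addIncompleteTest(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        $this->record('INCOMPLETE', $test, $e);
    }

    private function record($status, PHPUnit_Framework_Test $test, Exception $e)
    {
        $name = method_exists($test, 'getName') ? $test->getName() : get_class($test);
        file_put_contents(
            $this->logFile,
            sprintf("%s: %s (%s)\n", $status, $name, $e->getMessage()),
            FILE_APPEND
        );
    }
}

Registered in phpunit.xml:

<listeners>
    <listener class="SkipLoggingListener" file="SkipLoggingListener.php"/>
</listeners>

Since each paratest worker is a separate phpunit process appending to the same file, this gives a rough tally of skipped and incomplete tests rather than a structured report, but it at least makes them visible.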

Related

Maintain context of Selenium WebDriver while running parallel tests in NUnit?

Using: C#, NUnit 3.9, Selenium WebDriver 3.11.0, Chrome WebDriver 2.35.0
How do I maintain the context of my WebDriver while running parallel tests in NUnit?
When I run my tests with the ParallelScope.All attribute, they reuse the driver and fail.
The Test property in my tests does not persist across [SetUp] - [Test] - [TearDown] without being given a higher scope.
Test.cs
public class Test
{
    public IWebDriver Driver;
    //public Pages pages;
    //anything else I need in a test

    public Test()
    {
        Driver = new ChromeDriver();
    }

    //helper functions and reusable functions
}
SimpleTest.cs
[TestFixture]
[Parallelizable(ParallelScope.All)]
class MyTests
{
    Test Test;

    [SetUp]
    public void Setup()
    {
        Test = new Test();
    }

    [Test]
    public void Test_001()
    {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_002()
    {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_003()
    {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_004()
    {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [TearDown]
    public void TearDown()
    {
        string outcome = TestContext.CurrentContext.Result.Outcome.ToString();
        TestContext.Out.WriteLine("#RESULT: " + outcome);
        if (outcome.ToLower().Contains("fail"))
        {
            //Do something like take a screenshot which requires the WebDriver
        }
        Test.Driver.Quit();
        Test.Driver.Dispose();
    }
}
The docs state: "SetUpAttribute is now used exclusively for per-test setup."
Setting the Test property in [SetUp] does not seem to work.
If this is a timing issue because I'm re-using the Test property, how do I arrange my fixtures so the Driver is unique for each test?
One solution is to put the driver inside the [Test]. But then, I cannot utilize the TearDown method which is a necessity to keep my tests organized and cleaned up.
I've read quite a few posts/websites, but nothing solves the problem. [Parallelizable(ParallelScope.Self)] seems to be the only real solution and that slows down the tests.
Thank you in advance!
The ParallelizableAttribute makes a promise to NUnit that it's safe to run certain tests in parallel, but it doesn't do anything to actually make it safe. That's up to you, the programmer.
Your tests (test methods) have shared state, i.e. the field Test. Not only that, but each test changes the shared state, because the SetUp method is called for each test. That means your tests may not safely be run in parallel, so you shouldn't tell NUnit to run them that way.
You have two ways to go... either use a lesser degree of parallelism or make the tests safe to run in parallel.
Using a lesser degree of parallelism is the easiest. Try using ParallelScope.Fixtures on the assembly or ParallelScope.Self (the default) on each fixture. If you have a large number of independent fixtures, this may give you as good a throughput as you will get doing something more complicated.
Alternatively, to run tests in parallel, each test must have a separate driver. You will have to create it and dispose of it in the test method itself.
In the future, NUnit may add a feature that will make this easier, by isolating each test method in a separate object. But with the current software, the above is the best you can do.

How to leave the browser open when a Behat/Mink test fails

I'm using the selenium2 driver to test my Drupal site with Behat/Mink in a Docker container.
Using the Selenium Standalone-Chrome container, I can watch my Behat tests fail, but the problem is that as soon as they fail, the browser is closed, which makes it harder for me to see what the problem is.
I'm running my tests like this:
behat --tags '@mystuff' --config=behat-myconfig.yml --strict --stop-on-failure
Is there a way to leave the remote-controlled browser open even when a test fails?
By default it is not possible.
You might find some hack to do it, but it is not recommended: each scenario should be isolated, and keeping the browser open is not a good solution when running a suite with multiple tests.
For a one-off investigation, see if you can reuse the screenshot logic and set a breakpoint instead.
In general, you should use verbose output (-vvv for Behat 3) plus an IDE debugger to debug your code.
Finally I found a good solution for this: behat-fail-aid.
Add the fail aid to your FeatureContext and then run behat with the --wait-on-failure option:
The --wait-on-failure={seconds} option can be used to investigate/inspect failures in the browser.
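For example, reusing the command from the question (the 300-second wait is just an illustrative value):

behat --tags '@mystuff' --config=behat-myconfig.yml --strict --stop-on-failure --wait-on-failure=300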
You can take a screenshot whenever an error occurs using Behat hook "AfterStep".
Consider having a look at the Panther Driver or DChrome Driver.
Here is a shortened example that also handles non-JavaScript tests (which are faster):
use Behat\Behat\Hook\Scope\AfterStepScope;
use Behat\Mink\Driver\Selenium2Driver;

/** Context Class Definition ... */

/**
 * @AfterStep
 */
public function takeScreenShotAfterFailedStep(AfterStepScope $scope)
{
    // 99 corresponds to a failed step result (TestResult::FAILED)
    if (99 !== $scope->getTestResult()->getResultCode()) {
        return;
    }
    $this->takeAScreenShot('error');
}

private function takeAScreenShot($prefix = 'screenshot')
{
    $baseName = sprintf('PATH_FOR_YOUR_SCREENSHOTS/%s-%s', $prefix, (new \DateTime())->format('Y_m_d_H_i_s'));
    if ($this->supportsJavascript()) {
        // Real browser session: save a PNG screenshot
        $extension = '.png';
        $content = $this->getSession()->getScreenshot();
    } else {
        // Non-JavaScript driver: save the page HTML instead
        $extension = '.html';
        $content = $this->getSession()->getPage()->getOuterHtml();
    }
    file_put_contents(sprintf('%s%s', $baseName, $extension), $content);
}

private function supportsJavascript()
{
    return $this->getSession()->getDriver() instanceof Selenium2Driver;
}

Can't perform a Laravel 4 action/route test more than once

I'm running into a Laravel 4 testing issue: An action/route test can only be run once, and it has to be the first test run. Any subsequent action/route test will fail with an exception before the assert is called.
Route/action tests run as long as they are the first test run.
Non-route/action tests run normally, although they cause subsequent route/action tests to throw an exception.
It's important to note that the tests in question don't fail; they throw an exception when the action is fired, for example:
Symfony\Component\Routing\Exception\RouteNotFoundException: Unable to generate a URL for the named route "home" as such route does not exist.
Sample test class:
class ExampleTest extends TestCase {

    // passes
    public function testOne()
    {
        $class = MyApp::ApiResponse();
        $this->assertInstanceOf('\MyApp\Services\ApiResponse', $class);
    }

    // this fails unless moved to the top of the file
    public function testRoute()
    {
        $this->route('GET', 'home');
        $this->assertTrue($this->client->getResponse()->isOk());
    }

    // passes
    public function testTwo()
    {
        $class = MyApp::ProjectService();
        $this->assertInstanceOf('\MyApp\Services\ProjectService', $class);
    }
}
This is implementation-specific; a fresh Laravel 4 project does not exhibit the issue. What could be causing this behaviour? How would you go about tracking down the problem?
In this case, the routes file was being loaded with an include_once call. When subsequent tests re-bootstrapped the application, the file was not included again (it had already been included once in the same PHP process), so the route collection was empty.
Changing include_once to include() fixed the issue exhibited in the question.
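A minimal sketch of the kind of change involved; the bootstrap location and the app_path('routes.php') path are assumptions for illustration, since the original setup was implementation-specific:

// Before: included only once per PHP process, so when the test framework
// re-bootstraps the application for the next test, no routes get registered.
// include_once app_path('routes.php');

// After: included on every bootstrap, so each test sees the routes.
include app_path('routes.php');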

Do I need to recreate my driver for each test?

I've just started using Selenium - currently I'm only interested in IE as it's an intranet site and not for public consumption. I'm using IEDriverServer.exe to set my browser sessions up, but I'm unsure as to whether I need to recreate it for each test, or if it will maintain atomicity of the browser sessions/tests automatically. I've not been able to find any information on this as most of the examples are for a single test rather than a batch of unit tests.
So currently I have
[TestInitialize]
public void SetUp()
{
    _driver = new InternetExplorerDriver();
}
and
[TestCleanup]
public void TearDown()
{
    _driver.Close();
    _driver.Quit();
}
Is this correct or am I doing extra unnecessary work for each test? Should I just initialise it when it's declared? If so, how do I manage its lifecycle? I presume I can call .Close() for each test to kill the browser window, but what about .Quit()?
I use Selenium with NUnit, but you don't need to recreate the driver every time. Since you are using MSTest, I would do something like this:
[ClassInitialize]
public static void SetUp(TestContext context) // MSTest requires ClassInitialize methods to be static and take a TestContext
{
    _driver = new InternetExplorerDriver(); // _driver must therefore be a static field
}

[ClassCleanup]
public static void TearDown() // ClassCleanup methods must be static as well
{
    _driver.Close();
    _driver.Quit();
}
ClassInitialize runs code once when the test class is initialised, and ClassCleanup runs code once when the test class is torn down / disposed.
Even this is not guaranteed, though, because the test runner may run the tests on several threads:
http://blogs.msdn.com/b/nnaderi/archive/2007/02/17/explaining-execution-order.aspx
You must also think about what state you want your tests to start from each time. The most common reason for shutting down and starting a new browser session for each test is that it gives you a clean slate to work with.
Sometimes that is unnecessary work, as you've pointed out, but what is your tests' starting point?
For me, I use one browser per test class, with a method that signs out of my web application and returns to the login page at the end of each test.

phpunit database test after teardown

I want to execute several tests in a test case / test suite (via Selenium) and hook a database check onto the end of every tearDown (using assertions, which I assumed could not be called in tearDown()).
So the workflow would be:
Set up the connection to the database and the schema in setUpBeforeClass()
Set up the database contents in setUp()
Execute test01
Tear down the contents
Assert that every table in the database has a row count of zero.
So is there a way to hook an additional assert onto the end of every tearDown?
I tried doing the setup in assertPreConditions and the teardown in assertPostConditions, but that's kind of ugly.
Thanks in advance
It seems you can use an assert anywhere, even in tearDown(). This test case (save as testTearDown.php, run with phpunit testTearDown.php) correctly gives a fail:
class TearDownTest extends PHPUnit_Framework_TestCase
{
    /** */
    public function setUp(){
        echo "In setUp\n";
        //$this->assertTrue(false);
    }

    /** */
    public function tearDown(){
        echo "In tearDown\n";
        $this->assertTrue(false);
    }

    /** */
    public function assertPreConditions(){
        echo "In assertPreConditions\n";
        //$this->assertTrue(false);
    }

    /** */
    public function assertPostConditions(){
        echo "In assertPostConditions\n";
        //$this->assertTrue(false);
    }

    /** */
    public function testAdd(){
        $this->assertEquals(3, 1 + 2);
    }
}
But, one rule of thumb I have is: if the software is making my life difficult, maybe I'm doing something wrong. You wrote that after the tearDown code has run you want to: "Assert that every table in the database has a row count of zero."
This sounds like you want to validate that your unit test code has been written correctly -- in this case, that tearDown has done its job. It is not really anything to do with the code you are actually testing. Using the PHPUnit assert mechanisms for that would be confusing and misleading: in my sample above, when tearDown() asserts, PHPUnit tells me that testAdd() has failed. If it is actually code in tearDown() that is not working correctly, I want to be told that instead. So, for validating your unit test code, why not use PHP's own assert()?
So I wonder if the tearDown() function you want could look something like this:
public function tearDown(){
    // tidyUpDatabase() and selectCount() are placeholders for whatever
    // cleanup and row-counting helpers your test case provides.
    tidyUpDatabase();

    $cnt = selectCount("table1");
    assert($cnt == 0);

    $cnt = selectCount("table2");
    assert($cnt == 0);
}
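For completeness, here is one hypothetical way selectCount() could be implemented as a method of the test case (so the calls above would read $this->selectCount('table1')); it assumes a PDO connection created in setUpBeforeClass() and kept in a static property, none of which is in the original answer:

private static $pdo; // created in setUpBeforeClass(), e.g. new PDO('sqlite::memory:')

private function selectCount($table)
{
    // $table comes from the test code itself, never from user input,
    // so building the query by concatenation is acceptable here.
    $stmt = self::$pdo->query('SELECT COUNT(*) FROM ' . $table);
    return (int) $stmt->fetchColumn();
}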