Use the same browser session while using Selenium with PHPUnit - selenium

I have an application written in PHP, for which I am using Selenium for unit testing with PHPUnit. The problem is that I have to set up the environment before I can run the tests. For example, I have to set session variables, log in and fetch data from a remote server. All this takes a lot of time, and it is not feasible to redo this setup in every test function.
I am looking for a way to use the same browser session for running all the tests. I tried looking for resources online, but couldn't find any good sources on this. The code I have written is:
protected function setUp()
{
    parent::setUp();
    $this->setBrowserUrl("http://localhost/devel/");
}

public function start()
{
    parent::start();
    $this->open("");
    // Setting up the environment here
}

public function testFunction()
{
    // A test function
}

public function testFunction2()
{
    // Another test function
}
But this opens a new browser instance for each of the two functions. Is there any workaround for this? Or is there a command-line parameter for launching the Selenium server that controls this?

"[I am] using selenium for unit testing using phpUnit"
No, you're not. You're using PHPUnit with selenium for functional testing. :-)
But since it's probably not in your best interest to re-invent that wheel, you want Mink: http://mink.behat.org/
It lets you do session-based acceptance testing using a bunch of different drivers. It has Goutte (built on Guzzle) for a headless browser, and can work with Selenium, Sahi and a bunch of others.
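For illustration, a minimal sketch of what driving a Mink session looks like (assuming the behat/mink and behat/mink-goutte-driver packages are installed via Composer; the URL, field names and button label are placeholders):

require_once 'vendor/autoload.php';

use Behat\Mink\Mink;
use Behat\Mink\Session;
use Behat\Mink\Driver\GoutteDriver;

// One named session can be reused across several test methods.
$mink = new Mink(array('goutte' => new Session(new GoutteDriver())));
$mink->setDefaultSessionName('goutte');

$session = $mink->getSession();
$session->visit('http://localhost/devel/login');    // placeholder URL
$page = $session->getPage();
$page->fillField('username', 'admin');               // placeholder field names
$page->fillField('password', 'secret');
$page->pressButton('Log in');                        // placeholder button label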
Also of note, depending on your needs, is Behat: http://behat.org/
It lets you write client-readable test documents that can be turned into Mink-based acceptance tests.
HTH.

Question already answered.
The unaccepted answer did the job for me.
See: How do I run a PHPUnit Selenium test without having a new browser window run for each function?

Related

How can I parameterize my Selenium tests to run through multiple scenarios using Sauce Labs

I have a Selenium automation framework which uses JUnit to run tests locally on a browser of my choice. I currently use JUnitParams to parameterize some of my tests, e.g.:
@RunWith(JUnitParamsRunner.class)
public class loginPage extends BaseTestClass {

    @Test
    @FileParameters(value = "src/test/resources/Test data/login.csv", mapper = CsvWithHeaderMapper.class)
    public void login(String username, String pwd) throws Exception {
    }
}
I have tests for logging into a website, and I use JUnitParams with a CSV file to run through multiple different login scenarios. I am now looking to start using Sauce Labs to run my tests across multiple browser/OS combinations simultaneously. My question is: how do I achieve both the Sauce Labs parallel tests and the parameterized tests at the same time? I have seen examples for Sauce Labs like the following:
https://github.com/saucelabs-sample-test-frameworks/Java-Junit-Selenium
But the issue I will run into is that I cannot use multiple different runners; I need to use a single runner, as the JUnit @RunWith annotation requires. Is there an easy way to combine both the ConcurrentParameterized.class runner used in the Sauce Labs example and the JUnitParamsRunner.class I am currently using for local execution?
EDIT:
I found the following, which confirms I cannot use two separate runners and appears to suggest that merging two runners would be very difficult. Instead I'm guessing I will have to change the way the Sauce Labs integration is handled. https://github.com/Pragmatists/junitparams-spring-integration-example
I would suggest taking a look at SauceryJ. It integrates Jenkins, the Sauce OnDemand plugin, and your testing code with SauceLabs.
Example class here.
Full disclosure: I wrote and maintain SauceryJ

Protractor flakiness

I maintain a complex Angular (1.5.x) application that is being E2E tested using Protractor (2.5.x). I am experiencing a problem with this approach, which presents primarily as flaky tests. Tests that worked perfectly well in one pull request fail in another. This concerns simple locators, such as by.linkText(...). I debugged the failing tests and the app is on the correct page, and the links are present and accessible.
Has anyone else experienced these consistency problems? Knows of a cause or workaround?
Just Say No to More End-to-End Tests!
That said, here are a few things you can do to tackle our mutual merciless "flakiness" enemy:
update to the latest Protractor (currently 4.0.0) which also brings latest selenium and chromedriver with it
turn off Angular animations
use browser.wait() with a set of built-in or custom Expected Conditions. This is probably by far the most reliable way to approach the problem. Unfortunately, it is use-case and problem specific; you would need to modify your actual tests in the problematic places. For example, if you need to click an element, wait for it to be clickable:
var EC = protractor.ExpectedConditions;
var elm = $("#myid");
browser.wait(EC.elementToBeClickable(elm), 5000);
elm.click();
maximize the browser window (to avoid random "element not visible" or "not clickable" errors). Put this in onPrepare():
browser.driver.manage().window().maximize();
increase the Protractor and Jasmine timeouts
slow Protractor down by tweaking the Control Flow (not sure if it works for 4.0.0, please test)
manually call browser.waitForAngular(); in problematic places. I am not sure why this helps but I've seen reports where it definitely helped to fix a flaky test.
use the jasmine done() callback in your specs. This may help, for example, to not start the it() block until done() is called in beforeEach()
return a promise from the onPrepare() function. This usually helps to make sure things are prepared for the test run
use protractor-flake package that would automatically re-run failed tests. More like a quick workaround to the problem
There are also other problem-specific "tricks" like slow typing into the text box, clicking via JavaScript etc.
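For reference, a rough sketch of those last two tricks (the selectors, text and delay are placeholders):

// Clicking via JavaScript instead of the normal click() (a workaround, not a default):
var submit = $("#submit");  // placeholder selector
browser.executeScript("arguments[0].click();", submit.getWebElement());

// "Slow typing": send the value one character at a time with a small pause in between.
function typeSlowly(input, text, delayMs) {
    text.split("").forEach(function (character) {
        input.sendKeys(character);
        browser.sleep(delayMs);
    });
}
typeSlowly($("#username"), "my-user-name", 100);  // placeholder selector and value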
Yes, I think all of us have experienced such flakiness issues.
Actually, flakiness is quite a common issue with any browser automation tool. However, it is supposed to be less of a problem with Protractor, since Protractor has built-in waiting that performs actions only after the DOM has loaded properly. Still, in a few cases you might have to use some explicit waits if you see intermittent failures.
I prefer to use a few intelligent wait methods like:
function waitForElementToClickable(locator) {
    var domElement = element(by.css(locator)),
        isClickable = protractor.ExpectedConditions.elementToBeClickable(domElement);

    return browser.wait(isClickable, 2000)
        .then(function () {
            return domElement;
        });
}
Here 2000 ms is used as the timeout; you can make it configurable using a variable. Sometimes I also fall back to browser.sleep() when none of my intelligent waits work.
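A call site would then look something like this (the selector is just a placeholder):

waitForElementToClickable('#login-button').then(function (loginButton) {
    return loginButton.click();
});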
It's been my experience that some methods (e.g. sendKeys()) do not always fire at the expected time within the controlFlow() queue, and will cause tests to be flaky. I work around this by explicitly adding them to the controlFlow(), e.g.:
this.enterText = function(input, text) {
    return browser.controlFlow().execute(function() {
        input.sendKeys(text);
    });
};
A workaround that my team has been using is to re-run only failed tests using the plugin protractor-errors. Using this tool, we can identify real failures versus flaky tests within 2-3 runs. To add the plugin, just add a require statement to the bottom of the Protractor config's onPrepare function:
exports.config = {
    ...
    onPrepare: function() {
        require('protractor-errors');
    }
}
You will need to pass these additional parameters when you run your tests with the plugin:
protractor config.js --params.errorsPath 'jasmineReports' --params.currentTime (timestamp) --params.errorRun (true or false)
There is also a cli tool that will handle generating the currentTime if you don't have an easy way to pass in a timestamp.

Selenium, questions about code refactoring

With Selenium IDE I generated a sample script to test logging in to a website and checking a value on the site after logging in. My script is (Java):
@Test
public void mytest() throws Exception {
    // Load the home page
    ...
    // Fill in the login form
    ...
    // Check that the login worked
    ...
    // Logged in: click on some element in the page
    ...
    // Logged in: check information X (whether one HTML element contains children or not)
    ...
}
I use JUnit to run the test class from a main class. My question is: what is the best way to refactor my code? I would like to create one class per "step"; is that possible? For example:
A class to load the page and check that there isn't a 404 error
A class to fill in the login form, submit it and check whether the user is logged in
A class to navigate the website and get the information I want
Is this the best way? There isn't a specific goal; I just want to know how to organize the code for maximum reuse (sorry for my bad English).
There are a couple of reasons why you do not want to use Selenium IDE to record the test cases and then refactor the code afterwards. Most of the time Selenium IDE will give you selectors that are not stable enough; to be able to rerun the tests you want to make sure the selectors are stable and do not depend too heavily on the HTML structure. Second, as the test suite grows larger you want to reduce code duplication as much as possible, and with Selenium IDE there is no way to see which code blocks can be reused.
So, the bottom line is: for a good test suite, start building a framework from scratch instead of using Selenium IDE. There are a lot of examples out there showing how to start. I have one with TestNG here if that helps.
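To give an idea of the structure this usually leads to (only a sketch, not the linked TestNG example; the class name, URL and locators are made up), each "step" becomes a page-object class that the tests reuse:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object for the login "step": it owns the selectors and
// exposes intention-revealing methods, so tests never duplicate locators.
public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Load the page. A 404 is usually detected by checking the title or a known
    // element, since Selenium does not expose the HTTP status code directly.
    public LoginPage open(String baseUrl) {
        driver.get(baseUrl + "/login");                            // assumed URL
        return this;
    }

    public LoginPage loginAs(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username); // assumed locators
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
        return this;
    }

    public boolean isLoggedIn() {
        return !driver.findElements(By.cssSelector(".logout-link")).isEmpty(); // assumed locator
    }
}

A test then just chains the steps, for example new LoginPage(driver).open(baseUrl).loginAs("user", "pwd") followed by an assertion on isLoggedIn(); navigation and data-retrieval "steps" get their own classes in the same style.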

Browser session in setUp(), tearDown(), no per testcase setup?

I've previously written some selenium tests using ruby/rspec, and found it quite powerful. Now, I'm using Selenium with PHPUnit, and there are a couple of things I'm missing, it might just be because of inexperience. In Ruby/RSpec, I'm used to being able to define a "global" setup, for each test case, where I, among other things, open up the browser window and log into my site.
I feel that PHPUnit is a bit lacking here, in that 1) you only have setUp() and tearDown(), which are run before and after each individual test, and that 2) it seems that the actual browser session is set up between setUp() and the test, and closed before tearDown().
This makes for a bit more clutter in the tests themselves, because you explicitly have to open the page at the beginning and perform cleanups at the end, in every single test. It also seems like unnecessary overhead to close and reopen the browser for every single test, instead of just going back to the landing page.
Are there any alternative ways of achieving what I'm looking for?
What I have done in the past is to make a protected method that returns an object for the session like so:
protected function initBrowserSession() {
    if (!$this->browserSession) {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://www.example.com/');
        // Initialize session
        $this->open('http://www.example.com/login.php');
        // Do whatever other setup you need here
    }
    $this->browserSession = true;
}

public function testSomePage() {
    $this->initBrowserSession();
    // Perform your test here
}
You can't really use the setupBefore/AfterClass functions since they are static (and as such you won't have access to the instance).
Now, with that said, I would question your motivation for doing so. By having a test that re-uses a session between tests you're introducing the possibility of having side-effects between the tests. By re-opening a new session for each test you're isolating the effects down to just that of the test. Who cares about the performance (to a reasonable extent at least) of re-opening the browser? Doing so actually increases the validity of the test since it's isolated. Then again, there could be something to be said for testing a prolonged session. But if that was the case, I would make that a separate test case/class to the individual functionality test...
Although I agree with @ircmaxell that it might be best to reset the session between tests, I can see the case where tests would go from taking minutes to taking hours just because of restarting the browser.
Therefore, I did some digging, and found out that you can override the start() method in a base class. In my setup, I have the following:
<?php
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

class SeleniumTestCase extends PHPUnit_Extensions_SeleniumTestCase
{
    public function setUp() {
        parent::setUp();
        // Set browser, URL, etc.
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://www.example.com');
    }

    public function start() {
        parent::start();
        // Perform any setup steps that depend on
        // the browser session being started, like logging in/out
    }
}
This will automatically affect any classes that extend SeleniumTestCase, so you don't have to worry about setting up the environment in every single test.
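For example, an individual test class would then just extend it and contain only the tests themselves (the page path and asserted text here are placeholders):

class LoginTest extends SeleniumTestCase
{
    public function testDashboardShowsWelcomeMessage() {
        // setUp() and start() from the base class have already run, so the
        // browser session exists and any login done there is in place.
        $this->open('/dashboard');               // placeholder path
        $this->assertTextPresent('Welcome');     // placeholder assertion
    }
}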
I haven't tested, but it seems likely that there is a stop() method called before tearDown() as well.
Hope this helps.

NAnt, MbUnit, CruiseControl, Selenium - passing settings to the test assembly

I am putting together some ideas for our automated testing platform and have been looking at Selenium for the test runner.
I am wrapping the recorded Selenium C# scripts in an MbUnit test, which is being triggered via the MbUnit NAnt task. The Selenium test client is created as follows:
selenium = new DefaultSelenium("host", 4444, "*iexplore", "http://[url]/");
How can I pass the host, port and url settings into the test so their values can be controlled via the NAnt task?
For example, I may have multiple Selenium RC servers listening and I want to use the same test code passing in each server address instead of embedding the settings within the tests themselves.
I have an approach mocked up using a custom NAnt task I have written but it is not the most elegant solution at present and I wondered if there was an easier way to accomplish what I want to do.
Many thanks if anyone can help.
Thanks for the responses so far.
Environment variables could work; however, we could be running parallel tests via a single test assembly, so I wouldn't want settings to be overwritten during execution, which could break another test. Interesting line of thought though, thanks; I reckon I could use that in other areas.
My current solution involves a custom NAnt task built on top of the MbUnit task, which allows me to specify the additional host, port and URL settings as attributes. These are then saved as a config file within the build directory and read in by the test assemblies. This feels a bit "clunky" to me, as my tests need to inherit from a specific class. Not too bad, but I'd like to have fewer dependencies and concentrate on the testing.
Maybe I am worrying too much!!
I have a base class for all test fixtures which has the following setup code:
[FixtureSetUp]
public virtual void TestFixtureSetup ()
{
    BrowserType = (BrowserType) Enum.Parse (typeof (BrowserType),
        System.Configuration.ConfigurationManager.AppSettings["BrowserType"],
        true);
    testMachine = System.Configuration.ConfigurationManager.AppSettings["TestMachine"];
    seleniumPort = int.Parse (System.Configuration.ConfigurationManager.AppSettings["SeleniumPort"],
        System.Globalization.CultureInfo.InvariantCulture);
    seleniumSpeed = System.Configuration.ConfigurationManager.AppSettings["SeleniumSpeed"];
    browserUrl = System.Configuration.ConfigurationManager.AppSettings["BrowserUrl"];
    targetUrl = new Uri (System.Configuration.ConfigurationManager.AppSettings["TargetUrl"]);

    string browserExe;
    switch (BrowserType)
    {
        case BrowserType.InternetExplorer:
            browserExe = "*iexplore";
            break;
        case BrowserType.Firefox:
            browserExe = "*firefox";
            break;
        default:
            throw new NotSupportedException ();
    }

    selenium = new DefaultSelenium (testMachine, seleniumPort, browserExe, browserUrl);
    selenium.Start ();
    System.Console.WriteLine ("Started Selenium session (browser type={0})",
        browserType);

    // sets the speed of execution of GUI commands
    if (false == String.IsNullOrEmpty (seleniumSpeed))
        selenium.SetSpeed (seleniumSpeed);
}
I then simply supply the test runner with a config file:
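Based on the AppSettings keys read in the fixture setup above, such a file is just an app.config along these lines (the values are placeholders for your own environment):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Placeholder values; point these at the Selenium RC server and site under test -->
    <add key="BrowserType" value="Firefox" />
    <add key="TestMachine" value="selenium-rc-host" />
    <add key="SeleniumPort" value="4444" />
    <add key="SeleniumSpeed" value="500" />
    <add key="BrowserUrl" value="http://myapp.example.com/" />
    <add key="TargetUrl" value="http://myapp.example.com/" />
  </appSettings>
</configuration>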
For MSBuild I use environment variables; I create those in my CC.NET config and then they are available in the script. I think this would work for you too.
Any time I need to integrate with an external entity using NAnt I either end up using the exec task or writing a custom task. Given the information you posted, it would seem that writing your own would indeed be a good solution; however, you state you're not happy with it. Can you elaborate a bit on why you don't think your current solution is an elegant one?
Update
Not knowing the internal details, it seems like you've solved it pretty well with a custom task. From what I've heard, that's how I would have done it.
Maybe a new solution will show itself in time, but for now be light on yourself!