Execute single selenium test multiple times in parallel - selenium

I was wondering if anyone had any advice on an easy way of executing a single Selenium test multiple times in parallel.
I have one test that I would like to execute; it should spin up 10 Chrome instances and run the test 10 times in parallel. The idea is to test load/performance.
I could split the test up into individual classes and get them to run in parallel, but that is a bit overkill. Is there a simpler way of running this with NUnit?
The tests are written in C#, we are using NUnit as the test runner, and we are using BDDfy for the test language.
Bit of a difficult question to write down, but I hope some people understand what I am trying to achieve.

You can do this by adding multiple test cases to the same test, even if you don't have any parameters to pass, and by using ParallelScope.All on the fixture to run all test cases within it in parallel.
[TestFixture, Parallelizable(ParallelScope.All)]
public class MyTestFixture
{
    // Ten (null) entries, so the test below is run ten times.
    private static readonly IEnumerable<string> TestCases = new List<string>(new string[10]);

    [TestCaseSource(nameof(TestCases))]
    public void SingleTestRepeatedMultipleTimes(string testCase)
    {
        // test steps
    }
}
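One caveat, hedged: NUnit caps how many tests actually run at once via the assembly-level LevelOfParallelism attribute (the default is roughly the processor count), so to get all 10 Chrome instances truly concurrent you may need to raise it. Also, each parallel run should create its own ChromeDriver; WebDriver instances are not thread-safe. A minimal sketch, assuming NUnit 3 and the Selenium .NET bindings; the URL is a placeholder:

using System.Collections.Generic;
using NUnit.Framework;
using OpenQA.Selenium.Chrome;

// Allow up to 10 worker threads (the default is the processor count).
[assembly: LevelOfParallelism(10)]

[TestFixture, Parallelizable(ParallelScope.All)]
public class MyTestFixture
{
    private static readonly IEnumerable<string> TestCases = new List<string>(new string[10]);

    [TestCaseSource(nameof(TestCases))]
    public void SingleTestRepeatedMultipleTimes(string testCase)
    {
        // One browser per parallel run.
        using (var driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com"); // placeholder URL
            // test steps
        }
    }
}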

Related

Karate API framework- test dependency

In my regression suite I have 600+ test cases, and all of them have the @RegressionTest tag. See below how I am running them:
_start = LocalDateTime.now();
// see karate-config.js files for env options
System.setProperty("karate.env", "test");
_logger.info("karate.env = " + System.getProperty("karate.env"));
Results results = Runner.path("classpath:functional/Commercial/")
        .tags("@RegressionTest")
        .reportDir(reportDir)
        .parallel(5);
generateReport(results.getReportDir());
assertEquals(0, results.getFailCount(), results.getErrorMessages());
I am thinking that I can create one test and give it a @smokeTest tag. I want to run that test first, and only if it passes, run the entire regression suite. How can I achieve this? I am using JUnit 5 and the Karate Runner.
I think the easiest thing to do is run that one test in JUnit itself, and if it fails, throw an exception or skip running the actual tests.
So use the Runner two times.
Otherwise, consider this not supported directly in Karate, but code contributions are welcome.
Also refer to the answers to this question: How to rerun failed features in karate?
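A minimal sketch of the two-runner approach, assuming a feature tagged @smokeTest exists on the same classpath:

// Gate: run the smoke test first.
Results smoke = Runner.path("classpath:functional/Commercial/")
        .tags("@smokeTest")
        .parallel(1);
assertEquals(0, smoke.getFailCount(), smoke.getErrorMessages());

// Only reached when the smoke test passed: run the full regression suite.
Results regression = Runner.path("classpath:functional/Commercial/")
        .tags("@RegressionTest")
        .reportDir(reportDir)
        .parallel(5);
generateReport(regression.getReportDir());
assertEquals(0, regression.getFailCount(), regression.getErrorMessages());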

How to write same codeception acceptance test case with many different set of inputs

In Codeception acceptance testing, how do you run/write the same test case for many different sets of inputs?
Here is my sample acceptance test (I am using the page object concept).
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I);
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, "test@gmail.com");
        $I->fillField(login::$password, "test");
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
How do I run the same login with multiple sets of inputs in an acceptance test?
I have tried passing the inputs as an array, calling the test case in a for loop with the array values as parameters, and branching on the inputs inside Acceptance.php.
But this runs everything as a single test case with different assertions: as soon as any input/assertion fails, the test case stops executing and is reported as failed.
You can pass parameters through to your login function just as you would with any PHP function:
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,"test#gmail.com","test");
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I, $username, $password)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, $username);
        $I->fillField(login::$password, $password);
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
You'd then want to create a separate cept for each aspect of login that you are looking to test.
Edit:
Having one test run through a number of assertions, as you describe, breaks the conventions of automated testing. Each test (or cept in this case) should only ever test one aspect. For logging in, you might have one cept for an invalid username, one for an invalid password, one for too many attempts, etc. Then when a test fails, you as the developer know exactly which aspect has failed and which ones continue to pass. If all the aspects are wrapped up in one test, you don't get the full picture until you start to debug.
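As a side note: if you can use the Cest format instead of Cept, recent Codeception versions also support data providers, which report each data set as a separate test, so one failing input does not stop the others. A minimal sketch, reusing the login helper above; the second account is a made-up example:

<?php
class LoginCest
{
    /**
     * @dataProvider credentialsProvider
     */
    public function tryLogin(AcceptanceTester $I, \Codeception\Example $example)
    {
        $I->login($I, $example['username'], $example['password']);
    }

    protected function credentialsProvider(): array
    {
        return [
            ['username' => 'test@gmail.com', 'password' => 'test'],
            ['username' => 'other@gmail.com', 'password' => 'secret'], // hypothetical
        ];
    }
}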

How to set a timeout for every selenium test case?

I want my test cases to run only for a particular amount of time. For example, the execution of every Selenium test case should be restricted to 10 minutes; it should not run beyond that. I tried @Test with a timeout in milliseconds, but it didn't work as I expected and I cannot see where I am going wrong. Can anyone help me sort this out? Thanks in advance.
It really depends on your language/testing environment: Selenium is only a browser automation framework, not a unit testing framework, so it has no concept of a unit test, or of a timeout for one.
If you're using Visual Studio Unit Testing (MSTest), you can set a unit test timeout like this (sample in C#; the value is in milliseconds, so 10 minutes is 600,000):
[TestMethod, Timeout(600000)] // 10 minutes
public void TestLogin()
{
}
If you are using TestNG or JUnit 4, the annotation is similar, but note that the attribute name differs: TestNG uses @Test(timeOut = ...), while JUnit 4 uses @Test(timeout = ...). For example, with TestNG:
@Test(timeOut = 600000) // 10 minutes, in milliseconds
public void testLogin() {
    driver.findElement(By.name("username")).sendKeys("username");
    driver.findElement(By.name("password")).sendKeys("password");
    driver.findElement(By.cssSelector("submit")).click();
}
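If the goal is to apply the limit to every test in a class without annotating each method, JUnit 4 also offers a class-level rule; a minimal sketch, assuming JUnit 4.12+:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class LoginTests {

    // Applies to every @Test method in this class.
    @Rule
    public Timeout globalTimeout = Timeout.millis(600000); // 10 minutes

    @Test
    public void testLogin() {
        // selenium steps here
    }
}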

Selenium Grid on Multiple Browsers: should each test case have separate class for each browser?

I'm trying to put together my first data-driven test framework that runs tests through Selenium Grid/WebDriver on multiple browsers. Right now I have each test case in its own class, and I parameterize the browser, so each test case runs once per browser.
Is this common in big test frameworks? Or should each test case be copied and fine-tuned for each browser in its own class? So, if I'm testing Chrome, Firefox, and IE, should there be classes for each, like "TestCase1Chrome", "TestCase1Firefox", "TestCase1IE"? Or just "TestCase1", parameterized to run three times, once per browser? Just wondering how others do it.
Parameterizing the tests into a single class per test case makes it easier to maintain the non-browser-specific code, while duplicating classes, one per browser, makes it easier to maintain the browser-specific code. By browser-specific code I mean, for example, clicking an item: with ChromeDriver you cannot click in the middle of some elements, whereas with FirefoxDriver you can. So you potentially need two different blocks of code just to click an element (when it's not clickable in the middle).
For those of you who are employed QA engineers using Selenium, what would be best practice here?
I am currently working on a project which runs around 75k-90k tests on a daily basis. We pass the browser as a parameter to the tests, for these reasons:
As you mentioned in your question, this helps with maintenance.
We don't see much browser-specific code. If you have too much browser-specific code, I would say there is a problem with the WebDriver itself, because one of the advantages of Selenium/WebDriver is that you write the code once and run it against any supported browser.
The difference between my code structure and the one you mention is that I don't have a test class per test case. Tests are divided based on the features under test, each feature has a class, and that class holds all its tests as methods. I use TestNG so that these methods can be invoked in parallel (sketched below). Maybe this won't suit your AUT.
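For illustration, a hypothetical sketch of that layout with TestNG, where the browser arrives as a suite parameter and methods run in parallel; DriverFactory and the names are assumptions:

import org.openqa.selenium.WebDriver;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

// One class per feature; each method is one test of that feature.
public class LoginFeatureTests {

    @Test
    @Parameters("browser")
    public void validLogin(String browser) {
        WebDriver driver = DriverFactory.create(browser); // hypothetical factory
        // test steps...
        driver.quit();
    }
}

The browser value comes from the testng.xml suite file, which would also set parallel="methods" and a thread-count to run the methods concurrently.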
If you keep the code structure that you mention in the question, sooner or later maintaining it will become a nightmare. Try to stick to the rule: the same test code (written once) for all browsers (environments).
This condition forces you to solve two issues:
1) how to run the tests for all chosen browsers
2) how to apply browser-specific workarounds without polluting the test code
Actually, this seems to be your question.
Here is how I solved the first issue.
First, I defined all the environments that I am going to test. I call 'environments' all the conditions under which I want to run my tests: browser name, version number, OS, etc. So, separately from the test code, I created an enum like this:
public enum Environments {
    FF_18_WIN7("firefox", "18", Platform.WINDOWS),
    CHR_24_WIN7("chrome", "24", Platform.WINDOWS),
    IE_9_WIN7("internet explorer", "9", Platform.WINDOWS);

    private final DesiredCapabilities capabilities;
    private final String browserName;
    private final String version;
    private final Platform platform;

    Environments(final String browserName, final String version, final Platform platform) {
        this.browserName = browserName;
        this.version = version;
        this.platform = platform;
        capabilities = new DesiredCapabilities();
    }

    public DesiredCapabilities capabilities() {
        capabilities.setBrowserName(browserName);
        capabilities.setVersion(version);
        capabilities.setPlatform(platform);
        return this.capabilities;
    }

    public String browserName() {
        return browserName;
    }
}
It's easy to modify and add environments whenever you need to. As you can see, I use this to create and retrieve the DesiredCapabilities that will later be used to create a specific WebDriver.
To make the tests run for all the defined environments, I used JUnit's (4.10 in my case) org.junit.experimental.theories:
@RunWith(MyRunnerForSeleniumTests.class)
public class MyWebComponentTestClassIT {

    @Rule
    public MySeleniumRule selenium = new MySeleniumRule();

    @DataPoints
    public static Environments[] environments = Environments.values();

    @Theory
    public void sample_test(final Environments environment) {
        Page initialPage = LoginPage.login(selenium.driverFor(environment),
                selenium.getUserName(), selenium.getUserPassword());
        // your test code here
    }
}
The tests are annotated as @Theory (not as @Test, like normal JUnit tests) and take a parameter. Each test will then run for every defined value of this parameter, which should be an array of values annotated as @DataPoints. Also, you should use a runner that extends org.junit.experimental.theories.Theories. I use org.junit.rules to prepare my tests, putting all the necessary plumbing there. As you can see, I also get the driver with the specific capabilities through the Rule, though you could use the following code right in your test:
RemoteWebDriver driver = new RemoteWebDriver(new URL(some_url_string), environment.capabilities());
The point is that by having it in the Rule you write the code once and use it for all your tests; a hypothetical sketch of such a rule follows below.
As for the Page class, it is the class where I put all the code that uses the driver's functionality (find an element, navigate, etc.). This way, again, the test code stays neat and clear, and, again, you write it once and use it in all your tests.
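For concreteness, here is a hypothetical sketch of what a rule like MySeleniumRule could look like, assuming JUnit's ExternalResource; the hub URL is a placeholder, and driverFor matches the usage above:

import java.net.MalformedURLException;
import java.net.URL;
import java.util.EnumMap;
import java.util.Map;
import org.junit.rules.ExternalResource;
import org.openqa.selenium.remote.RemoteWebDriver;

// Lazily creates one RemoteWebDriver per environment; quits them all after the test.
public class MySeleniumRule extends ExternalResource {

    private final Map<Environments, RemoteWebDriver> drivers = new EnumMap<>(Environments.class);

    public RemoteWebDriver driverFor(final Environments environment) {
        return drivers.computeIfAbsent(environment, env -> {
            try {
                return new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), // placeholder Grid URL
                        env.capabilities());
            } catch (MalformedURLException e) {
                throw new IllegalStateException(e);
            }
        });
    }

    @Override
    protected void after() {
        drivers.values().forEach(RemoteWebDriver::quit);
        drivers.clear();
    }
}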
So, this is the solution to the first issue. (I know that you can do a similar thing with TestNG, but I didn't try it.)
To solve the second issue, I created a special package where I keep all the browser-specific workaround code. It consists of an abstract class, e.g. BrowserSpecific, that contains the common code which happens to be different (or to have a bug) in some browser. In the same package I have a class for every browser used in the tests, and each of them extends BrowserSpecific.
Here is how it works for the ChromeDriver bug that you mention. I create a method clickOnButton in BrowserSpecific with the common code for the affected behaviour:
public abstract class BrowserSpecific {

    protected final RemoteWebDriver driver;

    protected BrowserSpecific(final RemoteWebDriver driver) {
        this.driver = driver;
    }

    public static BrowserSpecific aBrowserSpecificFor(final RemoteWebDriver driver) {
        BrowserSpecific browserSpecific = null;
        if (Environments.FF_18_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new FireFoxSpecific(driver);
        }
        if (Environments.CHR_24_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new ChromeSpecific(driver);
        }
        if (Environments.IE_9_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new InternetExplorerSpecific(driver);
        }
        return browserSpecific;
    }

    public void clickOnButton(final WebElement button) {
        button.click();
    }
}
and then I override this method in the specific class, e.g. ChromeSpecific, where I place the workaround code:
public class ChromeSpecific extends BrowserSpecific {

    ChromeSpecific(final RemoteWebDriver driver) {
        super(driver);
    }

    @Override
    public void clickOnButton(final WebElement button) {
        // This is the Chrome workaround: scroll the button into view first.
        // (Plain concatenation avoids MessageFormat's locale digit grouping,
        // which would render 1234 as "1,234" inside the script.)
        String script = "window.scrollTo(0, " + button.getLocation().y + ");";
        driver.executeScript(script);
        // Followed by the common behaviour for all the browsers.
        super.clickOnButton(button);
    }
}
When I have to take into account the specific behaviour of some browser, I do the following:
aBrowserSpecificFor(driver).clickOnButton(logoutButton);
instead of:
button.click();
This way I can easily identify in my common code where a workaround has been applied, and I keep the workarounds isolated from the common code. I find this easy to maintain, as the underlying bugs usually get fixed over time and the workarounds can then be changed or removed.
One last word about executing the tests: since you are going to use Selenium Grid, you will want to run the tests in parallel, so remember to configure this feature for your JUnit tests (available since v4.7).
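For reference, a minimal sketch of one stock JUnit 4 way to do that, the experimental ParallelComputer (build tools like Maven Surefire offer a more configurable parallel mode):

import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;

public class ParallelRun {
    public static void main(String[] args) {
        // Runs the test methods of the given classes on parallel threads.
        JUnitCore.runClasses(ParallelComputer.methods(), MyWebComponentTestClassIT.class);
    }
}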
We use TestNG in our organization, and we use the parameter option that TestNG provides to specify the environment, i.e. the browser to use, the machine to run on, and any other config that is required. The browser name is sent through the XML file which controls what needs to run and where, and it is set as a global variable. As an extra, we have custom annotations which can override these global variables: if a test is specifically meant to run only on Chrome and no other browser, we specify that on the custom annotation. So even if the parameter says run on FF, a test annotated with Chrome will always run on Chrome. (A sketch of such an annotation follows.)
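A hypothetical sketch of what such an override annotation could look like; the TestNG listener that reads it and swaps the driver is omitted:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks a test that must run on a specific browser regardless of the
// suite-level parameter; a listener would read this and override.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ForceBrowser {
    String value() default "chrome";
}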
I somehow believe making one class per browser is not a good idea. Imagine the flow changes, or there is a small tweak here and there, and you have three classes to change instead of one; and if the number of browsers grows, that is one more class every time.
What I would suggest is to extract the browser-specific code. So, if the click behaviour is browser-specific, override it to do the appropriate checks or failure handling per browser.
I do it like this, but keep in mind that this is pure WebDriver, without the Grid or RC in mind:
// Utility class snippet
// Test classes import this with: import static utility.*;
public static WebDriver driver;

public static void initializeBrowser(String type) {
    if (type.equalsIgnoreCase("firefox")) {
        driver = new FirefoxDriver();
    } else if (type.equalsIgnoreCase("ie")) {
        driver = new InternetExplorerDriver();
    }
    driver.manage().timeouts().implicitlyWait(10000, TimeUnit.MILLISECONDS);
    driver.manage().window().setPosition(new Point(200, 10));
    driver.manage().window().setSize(new Dimension(1200, 800));
}
Now, using JUnit 4.11+ your parameters file needs to look something like this:
firefox, test1, param1, param2
firefox, test2, param1, param2
firefox, test3, param1, param2
ie, test1, param1, param2
ie, test2, param1, param2
ie, test3, param1, param2
Then, using a single CSV-parameterized test class (that you intend to start multiple browser types with), do the following in the @Before annotated method (sketched below):
If the current test is the first test of this browser type and no window is already open, open a new browser window of the current type.
If a browser is already open and the browser type is the same, just re-use the same driver object.
If a browser of a different type than the current test is open, close it and re-open a browser of the correct type.
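A hypothetical sketch of that @Before logic, assuming a Parameterized test whose constructor stored the first CSV column in a browserType field, and a currentType field added next to driver in the utility class:

public static String currentType; // lives in the utility class, next to driver

@Before
public void ensureCorrectBrowser() {
    // A different browser type is open: close it first.
    if (driver != null && !browserType.equalsIgnoreCase(currentType)) {
        driver.quit();
        driver = null;
    }
    // Nothing open (first test, or just closed): start the right browser.
    if (driver == null) {
        initializeBrowser(browserType);
        currentType = browserType;
    }
}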
Of course, my answer doesn't tell you how to handle the parameters: I leave that for you to figure out.

How to test Jobs in playframework?

I have:
@OnApplicationStart
public class SomeClass extends Job {
    @Override
    public void doJob() {
        // ...
    }
}
How can I test in my unit test that doJob() is actually launched when the application starts?
I would argue that this is not a unit test, but an integration test.
You can test your Job by simply calling it with the syntax new MyJob().now(), but as you are looking to test the @OnApplicationStart behaviour, you would be better off doing this as a Selenium test, checking that the data you expect the bootstrap job to make available is present.
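A minimal sketch of the direct-call approach, assuming Play 1.x, where Job.now() returns a promise you can wait on; the assertion is a placeholder for whatever doJob() sets up:

import org.junit.Test;
import play.test.UnitTest;

public class SomeClassTest extends UnitTest {

    @Test
    public void doJobCreatesBootstrapData() throws Exception {
        // Run the job and wait for it to finish.
        new SomeClass().now().get();

        // Placeholder: assert on the data doJob() is expected to create, e.g.
        // assertNotNull(SomeModel.find("byName", "seed").first());
    }
}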