Is this valid code?
selenium = new DefaultSelenium("localhost", 4444, "*iehta",
"http://www.google.com/");
selenium.start();
...
selenium.stop();
...
selenium.start();
...
selenium.stop();
There's nothing wrong with having multiple browsers open (what you call "seleniums"). In fact, it's the only way you can test certain applications. Imagine an application that has an administrative UI and an end-user UI, where you make changes on the admin side and verify their effects on the user side. You can either write your test to jump back and forth between the two on the same browser session, or you can open two browsers, one for each aspect of the application. The former is the usual technique, but the latter is much cleaner.
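For example, a minimal sketch of the two-browser approach with Selenium RC (the admin/user URLs and locators are placeholders, not from the question):
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class AdminAndUserSideBySide {
    public static void main(String[] args) {
        // one browser session for the admin UI, another for the end-user UI
        Selenium admin = new DefaultSelenium("localhost", 4444, "*iehta", "http://myapp.example/admin/");
        Selenium user  = new DefaultSelenium("localhost", 4444, "*iehta", "http://myapp.example/");
        admin.start();
        user.start();
        try {
            // make a change on the admin side
            admin.open("/admin/settings");
            admin.type("id=welcome-message", "Hello!");
            admin.click("id=save");
            admin.waitForPageToLoad("30000");

            // verify its effect on the user side, in the other browser
            user.open("/home");
            if (!user.isTextPresent("Hello!")) {
                throw new AssertionError("expected the new welcome message on the user side");
            }
        } finally {
            admin.stop();
            user.stop();
        }
    }
}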
And why do you think it shouldn't be safe? If it works, it's fine; if it doesn't, just recreate the DefaultSelenium object. It won't slow your code down noticeably anyway.
You should usually put start() and stop() in your setup and teardown methods. With TestNG you can annotate them with the @BeforeClass and @AfterClass annotations, so the browser is launched once before the first test method in the class runs and shut down once after the last one finishes.
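A minimal sketch of that setup with TestNG (the host, port, browser command, and URL are just the ones from the question above):
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class GoogleSearchTest {
    private Selenium selenium;

    @BeforeClass
    public void startBrowser() {
        selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");
        selenium.start();   // browser launched once for the whole class
    }

    @Test
    public void canLoadHomePage() {
        selenium.open("/");
        Assert.assertTrue(selenium.getTitle().contains("Google"));
    }

    @AfterClass(alwaysRun = true)
    public void stopBrowser() {
        selenium.stop();    // browser shut down once all tests in the class have run
    }
}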
BTW, did you support the Selenium proposal on Area 51? http://area51.stackexchange.com/proposals/4693/selenium
This proposal is backed by SeleniumHQ, and we need more users to commit to it to make it see the light of day.
That was my fault.
The unexpected behaviour was caused by this code: I was stopping Selenium twice, because the selenium object never became null:
public class SeleniumController {

    private static Selenium selenium;

    public static Selenium startNewSelenium() {
        // if one already exists, stop it and replace it with a new one
        if (selenium != null) {
            selenium.stop();
        }
        selenium = createNewSelenium(getCurContext());
        return selenium;
    }

    public static void stopSelenium() {
        // bug: selenium is never reset to null here, so a later
        // startNewSelenium() call stops the same instance a second time
        if (selenium != null) {
            selenium.stop();
        }
    }

    private static Selenium createNewSelenium(TestContext testContext) {
        TestProperties testProps = new TestProperties(testContext);
        ExtendedSelenium selenium = new ExtendedSelenium("localhost", RemoteControlConfiguration.DEFAULT_PORT,
                testProps.getBrowser(), testProps.getServerUrl());
        selenium.start();
        selenium.useXpathLibrary("javascript-xpath");
        selenium.allowNativeXpath("false");
        return selenium;
    }
}
The correct class code is:
public class SeleniumController {

    private static Selenium selenium;

    public static Selenium startNewSelenium() {
        // if one already exists, stop it and replace it with a new one
        stopSelenium();
        selenium = createNewSelenium(getCurContext());
        return selenium;
    }

    public static void stopSelenium() {
        if (selenium != null) {
            selenium.stop();
            selenium = null;    // clear the reference so the instance cannot be stopped twice
        }
    }

    private static Selenium createNewSelenium(TestContext testContext) {
        TestProperties testProps = new TestProperties(testContext);
        ExtendedSelenium selenium = new ExtendedSelenium("localhost", RemoteControlConfiguration.DEFAULT_PORT,
                testProps.getBrowser(), testProps.getServerUrl());
        selenium.start();
        selenium.useXpathLibrary("javascript-xpath");
        selenium.allowNativeXpath("false");
        return selenium;
    }
}
Related
How would I set up assembly initialize and teardown, and then the test methods, for Selenium mobile?
I have tried to follow the same sequence as we do for Selenium, but for mobile it didn't work out.
Here is the code I am currently using to start my driver. I would like to run this setup before each test, and appropriate teardown steps after the test is complete:
// start appium service
var builder = new AppiumServiceBuilder();
var appiumLocalService = builder.UsingAnyFreePort().Build();
appiumLocalService.Start();
// create appium driver capabilities
var options = new AppiumOptions { PlatformName = "Android" };
options.AddAdditionalCapability("deviceName", "Pixel 3a Pie 9.0 - API 28");
// add app or appPackage / appActivity depending on preference
options.AddAdditionalCapability("appPackage", "org.mozilla.firefox");
options.AddAdditionalCapability("appActivity", "org.mozilla.gecko.BrowserApp");
options.AddAdditionalCapability("udid", "emulator-5554");
options.AddAdditionalCapability("automationName", "UiAutomator2"); // this one is important
// these are optional, but I find them to be helpful -- see DesiredCapabilities Appium docs to learn more
options.AddAdditionalCapability("autoGrantPermissions", true);
options.AddAdditionalCapability("allowSessionOverride", true);
// start the driver
var driver = new AndroidDriver<IWebElement>(appiumLocalService.ServiceUrl, options);
If you are using C# with NUnit, you can use NUnit's built-in [SetUp] and [TearDown] attributes to accomplish this. If you would like to apply this setup to all of your tests, you can put these methods into a separate Fixture class that each of your test classes inherits from.
Here's a very basic setup to get you started:
public class Fixture
{
    public AndroidDriver<IWebElement> Driver { get; private set; }
    private AppiumLocalService _appiumLocalService;

    [SetUp]
    public void StartDriver()
    {
        // start appium service
        var builder = new AppiumServiceBuilder();
        _appiumLocalService = builder.UsingAnyFreePort().Build();
        _appiumLocalService.Start();

        // create appium driver capabilities
        var options = new AppiumOptions { PlatformName = "Android" };
        options.AddAdditionalCapability("deviceName", "Pixel 3a Pie 9.0 - API 28");

        // add app or appPackage / appActivity depending on preference
        options.AddAdditionalCapability("appPackage", "org.mozilla.firefox");
        options.AddAdditionalCapability("appActivity", "org.mozilla.gecko.BrowserApp");
        options.AddAdditionalCapability("udid", "emulator-5554");
        options.AddAdditionalCapability("automationName", "UiAutomator2"); // this one is important

        // these are optional, but I find them to be helpful -- see DesiredCapabilities Appium docs to learn more
        options.AddAdditionalCapability("autoGrantPermissions", true);
        options.AddAdditionalCapability("allowSessionOverride", true);

        // set the driver property
        Driver = new AndroidDriver<IWebElement>(_appiumLocalService.ServiceUrl, options);
    }

    [TearDown]
    public void CloseDriver()
    {
        Driver.Close(); // may need to change to Driver.CloseApp();
        Driver.Quit();

        // stop appium service
        _appiumLocalService.Stop();
    }
}
Now, when you create a class for a test case, it will look like this:
public class MyTestClass : Fixture
{
    [Test]
    public void RunTest()
    {
        // perform test functions here such as FindElement and SendKeys
        Driver.FindElement(By.Id("myElement"));
    }

    [Test]
    public void RunAnotherTest()
    {
        // these tests use different driver instances, but that code never has to be duplicated!
    }
}
Note that you can create as many test classes as you want that inherit from Fixture, and you will never have to duplicate the driver setup code or even call it explicitly.
Now, let's break down what is going on here. NUnit's [SetUp] and [TearDown] attributes designate methods that run before and after every method tagged with [Test]. So, when NUnit runs a [Test] method, the order is [SetUp] > [Test] > [TearDown]. This is very useful, because you do not have to duplicate code for actions that need to be repeated over and over again.
In Fixture, we have a Driver property that represents the AndroidDriver<> instance for the current test. The Driver instance is created from scratch in [SetUp] before the test, used during the [Test] method, and destroyed in [TearDown] once the [Test] is finished. The process repeats for every test, so each test works with exactly one driver instance for its whole lifetime (a per-test take on the Singleton pattern).
This ensures your Driver instance does not get re-used between test cases, which is the preferred practice in test automation.
We have also declared _appiumLocalService as a private field: we do not need it outside of Fixture, but it has to be shared between [SetUp] and [TearDown] so we can stop the Appium service once our test is finished.
Using: C#, NUnit 3.9, Selenium WebDriver 3.11.0, Chrome WebDriver 2.35.0
How do I maintain the context of my WebDriver while running parallel tests in NUnit?
When I run my tests with the ParallelScope.All attribute, my tests reuse the driver and fail.
The Test field in my tests does not persist across [SetUp] - [Test] - [TearDown] unless Test is given a higher scope.
Test.cs
public class Test {
    public IWebDriver Driver;
    //public Pages pages;
    //anything else I need in a test

    public Test() {
        Driver = new ChromeDriver();
    }

    //helper functions and reusable functions
}
SimpleTest.cs
[TestFixture]
[Parallelizable(ParallelScope.All)]
class MyTests {
    Test Test;

    [SetUp]
    public void Setup()
    {
        Test = new Test();
    }

    [Test]
    public void Test_001() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_002() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_003() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_004() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [TearDown]
    public void TearDown()
    {
        string outcome = TestContext.CurrentContext.Result.Outcome.ToString();
        TestContext.Out.WriteLine("#RESULT: " + outcome);
        if (outcome.ToLower().Contains("fail"))
        {
            //Do something like take a screenshot which requires the WebDriver
        }
        Test.Driver.Quit();
        Test.Driver.Dispose();
    }
}
The docs state: "SetUpAttribute is now used exclusively for per-test setup."
Setting the Test field in [SetUp] does not seem to work.
Is this a timing issue because I'm re-using the Test field? How do I arrange my fixtures so the Driver is unique to each test?
One solution is to create the driver inside the [Test] method. But then I cannot use the [TearDown] method, which I need to keep my tests organized and cleaned up.
I've read quite a few posts/websites, but nothing solves the problem. [Parallelizable(ParallelScope.Self)] seems to be the only real solution and that slows down the tests.
Thank you in advance!
The ParallelizableAttribute makes a promise to NUnit that it's safe to run certain tests in parallel, but it doesn't do anything to actually make it safe. That's up to you, the programmer.
Your tests (test methods) have shared state, i.e. the field Test. Not only that, but each test changes the shared state, because the SetUp method is called for each test. That means your tests may not safely be run in parallel, so you shouldn't tell NUnit to run them that way.
You have two ways to go... either use a lesser degree of parallelism or make the tests safe to run in parallel.
Using a lesser degree of parallelism is the easiest. Try using ParallelScope.Fixtures on the assembly or ParallelScope.Self (the default) on each fixture. If you have a large number of independent fixtures, this may give you as good a throughput as you will get doing something more complicated.
Alternatively, to run tests in parallel, each test must have a separate driver. You will have to create it and dispose of it in the test method itself.
In the future, NUnit may add a feature that will make this easier, by isolating each test method in a separate object. But with the current software, the above is the best you can do.
I am trying to execute a large suite of selenium tests via xUnit console runner in parallel.
The tests execute and I see three Chrome windows open; however, the first SendKeys command simply executes three times against a single window, resulting in test failures.
I have registered my driver in an object container before each scenario, as below:
[Binding]
public class WebDriverSupport
{
    private readonly IObjectContainer _objectContainer;

    public WebDriverSupport(IObjectContainer objectContainer)
    {
        _objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var driver = GetWebDriverFromAppConfig();
        _objectContainer.RegisterInstanceAs<IWebDriver>(driver);
    }
And then I call the driver in my SpecFlow step definitions as:
_driver = (IWebDriver)ScenarioContext.Current.GetBindingInstance(typeof(IWebDriver));
ScenarioContext.Current.Add("Driver", _driver);
However, this has made no difference, and it seems as if my tests are sending all of their commands to one driver.
Can anyone advise where I have gone wrong?
You shouldn't be using ScenarioContext.Current in a parallel execution context. If you're injecting the driver through _objectContainer.RegisterInstanceAs you will receive it through constructor injection in your steps class' constructor, like so:
public MyScenarioSteps(IWebDriver driver)
{
    _driver = driver;
}
More info:
https://github.com/techtalk/SpecFlow/wiki/Parallel-Execution#thread-safe-scenariocontext-featurecontext-and-scenariostepcontext
https://github.com/techtalk/SpecFlow/wiki/Context-Injection
In my opinion this is horribly messy.
This might not be an answer, but is too big for a comment.
Why are you using the IObjectContainer if you are just getting the driver from the current ScenarioContext rather than injecting it via the DI mechanism? I would try this:
[Binding]
public class WebDriverSupport
{
    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var driver = GetWebDriverFromAppConfig();
        ScenarioContext.Current.Add("Driver", driver);
    }
}
then in your steps:
_driver = (IWebDriver)ScenarioContext.Current.Get("Driver");
As long as GetWebDriverFromAppConfig returns a new instance you should be ok...
I am new to Selenium. While practicing, I came across an issue. I am testing my own application, which is deployed on a Tomcat server. After opening the application, I test validations in one method and a page change in another method. Both test methods start from the same page.
Why do I need to write the same code in both methods?
driver.get("http://localhost:8070/");
driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
driver.findElement(By.linkText("/ReportGenerator")).click();
How can I perform the operations directly? If I remove the above lines from my second method, it fails. How can I solve this?
@Test
public void analysisValidation()
{
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
    driver.findElement(By.id("Analysis")).click();
    WebElement webElement = driver.findElement(By.id("modelForm.errors"));
    String alertMsg = webElement.getText();
    System.out.println(alertMsg);
    Assert.assertEquals("Please select a Survey Id to perform Aggregate Analysis", alertMsg);
}

@Test
public void testAnalysisPage()
{
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
    new Select(driver.findElement(By.id("surveyId"))).selectByVisibleText("Apollo");
    driver.findElement(By.id("Analysis")).click();
    System.out.println(driver.getTitle());
    String pageTitle = driver.getTitle();
    Assert.assertEquals("My JSP 'analysis.jsp' starting page", pageTitle);
}
How can I perform the operations directly? If I remove the above lines from my second method, it fails. How can I solve this?
The tests fail because each @Test method is executed independently. The code you removed is needed to load the application and navigate to the page each test starts from.
You can fix this as follows:
Create a setUp() method with the @BeforeMethod annotation and populate it with the page-loading and navigation calls.
Create a teardown() method with the @AfterMethod annotation and populate it with the driver cleanup calls.
For example, here is some pseudocode (modify this as per taste):
@BeforeMethod
public void setUp() throws Exception {
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
}

@AfterMethod
public void teardown() throws Exception {
    driver.quit();
}
The advantage of the @BeforeMethod and @AfterMethod annotations is that the code will be run before / after each @Test method executes. You can therefore avoid having to duplicate your code.
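Putting it together, the test class might end up looking like the sketch below (the FirefoxDriver construction is only an assumption; create whatever driver you are already using):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.Select;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class ReportGeneratorTest {
    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        driver = new FirefoxDriver();   // or whichever driver you already use
        driver.get("http://localhost:8070/");
        driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
        driver.findElement(By.linkText("/ReportGenerator")).click();
    }

    @Test
    public void analysisValidation() {
        // navigation already happened in setUp(), so only the test-specific steps remain
        driver.findElement(By.id("Analysis")).click();
        String alertMsg = driver.findElement(By.id("modelForm.errors")).getText();
        Assert.assertEquals(alertMsg, "Please select a Survey Id to perform Aggregate Analysis");
    }

    @Test
    public void testAnalysisPage() {
        new Select(driver.findElement(By.id("surveyId"))).selectByVisibleText("Apollo");
        driver.findElement(By.id("Analysis")).click();
        Assert.assertEquals(driver.getTitle(), "My JSP 'analysis.jsp' starting page");
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        driver.quit();
    }
}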
I have automated my application with 70+ scripts that run against a Selenium Grid which is shared with other applications.
My question is: is there any connection-pooling API for WebDriver, so that I can reuse WebDriver objects efficiently across my scripts? I don't want my scripts to wait for IE slots and fail with timeout errors if they cannot get one.
Also, I believe it would improve script execution performance.
Thanks.
Our tests are very small and so fast that the webdrivers took longer to instantiate than the tests. So we pooled the webdrivers, like @premganz suggested, but using Apache Commons Pool. We considered writing our own webdriver list-managed pool but found using the well-established Apache Pool was simple to implement, robust and scalable. Our tests run over 80 WebDrivers concurrently.
Example WebdriverFactory:
public class WebdriverFactory extends BasePooledObjectFactory<RemoteWebDriver> {

    private final FirefoxOptions firefoxOptions = new FirefoxOptions();
    private final int implicit_timeout_seconds;

    public WebdriverFactory(boolean headless, int implicit_timeout_seconds) {
        super();
        this.implicit_timeout_seconds = implicit_timeout_seconds;
        firefoxOptions.setHeadless(headless)
                .setPageLoadStrategy(PageLoadStrategy.EAGER)
                .setLogLevel(FirefoxDriverLogLevel.ERROR);
    }

    @Override
    public RemoteWebDriver create() {
        FirefoxDriver webDriver = new FirefoxDriver(firefoxOptions);
        webDriver.manage()
                .timeouts()
                .implicitlyWait(implicit_timeout_seconds, TimeUnit.SECONDS);
        return webDriver;
    }

    /**
     * Use the default PooledObject implementation.
     */
    @Override
    public PooledObject<RemoteWebDriver> wrap(RemoteWebDriver webDriver) {
        return new DefaultPooledObject<>(webDriver);
    }

    /**
     * When a webdriver is returned to the pool, clean it up.
     */
    @Override
    public void passivateObject(PooledObject<RemoteWebDriver> webDriver) {
        WebDriver driver = webDriver.getObject();
        try {
            // close all tabs except the first
            String originalHandle = driver.getWindowHandle();
            for (String handle : driver.getWindowHandles()) {
                if (!handle.equals(originalHandle)) {
                    driver.switchTo().window(handle);
                    driver.close();
                }
            }
            driver.switchTo().window(originalHandle);
        } catch (Exception e) {
            // ...
        } finally {
            // ensure session data is not re-used
            driver.manage().deleteAllCookies();
        }
    }

    @Override
    public boolean validateObject(PooledObject<RemoteWebDriver> webDriver) {
        return true;
    }

    @Override
    public void activateObject(PooledObject<RemoteWebDriver> webDriver) throws Exception {
    }
}
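For completeness, a minimal sketch of how a factory like this might be plugged into commons-pool2's GenericObjectPool (the pool size, URL, and constructor arguments are placeholders; you would probably also override destroyObject() so that browsers are quit when the pool discards them):
import org.apache.commons.pool2.impl.GenericObjectPool;
import org.openqa.selenium.remote.RemoteWebDriver;

public class WebdriverPoolExample {
    public static void main(String[] args) throws Exception {
        GenericObjectPool<RemoteWebDriver> pool =
                new GenericObjectPool<>(new WebdriverFactory(true, 10));
        pool.setMaxTotal(80);                           // upper bound on concurrently open browsers

        RemoteWebDriver driver = pool.borrowObject();   // blocks until a driver is available
        try {
            driver.get("https://example.com/");
        } finally {
            pool.returnObject(driver);                  // passivateObject() closes extra tabs and clears cookies
        }

        pool.close();
    }
}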
I agree that WebDriver pooling could improve performance. On the other hand, with Selenium WebDriver the driver is stateful, which makes it less reusable. I implemented logic something like this:
Create a driver factory that wraps a linked list of, say, size 10 (a linked list implements both a list and a queue).
When asked for an instance, provide the middle one (i == 5) from the list.
Use another thread to recycle the drivers in the queue, removing ones from the head and adding new ones to the tail.
This way you can implement a constantly recycled pool and your code does not have to block on driver.create or driver.quit.
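A rough sketch of that idea (pool size and recycle interval are arbitrary; unlike the description above, this variant checks drivers out of and back into a deque rather than handing out the middle element, to keep concurrent access simple, and error handling is omitted):
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RecyclingDriverPool {
    private static final int SIZE = 10;
    private final BlockingDeque<WebDriver> drivers = new LinkedBlockingDeque<>(SIZE);

    public RecyclingDriverPool() {
        // pre-warm the pool so callers never pay the driver start-up cost
        for (int i = 0; i < SIZE; i++) {
            drivers.add(new FirefoxDriver());
        }
        // background thread retires the oldest idle driver and appends a fresh one
        Thread recycler = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(60_000);                 // recycle one driver per minute
                    WebDriver oldest = drivers.takeFirst();
                    oldest.quit();
                    drivers.putLast(new FirefoxDriver());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        recycler.setDaemon(true);
        recycler.start();
    }

    public WebDriver acquire() throws InterruptedException {
        return drivers.takeFirst();   // blocks only if every driver is checked out
    }

    public void release(WebDriver driver) {
        drivers.offerLast(driver);    // capacity equals pool size, so this succeeds under normal use
    }
}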