I am new to Selenium. While practicing, I ran into an issue. I am testing my own application, which is deployed on a Tomcat server. After opening the application, I test validations in one method and a page change in another method; both tests operate on the same page.
Why do I need to repeat the same code in both methods?
driver.get("http://localhost:8070/");
driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
driver.findElement(By.linkText("/ReportGenerator")).click();
How can I perform the operations directly? If I remove the above lines from my second method, it fails. How can I solve this?
@Test
public void analysisValidation()
{
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
    driver.findElement(By.id("Analysis")).click();
    WebElement webElement = driver.findElement(By.id("modelForm.errors"));
    String alertMsg = webElement.getText();
    System.out.println(alertMsg);
    Assert.assertEquals("Please select a Survey Id to perform Aggregate Analysis", alertMsg);
}
@Test
public void testAnalysisPage()
{
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
    new Select(driver.findElement(By.id("surveyId"))).selectByVisibleText("Apollo");
    driver.findElement(By.id("Analysis")).click();
    String pageTitle = driver.getTitle();
    System.out.println(pageTitle);
    Assert.assertEquals("My JSP 'analysis.jsp' starting page", pageTitle);
}
How can I perform the operations directly? If I remove the above lines from my second method, it fails. How can I solve this?
The tests fail because each @Test method is executed independently. The code you removed is needed to initialize the driver and load the page.
You can fix this as follows:
Create a setUp() method with the @BeforeMethod annotation. Populate it with the driver initialization and page-loading calls.
Create a tearDown() method with the @AfterMethod annotation. Populate it with the driver cleanup calls.
For example, here is some pseudocode (adapt it to taste):
@BeforeMethod
public void setUp() throws Exception {
    // initialize the driver here if it is not created elsewhere,
    // e.g. driver = new FirefoxDriver();
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
}

@AfterMethod
public void tearDown() throws Exception {
    driver.quit();
}
The advantage of the @BeforeMethod and @AfterMethod annotations is that the code will be run before / after each @Test method executes. You can therefore avoid having to duplicate your code.
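With the navigation moved into setUp(), the two test methods from the question shrink to just the steps that actually differ. Here is a sketch of the resulting class (the class name ReportGeneratorTest and the driver field are assumptions; the locators and assertions are taken verbatim from the question):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.Select;
import org.testng.Assert;
import org.testng.annotations.*;

public class ReportGeneratorTest {

    WebDriver driver; // assumed to be created once, e.g. in an @BeforeClass method

    @BeforeMethod
    public void setUp() {
        // common navigation that used to be duplicated in both tests
        driver.get("http://localhost:8070/");
        driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
        driver.findElement(By.linkText("/ReportGenerator")).click();
    }

    @Test
    public void analysisValidation() {
        driver.findElement(By.id("Analysis")).click();
        String alertMsg = driver.findElement(By.id("modelForm.errors")).getText();
        Assert.assertEquals("Please select a Survey Id to perform Aggregate Analysis", alertMsg);
    }

    @Test
    public void testAnalysisPage() {
        new Select(driver.findElement(By.id("surveyId"))).selectByVisibleText("Apollo");
        driver.findElement(By.id("Analysis")).click();
        Assert.assertEquals("My JSP 'analysis.jsp' starting page", driver.getTitle());
    }
}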
Using: C# with NUnit 3.9
Selenium WebDriver 3.11.0
Chrome WebDriver 2.35.0
How do I maintain the context of my WebDriver while running parallel tests in NUnit?
When I run my tests with the ParallelScope.All attribute, my tests reuse the driver and fail.
The Test property in my tests does not persist across [SetUp] - [Test] - [TearDown] unless the Test is given a higher scope.
Test.cs
public class Test{
    public IWebDriver Driver;
    //public Pages pages;
    //anything else I need in a test

    public Test(){
        Driver = new ChromeDriver();
    }

    //helper functions and reusable functions
}
SimpleTest.cs
[TestFixture]
[Parallelizable(ParallelScope.All)]
class MyTests{
    Test Test;

    [SetUp]
    public void Setup()
    {
        Test = new Test();
    }

    [Test]
    public void Test_001(){
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_002(){
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_003(){
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_004(){
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [TearDown]
    public void TearDown()
    {
        string outcome = TestContext.CurrentContext.Result.Outcome.ToString();
        TestContext.Out.WriteLine("#RESULT: " + outcome);
        if (outcome.ToLower().Contains("fail"))
        {
            //Do something like take a screenshot which requires the WebDriver
        }
        Test.Driver.Quit();
        Test.Driver.Dispose();
    }
}
The docs state: "SetUpAttribute is now used exclusively for per-test setup."
Setting the Test property in [SetUp] does not seem to work.
Is this a timing issue because I'm re-using the Test property? How do I arrange my fixtures so the Driver is unique to each test?
One solution is to create the driver inside each [Test], but then I cannot use the TearDown method, which I need to keep my tests organized and cleaned up.
I've read quite a few posts/websites, but nothing solves the problem. [Parallelizable(ParallelScope.Self)] seems to be the only real solution, and that slows down the tests.
Thank you in advance!
The ParallelizableAttribute makes a promise to NUnit that it's safe to run certain tests in parallel, but it doesn't do anything to actually make it safe. That's up to you, the programmer.
Your tests (test methods) have shared state, i.e. the field Test. Not only that, but each test changes the shared state, because the SetUp method is called for each test. That means your tests may not safely be run in parallel, so you shouldn't tell NUnit to run them that way.
You have two ways to go... either use a lesser degree of parallelism or make the tests safe to run in parallel.
Using a lesser degree of parallelism is the easiest. Try using ParallelScope.Fixtures on the assembly or ParallelScope.Self (the default) on each fixture. If you have a large number of independent fixtures, this may give you as good a throughput as you will get doing something more complicated.
Alternatively, to run tests in parallel, each test must have a separate driver. You will have to create it and dispose of it in the test method itself.
In the future, NUnit may add a feature that will make this easier, by isolating each test method in a separate object. But with the current software, the above is the best you can do.
In the code below, the report always shows the test case as passed even though I fail the test case in @BeforeMethod. Please help me fix this problem.
public class practice extends Test_CommonLib {
    WebDriver driver;
    ExtentReports logger;
    String Browser = "FireFox";

    @BeforeMethod
    public void setUp() throws Exception {
        logger = eno_TestResport(this.getClass().getName(), Browser);
        logger.startTest(this.getClass().getSimpleName());
        Assert.assertTrue(false); // intentionally failing my BeforeMethod
    }

    @Test
    public void CreateObject() throws Exception {
        System.out.println("Test");
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown(ITestResult result) throws Exception {
        if (ITestResult.FAILURE == result.getStatus()) {
            logger.log(LogStatus.FAIL, "Test case failed");
        } else if (ITestResult.SKIP == result.getStatus()) {
            logger.log(LogStatus.SKIP, "Test case skipped");
        } else {
            logger.log(LogStatus.PASS, "Aweosme Job");
        }
    }
}
With the same code, the generated report still marks the test case as passed.
Well, what you are observing is correct. When an assertion fails, the rest of the code in that method is not executed. The same is happening in your case: whether the assertion passes or fails, the driver executes no further code within that method, comes straight out of the @BeforeMethod annotation, and moves on to the methods under the @Test annotation.
Further, your report will always show the test case as "Pass" because the test case within the @Test annotation executes successfully.
@AnandKakhandaki Here you need to follow certain TestNG guidelines; see this page - https://www.tutorialspoint.com/testng/testng_basic_annotations.htm
It is worth mentioning that the code within the @BeforeMethod annotation is executed every time before any test method runs; likewise for @BeforeSuite, @BeforeClass, @BeforeTest and @BeforeGroups. Similarly, the code within the @AfterMethod annotation is executed every time after any test method runs; likewise for @AfterSuite, @AfterClass, @AfterTest and @AfterGroups. The code within these annotations should be used to configure the application (aka the system under test) before and after the actual test execution begins/ends. This may include code for choosing the browser, opening/closing the browser with certain attributes, opening a URL, switching to another URL, closing the URL, and any other configuration that is mandatory to run the test execution.
Validation/verification or assertions should never be part of these annotations. Rather, assertions should be within the @Test annotation. To be precise, assertions should ideally be kept out of the @Test method as well, in a separate library, so that the code within @Test contains only the testing steps.
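Applied to the code in the question, that separation looks roughly like this (a sketch only; eno_TestResport is the reporting helper from the question, and the intentional assertion has been moved out of the configuration method into the test):

@BeforeMethod
public void setUp() throws Exception {
    // configuration only - set up reporting, no assertions here
    logger = eno_TestResport(this.getClass().getName(), Browser);
    logger.startTest(this.getClass().getSimpleName());
}

@Test
public void CreateObject() throws Exception {
    System.out.println("Test");
    // assertions belong in the test method, so a failure here
    // is recorded against the test case itself
    Assert.assertTrue(false);
}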
Let me know if this answers your question.
I am using Selenium WebDriver with the TestNG framework for running a test suite on Windows and Mac on different browsers - Chrome, IE, Firefox and Safari. I have around 300 test cases in my test suite.
The problem is that some of the test cases get skipped in between, where I believe the driver becomes unresponsive. However, the logs fail to capture any details of why the test cases are getting skipped.
The reporter class extends TestListenerAdapter, and hence the skipped test cases get listed in the log file via the onConfigurationSkip method. It only prints that a particular test case has been skipped.
Below are some code snippets for reference
Code from Reporter Class
@Override
public void onConfigurationSkip(ITestResult testResult) {
    LOGGER.info(String.format("Skipped Configuration for : %s", testResult.getMethod().getMethodName()));
}
Sample Test Class
public class TestClass {
    private WebDriver driver;

    @Parameters({ "platform", "browser" })
    @BeforeClass
    public void setUp(String platform, String browser) {
        // Creates a new driver instance based on platform and browser details
        driver = WebDriverFactory.getDriver(platform, browser);
        // Login to web application
        Utils.login(driver, username, password);
    }

    @Test
    public void sampleTestMethod() {
        // scripts for validating Web elements
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}
Observations:
driver.quit() doesn't guarantee that the driver instance has been closed, because I can still see driver instances running in Task Manager. Again, this is intermittent and happens only sometimes.
This issue is observed on all platforms and browsers.
This is an intermittent issue, and the probability of its occurrence increases as the number of test cases in the suite grows.
There is no definite pattern to the skipped test cases. Test cases get randomly skipped on some browsers and platforms.
The probability of skipped test cases increases with subsequent runs of the test suite. I believe the reason is that more and more driver instances that were not properly closed keep running in the background.
Normally a test class has 5-15 test methods, and a new driver instance is created each time in the @BeforeClass method and closed in @AfterClass.
Any suggestions? Thanks in advance.
If you're fine with opening and closing the browser around every test, then you should use @BeforeMethod and @AfterMethod instead of @BeforeClass and @AfterClass.
If you follow the code and its output below, you'll find that @BeforeMethod executes before every @Test-annotated method, whereas @BeforeClass executes only once for all the methods in the class.
Since I don't have your full code to analyze, I can only assume that your tests are trying to reuse the wrong driver instances. So the best bet would be to close the driver down after every test execution finishes; a sketch of that change applied to your TestClass follows the example output below.
Code:
package com.autmatika.testng;

import org.testng.annotations.*;

public class FindIt {

    @BeforeClass
    public void beforeClass(){
        System.out.println("Before Class");
    }

    @AfterClass
    public void afterClass(){
        System.out.println("After Class");
    }

    @BeforeMethod
    public void beforeTest(){
        System.out.println("Before Test");
    }

    @AfterMethod
    public void afterTest(){
        System.out.println("After Test");
    }

    @Test
    public void test1(){
        System.out.println("test 1");
    }

    @Test
    public void test2(){
        System.out.println("test 2");
    }

    @Test
    public void test3(){
        System.out.println("test 3");
    }

    @Test
    public void test4(){
        System.out.println("test 4");
    }
}
Output:
Before Class
Before Test
test 1
After Test
Before Test
test 2
After Test
Before Test
test 3
After Test
Before Test
test 4
After Test
After Class
===============================================
Default Suite
Total tests run: 4, Failures: 0, Skips: 0
===============================================
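Applied to the TestClass from the question, the per-test driver lifecycle would look roughly like this (a sketch only; WebDriverFactory, Utils, username and password are the question's own helpers and fields):

public class TestClass {

    private WebDriver driver;

    @Parameters({ "platform", "browser" })
    @BeforeMethod
    public void setUp(String platform, String browser) {
        // a fresh driver per test method, so one unresponsive instance
        // cannot cause the remaining tests in the class to be skipped
        driver = WebDriverFactory.getDriver(platform, browser);
        Utils.login(driver, username, password);
    }

    @Test
    public void sampleTestMethod() {
        // scripts for validating Web elements
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        if (driver != null) {
            driver.quit(); // runs even when the test fails or is skipped
        }
    }
}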
The most common reason for test cases getting skipped when running Selenium with TestNG is that your methods depend on another method and the method they depend on has failed.
To get information on why a test was skipped, you can retrieve it in your after-method as below:
@AfterMethod
public void afterTest(ITestResult result) {
    Throwable t = result.getThrowable();
    // with t you can get the stack trace and log it into your reporter,
    // e.g. if (t != null) { t.printStackTrace(); }
}
To avoid tests getting skipped, you can set the alwaysRun parameter to true on the @Test annotation:
@Test(alwaysRun = true)
To avoid the WebDriver getting restarted incorrectly, do the driver setup and cleanup in methods with the @BeforeMethod and @AfterMethod annotations, so try changing the annotations of the setUp and tearDown methods to @BeforeMethod and @AfterMethod respectively.
I am trying to execute a large suite of Selenium tests in parallel via the xUnit console runner.
They execute, and I see 3 Chrome windows open; however, the first send-keys commands simply execute 3 times against one window, resulting in test failures.
I have registered my driver in an object container before each scenario as below:
[Binding]
public class WebDriverSupport
{
    private readonly IObjectContainer _objectContainer;

    public WebDriverSupport(IObjectContainer objectContainer)
    {
        _objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var driver = GetWebDriverFromAppConfig();
        _objectContainer.RegisterInstanceAs<IWebDriver>(driver);
    }
}
And then I call the driver in my SpecFlow step definitions as:
_driver = (IWebDriver)ScenarioContext.Current.GetBindingInstance(typeof(IWebDriver));
ScenarioContext.Current.Add("Driver", _driver);
However, this has made no difference, and it seems as if my tests are trying to execute all their commands against one driver.
Can anyone advise where I have gone wrong?
You shouldn't be using ScenarioContext.Current in a parallel execution context. If you're injecting the driver through _objectContainer.RegisterInstanceAs you will receive it through constructor injection in your steps class' constructor, like so:
public MyScenarioSteps(IWebDriver driver)
{
    _driver = driver;
}
More info:
https://github.com/techtalk/SpecFlow/wiki/Parallel-Execution#thread-safe-scenariocontext-featurecontext-and-scenariostepcontext
https://github.com/techtalk/SpecFlow/wiki/Context-Injection
In my opinion this is horribly messy.
This might not be an answer, but it is too big for a comment.
Why are you using the IObjectContainer if you are just getting the driver from the current ScenarioContext and not injecting it via the DI mechanism? I would try this:
[Binding]
public class WebDriverSupport
{
    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var driver = GetWebDriverFromAppConfig();
        ScenarioContext.Current.Add("Driver", driver);
    }
}
then in your steps:
_driver = (IWebDriver)ScenarioContext.Current.Get("Driver");
As long as GetWebDriverFromAppConfig returns a new instance you should be ok...
Is this valid code?
selenium = new DefaultSelenium("localhost", 4444, "*iehta",
"http://www.google.com/");
selenium.start();
...
selenium.stop();
...
selenium.start();
...
selenium.stop();
There's nothing wrong with having multiple browsers open (what you call "seleniums"). In fact, it's the only way you can test certain applications. Imagine an application that has an administrative UI and an end-user UI, where you make changes on the admin side and verify their effects on the user side. You can either write your test to jump back and forth between the two on the same browser session, or you can open two browsers, one for each aspect of the application. The former is the usual technique, but the latter is much cleaner.
And why do you think it shouldn't be safe? As long as it works, it's fine; if it doesn't, then recreate the DefaultSelenium object again. It won't slow down your code anyway.
You should usually keep start() and stop() as your set-up and tear-down methods. When using TestNG you can annotate them with the @BeforeClass and @AfterClass annotations, so the browser is launched and shut down only once, before and after the test methods in a class.
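For example, with TestNG that could look like the sketch below (the class name is hypothetical; the host, port, browser string and URL are the ones from the question):

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;

public class GoogleSearchTest {

    private Selenium selenium;

    @BeforeClass
    public void startSelenium() {
        // launch the browser once for all test methods in this class
        selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com/");
        selenium.start();
    }

    @AfterClass
    public void stopSelenium() {
        // shut the browser down after the last test method has run
        selenium.stop();
    }
}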
By the way, did you support the Selenium proposal on Area 51 - http://area51.stackexchange.com/proposals/4693/selenium ?
This proposal is backed by SeleniumHQ, and we need more users to commit to it to make it see the light of day.
That was my fault.
The unexpected behaviour was caused by this code, and occurs because I stopped Selenium twice (the selenium object never becomes null):
public class SeleniumController {

    private static Selenium selenium;

    public static Selenium startNewSelenium(){
        // if already exists stop it and replace with new one
        if(selenium != null){
            selenium.stop();
        }
        selenium = createNewSelenium(getCurContext());
        return selenium;
    }

    public static void stopSelenium() {
        if(selenium != null){
            selenium.stop();
        }
    }

    private static Selenium createNewSelenium(TestContext testContext){
        TestProperties testProps = new TestProperties(testContext);
        ExtendedSelenium selenium = new ExtendedSelenium("localhost", RemoteControlConfiguration.DEFAULT_PORT,
                testProps.getBrowser(), testProps.getServerUrl());
        selenium.start();
        selenium.useXpathLibrary("javascript-xpath");
        selenium.allowNativeXpath("false");
        return selenium;
    }
}
The correct class code is:
public class SeleniumController {

    private static Selenium selenium;

    public static Selenium startNewSelenium(){
        // if already exists stop it and replace with new one
        stopSelenium();
        selenium = createNewSelenium(getCurContext());
        return selenium;
    }

    public static void stopSelenium() {
        if(selenium != null){
            selenium.stop();
            selenium = null;
        }
    }

    private static Selenium createNewSelenium(TestContext testContext){
        TestProperties testProps = new TestProperties(testContext);
        ExtendedSelenium selenium = new ExtendedSelenium("localhost", RemoteControlConfiguration.DEFAULT_PORT,
                testProps.getBrowser(), testProps.getServerUrl());
        selenium.start();
        selenium.useXpathLibrary("javascript-xpath");
        selenium.allowNativeXpath("false");
        return selenium;
    }
}
}