I am using Selenium WebDriver with the TestNG framework to run a test suite on Windows and Mac across different browsers: Chrome, IE, Firefox, and Safari. I have around 300 test cases in my test suite.
The problem is that some of the test cases get skipped intermittently, at points where I believe the driver becomes unresponsive. However, the logs fail to capture any details about why the test cases are being skipped.
The reporter class extends TestListenerAdapter, so the skipped test cases get listed in the log file via the onConfigurationSkip method. It only prints that a particular test case has been skipped.
Below are some code snippets for reference
Code from Reporter Class
@Override
public void onConfigurationSkip(ITestResult testResult) {
    LOGGER.info(String.format("Skipped Configuration for : %s", testResult.getMethod().getMethodName()));
}
Sample Test Class
public class TestClass {

    private WebDriver driver;

    @Parameters({ "platform", "browser" })
    @BeforeClass
    public void setUp(String platform, String browser) {
        // Creates a new driver instance based on platform and browser details
        driver = WebDriverFactory.getDriver(platform, browser);
        // Login to web application
        Utils.login(driver, username, password);
    }

    @Test
    public void sampleTestMethod() {
        // scripts for validating Web elements
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}
Observations:
driver.quit() doesn't guarantee that the driver instance has been closed; I can still see the driver process running in Task Manager. Again, this is intermittent and only happens sometimes.
This issue is observed on all platforms and browsers.
This is an intermittent issue, and the probability of its occurrence increases as the number of test cases in the suite grows.
There is no definite pattern to the skipped test cases; they get randomly skipped on some browsers and platforms.
The probability of test cases being skipped increases with each subsequent run of the test suite. I believe the reason is that more and more driver instances that were not properly closed keep running in the background.
A test class normally has 5-15 test methods; a new driver instance is created each time in the @BeforeClass method and closed in @AfterClass.
Any suggestions? Thanks in advance.
If you're fine with opening and closing the browser around every test, then you should use @BeforeMethod and @AfterMethod instead of @BeforeClass and @AfterClass.
If you run the following code and study its output, you'll find that @BeforeMethod executes before every @Test-annotated method, whereas @BeforeClass executes only once for all methods in the class.
Since I don't have your full code to analyze, I can only assume that your tests are trying to reuse stale driver instances. So the best bet would be to close the driver down after every test execution finishes; a sketch of that change follows the output below.
Code:
package com.autmatika.testng;

import org.testng.annotations.*;

public class FindIt {

    @BeforeClass
    public void beforeClass() {
        System.out.println("Before Class");
    }

    @AfterClass
    public void afterClass() {
        System.out.println("After Class");
    }

    @BeforeMethod
    public void beforeTest() {
        System.out.println("Before Test");
    }

    @AfterMethod
    public void afterTest() {
        System.out.println("After Test");
    }

    @Test
    public void test1() {
        System.out.println("test 1");
    }

    @Test
    public void test2() {
        System.out.println("test 2");
    }

    @Test
    public void test3() {
        System.out.println("test 3");
    }

    @Test
    public void test4() {
        System.out.println("test 4");
    }
}
Output:
Before Class
Before Test
test 1
After Test
Before Test
test 2
After Test
Before Test
test 3
After Test
Before Test
test 4
After Test
After Class
===============================================
Default Suite
Total tests run: 4, Failures: 0, Skips: 0
===============================================
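For the original TestClass, that change would look roughly like the sketch below. It reuses WebDriverFactory, Utils.login, and the username/password fields from the question, and adds alwaysRun plus a null check so cleanup still happens when a configuration method fails; treat it as an illustration rather than a drop-in fix.
public class TestClass {

    private WebDriver driver;

    @Parameters({ "platform", "browser" })
    @BeforeMethod
    public void setUp(String platform, String browser) {
        // Fresh driver per test method, so an unresponsive browser affects one test only
        driver = WebDriverFactory.getDriver(platform, browser);
        Utils.login(driver, username, password);
    }

    @Test
    public void sampleTestMethod() {
        // scripts for validating Web elements
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        // alwaysRun makes the cleanup run even if setUp failed or the test was skipped
        if (driver != null) {
            driver.quit();
            driver = null;
        }
    }
}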
The most common reason for test cases getting skipped in Selenium with TestNG is that a method depends on another method (via dependsOnMethods) and the method it depends on has failed.
To get information on why a test was skipped, you can implement that in the after-method, as below:
@AfterMethod
public void afterTest(ITestResult result) {
    Throwable t = result.getThrowable();
    // from t you can get the stack trace and log it to your reporter
}
To avoid tests getting skipped, you can set the alwaysRun attribute to true on the @Test annotation:
@Test(alwaysRun = true)
To avoid the WebDriver carrying over in a bad state between tests, do the driver setup and cleanup in methods annotated with @BeforeMethod and @AfterMethod, so try changing the annotations of the setUp and tearDown methods to @BeforeMethod and @AfterMethod respectively.
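Separately, if you want the skip reason to appear in the log from the reporter class in the question, TestListenerAdapter also exposes onTestSkipped, and the result's throwable usually carries the configuration failure that triggered the skip. A minimal sketch, assuming the same LOGGER as in the question (the class name here is just illustrative):
public class SkipLoggingReporter extends TestListenerAdapter {

    @Override
    public void onTestSkipped(ITestResult testResult) {
        // For skips caused by a failed configuration method, getThrowable()
        // typically holds the underlying exception
        Throwable cause = testResult.getThrowable();
        LOGGER.info(String.format("Skipped %s, cause: %s",
                testResult.getMethod().getMethodName(),
                cause == null ? "unknown" : cause.toString()));
    }
}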
Using: C#, NUnit 3.9, Selenium WebDriver 3.11.0, Chrome WebDriver 2.35.0
How do I maintain the context of my WebDriver while running parallel tests in NUnit?
When I run my tests with the ParallelScope.All attribute, my tests reuse the driver and fail
The Test property in my tests does not persist across the [Setup] - [Test] - [TearDown] without the Test being given a higher scope.
Test.cs
public class Test {
    public IWebDriver Driver;
    //public Pages pages;
    //anything else I need in a test

    public Test() {
        Driver = new ChromeDriver();
    }

    //helper functions and reusable functions
}
SimpleTest.cs
[TestFixture]
[Parallelizable(ParallelScope.All)]
class MyTests {
    Test Test;

    [SetUp]
    public void Setup() {
        Test = new Test();
    }

    [Test]
    public void Test_001() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_002() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_003() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_004() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [TearDown]
    public void TearDown() {
        string outcome = TestContext.CurrentContext.Result.Outcome.ToString();
        TestContext.Out.WriteLine("#RESULT: " + outcome);
        if (outcome.ToLower().Contains("fail")) {
            //Do something like take a screenshot which requires the WebDriver
        }
        Test.Driver.Quit();
        Test.Driver.Dispose();
    }
}
The docs state: "SetUpAttribute is now used exclusively for per-test setup."
Setting the Test property in the [SetUp] does not seem to work.
If this is a timing issue caused by re-using the Test property, how do I arrange my fixtures so the Driver is unique for each test?
One solution is to put the driver inside the [Test]. But then I cannot utilize the TearDown method, which I need to keep my tests organized and cleaned up.
I've read quite a few posts/websites, but nothing solves the problem. [Parallelizable(ParallelScope.Self)] seems to be the only real solution, and that slows down the tests.
Thank you in advance!
The ParallelizableAttribute makes a promise to NUnit that it's safe to run certain tests in parallel, but it doesn't do anything to actually make it safe. That's up to you, the programmer.
Your tests (test methods) have shared state, i.e. the field Test. Not only that, but each test changes the shared state, because the SetUp method is called for each test. That means your tests may not safely be run in parallel, so you shouldn't tell NUnit to run them that way.
You have two ways to go... either use a lesser degree of parallelism or make the tests safe to run in parallel.
Using a lesser degree of parallelism is the easiest. Try using ParallelScope.Fixtures on the assembly or ParallelScope.Self (the default) on each fixture. If you have a large number of independent fixtures, this may give you as good a throughput as you will get doing something more complicated.
Alternatively, to run tests in parallel, each test must have a separate driver. You will have to create it and dispose of it in the test method itself.
In the future, NUnit may add a feature that will make this easier, by isolating each test method in a separate object. But with the current software, the above is the best you can do.
In my code below, the report always shows the test case as Pass even though I failed it in the BeforeMethod. Please help me fix this problem.
public class practice extends Test_CommonLib {

    WebDriver driver;
    ExtentReports logger;
    String Browser = "FireFox";

    @BeforeMethod
    public void setUp() throws Exception {
        logger = eno_TestResport(this.getClass().getName(), Browser);
        logger.startTest(this.getClass().getSimpleName());
        Assert.assertTrue(false); // intentionally failing my BeforeMethod
    }

    @Test
    public void CreateObject() throws Exception {
        System.out.println("Test");
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown(ITestResult result) throws Exception {
        if (ITestResult.FAILURE == result.getStatus()) {
            logger.log(LogStatus.FAIL, "Test case failed");
        } else if (ITestResult.SKIP == result.getStatus()) {
            logger.log(LogStatus.SKIP, "Test case skipped");
        } else {
            logger.log(LogStatus.PASS, "Awesome Job");
        }
    }
}
With the same code, I got the result shown below.
Well, what you are observing is expected. When an assertion fails, the rest of the code in that method is not executed. The same is happening in your case: once the assertion statement has run, whether it passes or fails, the driver executes nothing further within that method; control comes straight out of the @BeforeMethod method and moves on to the methods under the @Test annotation.
Further, your report will always show the test case as "Pass", because the test case within the @Test annotation executes successfully.
@AnandKakhandaki Here you need to follow certain TestNG guidelines, as laid out on this page: https://www.tutorialspoint.com/testng/testng_basic_annotations.htm
It is worth mentioning that the code within the @BeforeMethod annotation is executed every time before any test method runs, and likewise for @BeforeSuite, @BeforeClass, @BeforeTest, and @BeforeGroups. Similarly, the code within the @AfterMethod annotation is executed every time after any test method runs, and likewise for @AfterSuite, @AfterClass, @AfterTest, and @AfterGroups. The code within these annotations should be used to configure the application, aka the system under test, before and after the actual test execution begins/ends. These annotations may include code for choosing the browser for test execution, opening/closing the browser with certain attributes, opening a URL, switching to another URL, closing the URL, and other configuration that is mandatory for the test run.
Validation/verification, i.e. assertion, should never be part of these annotations. Rather, assertions should be within the @Test annotation. To be precise, assertions should be kept out of the @Test annotation as well, in a separate library, so that the code within your @Test annotation contains only test steps.
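As a minimal sketch of that separation, using the same eno_TestResport helper and ExtentReports logger from the question, the configuration method only prepares the report, and the assertion moves into the test method, where a failure is reported as FAIL rather than the test being skipped:
public class practice extends Test_CommonLib {

    ExtentReports logger;
    String Browser = "FireFox";

    @BeforeMethod
    public void setUp() throws Exception {
        // Configuration only: start the report, no assertions here
        logger = eno_TestResport(this.getClass().getName(), Browser);
        logger.startTest(this.getClass().getSimpleName());
    }

    @Test
    public void CreateObject() throws Exception {
        // The verification now lives inside the test method
        Assert.assertTrue(false);
    }
}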
Let me know if this answers your question.
I have two methods, as shown below.
I'm executing the suite using testng.xml with thread-count="2" and parallel="methods" so that all @Test methods are executed in parallel.
@Test
// this method will be executed in Firefox
public void method1() {
    WebDriver driver = new FirefoxDriver();
    driver.get("https://google.co.in");
    line2;
    line3;
}

@Test
// this method will be executed in another window of Firefox
public void method2() {
    WebDriver driver = new FirefoxDriver();
    driver.get("https://gmail.com"); // has to be executed only after the opening of google in method1
    line2; // has to be executed after the line2 of method1
    line3; // has to be executed after the line3 of method1
}
The two methods run in parallel without depending on each other. But as per my requirement (mentioned in the code comments), is it possible to make the execution of method2 depend on the execution of method1?
Add dependsOnMethods to the @Test annotation of method2, as below:
@Test(dependsOnMethods = { "method1" })
public void method2(){
    .....
}
Note that dependsOnMethods makes method2 start only after method1 has completed (and method2 is skipped if method1 fails); it does not interleave the individual statements of the two methods.
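Putting it together with the methods from the question, a minimal sketch might look like this (the class name is just illustrative, and the lineN placeholders from the question become comments):
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.Test;

public class DependentMethodsTest {

    @Test
    public void method1() {
        WebDriver driver = new FirefoxDriver();
        driver.get("https://google.co.in");
        // line2, line3 ...
        driver.quit();
    }

    @Test(dependsOnMethods = { "method1" })
    public void method2() {
        // Starts only after method1 has completed successfully
        WebDriver driver = new FirefoxDriver();
        driver.get("https://gmail.com");
        // line2, line3 ...
        driver.quit();
    }
}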
I am new to Selenium. While practicing, I came across an issue. I am testing my own application, which is deployed on a Tomcat server. After opening my application, I test validations in one method and a page change in another method; both tests operate on the same page.
Why do I need to write the same code in both methods?
driver.get("http://localhost:8070/");
driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
driver.findElement(By.linkText("/ReportGenerator")).click();
How can I directly perform the operations? If I remove the above lines from my second method, it fails. How do I solve this?
@Test
public void analysisValidation() {
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
    driver.findElement(By.id("Analysis")).click();
    WebElement webElement = driver.findElement(By.id("modelForm.errors"));
    String alertMsg = webElement.getText();
    System.out.println(alertMsg);
    Assert.assertEquals("Please select a Survey Id to perform Aggregate Analysis", alertMsg);
}

@Test
public void testAnalysisPage() {
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
    new Select(driver.findElement(By.id("surveyId"))).selectByVisibleText("Apollo");
    driver.findElement(By.id("Analysis")).click();
    System.out.println(driver.getTitle());
    String pageTitle = driver.getTitle();
    Assert.assertEquals("My JSP 'analysis.jsp' starting page", pageTitle);
}
How can I directly perform the operations? If I remove the above lines from my second method, it fails. How do I solve this?
The tests fail because each @Test method is executed independently. The code you removed is needed to initialize the driver state and load the page.
You can fix this as follows:
Create a setUp() method with the @BeforeMethod annotation and populate it with the driver initialization and page-loading calls.
Create a teardown() method with the @AfterMethod annotation and populate it with the driver cleanup calls.
For example, here is some pseudocode (modify this as per taste):
@BeforeMethod
public void setUp() throws Exception {
    driver = new FirefoxDriver(); // or whichever driver initialization you use
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
}

@AfterMethod
public void teardown() throws Exception {
    driver.quit();
}
The advantage of the @BeforeMethod and @AfterMethod annotations is that the code runs before/after each @Test method executes. You can therefore avoid duplicating your code.
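With that in place, the duplicated navigation disappears from the tests themselves; the first test from the question, for example, reduces to a sketch like this:
@Test
public void analysisValidation() {
    // Navigation to /ReportGenerator already happened in setUp()
    driver.findElement(By.id("Analysis")).click();
    String alertMsg = driver.findElement(By.id("modelForm.errors")).getText();
    System.out.println(alertMsg);
    Assert.assertEquals("Please select a Survey Id to perform Aggregate Analysis", alertMsg);
}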
I am using Selenium WebDriver. When I use Selenium and NUnit to run my test cases, I find that every time a test case starts it opens a new page, and when it's done the new page is destroyed. Therefore I have to open a new page and log in for every test case.
I want my test cases to share one single web page so that they can be performed in sequence.
Is this a Selenium limitation, or is there a way to implement it?
Thank you!
Try declaring the WebDriver instance variable as static inside your test class and initializing it only once. You see this behavior because different WebDriver instances do not share the same session, which is why you always have to log in to the desired page again.
You are probably using the @Before/@After annotations.
Try using @BeforeClass/@AfterClass instead, e.g.:
....
static WebDriver driver;

@BeforeClass
public static void firefoxSetUp() throws MalformedURLException {
    driver = new FirefoxDriver();
    driver.manage().timeouts().implicitlyWait(20, TimeUnit.SECONDS);
    driver.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);
    driver.manage().window().setSize(new Dimension(1920, 1080));
}

@Before
public void homePageRefresh() throws IOException {
    driver.manage().deleteAllCookies();
    driver.get(propertyKeysLoader("login.base.url"));
}

@AfterClass
public static void closeFirefox() {
    driver.quit();
}
.....