Creating a JUnit graph from test results - Selenium

Does JUnit have an out-of-the-box tool to plot the test results of a suite? Specifically, I am using the Selenium 2 WebDriver, and I want to plot passed vs. failed tests. Secondly, I want my test suite to continue even when a test fails; how would I go about doing this? I tried researching the topic, but none of the answers I found fully addresses my question.
Thanks in advance!
I should probably put my code in here as well:
@Test
public void test_Suite() throws Exception {
    driver.get("http://www.my-target-URL.com");
    test_1();
    test_2();
}

@Test
public void test_1() throws Exception {
    // perform test
    assertTrue(myquery);
}

@Test
public void test_2() throws Exception {
    // perform test
    assertTrue(myquery);
}

If you're using Jenkins as your CI server, the JUnit Plugin lets you publish the results at the end of a test run, and the JUnit Graph can then display them.
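As for keeping the suite going after a failure: when each method carries its own @Test annotation and is run by JUnit directly (rather than being called from test_Suite()), the runner already continues with the remaining tests after one fails. If a single test should also keep going past a failed check, JUnit 4's ErrorCollector rule is one option. A minimal sketch, assuming JUnit 4 with its bundled Hamcrest matchers; the boolean flags are placeholders for real Selenium checks:
import static org.hamcrest.CoreMatchers.is;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class SoftAssertExample {

    // Collects failures instead of aborting the test at the first failed check
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void test_1() {
        boolean firstCheck = false;  // stand-in for a real Selenium verification
        boolean secondCheck = true;  // still evaluated even though the first check failed

        collector.checkThat("first verification", firstCheck, is(true));
        collector.checkThat("second verification", secondCheck, is(true));
        // The test is reported as failed once at the end, listing every collected error.
    }
}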

Related

Maintain context of Selenium WebDriver while running parallel tests in NUnit?

Using: C#, NUnit 3.9, Selenium WebDriver 3.11.0, Chrome WebDriver 2.35.0
How do I maintain the context of my WebDriver while running parallel tests in NUnit?
When I run my tests with the ParallelScope.All attribute, my tests reuse the driver and fail.
The Test field in my tests does not persist across [SetUp] - [Test] - [TearDown] unless it is given a wider scope.
Test.cs
public class Test
{
    public IWebDriver Driver;
    //public Pages pages;
    //anything else I need in a test

    public Test()
    {
        Driver = new ChromeDriver();
    }

    //helper functions and reusable functions
}
SimpleTest.cs
[TestFixture]
[Parallelizable(ParallelScope.All)]
class MyTests
{
    Test Test;

    [SetUp]
    public void Setup()
    {
        Test = new Test();
    }

    [Test]
    public void Test_001()
    {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_002()
    {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_003()
    {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_004()
    {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [TearDown]
    public void TearDown()
    {
        string outcome = TestContext.CurrentContext.Result.Outcome.ToString();
        TestContext.Out.WriteLine("#RESULT: " + outcome);
        if (outcome.ToLower().Contains("fail"))
        {
            //Do something like take a screenshot which requires the WebDriver
        }
        Test.Driver.Quit();
        Test.Driver.Dispose();
    }
}
The docs state: "SetUpAttribute is now used exclusively for per-test setup."
Setting the Test field in the [SetUp] does not seem to work.
Is this a timing issue because I'm re-using the Test field? How do I arrange my fixtures so the Driver is unique for each test?
One solution is to put the driver inside the [Test], but then I cannot use the [TearDown] method, which I rely on to keep my tests organized and cleaned up.
I've read quite a few posts/websites, but nothing solves the problem. [Parallelizable(ParallelScope.Self)] seems to be the only real solution, and that slows down the tests.
Thank you in advance!
The ParallelizableAttribute makes a promise to NUnit that it's safe to run certain tests in parallel, but it doesn't do anything to actually make it safe. That's up to you, the programmer.
Your tests (test methods) have shared state, i.e. the field Test. Not only that, but each test changes the shared state, because the SetUp method is called for each test. That means your tests may not safely be run in parallel, so you shouldn't tell NUnit to run them that way.
You have two ways to go... either use a lesser degree of parallelism or make the tests safe to run in parallel.
Using a lesser degree of parallelism is the easiest. Try using ParallelScope.Fixtures on the assembly or ParallelScope.Self (the default) on each fixture. If you have a large number of independent fixtures, this may give you as good a throughput as you will get doing something more complicated.
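For the assembly-wide option, a one-line sketch (these attribute lines can live in AssemblyInfo.cs or any compiled file in the test project):
using NUnit.Framework;

// Fixtures run in parallel with each other; tests inside a fixture stay sequential.
[assembly: Parallelizable(ParallelScope.Fixtures)]
[assembly: LevelOfParallelism(4)] // optional: caps the number of worker threads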
Alternatively, to run tests in parallel, each test must have a separate driver. You will have to create it and dispose of it in the test method itself.
In the future, NUnit may add a feature that will make this easier, by isolating each test method in a separate object. But with the current software, the above is the best you can do.
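A minimal sketch of that per-test approach, reusing the ChromeDriver from the question (the navigation step is just a placeholder for the real test body):
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
[Parallelizable(ParallelScope.All)]
public class MyParallelTests
{
    [Test]
    public void Test_001()
    {
        // Each test owns its driver, so parallel workers never share state.
        using (var driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://www.google.com/");
            // ... steps and assertions for this test ...
        } // Dispose() ends the session even if an assertion throws
    }
}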

Report showing as Pass though I intentionally failed it in @BeforeMethod

In my code below, the report always shows the test case as Pass even though I failed it in the @BeforeMethod. Please help me fix this problem.
public class practice extends Test_CommonLib {
    WebDriver driver;
    ExtentReports logger;
    String Browser = "FireFox";

    @BeforeMethod
    public void setUp() throws Exception {
        logger = eno_TestResport(this.getClass().getName(), Browser);
        logger.startTest(this.getClass().getSimpleName());
        Assert.assertTrue(false); // intentionally failing my BeforeMethod
    }

    @Test
    public void CreateObject() throws Exception {
        System.out.println("Test");
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown(ITestResult result) throws Exception {
        if (ITestResult.FAILURE == result.getStatus()) {
            logger.log(LogStatus.FAIL, "Test case failed");
        } else if (ITestResult.SKIP == result.getStatus()) {
            logger.log(LogStatus.SKIP, "Test case skipped");
        } else {
            logger.log(LogStatus.PASS, "Awesome Job");
        }
    }
}
With the same code I got the result given below:
Well, what you are observing is correct. When an assertion fails, the rest of the code in that method is not executed. The same is happening in your case: whether the assertion passes or fails, nothing further runs inside that method; execution leaves the @BeforeMethod annotation and moves on to the methods under the @Test annotation.
Further, your report will always show the test case as "Pass" because the test case within the @Test annotation executes successfully.
@AnandKakhandaki Here you need to follow certain TestNG guidelines, described on this page - https://www.tutorialspoint.com/testng/testng_basic_annotations.htm
It is worth mentioning that the code within the @BeforeMethod annotation is executed every time before any test method runs; likewise for @BeforeSuite, @BeforeClass, @BeforeTest and @BeforeGroups. Similarly, the code within the @AfterMethod annotation is executed every time after any test method runs; likewise for @AfterSuite, @AfterClass, @AfterTest and @AfterGroups. The code within these annotations should be used to configure the application (the system under test) before and after the actual test execution begins/ends. These annotations may include code for choosing the browser for test execution, opening/closing the browser with certain options, opening a URL, switching to another URL, closing it, and so on - the configuration that is mandatory for running the tests.
Validation/verification, i.e. assertions, should never be part of these annotations; assertions belong within the @Test annotation. To be precise, assertions can be kept out of the @Test methods as well, in a separate library, so that the code within @Test contains only the testing steps.
Let me know if this answers your question.
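A minimal sketch of that separation, keeping configuration in the @BeforeMethod/@AfterMethod hooks and the assertion inside @Test (the ExtentReports calls are omitted, and the URL and browser choice are placeholder assumptions):
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class PracticeStructure {

    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        // Configuration only: start the browser and open the application under test.
        driver = new FirefoxDriver();
        driver.get("http://localhost:8080/myapp"); // placeholder URL
    }

    @Test
    public void createObject() {
        // Verification lives in the test method, not in the configuration hooks.
        Assert.assertTrue(driver.getTitle().length() > 0, "Page title should not be empty");
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}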

Setting up Selenium WebDriver for parallel execution

I am trying to execute a large suite of Selenium tests in parallel via the xUnit console runner.
The tests execute and I see 3 Chrome windows open; however, the first SendKeys commands simply execute 3 times against one window, resulting in test failures.
I have registered my driver in an object container before each scenario, as below:
[Binding]
public class WebDriverSupport
{
    private readonly IObjectContainer _objectContainer;

    public WebDriverSupport(IObjectContainer objectContainer)
    {
        _objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var driver = GetWebDriverFromAppConfig();
        _objectContainer.RegisterInstanceAs<IWebDriver>(driver);
    }
}
I then retrieve the driver in my SpecFlow step definitions as:
_driver = (IWebDriver)ScenarioContext.Current.GetBindingInstance(typeof(IWebDriver));
ScenarioContext.Current.Add("Driver", _driver);
However, this has made no difference, and it seems my tests are sending all commands to one driver.
Can anyone advise where I have gone wrong?
You shouldn't be using ScenarioContext.Current in a parallel execution context. If you're injecting the driver through _objectContainer.RegisterInstanceAs you will receive it through constructor injection in your steps class' constructor, like so:
public MyScenarioSteps(IWebDriver driver)
{
    _driver = driver;
}
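A slightly fuller sketch of that wiring, with the field and a sample step added for illustration (the step text and URL are made-up placeholders):
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class MyScenarioSteps
{
    private readonly IWebDriver _driver;

    // SpecFlow resolves this from the scenario container, i.e. the instance
    // registered in WebDriverSupport via RegisterInstanceAs<IWebDriver>.
    public MyScenarioSteps(IWebDriver driver)
    {
        _driver = driver;
    }

    [When(@"I open the home page")]
    public void WhenIOpenTheHomePage()
    {
        _driver.Navigate().GoToUrl("https://example.com/"); // placeholder URL
    }
}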
More info:
https://github.com/techtalk/SpecFlow/wiki/Parallel-Execution#thread-safe-scenariocontext-featurecontext-and-scenariostepcontext
https://github.com/techtalk/SpecFlow/wiki/Context-Injection
In my opinion this is horribly messy.
This might not be an answer, but is too big for a comment.
Why are you using the IObjectContainer if you are just getting the driver from the current scenario context rather than injecting it via the DI mechanism? I would try this:
[Binding]
public class WebDriverSupport
{
    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var driver = GetWebDriverFromAppConfig();
        ScenarioContext.Current.Add("Driver", driver);
    }
}
then in your steps:
_driver = (IWebDriver)ScenarioContext.Current.Get("Driver");
As long as GetWebDriverFromAppConfig returns a new instance you should be ok...
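GetWebDriverFromAppConfig isn't shown in the question; a hypothetical sketch of what such a helper might look like, reading the browser name from App.config and always returning a fresh instance:
using System.Configuration;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;

public static class WebDriverFactory
{
    public static IWebDriver GetWebDriverFromAppConfig()
    {
        // Assumes an <appSettings> entry such as <add key="browser" value="chrome" />
        string browser = ConfigurationManager.AppSettings["browser"] ?? "chrome";

        switch (browser.ToLowerInvariant())
        {
            case "firefox":
                return new FirefoxDriver();
            default:
                return new ChromeDriver(); // a new driver instance on every call
        }
    }
}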

Issue with methods (test cases) in Selenium WebDriver

I am new to Selenium. While practicing I came across one issue. I am testing my own application, which is deployed on a Tomcat server. After opening my application I test validations in one method and a page change in another method. Both methods work on the same page.
Why do I need to write the same code in both methods?
driver.get("http://localhost:8070/");
driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
driver.findElement(By.linkText("/ReportGenerator")).click();
How can I perform the operations directly? If I remove the above lines from my second method, it fails. How do I solve this?
@Test
public void analysisValidation()
{
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
    driver.findElement(By.id("Analysis")).click();
    WebElement webElement = driver.findElement(By.id("modelForm.errors"));
    String alertMsg = webElement.getText();
    System.out.println(alertMsg);
    Assert.assertEquals("Please select a Survey Id to perform Aggregate Analysis", alertMsg);
}

@Test
public void testAnalysisPage()
{
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
    new Select(driver.findElement(By.id("surveyId"))).selectByVisibleText("Apollo");
    driver.findElement(By.id("Analysis")).click();
    System.out.println(driver.getTitle());
    String pageTitle = driver.getTitle();
    Assert.assertEquals("My JSP 'analysis.jsp' starting page", pageTitle);
}
How can I perform the operations directly? If I remove the above lines from my second method, it fails. How do I solve this?
The tests fail because each @Test method is executed independently. The code you removed is needed to initialize the driver and load the page.
You can fix this as follows:
Create a function, setUp(), with the @BeforeMethod annotation. Populate it with the driver initialization and page-loading calls.
Create a function, teardown(), with the @AfterMethod annotation. Populate it with the driver cleanup calls.
For example, here is some pseudocode (modify this as per taste):
@BeforeMethod
public void setUp() throws Exception {
    driver = new FirefoxDriver(); // create a fresh driver for each test (browser of your choice)
    driver.get("http://localhost:8070/");
    driver.findElement(By.xpath("//div[@id='actions']/div[2]/a/span")).click();
    driver.findElement(By.linkText("/ReportGenerator")).click();
}

@AfterMethod
public void teardown() throws Exception {
    driver.quit();
}
The advantage of the @BeforeMethod and @AfterMethod annotations is that the code will be run before/after each @Test method executes. You therefore avoid having to duplicate your code.

Do I need to recreate my driver for each test?

I've just started using Selenium - currently I'm only interested in IE as it's an intranet site and not for public consumption. I'm using IEDriverServer.exe to set my browser sessions up, but I'm unsure as to whether I need to recreate it for each test, or if it will maintain atomicity of the browser sessions/tests automatically. I've not been able to find any information on this as most of the examples are for a single test rather than a batch of unit tests.
So currently I have
[TestInitialize]
public void SetUp()
{
    _driver = new InternetExplorerDriver();
}
and
[TestCleanup]
public void TearDown()
{
    _driver.Close();
    _driver.Quit();
}
Is this correct or am I doing extra unnecessary work for each test? Should I just initialise it when it's declared? If so, how do I manage its lifecycle? I presume I can call .Close() for each test to kill the browser window, but what about .Quit()?
I use Selenium with NUnit, but you don't need to recreate the driver every time. Since you are using MSTest, I would do something like this:
[ClassInitialize]
public static void SetUp(TestContext context) // must be static and accept a TestContext
{
    _driver = new InternetExplorerDriver(); // _driver then needs to be a static field
}
[ClassCleanup]
public static void TearDown() // must also be static
{
    _driver.Close();
    _driver.Quit();
}
ClassInitialize will call code once per test class initialisation, and ClassCleanup will call code once per test class teardown / dispose.
Although this is still not guaranteed, because the test runner may run the tests on several threads:
http://blogs.msdn.com/b/nnaderi/archive/2007/02/17/explaining-execution-order.aspx
You must also think about what kind of state you want your tests to start in each time. The most common reason for shutting down and starting a new browser session each time is that you then have a clean slate to work with.
Sometimes this is unnecessary work, as you've pointed out, but what is your tests' starting point?
For me, I have one browser per test class, with a method that signs out of my web application and leaves it at the login page at the end of each test.
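A sketch of that pattern in MSTest, with a hypothetical SignOut() helper standing in for your application's real sign-out steps (one browser per class, back at the login page after every test):
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.IE;

[TestClass]
public class IntranetTests
{
    private static IWebDriver _driver;

    [ClassInitialize]
    public static void ClassSetUp(TestContext context)
    {
        _driver = new InternetExplorerDriver(); // one browser for the whole class
    }

    [TestMethod]
    public void CanOpenHomePage()
    {
        _driver.Navigate().GoToUrl("http://intranet.example/"); // placeholder URL
    }

    [TestCleanup]
    public void TestTearDown()
    {
        SignOut(_driver); // leave the browser on the login page for the next test
    }

    [ClassCleanup]
    public static void ClassTearDown()
    {
        _driver.Quit();
    }

    private static void SignOut(IWebDriver driver)
    {
        // Hypothetical helper: replace with your application's real sign-out flow
        driver.Navigate().GoToUrl("http://intranet.example/logout");
    }
}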