How do I use the JUnit 5 Platform Launcher API to discover tests from a queue?

I'm looking to distribute tests to multiple instances of the JUnit 5 standalone console, whereby each instance reads off of a queue. Each instance of the runner would use the same test.jar on the classpath, so I'm not trying to distribute the byte code of the actual tests here, just the names of the tests / filter pattern strings.
From the JUnit 5 advanced topics doc, I think the appropriate place to extend JUnit 5 to do this is the Platform Launcher API. I cobbled this snippet together largely from the sample code in the guide. I think this is what I need to write, but I'm concerned I'm oversimplifying the effort involved here:
// keep pulling test classes off the queue until it's empty
while (myTestQueue.isNotEmpty()) {
    String classFromQueue = myTestQueue.next(); // returns "org.myorg.foo.fooTests"
    LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
            .selectors(selectClass(classFromQueue)).build();
    SummaryGeneratingListener listener = new SummaryGeneratingListener();
    try (LauncherSession session = LauncherFactory.openSession()) {
        Launcher launcher = session.getLauncher();
        launcher.registerTestExecutionListeners(listener);
        TestPlan testPlan = launcher.discover(request);
        launcher.execute(testPlan);
    }
    TestExecutionSummary summary = listener.getSummary();
    addSummary(summary);
}
Questions:
Will repeatedly discovering and executing in a while loop violate the normal test lifecycle? I'm a little fuzzy on whether discovery is a one-time thing that's supposed to happen before all executions.
If I assume that it's OK to repeatedly discover and then execute, I see that HierarchicalTestEngine may be an even better place to read from a queue, since it seems to be used for implementing parallel execution. Is this more suitable for my use case? Would the implementation be essentially the same as what I have above, except that maybe I wouldn't need to handle accumulating test summaries?
Approaches I do not want to take:
I am not looking to use the new features of JUnit 5 aimed at parallelizing test execution within the same JVM. I'm also not looking to divide the tests or classes up ahead of time, i.e. starting each console runner instance with a pre-determined subset of tests.

Short Answer
The code posted in the question (loosely) illustrates a valid approach. There is no need to create a custom engine: leveraging the Platform Launcher API to repeatedly discover and execute tests does work. It's worth highlighting that you do not have to extend JUnit 5 at all. This isn't executed through an extension that you need to register, as I'd originally assumed; you're simply leveraging the Platform Launcher API to discover and execute tests.
Long Answer
Here is some sample code with a simple queue of test class names that exist on the classpath. While the queue is not empty, an instance of the TestNode class discovers and executes each of the three test classes and writes a LegacyXmlReport.
TestNode Code:
package org.sample.node;

import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.LauncherSession;
import org.junit.platform.launcher.TestPlan;
import org.junit.platform.launcher.core.LauncherConfig;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import org.junit.platform.reporting.legacy.xml.LegacyXmlReportGeneratingListener;

import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.nio.file.Paths;
import java.util.LinkedList;
import java.util.Queue;

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;

public class TestNode {

    public void run() throws FileNotFoundException {
        // keep pulling test classes off the queue until it's empty
        Queue<String> queue = getQueue();
        while (!queue.isEmpty()) {
            String testClass = queue.poll();
            LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                    .selectors(selectClass(testClass)).build();
            LauncherConfig launcherConfig = LauncherConfig.builder()
                    .addTestExecutionListeners(new LegacyXmlReportGeneratingListener(
                            Paths.get("target"), new PrintWriter(new FileOutputStream("log.txt"))))
                    .build();
            SummaryGeneratingListener listener = new SummaryGeneratingListener();
            try (LauncherSession session = LauncherFactory.openSession(launcherConfig)) {
                Launcher launcher = session.getLauncher();
                launcher.registerTestExecutionListeners(listener);
                TestPlan testPlan = launcher.discover(request);
                launcher.execute(testPlan);
            }
        }
    }

    private Queue<String> getQueue() {
        Queue<String> queue = new LinkedList<>();
        queue.add("org.sample.tests.Class1");
        queue.add("org.sample.tests.Class2");
        queue.add("org.sample.tests.Class3");
        return queue;
    }

    public static void main(String[] args) throws FileNotFoundException {
        TestNode node = new TestNode();
        node.run();
    }
}
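A note on the queue: the in-memory LinkedList above is only a stand-in; to actually distribute work across several nodes, getQueue() would be replaced by a client for whatever shared broker feeds the class names. A minimal sketch of what the consuming loop could look like against a java.util.concurrent.BlockingQueue (the QueueDrainingNode class and POISON sentinel are illustrative, not part of any JUnit API):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueDrainingNode {

    // Hypothetical sentinel telling the node that no more class names will arrive.
    private static final String POISON = "__END_OF_WORK__";

    public static void drain(BlockingQueue<String> queue) throws InterruptedException {
        while (true) {
            // Block for up to 30 seconds waiting for the next test class name.
            String testClass = queue.poll(30, TimeUnit.SECONDS);
            if (testClass == null || POISON.equals(testClass)) {
                break; // queue drained or producer shut down
            }
            // Discover and execute testClass exactly as in TestNode.run() above.
        }
    }
}
Also note that the SummaryGeneratingListener in run() is registered but never read; per-class results can be printed with, for example, listener.getSummary().printTo(new PrintWriter(System.out)) after the session closes.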
Tests executed by TestNode
I'm just showing one of the three test classes, since they're all the same apart from the class name.
They reside in src/main/java and NOT src/test/java. This is an admittedly odd but common Maven pattern for packaging tests into a fat jar.
package org.sample.tests;

import org.junit.jupiter.api.Test;

public class Class1 {

    @Test
    void test1() {
        System.out.println("Class1 Test 1");
    }

    @Test
    void test2() {
        System.out.println("Class1 Test 2");
    }

    @Test
    void test3() {
        System.out.println("Class1 Test 3");
    }
}

Related

Maintain context of Selenium WebDriver while running parallel tests in NUnit?

Using: C#, NUnit 3.9, Selenium WebDriver 3.11.0, Chrome WebDriver 2.35.0
How do I maintain the context of my WebDriver while running parallel tests in NUnit?
When I run my tests with the ParallelScope.All attribute, my tests reuse the driver and fail.
The Test property in my tests does not persist across [SetUp] - [Test] - [TearDown] without the Test being given a higher scope.
Test.cs
public class Test {
    public IWebDriver Driver;
    //public Pages pages;
    //anything else I need in a test

    public Test() {
        Driver = new ChromeDriver();
    }

    //helper functions and reusable functions
}
SimpleTest.cs
[TestFixture]
[Parallelizable(ParallelScope.All)]
class MyTests {
    Test Test;

    [SetUp]
    public void Setup()
    {
        Test = new Test();
    }

    [Test]
    public void Test_001() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_002() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_003() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_004() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [TearDown]
    public void TearDown()
    {
        string outcome = TestContext.CurrentContext.Result.Outcome.ToString();
        TestContext.Out.WriteLine("#RESULT: " + outcome);
        if (outcome.ToLower().Contains("fail"))
        {
            //Do something like take a screenshot which requires the WebDriver
        }
        Test.Driver.Quit();
        Test.Driver.Dispose();
    }
}
The docs state: "SetUpAttribute is now used exclusively for per-test setup."
Setting the Test property in [SetUp] does not seem to work.
If this is a timing issue caused by re-using the Test property, how do I arrange my fixtures so the Driver is unique for each test?
One solution is to create the driver inside the [Test] method itself. But then I cannot utilize the TearDown method, which is a necessity to keep my tests organized and cleaned up.
I've read quite a few posts/websites, but nothing solves the problem. [Parallelizable(ParallelScope.Self)] seems to be the only real solution and that slows down the tests.
Thank you in advance!
The ParallelizableAttribute makes a promise to NUnit that it's safe to run certain tests in parallel, but it doesn't do anything to actually make it safe. That's up to you, the programmer.
Your tests (test methods) have shared state, i.e. the field Test. Not only that, but each test changes the shared state, because the SetUp method is called for each test. That means your tests may not safely be run in parallel, so you shouldn't tell NUnit to run them that way.
You have two ways to go... either use a lesser degree of parallelism or make the tests safe to run in parallel.
Using a lesser degree of parallelism is the easiest. Try using ParallelScope.Fixtures on the assembly or ParallelScope.Self (the default) on each fixture. If you have a large number of independent fixtures, this may give you as good a throughput as you will get doing something more complicated.
Alternatively, to run tests in parallel, each test must have a separate driver. You will have to create it and dispose of it in the test method itself.
In the future, NUnit may add a feature that will make this easier, by isolating each test method in a separate object. But with the current software, the above is the best you can do.

Fitnesse wiki unable to call selenium method correctly

I am trying to write a simple fixture that opens the browser and navigates to www.google.com. When I run the wiki page, it passes with all green, but the browser never opens up (I don't think the method even gets called by the wiki). Can someone take a look at my fixture and wiki to see what I am doing wrong? Many thanks in advance,
Here is the Wiki -
!|SeleniumFitness|
|URL |navigateToSite?|
|http://www.google.com| |
After Running -
!|SeleniumFitnesse| java.lang.NoSuchMethodError: org.openqa.selenium.remote.service.DriverCommandExecutor.<init>(Lorg/openqa/selenium/remote/service/DriverService;Ljava/util/Map;)V
|URL |The instance decisionTable_4.setURL. does not exist|navigateToSite?
|http://www.google.com|!The instance decisionTable_4.navigateToSite. does not exist |
Here is the Fixture -
package FitNesseConcept.fitNesse;

import java.util.Properties;
import org.junit.BeforeClass;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.BeforeMethod;
//import com.google.common.base.Preconditions.*;
//import com.google.common.collect.Lists;
import fit.ColumnFixture;

public class SeleniumFitnesse extends ColumnFixture {

    public static ChromeDriver driver = null;
    private String navigateToSite = "";
    public String URL = "";

    public SeleniumFitnesse() {
        Properties props = System.getProperties();
        props.setProperty("webdriver.chrome.driver", "/home/ninad/eclipse-workspace/chromedriver");
        driver = new ChromeDriver();
    }

    // SET-GET Methods
    public String getURL() {
        return URL;
    }

    public void setURL(String uRL) {
        URL = uRL;
    }

    public String getNavigateToSite() {
        return navigateToSite;
    }

    public void setNavigateToSite(String navigateToSite) {
        this.navigateToSite = navigateToSite;
    }

    // Navigate to URL
    public void navigateToSite() throws Throwable {
        System.out.println("Navigating to Website");
        try {
            driver.navigate().to(URL);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
You are getting some good recommendations as comments - but to answer your question directly, for an old-style ColumnFixture, which is what you have written, the method "navigateToSite" is indeed not going to be called.
These styles of fixtures are not often used anymore; Slim is preferred, and your FitNesse instance's documentation will show you how to use the Slim style. However, for a column fixture as you have written, if you want a method to be called, its name needs a "?" after it in the header row.
See basic docs for column fixture:
http://fitnesse.org/FitNesse.UserGuide.FixtureGallery.BasicFitFixtures.ColumnFixture
You are mis-using the column fixture, even granted the old style. A column fixture's pattern is "here is a series of columns that represent inputs, now here is a method call I want to make to get the output and check the result". Navigating a website does not often fit that pattern. In old-style FitNesse it would probably be approached with an ActionFixture:
http://fitnesse.org/FitNesse.UserGuide.FixtureGallery.BasicFitFixtures.ActionFixture
In the newer Slim style, a good fit for navigation and checking where you are would be a Scenario Table.
http://www.fitnesse.org/FitNesse.UserGuide.WritingAcceptanceTests.SliM.ScenarioTable
In general doing WebDriver / Selenium tests through a wiki is worth extra thought as to whether it's your best medium. Fitnesse is really designed to be a collaborative tool for documenting and verifying business requirements, directly against source code.
Here's an example of how to do it with a ColumnFixture, although again a ColumnFixture is not exactly appropriate:
|url|navigateToUrl?|
|www.google.com| |
java class:
public String url;

public void navigateToUrl() {
}
You could return an "OK" if it navigates alright, or return the title of the page instead of void if you wanted.
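For illustration, a minimal sketch of that variation (NavigationFixture is a hypothetical name, and the driver is assumed to be the static one created in the question's SeleniumFitnesse constructor):
import fit.ColumnFixture;

public class NavigationFixture extends ColumnFixture {

    public String url;

    // Called by the wiki because of the "?" in the header row; the returned
    // String is compared against the expected cell value.
    public String navigateToUrl() {
        try {
            SeleniumFitnesse.driver.navigate().to("http://" + url);
            return "OK";
        } catch (Exception ex) {
            return ex.getMessage();
        }
    }
}
The corresponding wiki table would then check the outcome directly:
!|NavigationFixture|
|url|navigateToUrl?|
|www.google.com|OK|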

Selenium : Code not up to the mark comments on my code

Recently I have started learning Selenium, and I have applied for a job.
They asked me to write code for CRUD operations for this website:
http://computer-database.herokuapp.com/computers
I am pasting the code here. It ran fine on my machine, and I used a framework as well.
Can anyone tell me what was not up to the mark? I have been asked to write code again for the second interview and I don't want to repeat my mistakes.
Looking forward to your help.
public class Add {

    public static WebDriver driver;

    public static WebDriver getdriver() {
        System.setProperty("webdriver.gecko.driver", "/Users/sonali/Downloads/geckodriver");
        driver = new FirefoxDriver();
        return driver;
    }

    @Test(priority=1) //Create a computer
    public static void create() {
        driver = getdriver();
        driver.get("http://computer-database.herokuapp.com/computers?f=ACE");
        driver.manage().window().maximize();
        driver.findElement(By.xpath(".//*[@id='add']")).click();
        driver.findElement(By.xpath(".//*[@id='name']")).sendKeys("newtest");
        driver.findElement(By.xpath(".//*[@id='introduced']")).sendKeys("2017-03-20");
        driver.findElement(By.xpath(".//*[@id='discontinued']")).sendKeys("2017-03-29");
        Select s = new Select(driver.findElement(By.id("company")));
        s.selectByValue("2");
        driver.findElement(By.xpath(".//*[@id='main']/form/div/input")).click();
        driver.findElement(By.xpath(".//*[@id='main']/div[1]")).isDisplayed();
        System.out.println("Creating data is working");
    }

    @Test(priority=2) //Search for a computer and check it's available
    public static void read() {
        driver.findElement(By.xpath(".//*[@id='searchbox']")).sendKeys("newtest");
        driver.findElement(By.xpath(".//*[@id='searchsubmit']")).click();
        driver.findElement(By.linkText("newtest")).click();
        System.out.println("Reading data is working");
    }

    @Test(priority=3) // Update a computer name and company
    public static void update() {
        driver.findElement(By.id("name")).sendKeys("one");
        Select s = new Select(driver.findElement(By.id("company")));
        s.selectByValue("5");
        driver.findElement(By.xpath(".//*[@id='main']/form[1]/div/input")).click();
        driver.findElement(By.xpath(".//*[@id='main']/div[1]")).isDisplayed();
        System.out.println("Updating computer is working fine");
    }

    @Test(priority=4) // Deleting computer from the list
    public static void delete() {
        driver.findElement(By.xpath(".//*[@id='searchbox']")).sendKeys("newtestone");
        driver.findElement(By.xpath(".//*[@id='searchsubmit']")).click();
        driver.findElement(By.linkText("newtestone")).click();
        driver.findElement(By.xpath(".//*[@id='main']/form[2]/input")).click();
        driver.findElement(By.xpath(".//*[@id='main']/div[1]")).isDisplayed();
        System.out.println("Deleting computer is working fine");
    }
}
For the code to be actually useful, it needs to be:
Readable
Maintainable
Structured properly
Try developing a framework for the tests, i.e.:
Separate the driver generation into a driver factory class (see the sketch after this answer).
Separate the selectors and the respective actions into functional or page-based classes.
Use assertions to verify (an exception not appearing does not mean the functionality is working), e.g.
driver.findElement(By.xpath(".//*[@id='main']/div[1]")).isDisplayed();
It does not matter what this returns, because the code does not do anything with it. It should be:
Assert.assertTrue(driver.findElement(By.xpath(".//*[@id='main']/div[1]")).isDisplayed());
Or better yet:
Assert.assertTrue(updatePage.isUpdateDisplayed());
Put comments in the code to make it easier to understand.
Run tests through a runner / XML.
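As a rough sketch of the driver-factory suggestion above (DriverFactory is an illustrative name, not an existing class; the geckodriver path is the one from the question):
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public final class DriverFactory {

    private DriverFactory() {
        // utility class, not instantiable
    }

    // Single place to configure and create drivers for all tests.
    public static WebDriver createFirefoxDriver() {
        System.setProperty("webdriver.gecko.driver", "/Users/sonali/Downloads/geckodriver");
        return new FirefoxDriver();
    }
}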
I agree with what @amita said in her answer.
There is a lot you can do to improve your code, but it will take a bit of study, so I'm not sure you'll be able to follow my advice in time; hopefully it will still further your understanding of test automation with Selenium WebDriver.
Learn about the Page Object Model design pattern. This is probably the most popular design pattern used with WebDriver. There are lots of free tutorials online, but here's a good one to get you started.
Once you have a good grasp of the Page Object Model, you can enhance it by learning about LoadableComponent. Again, lots of free tutorials online; here's one I picked at random.
You're using XPath extremely heavily. Make sure you understand the preferred hierarchy of locator methods and use them appropriately. ID should be your first preference; XPath and CSS your last resort. For example,
By.xpath(".//*[@id='add']") // never use this
By.id("add") // when you could have just used the ID locator
Study how TestNG works, in particular annotations like @BeforeTest, @AfterTest, and so on. Your tests should be independent, so avoid setting them up in such a way that they require you to force their priority order. It's well documented here, but again there are lots of tutorials online to help you through it.
There's more, but if you get your head round all that you'll have a very good base from which to build further. I wish you all the best with your interview.
Here is the Answer to your Question:
Considering it as an Interview Question written at an Interview Venue, I think you have done a commendable job.
A few words about the solution:
As you integrated TestNG, apart from the @Test Annotation consider using the @BeforeTest and @AfterTest Annotations and the Assert Class too.
Consider generating some log messages to the console, which helps in debugging your own code.
Consider moving the WebDriver instance initialization within the @BeforeTest annotation.
Consider starting your @Test with priority=0.
Your xpath expressions like .//*[@id='add'] are not proper; consider using a valid xpath, e.g. //input[@id='name'].
Induce proper ExplicitWait, i.e. WebDriverWait, while trying to search for elements on new webpages.
When you perform some @Test, try to validate the result through the Assert Class.
Consider adding the imports wisely.
Once you create a WebDriver instance, consider releasing it after your @Test within the @AfterTest Annotation.
Here is the minimal code to Create a Computer by the name Debanjan
package demo;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.Select;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.Assert;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
public class Q44852473_GOOD_CODE_sonali_arjun
{
public static WebDriver driver;
String myname = "Debanjan";
#BeforeTest
public void initDriver()
{
System.out.println("=====Test Started=====");
System.setProperty("webdriver.gecko.driver", "C:/Utility/BrowserDrivers/geckodriver.exe");
System.out.println("=====Initializing Webdriver=====");
driver = new FirefoxDriver();
}
//Create a Computer
#Test(priority=0)
public void create()
{
driver.get("http://computer-database.herokuapp.com/computers");
driver.manage().window().maximize();
driver.findElement(By.id("add")).click();
WebDriverWait wait1 = new WebDriverWait (driver, 10);
WebElement name = wait1.until(ExpectedConditions.elementToBeClickable(By.xpath("//input[#id='name']")));
name.sendKeys(myname);
driver.findElement(By.xpath("//input[#id='introduced']")).sendKeys("2017-07-01");
driver.findElement(By.xpath("//input[#id='discontinued']")).sendKeys("2017-07-01");
Select select = new Select(driver.findElement(By.id("company")));
select.selectByValue("1");
driver.findElement(By.xpath("//input[#class='btn primary']")).click();
WebDriverWait wait2 = new WebDriverWait (driver, 10);
WebElement searchbox = wait2.until(ExpectedConditions.elementToBeClickable(By.xpath("//input[#id='searchbox']")));
WebElement add_success_ele = driver.findElement(By.xpath("//section[#id='main']/div[#class='alert-message warning']/strong"));
String success = add_success_ele.getText();
Assert.assertTrue(success.contains("Done"));
System.out.println("Computer "+myname+" - created Successfully");
}
#AfterTest
public void tearDown()
{
driver.quit();
System.out.println("=====Test Completed=====");
}
}
Enhancements:
This Solution can be modified by using other TestNG annotations documented here.
This Solution can be enhanced by implementing through POM (Page Object Model) documented here.
This Solution can be further enhanced by implementing POM through PageFactory documented here.
Let me know if this Answers your Question.

Programmatically execute Gatling tests

I want to use something like Cucumber JVM to drive performance tests written for Gatling.
Ideally the Cucumber features would somehow build a scenario dynamically - probably reusing predefined chain objects similar to the method described in the "Advanced Tutorial", e.g.
val scn = scenario("Scenario Name").exec(Search.search("foo"), Browse.browse, Edit.edit("foo", "bar"))
I've looked at how the Maven plugin executes the scripts, and I've also seen mention of using an App trait, but I can't find any documentation for the latter, and it strikes me that somebody else must have wanted to do this before...
Can anybody point (a Gatling noob) in the direction of some documentation or example code of how to achieve this?
EDIT 20150515
So to explain a little more:
I have created a trait which is intended to build up a sequence of, I think, ChainBuilders that are triggered by Cucumber steps:
trait GatlingDsl extends ScalaDsl with EN {

  private val gatlingActions = new ArrayBuffer[GatlingBehaviour]

  def withGatling(action: GatlingBehaviour): Unit = {
    gatlingActions += action
  }
}
A GatlingBehaviour would look something like:
object Google {

  class Home extends GatlingBehaviour {
    def execute: ChainBuilder =
      exec(http("Google Home")
        .get("/")
      )
  }

  class Search extends GatlingBehaviour { ... }

  class FindResult extends GatlingBehaviour { ... }
}
And inside the StepDef class:
class GoogleStepDefinitions extends GatlingDsl {

  Given("""^the Google search page is displayed$""") { () =>
    println("Loading www.google.com")
    withGatling(Home())
  }

  When("""^I search for the term "(.*)"$""") { (searchTerm: String) =>
    println("Searching for '" + searchTerm + "'...")
    withGatling(Search(searchTerm))
  }

  Then("""^"(.*)" appears in the search results$""") { (expectedResult: String) =>
    println("Found " + expectedResult)
    withGatling(FindResult(expectedResult))
  }
}
The idea being that I can then execute the whole sequence of actions via something like:
val scn = scenario(cucumberScenario).exec(gatlingActions)
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
and then check the reports, or catch an exception if the test fails, e.g. if the response time is too long.
It seems that no matter how I use the exec method, it tries to execute instantly there and then, not waiting for the scenario.
Also, I don't know if this is the best approach to take; we'd like to build some reusable blocks for our Gatling tests that can be constructed via Cucumber's Given/When/Then style. Is there a better or already existing approach?
Sadly, it's not currently feasible to have Gatling directly start a Simulation instance.
It's not that it's technically infeasible; you're just the first person to try to do this.
Currently, Gatling is usually in charge of compiling and can only be passed the name of the class to load, not an instance itself.
You could start by forking io.gatling.app.Gatling and io.gatling.core.runner.Runner, and then provide a PR to support this new behavior. The former is the main entry point, and the latter is the one that can instantiate and run the simulation.
I recently ran into a similar situation and did not want to fork Gatling. While this solved my immediate problem, it only partially solves what you are trying to do; hopefully someone else will find it useful.
There is an alternative: Gatling is written in Java and Scala, so you can call Gatling.main directly and pass it the arguments you need to run the Gatling Simulation you want. The problem is that main explicitly calls System.exit, so you also have to use a custom security manager to prevent it from actually exiting.
You need to know two things:
the class (with the full package) of the Simulation you want to run, for example: com.package.your.Simulation1
the path where the binaries are compiled.
The code to run a Simulation:
protected void fire(String gatlingGun, String binaries) {
    SecurityManager sm = System.getSecurityManager();
    System.setSecurityManager(new GatlingSecurityManager());
    String[] args = {"--simulation", gatlingGun,
            "--results-folder", "gatling-results",
            "--binaries-folder", binaries};
    try {
        io.gatling.app.Gatling.main(args);
    } catch (SecurityException se) {
        LOG.debug("gatling test finished.");
    }
    System.setSecurityManager(sm);
}
The simple security manager I used:
public class GatlingSecurityManager extends SecurityManager {

    @Override
    public void checkExit(int status) {
        throw new SecurityException("Tried to exit.");
    }

    @Override
    public void checkPermission(Permission perm) {
        return;
    }
}
The problem is then getting the information you want out of the simulation after it has been run.
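One hedged way to tackle that, staying with the approach above: since fire passes --results-folder gatling-results, you can locate the newest run directory afterwards and read the generated report files from it. In Gatling 2.x the HTML report includes a js/global_stats.json with the summary statistics (GatlingResults below is an illustrative helper, not part of the Gatling API):
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public class GatlingResults {

    // Returns the most recently modified run directory under the results folder, if any.
    public static File latestRunDirectory(String resultsFolder) {
        File[] runs = new File(resultsFolder).listFiles(File::isDirectory);
        if (runs == null || runs.length == 0) {
            return null;
        }
        return Arrays.stream(runs)
                     .max(Comparator.comparingLong(File::lastModified))
                     .orElse(null);
    }
}
From there, something like new File(latestRunDirectory("gatling-results"), "js/global_stats.json") can be parsed with any JSON library.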

Is it possible to skip a scenario with Cucumber-JVM at run-time

I want to add a tag @skiponchrome to a scenario; this should skip the scenario when running a Selenium test with the Chrome browser. The reason to do this is that some scenarios work in some environments and not in others. This might not even be specific to browser testing and could apply in other situations, for example OS platforms.
Example hook:
@Before("@skiponchrome") // this works
public void beforeScenario() {
    if ("chrome".equals(currentBrowser)) { // this works
        // Skip scenario code here
    }
}
I know it is possible to define ~@skiponchrome in the Cucumber tags to skip the tag, but I would like to skip it at run-time. This way I don't have to think about which scenarios to skip in advance when starting a test run in a certain environment.
I would like to create a hook that catches the tag and skips the scenario without reporting a fail/error. Is this possible?
I realize this is a late addition to an already answered question, but I want to add one more option directly supported by cucumber-jvm:
@Before // (Cucumber 1.x)
public void setup() {
    Assume.assumeTrue(weAreInPreProductionEnvironment);
}
"and the scenario will be marked as ignored (but the test will pass) if weAreInPreProductionEnvironment is false."
You will need to add
import org.junit.Assume;
The major difference from the accepted answer is that JUnit assumption failures behave just like pending.
Important: because of a bug fix you will need cucumber-jvm release 1.2.5, which as of this writing is the latest. For example, the above will generate a failure instead of a pending in cucumber-java8-1.2.3.jar.
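Tying this back to the tag from the question, a minimal sketch of a hook combining both (the currentBrowser lookup is an assumption; obtain it however your driver setup exposes it):
import cucumber.api.java.Before;
import org.junit.Assume;

public class Hooks {

    // Hypothetical lookup of the browser under test.
    private final String currentBrowser = System.getProperty("browser", "");

    @Before("@skiponchrome")
    public void skipOnChrome() {
        // Marks the scenario as ignored (not failed) when running on Chrome.
        Assume.assumeFalse("Not supported on Chrome", "chrome".equals(currentBrowser));
    }
}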
I really prefer to be explicit about which tests are being run, by having separate run configurations defined for each environment. I also like to keep the number of tags I use to a minimum, to keep the number of configurations manageable.
I don't think it's possible to achieve what you want with tags alone. You would need to write a custom JUnit test runner to use in place of @RunWith(Cucumber.class). Take a look at the Cucumber implementation to see how things work. You would need to alter the RuntimeOptions created by the RuntimeOptionsFactory to include/exclude tags depending on the browser or another runtime condition.
Alternatively, you could consider writing a small script which invokes your test suite, building up a list of tags to include/exclude dynamically, depending on the environment you're running in. I would consider this to be a more maintainable, cleaner solution.
It's actually really easy. If you dig through the Cucumber-JVM and JUnit 4 source code, you'll find that JUnit makes skipping during runtime very easy (just undocumented).
Take a look at the following source code from JUnit 4's ParentRunner, which Cucumber-JVM's FeatureRunner (used by Cucumber, the default Cucumber runner) extends:
@Override
public void run(final RunNotifier notifier) {
    EachTestNotifier testNotifier = new EachTestNotifier(notifier, getDescription());
    try {
        Statement statement = classBlock(notifier);
        statement.evaluate();
    } catch (AssumptionViolatedException e) {
        testNotifier.fireTestIgnored();
    } catch (StoppedByUserException e) {
        throw e;
    } catch (Throwable e) {
        testNotifier.addFailure(e);
    }
}
This is how JUnit decides what result to show. If the test is successful it will show a pass, but it's also possible to @Ignore in JUnit, so what happens in that case? Well, when an AssumptionViolatedException is thrown, the runner (the Cucumber FeatureRunner in this case) reports the test as ignored to the RunNotifier.
So your example becomes:
@Before("@skiponchrome") // this works
public void beforeScenario() {
    if ("chrome".equals(currentBrowser)) { // this works
        throw new AssumptionViolatedException("Not supported on Chrome");
    }
}
If you've used vanilla JUnit 4 before, you'll remember that @Ignore takes an optional message that is displayed when a test is ignored by the runner. AssumptionViolatedException carries the message, so you should see it in your test output after a test is skipped this way, without having to write your own custom runner.
I too had the same challenge: I needed to skip a scenario based on a flag which I obtain from the application dynamically at run-time, which tells whether the feature to be tested is enabled on the application or not.
So this is how I wrote my logic in the step definitions file, where we have the glue code for each step.
I used a unique tag '@Feature-01AXX' to mark scenarios that need to be run only when that feature (code) is available on the application.
So for every scenario, the tag '@Feature-01AXX' is checked first; if it's present, then the check for the availability of the feature is made, and only then will the scenario be picked for running. Otherwise it will simply be skipped, and JUnit will not mark this as a failure; instead it will be marked as passed. So the final result, if these tests did not run due to the unavailability of the feature, will be pass, which is what we want.
@Before
public void before(final Scenario scenario) throws Exception {
    /*
     * my other pre-setup tasks for each scenario.
     */
    // get all the scenario tags from the scenario head.
    final ArrayList<String> scenarioTags = new ArrayList<>();
    scenarioTags.addAll(scenario.getSourceTagNames());
    // check if the feature is enabled on the appliance, so that the tests can be run.
    if (checkForSkipScenario(scenarioTags)) {
        throw new AssumptionViolatedException("The feature 'Feature-01AXX' is not enabled on this appliance, so skipping");
    }
}

private boolean checkForSkipScenario(final ArrayList<String> scenarioTags) {
    // I use a tag "@Feature-01AXX" on the scenarios which need to run only when the feature is enabled on the appliance/application
    if (scenarioTags.contains("@Feature-01AXX") && !isTheFeatureEnabled()) { // if the feature is not enabled, we need to skip the scenario.
        return true;
    }
    return false;
}

private boolean isTheFeatureEnabled() {
    /*
     * my logic to check if the feature is available/enabled on the application.
     * in my case it's a REST API call; I parse the JSON and check if the feature is enabled.
     * if it is enabled return 'true', else return 'false'
     */
}
I've implemented a customized JUnit runner as below. The idea is to add tags at runtime.
Say a scenario needs new users; we tag it "@requires_new_user". If we then run our tests in an environment (say, a production environment which does not allow you to register new users easily), we will figure out that we cannot get a new user, and "not @requires_new_user" will be added to the Cucumber options to skip the scenario.
This is the cleanest solution I can imagine right now.
public class WebuiCucumberRunner extends ParentRunner<FeatureRunner> {

    private final JUnitReporter jUnitReporter;
    private final List<FeatureRunner> children = new ArrayList<FeatureRunner>();
    private final Runtime runtime;
    private final Formatter formatter;

    /**
     * Constructor called by JUnit.
     *
     * @param clazz the class with the @RunWith annotation.
     * @throws java.io.IOException if there is a problem
     * @throws org.junit.runners.model.InitializationError if there is another problem
     */
    public WebuiCucumberRunner(Class clazz) throws InitializationError, IOException {
        super(clazz);
        ClassLoader classLoader = clazz.getClassLoader();
        Assertions.assertNoCucumberAnnotatedMethods(clazz);
        RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
        RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();
        addTagFiltersAsPerTestRuntimeEnvironment(runtimeOptions);
        ResourceLoader resourceLoader = new MultiLoader(classLoader);
        runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);
        formatter = runtimeOptions.formatter(classLoader);
        final JUnitOptions junitOptions = new JUnitOptions(runtimeOptions.getJunitOptions());
        final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader, runtime.getEventBus());
        jUnitReporter = new JUnitReporter(runtime.getEventBus(), runtimeOptions.isStrict(), junitOptions);
        addChildren(cucumberFeatures);
    }

    private void addTagFiltersAsPerTestRuntimeEnvironment(RuntimeOptions runtimeOptions) {
        String channel = Configuration.TENANT_NAME.getValue().toLowerCase();
        runtimeOptions.getTagFilters().add("@" + channel);
        if (!TestEnvironment.getEnvironment().isNewUserAvailable()) {
            runtimeOptions.getTagFilters().add("not @requires_new_user");
        }
    }

    ...
}
Or you can extend the official Cucumber JUnit test runner cucumber.api.junit.Cucumber and override the method
/**
 * Create the Runtime. Can be overridden to customize the runtime or backend.
 *
 * @param resourceLoader used to load resources
 * @param classLoader used to load classes
 * @param runtimeOptions configuration
 * @return a new runtime
 * @throws InitializationError if a JUnit error occurred
 * @throws IOException if a class or resource could not be loaded
 * @deprecated Neither the runtime nor the backend or any of the classes involved in their construction are part of
 * the public API. As such they should not be exposed. The recommended way to observe the cucumber process is to
 * listen to events by using a plugin. For example the JSONFormatter.
 */
@Deprecated
protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
        RuntimeOptions runtimeOptions) throws InitializationError, IOException {
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
}
You can manipulate runtimeOptions here as you wish. But the method is marked as deprecated, so use it with caution.
If you're using Maven, you could use a browser profile and then set the appropriate ~ exclude tags there.
Unless you're asking how to run this from the command line, in which case you tag the scenario with @skiponchrome and then, when you run Cucumber, set the Cucumber options to tags = {"~@skiponchrome"}.
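For reference, a minimal JUnit runner showing that option (RunCukesTest is an illustrative name; the ~ prefix is Cucumber-JVM 1.x's tag negation):
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(tags = {"~@skiponchrome"}) // exclude scenarios tagged @skiponchrome
public class RunCukesTest {
}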
If you wish simply to skip a scenario temporarily (for example, while writing the scenarios), you can comment it out (Ctrl+/ in Eclipse or IntelliJ).