PHPUnit + Selenium usage

My question is about phpunit+selenium usage.
The standard way to use this combination is
class BlaBlaTest extends PHPUnit_Extensions_SeleniumTestCase
{... }
OR
class BlaBlaTest extends PHPUnit_Extensions_Selenium2TestCase
{...}
The first one (PHPUnit_Extensions_SeleniumTestCase) is not very convenient to use
(e.g. there is no such thing as $this->elements('xpath')).
The second (PHPUnit_Extensions_Selenium2TestCase) also has limited functionality
(e.g. there are no such functions as waitForPageToLoad() or clickAndWait(),
and using something like $this->timeouts()->implicitWait(10000) seems like
complete nonsense to me).
Is it possible to combine the functionality of
PHPUnit_Extensions_SeleniumTestCase and PHPUnit_Extensions_Selenium2TestCase
in one test class?
Does anybody know of good alternatives to PHPUnit + Selenium?

Inspired by Dan I've written this for use in PHPUnit_Extensions_Selenium2TestCase and it seems to work ok:
/**
 * @param string $id   DOM id
 * @param int    $wait maximum wait (in seconds)
 * @return element|false - false on time-out
 */
protected function waitForId($id, $wait = 30) {
    for ($i = 0; $i <= $wait; $i++) {
        try {
            $x = $this->byId($id);
            return $x;
        }
        catch (Exception $e) {
            sleep(1);
        }
    }
    return false;
}

Sorry for resurrecting this, but I'd like to clear up some confusion for anyone stumbling across it.
You're saying that you want functionality from RC and WebDriver combined; there are workarounds for that, but I wouldn't recommend them. First, you'll need to understand the difference between the two frameworks.
My brief definitions...
Selenium RC (PHPUnit_Extensions_SeleniumTestCase) is script-oriented. By that I mean it will run your tests exactly as scripted, based on how you expect the page to respond. This often requires more explicit commands, such as the waitForPageToLoad() that you have mentioned, when waiting for elements to appear and/or pages to load.
Selenium WebDriver (PHPUnit_Extensions_Selenium2TestCase) uses a more native approach. It cuts out the middle-man and runs your tests through your chosen browser's driver. Using the waitForPageToLoad() example, you won't need to put that explicitly wherever you open a page in your code, because WebDriver already knows when the page is loading and will resume the test when the page load request is complete.
If you need to define an implicit timeout in WebDriver, you only need to place it in the setUp() method of a base Selenium class that is extended by your test classes:
class BaseSelenium extends PHPUnit_Extensions_Selenium2TestCase {
    protected function setUp() {
        // Whatever else needs to be in here, like setting
        // the host url and port etc.
        $this->setSeleniumServerRequestsTimeout( 100 ); // <- seconds
    }
}
That will happily apply across all of your tests and will time out whenever a page takes longer than that to load.
Although I personally prefer WebDriver over RC (mainly because it's a lot faster!), there is a big difference between the methods available. Whenever I got stuck while recently converting a lot of RC tests to WebDriver, I always turned to this first. It's a great reference for nearly every situation.
I hope that helps.

For functions such as waitForPageToLoad() and clickAndWait(), which are unavailable in Selenium2, you can reproduce those functions by using try catch blocks, in conjunction with implicit waits and explicit sleeps.
So, for a function like clickAndWait(), you can define what element you are waiting for, and then check for that element's existence for a set amount of seconds. If the element doesn't exist, the try catch block will stop the error from propagating. If the element does exist, you can continue. If the element doesn't exist after the set amount of time, then bubble up the error.
I would recommend using Selenium2 and then reproducing any functionality that you feel is missing from within your framework.
EXAMPLE:
# Poll for the element's presence, sleeping between attempts.
def wait_for_present(element, retries = 10, seconds = 2)
  for i in 0...retries
    return true if element.present?
    sleep(seconds)
  end
  return false
end

You can try using traits to extend two different classes: http://php.net/manual/en/language.oop5.traits.php
class PHPUnit_Extensions_SeleniumTestCase {
    ...
}
Change the PHPUnit_Extensions_Selenium2TestCase class to a trait:
trait PHPUnit_Extensions_Selenium2TestCase {
    ...
}
class BlaBlaTest extends PHPUnit_Extensions_SeleniumTestCase {
    use PHPUnit_Extensions_Selenium2TestCase;
    // your tests here...
}

How to leave the browser open when a Behat/Mink test fails

I'm using the selenium2 driver to test my Drupal site using Behat/Mink in a docker container.
Using the Selenium Standalone-Chrome container, I can watch my behat tests fail, but the problem is that as soon as they fail, the browser is closed, which makes it harder for me to see what the problem is.
I'm running my tests like this:
behat --tags '@mystuff' --config=behat-myconfig.yml --strict --stop-on-failure
Is there a way to leave the remote-controlled browser open even when a test fails?
By default it is not possible.
Maybe you could find some hack to do it, but it is not recommended, since each scenario should be isolated; it is not a good solution, at least when running a suite with multiple tests.
As a one-off, see if you can reuse the screenshot logic and set a breakpoint instead.
In any case, you should use verbose output (-vvv for Behat 3) plus an IDE debugger to debug your code.
Finally I found a good solution for this: behat-fail-aid.
Add the fail aid to your FeatureContext and then run behat with the --wait-on-failure option:
the --wait-on-failure={seconds} option can be used to
investigate/inspect failures in the browser.
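For example, reusing the command from the question (the 30-second wait is just an illustrative value):
behat --tags '@mystuff' --config=behat-myconfig.yml --wait-on-failure=30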
You can take a screenshot whenever an error occurs using the Behat "AfterStep" hook.
Consider having a look at the Panther Driver or DChrome Driver.
Here is a shortened example that also covers non-JavaScript tests (which are faster):
use Behat\Behat\Hook\Scope\AfterStepScope;
use Behat\Mink\Driver\Selenium2Driver;

/** Context Class Definition ... */

/**
 * @AfterStep
 */
public function takeScreenShotAfterFailedStep(AfterStepScope $scope)
{
    if (99 !== $scope->getTestResult()->getResultCode()) {
        // 99 is the failed result code; only act on failed steps
        return;
    }
    $this->takeAScreenShot('error');
}

private function takeAScreenShot($prefix = 'screenshot')
{
    $baseName = sprintf('PATH_FOR_YOUR_SCREENSHOTS/%s-%s', $prefix, (new \DateTime())->format('Y_m_d_H_i_s'));
    if ($this->supportsJavascript()) {
        $extension = '.png';
        $content = $this->getSession()->getScreenshot();
    } else {
        $extension = '.html';
        $content = $this->getSession()->getPage()->getOuterHtml();
    }
    file_put_contents(sprintf('%s%s', $baseName, $extension), $content);
}

private function supportsJavascript()
{
    return $this->getSession()->getDriver() instanceof Selenium2Driver;
}

Class cannot resolve module as content unless #Stepwise used

I have a Spock class that, when run as a test suite, throws "Unable to resolve iconRow as content for geb.Page, or as a property on its Navigator context. Is iconRow a class you forgot to import?" unless I annotate my class with @Stepwise. However, I really don't want the test execution to stop on the first failure, which @Stepwise does.
I've tried writing (copy and pasting) my own extension using this post, but I still get these errors. It is using my extension, as I added some logging statements that were printed out to the console.
Here is one of my modules:
class IconRow extends Module {
    static content = {
        iconRow(required: false) { $("div.report-toolbar") }
    }
}
And a page that uses it:
class Report extends SomeOtherPage {
    static at = { $("div.grid-container").displayed }
    static content = {
        iconRow { module IconRow }
    }
}
And a snippet of the test that is failing:
class MyFailingTest extends GebReportingSpec {
    def setupSpec() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard
        nav.goToReport("Some report name")
        assert page instanceof Report
    }

    @Unroll
    def "I work"() {
        given:
        at Report

        expect:
        expected == actual

        where:
        expected << ["some list", "of values"]
        actual << anotherModule.someContent*.@id
    }

    @Unroll
    def "I don't work"() {
        given:
        at Report

        expect:
        expected == actual

        where:
        expected << ["some other", "list", "of values"]
        actual << iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}
When executed as a suite, "I work" passes and "I don't work" fails because it cannot identify iconRow as content for the page. If I switch the order of the test cases, "I don't work" will pass and "I work" will fail. Alternatively, if I execute each test separately, they both pass.
What I have tried:
Adding/removing the required: true property from content in the modules
Prefixing the module name with the class, such as IconRow.iconRow
Defining my modules as static @Shared properties
Initializing the modules both in and outside of my setupSpec()
Making simple getter methods in each module's class that return the module, and referencing content such as IconRow.getIconRow().columnHeaders*.attr("innerText")*.toUpperCase()
Moving the contents of my setupSpec() into setup()
Adding autoClearCookies = false into my GebConfig.groovy
Making a @Shared Report report variable and prefixing all modules with that, such as report.iconRow
Very peculiar note about that last bullet point: it magically resolves the modules that don't have the prefix, so it won't resolve report.iconRow but will resolve just iconRow. Absolutely bizarre, because if I remove that variable the module that was just previously working suddenly can't be resolved again. I even tried declaring this variable and then not prefixing anything, and that did not work either.
Another problem that I keep banging my head against the wall with is that I'm also not sure of where the problem is. The error it throws leads me to believe that it's a project setup issue, but running each feature individually works fine, so it appears to be resolving the classes just fine.
On the other hand, perhaps it's an issue with the session and/or cookies? Although I have yet to see any official documentation on this, it seems to be the general consensus (from other posts and articles I've read) that only using @Stepwise will maintain your session between feature methods. If this is the case, why is my extension not working? It's pretty much a copy and paste of @Stepwise without the skipFeaturesAfterFirstFailingFeature method (I can post it if needed), unless there is some other stuff going on behind the scenes with @Stepwise.
Apologies for the wall of text, but I've been trying to figure this out for about 6 hours now, so my brain is pretty fried.
Geb has special support for @Stepwise: if a spec is annotated with it, Geb does not call resetBrowser() after each test; instead it is called after the spec is completed. See the code on GitHub.
So basically you need to change your setupSpec to setup so that it will be executed before each test.
Regarding your observation: if you just run a focused test, the setupSpec is executed for that test and thus it passes. The problem is that the cleanup is invoked afterwards and resets the browser, breaking subsequent tests.
EDIT
I overlooked your usage of where blocks. Everything in the where block needs to be statically (@Shared) available, so using instance-level constructs won't work. Resetting the browser will also kill every reference, so just getting it beforehand won't work either. Basically, don't use Geb objects in where blocks!
Looking at your code, however, I don't see any reason to use data-driven tests here.
This can easily be done with one assertion in a normal test.
It is good practice for unit tests to test just one thing. Geb, however, is not a unit test framework but an acceptance/frontend test framework. The problem here is that such tests are way slower than unit tests, so it makes sense to combine sensible assertions into one test.
class MyFailingTest extends GebReportingSpec {
    def setup() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard
        nav.goToReport("Some report name")
        assert page instanceof Report
    }

    def "I work"() {
        given:
        at Report

        expect:
        ["some list", "of values"] == anotherModule.someContent*.@id
    }

    def "I don't work"() {
        given:
        at Report

        expect:
        ["some other", "list", "of values"] == iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}

Need help handling StaleElementReferenceException

Before we get started, let me say I have done my research on this matter; I have seen the solutions posted here: stale element solution one, and I even came up with my own solution here: My temporary solution. The problem with my solution is that it does not work for all cases (particularly when dealing with long chains of .children()).
The problem I have with "stale element solution one" is that it is not a very robust solution at all. It only works if you can put your Navigator element instantiation inside of the try/catch; if you do not have that instantiation, then this solution does no good. Let me give an example of what I mean.
Let's say I have a Page class that looks something like this:
package interfaces

import geb.Page
import geb.navigator.Navigator
import org.openqa.selenium.By

class TabledPage extends Page {
    static content = {
        table { $(By.xpath("//tbody")) }
        headers { $(By.xpath("//thead")) }
    }

    Navigator getAllRows() {
        return table.children()
    }

    Navigator getRow(int index) {
        return table.children()[index]
    }

    Navigator getRow(String name) {
        return table.children().find { it.text().matches(~/.*\b${name}\b.*/) }
    }

    Navigator getColumn(Navigator row, int column) {
        return row.children()[column]
    }
}
Let's say that I have a method in my script that does what "stale element solution one" does (more or less). It looks like this:
def staleWrapper(Closure c, args = null, attempts = 5) {
    def working = false
    def trys = 0
    while (!working) {
        try {
            if (args) {
                return c(args)
            }
            else {
                return c()
            }
        }
        catch (StaleElementReferenceException se) {
            println("I caught me a stale element this many times: ${trys}")
            if (trys > attempts) {
                working = true
                throw se
            }
            else {
                trys++
            }
        }
    }
}
The way you call the above method is like this (using TabledPage as an example): staleWrapper(TabledPage.&getRow, 5) // grabs the row at index 5 of the table
and this works fine. The reason, and this is important, is that the getRow method references an element that is in static content. When a static content element is referenced, the Navigator is re-defined at run time; this is why this solution works for the getRow method (table is re-instantiated inside of the try/catch).
My problem and gripe with "stale element solution one" is that this type of implementation does not work for methods like getColumn, because getColumn does not reference the static content itself. The TabledPage I am testing has JavaScript running on it that refreshes the DOM multiple times per second, so even if I use the staleWrapper method it will always throw a stale element no matter how many attempts are made.
One solution to this is to add the columns as static content for the page, but I want to avoid that because it just doesn't flow with the way I have my whole project set up (I have many Page objects that implement methods in a similar way to TabledPage). If it were up to me, there would be a way to suppress the StaleElementReferenceExceptions, but that is not an option either.
I am wondering if anyone here has a creative solution to robustly (key word here) handle the StaleElementReferenceException, because I think looping over a try/catch is already kind of hacky.
Well, I am not sure if my solution will serve your purpose. In my case I am using the Page Object pattern to design the tests, so each method of a page class uses PageFactory to return an instance of a page class. For example:
public class GoogleSearchPage {
    // The element is now looked up using the name attribute
    @FindBy(how = How.NAME, using = "q")
    private WebElement searchBox;

    public SearchResultPage searchFor(String text) {
        // We continue using the element just as before
        searchBox.sendKeys(text);
        searchBox.submit();
        // Return page instance of SearchResultPage class
        return PageFactory.initElements(driver, SearchResultPage.class);
    }

    public GoogleSearchPage lookForAutoSuggestions(String text) {
        // Do something
        // Return page instance of itself as this method does not change the page
        return PageFactory.initElements(driver, GoogleSearchPage.class);
    }
}
The lookForAutoSuggestions method may throw a StaleElementReferenceException, which is taken care of by returning the page instance. So if you have page classes, then ideally each page method should return an instance of the page where the user is supposed to land.
I ended up implementing something similar to what this guy gives as the "3rd option": Look at the answer (option 3)
I need to test it out some more, but I have yet to get a stale element since I implemented it this way (the key was to make my own WebElement class and then override the Navigator classes, but use the NeverStaleWebElement object instead of the WebElement class).
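For anyone curious what that looks like, here is a minimal sketch of the idea in plain Selenium Java (not the exact code from that answer): the wrapper keeps the By locator that found the element and re-finds the element whenever the cached reference has gone stale.
import org.openqa.selenium.By;
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Sketch only: wraps an element together with the locator used to find it,
// and re-locates it whenever the cached reference goes stale.
public class NeverStaleWebElement {
    private final WebDriver driver;
    private final By locator;
    private WebElement element;

    public NeverStaleWebElement(WebDriver driver, By locator) {
        this.driver = driver;
        this.locator = locator;
        this.element = driver.findElement(locator);
    }

    public void click() {
        freshElement().click();
    }

    public String getText() {
        return freshElement().getText();
    }

    private WebElement freshElement() {
        try {
            element.isDisplayed(); // cheap call that throws if the reference is stale
        } catch (StaleElementReferenceException e) {
            element = driver.findElement(locator); // re-find using the original locator
        }
        return element;
    }
}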

Programmatically execute Gatling tests

I want to use something like Cucumber JVM to drive performance tests written for Gatling.
Ideally the Cucumber features would somehow build a scenario dynamically - probably reusing predefined chain objects similar to the method described in the "Advanced Tutorial", e.g.
val scn = scenario("Scenario Name").exec(Search.search("foo"), Browse.browse, Edit.edit("foo", "bar"))
I've looked at how the Maven plugin executes the scripts, and I've also seen mention of using an App trait, but I can't find any documentation for the latter and it strikes me that somebody else will have wanted to do this before...
Can anybody point (a Gatling noob) in the direction of some documentation or example code of how to achieve this?
EDIT 20150515
So to explain a little more:
I have created a trait which is intended to build up a sequence of, I think, ChainBuilders that are triggered by Cucumber steps:
trait GatlingDsl extends ScalaDsl with EN {
  private val gatlingActions = new ArrayBuffer[GatlingBehaviour]

  def withGatling(action: GatlingBehaviour): Unit = {
    gatlingActions += action
  }
}
A GatlingBehaviour would look something like:
object Google {
  class Home extends GatlingBehaviour {
    def execute: ChainBuilder =
      exec(http("Google Home")
        .get("/")
      )
  }
  class Search extends GatlingBehaviour { ... }
  class FindResult extends GatlingBehaviour { ... }
}
And inside the StepDef class:
class GoogleStepDefinitions extends GatlingDsl {
  Given( """^the Google search page is displayed$""") { () =>
    println("Loading www.google.com")
    withGatling(Home())
  }

  When( """^I search for the term "(.*)"$""") { (searchTerm: String) =>
    println("Searching for '" + searchTerm + "'...")
    withGatling(Search(searchTerm))
  }

  Then( """^"(.*)" appears in the search results$""") { (expectedResult: String) =>
    println("Found " + expectedResult)
    withGatling(FindResult(expectedResult))
  }
}
The idea being that I can then execute the whole sequence of actions via something like:
val scn = scenario(cucumberScenario).exec(gatlingActions)
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
and then check the reports or catch an exception if the test fails, e.g. response time too long.
It seems that no matter how I use the 'exec' method it tries to instantly execute it there and then, not waiting for the scenario.
Also I don't know if this is the best approach to take, we'd like to build some reusable blocks for our Gatling tests that can be constructed via Cucumber's Given/When/Then style. Is there a better or already existing approach?
Sadly, it's not currently feasible to have Gatling directly start a Simulation instance.
Not that it's not technically feasible; you're just the first person to try to do this.
Currently, Gatling is usually in charge of compiling and can only be passed the name of the class to load, not an instance itself.
You can maybe start by forking io.gatling.app.Gatling and io.gatling.core.runner.Runner, and then provide a PR to support this new behavior. The former is the main entry point, and the latter is the one that can instantiate and run the simulation.
I recently ran into a similar situation and did not want to fork Gatling. And while this solved my immediate problem, it only partially solves what you are trying to do, but hopefully someone else will find this useful.
There is an alternative. Gatling is written in Java and Scala, so you can call Gatling.main directly and pass it the arguments you need to run the Gatling Simulation you want. The problem is that main explicitly calls System.exit, so you also have to use a custom security manager to prevent it from actually exiting.
You need to know two things:
the class (with the full package) of the Simulation you want to run
example: com.package.your.Simulation1
the path where the binaries are compiled.
The code to run a Simulation:
protected void fire(String gatlingGun, String binaries) {
    SecurityManager sm = System.getSecurityManager();
    System.setSecurityManager(new GatlingSecurityManager());
    String[] args = {"--simulation", gatlingGun,
            "--results-folder", "gatling-results",
            "--binaries-folder", binaries};
    try {
        io.gatling.app.Gatling.main(args);
    } catch (SecurityException se) {
        LOG.debug("gatling test finished.");
    }
    System.setSecurityManager(sm);
}
The simple security manager I used:
import java.security.Permission;

public class GatlingSecurityManager extends SecurityManager {
    @Override
    public void checkExit(int status) {
        throw new SecurityException("Tried to exit.");
    }

    @Override
    public void checkPermission(Permission perm) {
        return;
    }
}
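You would then kick off a run from your own code with something like this (the binaries path here is just an assumed Maven default; adjust it to wherever your simulations are compiled):
fire("com.package.your.Simulation1", "target/test-classes");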
The problem is then getting the information you want out of the simulation after it has been run.

Is it possible to skip a scenario with Cucumber-JVM at run-time

I want to add a tag @skiponchrome to a scenario; this should skip the scenario when running a Selenium test with the Chrome browser. The reason to do this is that some scenarios work in some environments and not in others. This might not even be browser-testing specific and could be applied in other situations, for example OS platforms.
Example hook:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
// Skip scenario code here
}
}
I know it is possible to define ~@skiponchrome in the Cucumber tags to skip scenarios with that tag, but I would like to skip a tag at run-time. This way I don't have to think about which scenarios to skip in advance when starting a test run in a certain environment.
I would like to create a hook that catches the tag and skips the scenario without reporting a fail/error. Is this possible?
I realized that this is a late update to an already answered question, but I want to add one more option directly supported by cucumber-jvm:
@Before // (cucumber one)
public void setup() {
    Assume.assumeTrue(weAreInPreProductionEnvironment);
}
"and the scenario will be marked as ignored (but the test will pass) if weAreInPreProductionEnvironment is false."
You will need to add
import org.junit.Assume;
The major difference with the accepted answer is that JUnit assume failures behave just like pending.
Important: Because of a bug fix, you will need cucumber-jvm release 1.2.5, which as of this writing is the latest. For example, the above will generate a failure instead of a pending result in cucumber-java8-1.2.3.jar.
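Tying this back to the original @skiponchrome example, a hook along these lines should work. This is only a sketch; how you detect the current browser (here a "browser" system property) is an assumption and depends on your own setup.
import cucumber.api.java.Before;
import org.junit.Assume;

public class SkipHooks {

    // Runs only for scenarios tagged @skiponchrome and marks them as ignored
    // (rather than failed) when the current browser is Chrome.
    @Before("@skiponchrome")
    public void skipOnChrome() {
        String currentBrowser = System.getProperty("browser", "");
        Assume.assumeTrue(!"chrome".equals(currentBrowser));
    }
}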
I really prefer to be explicit about which tests are being run, by having separate run configurations defined for each environment. I also like to keep the number of tags I use to a minimum, to keep the number of configurations manageable.
I don't think it's possible to achieve what you want with tags alone. You would need to write a custom JUnit test runner to use in place of @RunWith(Cucumber.class). Take a look at the Cucumber implementation to see how things work. You would need to alter the RuntimeOptions created by the RuntimeOptionsFactory to include/exclude tags depending on the browser, or other runtime condition.
Alternatively, you could consider writing a small script which invokes your test suite, building up a list of tags to include/exclude dynamically, depending on the environment you're running in. I would consider this to be a more maintainable, cleaner solution.
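As a sketch of that second idea, a small launcher could build the tag list at runtime and hand it to the Cucumber CLI. This assumes the cucumber.api.cli.Main entry point from cucumber-jvm 1.x; the glue package and feature path below are placeholders.
import cucumber.api.cli.Main;

import java.util.ArrayList;
import java.util.List;

public class DynamicTagLauncher {
    public static void main(String[] args) throws Exception {
        List<String> argv = new ArrayList<String>();
        argv.add("--glue");
        argv.add("com.example.steps");                // placeholder glue package
        if ("chrome".equals(System.getProperty("browser"))) {
            argv.add("--tags");
            argv.add("~@skiponchrome");               // exclude Chrome-incompatible scenarios
        }
        argv.add("src/test/resources/features");      // placeholder feature path
        byte exitStatus = Main.run(argv.toArray(new String[argv.size()]),
                Thread.currentThread().getContextClassLoader());
        System.exit(exitStatus);
    }
}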
It's actually really easy. If you dig through the Cucumber-JVM and JUnit 4 source code, you'll find that JUnit makes skipping during runtime very easy (just undocumented).
Take a look at the following source code from JUnit 4's ParentRunner, which Cucumber-JVM's FeatureRunner (used in Cucumber, the default Cucumber runner) extends:
@Override
public void run(final RunNotifier notifier) {
    EachTestNotifier testNotifier = new EachTestNotifier(notifier,
            getDescription());
    try {
        Statement statement = classBlock(notifier);
        statement.evaluate();
    } catch (AssumptionViolatedException e) {
        testNotifier.fireTestIgnored();
    } catch (StoppedByUserException e) {
        throw e;
    } catch (Throwable e) {
        testNotifier.addFailure(e);
    }
}
This is how JUnit decides what result to show. If it's successful it will show a pass, but it's possible to @Ignore in JUnit, so what happens in that case? Well, an AssumptionViolatedException is thrown by the RunNotifier (or Cucumber FeatureRunner in this case).
So your example becomes:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
throw new AssumptionViolatedException("Not supported on Chrome")
}
}
If you've used vanilla JUnit 4 before, you'd remember that @Ignore takes an optional message that is displayed when a test is ignored by the runner. AssumptionViolatedException carries the message, so you should see it in your test output after a test is skipped this way, without having to write your own custom runner.
I too had the same challenge, wherein I needed to skip a scenario based on a flag obtained dynamically from the application at run-time, which tells whether the feature to be tested is enabled on the application or not.
So this is how I wrote my logic in the glue code that runs before each scenario.
I have used a unique tag '@Feature-01AXX' to mark the scenarios that need to be run only when that feature (code) is available on the application.
So for every scenario, the tag '@Feature-01AXX' is checked first; if it's present, the check for the availability of the feature is made, and only then will the scenario be picked for running. Otherwise it will simply be skipped, and JUnit will not mark this as a failure; instead it will be marked as a pass. So the final result, if these tests did not run due to the unavailability of the feature, will be pass, which is what we want.
@Before
public void before(final Scenario scenario) throws Exception {
    /*
     * my other pre-setup tasks for each scenario.
     */
    // get all the scenario tags from the scenario head.
    final ArrayList<String> scenarioTags = new ArrayList<>();
    scenarioTags.addAll(scenario.getSourceTagNames());
    // check if the feature is enabled on the appliance, so that the tests can be run.
    if (checkForSkipScenario(scenarioTags)) {
        throw new AssumptionViolatedException("The feature 'Feature-01AXX' is not enabled on this appliance, so skipping");
    }
}
private boolean checkForSkipScenario(final ArrayList<String> scenarioTags) {
    // I use the tag "@Feature-01AXX" on the scenarios which need to be run only when the feature is enabled on the appliance/application
    if (scenarioTags.contains("@Feature-01AXX") && !isTheFeatureEnabled()) { // if the feature is not enabled, then we need to skip the scenario.
        return true;
    }
    return false;
}
private boolean isTheFeatureEnabled() {
    /*
     * my logic to check if the feature is available/enabled on the application.
     * in my case it's a REST api call; I parse the JSON and check if the feature is enabled.
     * if it is enabled return 'true', else return 'false'
     */
}
I've implemented a customized JUnit runner as below. The idea is to add tags at runtime.
So say for a scenario we need new users; we tag the scenario as "@requires_new_user". Then if we run our test in an environment (say a production environment which does not allow you to register new users easily), we will figure out that we are not able to get a new user, and "not @requires_new_user" will be added to the Cucumber options to skip the scenario.
This is the cleanest solution I can imagine for now.
public class WebuiCucumberRunner extends ParentRunner<FeatureRunner> {

    private final JUnitReporter jUnitReporter;
    private final List<FeatureRunner> children = new ArrayList<FeatureRunner>();
    private final Runtime runtime;
    private final Formatter formatter;

    /**
     * Constructor called by JUnit.
     *
     * @param clazz the class with the @RunWith annotation.
     * @throws java.io.IOException if there is a problem
     * @throws org.junit.runners.model.InitializationError if there is another problem
     */
    public WebuiCucumberRunner(Class clazz) throws InitializationError, IOException {
        super(clazz);
        ClassLoader classLoader = clazz.getClassLoader();
        Assertions.assertNoCucumberAnnotatedMethods(clazz);

        RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
        RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();
        addTagFiltersAsPerTestRuntimeEnvironment(runtimeOptions);

        ResourceLoader resourceLoader = new MultiLoader(classLoader);
        runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);
        formatter = runtimeOptions.formatter(classLoader);
        final JUnitOptions junitOptions = new JUnitOptions(runtimeOptions.getJunitOptions());
        final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader, runtime.getEventBus());
        jUnitReporter = new JUnitReporter(runtime.getEventBus(), runtimeOptions.isStrict(), junitOptions);
        addChildren(cucumberFeatures);
    }

    private void addTagFiltersAsPerTestRuntimeEnvironment(RuntimeOptions runtimeOptions)
    {
        String channel = Configuration.TENANT_NAME.getValue().toLowerCase();
        runtimeOptions.getTagFilters().add("@" + channel);
        if (!TestEnvironment.getEnvironment().isNewUserAvailable()) {
            runtimeOptions.getTagFilters().add("not @requires_new_user");
        }
    }
    ...
}
Or you can extend the official Cucumber JUnit test runner cucumber.api.junit.Cucumber and override the method
/**
 * Create the Runtime. Can be overridden to customize the runtime or backend.
 *
 * @param resourceLoader used to load resources
 * @param classLoader used to load classes
 * @param runtimeOptions configuration
 * @return a new runtime
 * @throws InitializationError if a JUnit error occurred
 * @throws IOException if a class or resource could not be loaded
 * @deprecated Neither the runtime nor the backend or any of the classes involved in their construction are part of
 * the public API. As such they should not be exposed. The recommended way to observe the cucumber process is to
 * listen to events by using a plugin. For example the JSONFormatter.
 */
@Deprecated
protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
        RuntimeOptions runtimeOptions) throws InitializationError, IOException {
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
}
You can manipulate runtimeOptions here as you wish. But the method is marked as deprecated, so use it with caution.
If you're using Maven, you could use a browser profile and then set the appropriate ~ exclude tags there.
Unless you're asking how to run this from the command line, in which case you tag the scenario with @skipchrome and then, when you run Cucumber, set the Cucumber options to tags = {"~@skipchrome"}.
If you simply wish to temporarily skip a scenario (for example, while writing the scenarios), you can comment it out (Ctrl+/ in Eclipse or IntelliJ).