Run Cucumber JVM @BeforeClass Without Feature Files - cucumber-jvm

The title may be a bit confusing at this point; hopefully I can clear it up.
What I Have
I'm running Cucumber JVM with Selenium WebDriver to automate our system test cases. These test cases are currently stored in JIRA using the XRay Test Management plugin. XRay also provides APIs to fetch the feature files as well as upload the results back to JIRA.
I have created a custom JIRA utility class to download the tests as feature files and to upload the test results back to JIRA, and I have demonstrated that it works. These are run in the @BeforeClass and @AfterClass methods of the Cucumber Runner class, respectively.
I have also demonstrated that the developed test framework works by manually running it with feature files created on my computer.
What I Want
I want to be able to (eventually) run the automation test framework automatically with our CI tools. Along with this, it would pull the defined automation tests from JIRA and push the test results back to JIRA.
I do not want the feature files stored with the code. In my opinion, this defeats the purpose of it being dynamic as the tests we execute will change over time (in number executed and the steps themselves).
What Is Happening (Or More Specifically, Not Happening)
When I try to execute the Cucumber Runner class without any feature files in the framework, Cucumber says "No features found at [src/test/resources/features/]". This is understandable since there are no feature files (yet).
However, it does not run the @BeforeClass; thus it does not download the feature files to be run. I have tried this both with and without tags in the runner class.
Code
@RunWith(Cucumber.class)
@CucumberOptions(
        tags = {"@smoketests"},
        features = {"src/test/resources/features/"},
        plugin = {"json:target/reports/cucumber.json"},
        monochrome = true)
public class RunCucumberTest {

    @BeforeClass
    public static void executeBeforeTests() {
        // Download the feature files from JIRA/XRay before the tests run
        JiraUtil.getFeatureFiles();

        //String browser = "firefox";
        String browser = "chrome";
        //String browser = "safari";
        //String browser = "edge";
        //String browser = "ie";
        DriverUtil.getInstance().setDriver(browser);
    }

    @AfterClass
    public static void executeAfterTests() {
        DriverUtil.getInstance().resetDriver();
        // Push the test results back to JIRA/XRay
        JiraUtil.uploadTestResults();
    }
}
Back To My Question
How can I execute the JIRA Util code so I can download the feature files?
Is it possible to achieve what I want? Or do I have to admit defeat and just have all the feature files stored with the code?

This is the expected behavior when using JUnit. A test suite will not invoke the @BeforeClass, @AfterClass or @ClassRule methods when there are no tests in the suite or if all tests are ignored[1]. This avoids the execution of a potentially expensive setup for naught.
This does mean you can't use a class rule to bootstrap your tests. Nor should you attempt to do so. In a build process it is a good practice to fetch all sources and resources prior to compilation.
If you are using Maven, you could write a Maven plugin instead and attach it to the generate-test-sources phase[2]. Creating a Maven plugin is a bit more involved than a JUnit Rule, but not prohibitively so. Check the Guide to Developing Java Plugins.
I assume there are similar options for Gradle.
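For illustration only, a minimal sketch of such a plugin Mojo. The goal name and parameter are made up, and it assumes the asker's JiraUtil is on the plugin's classpath and writes into the standard feature directory:

import java.io.File;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.LifecyclePhase;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;

// Hypothetical goal that downloads feature files before test compilation
@Mojo(name = "fetch-features", defaultPhase = LifecyclePhase.GENERATE_TEST_SOURCES)
public class FetchFeaturesMojo extends AbstractMojo {

    // Where the downloaded feature files should land (assumed layout)
    @Parameter(defaultValue = "${project.basedir}/src/test/resources/features")
    private File outputDirectory;

    @Override
    public void execute() throws MojoExecutionException {
        try {
            outputDirectory.mkdirs();
            JiraUtil.getFeatureFiles(); // the asker's utility, assumed to write here
            getLog().info("Downloaded feature files to " + outputDirectory);
        } catch (Exception e) {
            throw new MojoExecutionException("Could not fetch feature files from JIRA", e);
        }
    }
}

Bound this way, mvn test would fetch the features before Cucumber ever looks for them, so the runner no longer needs @BeforeClass for the download.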

Related

What is the Gradle command to run scenarios with tags?

I am using Gradle 7.6, Karate 1.3.1, Java 17.0.5 and JUnit 5.8.1.
I want to configure a Jenkins job for each feature to create a health check monitor. I need Gradle commands to run feature files using tags such as @smoke, @regression, @featureName, etc.
I have tried the following command; it worked earlier but stopped working recently.
./gradlew test -Dkarate.options="--tags @smoke" -Dtest.single=TestRunner#testTagsWithoutFeatureName
Where TestRunner is the following Java class
import com.intuit.karate.junit5.Karate;

public class TestRunner {
    @Karate.Test
    Karate testTagsWithoutFeatureName() {
        return Karate.run().tags("@smoke").relativeTo(getClass());
    }
}
My advice is to use the Runner class, which is better designed for running tests in CI. The JUnit helpers are just for local-dev convenience: https://stackoverflow.com/a/65578167/143475
It should even be possible to pass a feature to karate.options as the last argument, which might be more convenient than writing a Java class for every combination. You should experiment.
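For reference, a minimal sketch of what a Runner-based entry point could look like (Karate 1.x builder API; the classpath location and thread count are assumptions):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class SmokeRunnerTest {

    @Test
    void runSmokeTests() {
        Results results = Runner.path("classpath:features") // assumed feature root
                .tags("@smoke")
                .parallel(1); // raise the thread count for parallel CI runs
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}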
Otherwise no suggestions, but if you feel there's a bug, follow this process: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue

Karate summary reports not showing all tested features after upgrade to 1.0.0

I recently upgraded from version 0.9.6 to 1.0.0 and noticed that the generated karate-summary.html file no longer displays all the tested feature files when using the JUnit 5 Runner, unlike in 0.9.6.
What it displays instead is only the last tested feature file.
The below screenshots are from the provided SampleTest.java sample code (excluding other Tests for simplicity).
package karate;

import com.intuit.karate.junit5.Karate;

class SampleTest {
    @Karate.Test
    Karate testSample() {
        return Karate.run("sample").relativeTo(getClass());
    }

    @Karate.Test
    Karate testTags() {
        return Karate.run("tags").relativeTo(getClass());
    }
}
This is from Version 0.9.6.
And this one is from Version 1.0.0.
However, when running the test below in 1.0.0, all the features are displayed in the summary correctly.
@Karate.Test
Karate testAll() {
    return Karate.run().relativeTo(getClass());
}
Would anyone be kind enough to confirm whether they are getting a similar result? It would be very much appreciated.
"What it displays instead is only the last tested feature file."
This is because each time you run a JUnit method, the reports directory is backed up by default. Look for other directories called target/karate-reports-<timestamp> and you may find your reports there. So what may be happening is that you have multiple JUnit tests all running, and that produces this behavior. You may be able to override it by calling .backupReportDir(false) on the builder. But even that may not work, because the JUnit runner has changed a little: it is designed to run one method at a time, when you are in local / dev-mode.
So the JUnit runner is just a convenience. You should use the Runner class / builder for CI execution, and when you want to run multiple tests and see them in one report: https://stackoverflow.com/a/65578167/143475
Here is an example: ExamplesTest.java
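Along the same lines, a hedged sketch of the builder call mentioned above, with an assumed feature root:

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class CiRunner {
    public static void main(String[] args) {
        // one Runner invocation, so all features end up in a single summary report
        Results results = Runner.path("classpath:karate") // assumed location
                .backupReportDir(false) // do not move the previous report aside
                .parallel(5);
        if (results.getFailCount() > 0) {
            throw new RuntimeException(results.getErrorMessages());
        }
    }
}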
But in case there is a bug in the JUnit runner (which is quite possible), please follow the process and help the project developers replicate the issue, so it can be fixed and released as soon as possible.

No coverage results using the EclEmma plugin for server code in a GWT application

I'm using the EclEmma plugin to test code coverage for my GWT application. I've written JUnit test classes for client code, such as testing get/set methods, as well as JUnit tests for RPC services. I used "syncproxy" to test my equivalent GreetService, GreetServiceAsync and GreetServiceImpl RPC services. For example, I have a location service that gets a user's location, and this is part of my test class:
public class LocationServiceTest {
    private static LocationService rpcService =
            (LocationService) SyncProxy.newProxyInstance(LocationService.class,
                    "http://localhost:...", "location");

    @Test
    public void testAdministrativeAreaLevel2LocationService() {
        String result = rpcService.getAddress("49.28839970000001,-123.1259316");
        assertTrue((result != null) && (result.startsWith("Vancouver")));
    }
}
The JUnit tests all pass, but when I run EclEmma on my project (I right-click the project, select "Coverage As", then "JUnit Test") I only get coverage results for client code, and 0% coverage for all my server code.
Any suggestions for how to get EclEmma to cover server code? Or for what I might be doing wrong?
EclEmma tracks coverage only for code launched in the test JVM (the VM started when you run the test). You seem to be running your server beforehand, in a separate process, so EclEmma "can't see" its coverage. You could try running the server inside your tests, with Cargo, for example.
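As a rough sketch of that idea, here is what starting the server inside the test JVM could look like. This uses embedded Jetty rather than Cargo, and the port and webapp path are placeholders:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class LocationServiceCoverageTest {

    private static Server server;

    @BeforeClass
    public static void startServer() throws Exception {
        server = new Server(8888); // hypothetical port
        // hypothetical path to the exploded GWT war
        server.setHandler(new WebAppContext("target/my-gwt-app", "/"));
        server.start();
    }

    @AfterClass
    public static void stopServer() throws Exception {
        server.stop();
    }

    // RPC tests against http://localhost:8888/ would go here; because the
    // servlets now run in this JVM, EclEmma can record their coverage.
}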

Using Selenium 2.0 WebDriver in practice

I want to write Selenium test cases in JUnit and test my projects in multiple browsers, and I would like to take advantage of the fact that all Selenium drivers implement the same interface.
Each test case should look like this:
package fm;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import static org.junit.Assert.*;

public class HomepageTest {

    @Test
    public void testTitle(WebDriver driver) {
        driver.get("http://localhost/");
        assertEquals("Foo", driver.getTitle());
    }

    @Test
    public void testSearchForm(WebDriver driver) {
        //...
    }
}
The passed WebDriver implementations should be controlled somewhere centrally. I'll probably need to override some of the JUnit behaviour and I hope it's possible.
I want to do it this way in order to avoid two things:
Code repetition: If each test case initialized all tested browsers in @Before, the test suite would have a lot of repeated code that is hard to maintain.
Speed of the test suite: If I had centralized control over the order and the passed WebDriver implementations, I could easily open, for example, Firefox, run all test cases in it, close it, and open the next browser. If each test case opened and closed browsers on its own, it would add a lot of time to each test run.
Does anybody have an idea how I should do it? Thanks.
In the Selenium project we inject what we need using http://code.google.com/p/selenium/source/browse/trunk/java/client/test/org/openqa/selenium/AbstractDriverTestCase.java, and then our build selects the browser and we get tests running in it.
Have a look at our code base to get some inspiration!
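With stock JUnit 4, a similar effect can be sketched using the Parameterized runner. Everything below is illustrative, not the Selenium project's actual harness:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.After;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Each test in this class runs once per browser listed in browsers()
@RunWith(Parameterized.class)
public class HomepageTest {

    @Parameters
    public static Collection<Object[]> browsers() {
        return Arrays.asList(new Object[][] { { "firefox" }, { "chrome" } });
    }

    private final WebDriver driver;

    public HomepageTest(String browser) {
        // trivial factory; replace with your central driver configuration
        driver = "chrome".equals(browser) ? new ChromeDriver() : new FirefoxDriver();
    }

    @After
    public void quitDriver() {
        driver.quit();
    }

    @Test
    public void testTitle() {
        driver.get("http://localhost/");
        assertEquals("Foo", driver.getTitle());
    }
}

Note this still opens a fresh browser per test; to share one session across tests you would combine it with a static holder like the one described in a later answer.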
Please check ISFW; it supports Selenium WebDriver / RemoteWebDriver as well as the conventional (Selenium 1) RC way.
You need to write code using the regular Selenium API, for example:
selenium.open(url);
selenium.type("loc", "text to type");
selenium.submit("loc");
Here is the working demo. Set the browser string as per your requirements. The framework supports the conventional Selenium way as well as the Selenium 2 WebDriver; you need to set the appropriate browser string in the application properties. The following are the different browser configurations for Firefox:
*firefox - requires a Selenium server running on the configured host/port; if none is found, the framework will check for/start one on localhost/port
firefoxDriver - runs directly with the Firefox WebDriver, without a Selenium server
firefoxRemoteDriver - requires a Selenium server running on the configured host/port (if none is found, the framework will check for/start one on localhost/port); it runs the tests using the Firefox WebDriver on the host machine
The same applies to IE (*iexplore, *iehta, iexplorerDriver, iexplorerRemoteDriver) and so on.
I did what you are trying to do with a static class that controls the WebDriver; all my tests that need the same WebDriver get it from there. It really helps when you are running multiple tests that need to use the same session, and all your tests run in one browser, so not every test opens a new browser instance.
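A minimal sketch of that static class, with assumed names:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public final class DriverHolder {

    private static WebDriver driver;

    private DriverHolder() {}

    // Lazily creates a single shared driver; all tests call DriverHolder.get()
    public static synchronized WebDriver get() {
        if (driver == null) {
            driver = new FirefoxDriver(); // the browser choice lives here, centrally
            // close the browser once the test JVM exits
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    driver.quit();
                }
            });
        }
        return driver;
    }
}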
Maybe you should also have a look at TestNG. In my experience, TestNG is better for tests with Selenium since it is not so focused on independent tests, and it offers a lot of useful functionality.

How to skip tests in PHPUnit if Selenium server is not running?

I want to add a suite of Selenium tests as part of a global PHPUnit test suite for an application. I have hooked the suite of Selenium tests into the global AllTests.php file and everything runs fine whilst the Selenium server is running.
However, I would like the script to skip the Selenium tests if the Selenium server isn't running, so other developers aren't forced to install Selenium server for the tests to run. I would normally try to connect within the setUp method of each test case and mark the tests as skipped if this failed, but this seems to throw a RuntimeException with the message:
The response from the Selenium RC server is invalid: ERROR Server Exception: sessionId should not be null; has this session been started yet?
Does anyone have a method for marking the Selenium tests as skipped in this scenario?
You could use test dependencies, which were introduced in PHPUnit 3.4. Basically:
write a test that checks whether Selenium is up;
if not, call $this->markTestSkipped();
make all your Selenium-requiring tests depend on this one (see the sketch below).
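A minimal sketch of that arrangement, matching the PHPUnit 3.4-era API (class and test names are made up):

<?php
class SeleniumGuardTest extends PHPUnit_Framework_TestCase {
    public function testSeleniumServerIsUp() {
        // default Selenium RC port
        $fp = @fsockopen('localhost', 4444);
        if ($fp === false) {
            $this->markTestSkipped('Selenium server is not running');
        }
        fclose($fp);
    }

    /**
     * @depends testSeleniumServerIsUp
     */
    public function testHomepage() {
        // real Selenium assertions go here; PHPUnit skips this test
        // automatically when the producer above was skipped
    }
}
?>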
My preferred Selenium / PHPUnit configuration:
Maintaining integration (Selenium) tests can be a lot of work. I use the Firefox Selenium IDE for developing test cases, which doesn't support exporting test suites to PHPUnit, only individual test cases. As such, if I had to maintain even 5 tests, it would be a lot of manual work to re-export them to PHPUnit every time they needed to be updated. That is why I set up PHPUnit to use Selenium IDE's HTML test files! They can be reloaded and reused between PHPUnit and Selenium IDE.
<?php
class RunSeleniumTests extends PHPUnit_Extensions_SeleniumTestCase {
    protected $captureScreenshotOnFailure = true;
    protected $screenshotPath = 'build/screenshots';
    protected $screenshotUrl = "http://localhost/site-under-test/build/screenshots";

    // This is where the magic happens! PHPUnit will parse all "selenese" *.html files
    public static $seleneseDirectory = 'tests/selenium';

    protected function setUp() {
        parent::setUp();

        // Skip the test if no Selenium server is listening
        $selenium_running = false;
        $fp = @fsockopen('localhost', 4444);
        if ($fp !== false) {
            $selenium_running = true;
            fclose($fp);
        }
        if (! $selenium_running) {
            $this->markTestSkipped('Please start selenium server');
        }

        // OK to run tests
        $this->setBrowser("*firefox");
        $this->setBrowserUrl("http://localhost/");
        $this->setSpeed(0);
        $this->start();

        // Set up each test case to be logged into WordPress
        $this->open('/site-under-test/wp-login.php');
        $this->type('id=user_login', 'admin');
        $this->type('id=user_pass', '1234');
        $this->click('id=wp-submit');
        $this->waitForPageToLoad();
    }

    // No need to write separate tests here - PHPUnit runs them all from the
    // Selenese files stored in the $seleneseDirectory above!
}
?>
You can try skipWithNoServerRunning().