How to use my customized test engine in JUnit 5 only? - junit5

I have built a new TestEngine which can run the cases in a parallelStream so that they run faster. But when I launch, the default test engine (jupiter-engine) also runs, so my cases run twice. What should I do to stop the jupiter-engine from running?
When I debug the JUnit 5 source code, I see that the DefaultLauncher finds two engines.
This is the source code:
DefaultLauncher(Iterable<TestEngine> testEngines) {
    Preconditions.condition(testEngines != null && testEngines.iterator().hasNext(),
            () -> "Cannot create Launcher without at least one TestEngine; "
                    + "consider adding an engine implementation JAR to the classpath");
    this.testEngines = validateUniqueIds(testEngines);
}
This is the launch code:
LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
        .selectors(selectPackage("mytest"))
        .filters(includeClassNamePatterns("^.*TestCase?$"))
        .build();
Launcher launcher = LauncherFactory.create();
// Register a listener of your choice
TestExecutionListener listener = new SummaryGeneratingListener();
launcher.registerTestExecutionListeners(listener);
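For reference (not shown in the question), the Launcher API also accepts engine filters on the discovery request, which is one way to keep a single engine in play. A minimal sketch, assuming the custom engine's getId() returns the placeholder ID "my-parallel-engine":

import static org.junit.platform.engine.discovery.ClassNameFilter.includeClassNamePatterns;
import static org.junit.platform.engine.discovery.DiscoverySelectors.selectPackage;

import org.junit.platform.launcher.EngineFilter;
import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;

LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
        .selectors(selectPackage("mytest"))
        .filters(
                includeClassNamePatterns("^.*TestCase?$"),
                // "my-parallel-engine" is a placeholder for whatever your TestEngine.getId() returns
                EngineFilter.includeEngines("my-parallel-engine")
                // alternatively: EngineFilter.excludeEngines("junit-jupiter")
        )
        .build();

Launcher launcher = LauncherFactory.create();
launcher.execute(request);

Another option, if the suite is always launched programmatically like this, is to keep junit-jupiter-engine off the runtime classpath entirely, since the launcher discovers engines via ServiceLoader.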

Related

Testing initial delay on CoroutineWorker with Dependencies

I know that WorkManager provides a work-testing artifact for testing workers, and we can use TestListenableWorkerBuilder to test a CoroutineWorker (see this link for more information). I found a Medium article by Ian Roberts showing how to test a CoroutineWorker with dependencies by creating your own WorkerFactory.
According to the official documentation, we can test initial delays on a Worker using TestDriver, but nothing is said about testing delays, constraints etc. on a CoroutineWorker. Is there a way to perform such tests on a CoroutineWorker using TestListenableWorkerBuilder?
After watching this video (at 13:00) from the 2019 Android Dev Summit, I found the answer to this question:
When initializing WorkManager for tests (via the WorkManagerTestInitHelper.initializeTestWorkManager method), we have to pass our custom WorkerFactory through the configuration step;
Set up your work request as you normally would using the OneTimeWorkRequestBuilder method;
By default, all constraints for WorkManager instances in test mode are unmet. Using an instance of TestDriver, we can mark those constraints as met.
Here's an example summarizing the steps above:
@Test
fun checkInitialDelay() {
val config = Configuration.Builder()
.setWorkerFactory(
MyWorkFactory(myDependencies)
)
.setMinimumLoggingLevel(Log.DEBUG)
.setExecutor(SynchronousExecutor())
.build()
// Initialize WorkManager
WorkManagerTestInitHelper.initializeTestWorkManager(context, config)
//setup the request work
val request =
OneTimeWorkRequestBuilder<MyWork>()
.setInitialDelay(10, TimeUnit.MINUTES)
.build()
val workManager = WorkManager.getInstance(context)
// Get the TestDriver
val testDriver = WorkManagerTestInitHelper.getTestDriver(context)
// Enqueue
workManager.enqueue(request).result.get()
// Tells the WorkManager test framework that initial delays are now met.
testDriver?.setInitialDelayMet(request.id)
// Get WorkInfo and outputData
val workInfo = workManager.getWorkInfoById(request.id).get()
// Assert
assert(workInfo.state == WorkInfo.State.SUCCEEDED)
}
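The same TestDriver also exposes setAllConstraintsMet, which is how the "mark constraints as met" step from the list above looks in practice. A minimal Java sketch, assuming the same test WorkManager initialization and context as in the Kotlin example above, with MyWork as a placeholder worker class:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

import androidx.work.Constraints;
import androidx.work.NetworkType;
import androidx.work.OneTimeWorkRequest;
import androidx.work.WorkInfo;
import androidx.work.WorkManager;
import androidx.work.testing.TestDriver;
import androidx.work.testing.WorkManagerTestInitHelper;

@Test
public void checkConstraints() throws Exception {
    // Build a request that declares a constraint (unmet by default in test mode)
    Constraints constraints = new Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build();
    OneTimeWorkRequest request = new OneTimeWorkRequest.Builder(MyWork.class)
            .setConstraints(constraints)
            .build();

    WorkManager workManager = WorkManager.getInstance(context);
    workManager.enqueue(request).getResult().get();

    // Mark the declared constraints as met so the worker actually runs
    TestDriver testDriver = WorkManagerTestInitHelper.getTestDriver(context);
    testDriver.setAllConstraintsMet(request.getId());

    WorkInfo workInfo = workManager.getWorkInfoById(request.getId()).get();
    assertEquals(WorkInfo.State.SUCCEEDED, workInfo.getState());
}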

Karate start-up feature

Need to execute a 'healthcheck' test (feature) before all the test cases execute.
This is like a preliminary check before executing the bunch of test cases. Need a solution to exit the platform if any of these pre-checks fail.
Execute your health check feature from karate-config.js using karate.call / karate.callSingle; if your feature fails, use Java's System.exit to force-kill your test.
Snippet for karate-config.js:
try {
  var healthCheckInput = {};
  var healthcheckCall = karate.callSingle("healthCheck.feature", healthCheckInput);
  if (!<healthCheckCondition>) {
    java.lang.System.exit(0);
  }
} catch (e) {
  java.lang.System.exit(0);
}
If your health check condition fails, this will force-exit your execution.
Not sure whether karate.abort() will give a soft exit from the platform, but if you are planning to implement this, try it as well.
Note: since System.exit() force-kills your execution, you will not get proper reports, but you can refer to the console logs/Karate logs for further investigation.
EDIT:
Another approach:
You can use the Karate Java API inside a JUnit @BeforeClass method to run your health-check feature.
@BeforeClass
public static void startUpCheck() {
    Map<String, Object> args = new HashMap<>();
    args.put("inputOne", "valueOne");
    Map<String, Object> result = Runner.runFeature("classpath:stackoverflow/demo/healthCheck.feature", args, true);
    // also assert the 'result' if you want OR keep some assertions/match in your feature
}
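If you also want the hard stop in this JUnit variant, one option (not from the original answer) is to inspect the variables map returned by Runner.runFeature and abort the JVM. In this sketch, 'healthy' is a hypothetical boolean variable that healthCheck.feature would have to define; the class would be (or be inherited by) your JUnit runner class:

import java.util.HashMap;
import java.util.Map;

import org.junit.BeforeClass;

import com.intuit.karate.Runner;

public class StartUpCheck {
    @BeforeClass
    public static void startUpCheck() {
        Map<String, Object> args = new HashMap<>();
        args.put("inputOne", "valueOne");
        Map<String, Object> result =
                Runner.runFeature("classpath:stackoverflow/demo/healthCheck.feature", args, true);
        // 'healthy' is a hypothetical variable, e.g. defined in the feature as: * def healthy = responseStatus == 200
        if (!Boolean.TRUE.equals(result.get("healthy"))) {
            System.err.println("Health check failed - aborting the run");
            System.exit(1);
        }
    }
}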

How to leave the browser open when a Behat/Mink test fails

I'm using the selenium2 driver to test my Drupal site using Behat/Mink in a docker container.
Using the Selenium Standalone-Chrome container, I can watch my behat tests fail, but the problem is that as soon as they fail, the browser is closed, which makes it harder for me to see what the problem is.
I'm running my tests like this:
behat --tags '@mystuff' --config=behat-myconfig.yml --strict --stop-on-failure
Is there a way to leave the remote-controlled browser open even when a test fails?
By default it is not possible.
Maybe you could find some hack to do it, but it is not recommended, since each scenario should be isolated; it is not a good solution, at least when running a suite with multiple tests.
For a one-off, see if you can reuse the screenshot logic and set a breakpoint instead.
In any case, you should use verbose output (-vvv for Behat 3) plus an IDE debugger to debug your code.
Finally I found a good solution for this: behat-fail-aid.
Add the fail aid to your FeatureContext and then run behat with the --wait-on-failure option:
the --wait-on-failure={seconds} option can be used to
investigate/inspect failures in the browser.
You can take a screenshot whenever an error occurs using the Behat "AfterStep" hook.
Consider having a look at the Panther Driver or DChrome Driver.
Here is a shortened example that also covers non-JavaScript tests (which are faster):
use Behat\Mink\Driver\Selenium2Driver;
/** Context Class Definition ... */
/**
* @AfterStep
*/
public function takeScreenShotAfterFailedStep(AfterStepScope $scope)
{
// Behat's TestResult::FAILED result code is 99; only act on failed steps
if (99 !== $scope->getTestResult()->getResultCode()) {
return;
}
$this->takeAScreenShot('error');
}
private function takeAScreenShot($prefix = 'screenshot')
{
$baseName= sprintf('PATH_FOR_YOUR_SCREENSHOTS/%s-%s', $prefix, (new \DateTime())->format('Y_m_d_H_i_s'));
if ($this->supportsJavascript()) {
$extension = '.png';
$content = $this->getSession()->getScreenshot();
} else {
$extension = '.html';
$content = $this->getSession()->getPage()->getOuterHtml();
}
file_put_contents(sprintf('%s%s', $baseName, $extension), $content);
}
private function supportsJavascript()
{
return $this->getSession()->getDriver() instanceof Selenium2Driver;
}

Browser Restart Using Geb & Spock within same test

I would like to be able to restart my browser session mid-test using Geb and the Spock Framework. I know how to close the browser and clean up after test completion etc., but when I close it during the test and try to reuse the browser object I get a session error thrown by Selenium. Below is the basic outline I am trying to execute. NB: nb never lets me navigate to the NewStoreHomePage, and if I try to use just browser I get an error thrown.
@Category(High.class)
def "TC1: Verify Browser Restart"() {
when: "On my StoreFront HP wait until title displayed"
to StoreHomePage
waitFor { homepagetitle.displayed }
then: "Update your site picker"
mySitePicker.click()
waitFor { myNewHomePageTitle.displayed }
when: "Close the browser and insure on restart new page is loaded"
browser.close()
browser.quit()
def nb = new Browser()
nb.to(NewStoreHomePage)
then: "Validate on New HP"
assert myNewHomePageTitle.displayed
}
It's as simple as doing the following in your spec:
resetBrowser()
CachingDriverFactory.clearCacheAndQuitDriver()
After that any code that tries to access browser will trigger automatic creation of new WebDriver and Browser instances.
This is how you force a new driver:
CachingDriverFactory.clearCache()
I tested it, it works beautifully. This hint can also be found in the Geb manual.
Update 2017-02-07 15:10 CET: Thanks for the follow-up question. Well, my brief answer was made under the assumption that the command is issued at the end of one feature method and the next feature method starts with a new browser session. In order to do this mid-test you would have to create a new WebDriver instance manually and somehow trick Geb into updating its browser session.
Because this is tricky at least and I do not know how to do it, I recommend using two separate feature methods for testing what should be tested before and after quitting the browser. You can share state between them via @Shared members, if necessary. This also has the advantage that if you let Geb create the new WebDriver and browser session for you, everything configured in GebConfig.groovy, such as browser type and capabilities, will automatically be taken into account. If you created a driver manually, you would have to parse the Geb config yourself - ugly!
But the main problem with this approach is: how do you ensure that the feature methods are executed in the (lexical) order of declaration? Normally tests should be runnable in any order, so you cannot and should not rely on a specific execution order. Spock offers the @Stepwise annotation to address the rare case in which you want to enforce execution order, but this would lead to the same problem as in the mid-test situation, because Geb then implicitly assumes that it should continue to test in the same session. I.e. we need a trick to enforce lexical execution order without using @Stepwise.
Another problem is that if your spec extends GebReportingSpec because you want to take screenshots, Geb fails to take the last screenshot at the end of the feature method with the browser gone. Now you can configure Geb not to take screenshots if the test succeeds via reportOnTestFailureOnly, but that still leaves us with failed tests. So I added an override for the report method with some additional exception handling.
The full solution looks like this, derived from one of my real-life tests:
package de.scrum_master.tdd
import geb.driver.CachingDriverFactory
import geb.spock.GebReportingSpec
import org.openqa.selenium.Keys
import org.spockframework.runtime.model.FeatureInfo
import spock.lang.Shared
class SampleGebIT extends GebReportingSpec {
@Override
void report(String label = "") {
// GebReportingSpec tries to write a report (screenshot) at the end of each feature
// method. But because we use 'CachingDriverFactory.clearCacheAndQuitDriver()',
// there is no valid driver instance anymore from which to get a screenshot. Geb is
// unprepared for this kind of error, so we handle it gracefully so as to keep the
// test from failing just because the last screenshot cannot be taken anymore.
try {
super.report(label)
}
catch (Exception e) {
System.err.println("Cannot create screenshot: ${e.message}")
}
}
// We cannot use 'specificationContext' directly from 'setupSpec()' because of this
// compilation error: "Only @Shared and static fields may be accessed from here"
// Okay then, so we use a @Shared field as a workaround. ;-)
@Shared
def currentSpec = specificationContext.currentSpec
def setupSpec() {
// Make sure that feature methods are run in declaration order. Normally we could
// use @Stepwise for this, but because @Stepwise implies staying in the same
// browser session, it would not work in connection with
// 'CachingDriverFactory.clearCacheAndQuitDriver()'. This is the workaround for it.
for (FeatureInfo feature : currentSpec.features)
feature.executionOrder = feature.declarationOrder
}
def "Search web site Scrum-Master.de"() {
setup:
def deactivateAutoComplete =
"document.getElementById('mod_search_searchword')" +
".setAttribute('autocomplete', 'off')"
def regexNumberOfMatches = /Insgesamt wurden ([0-9]+) Ergebnisse gefunden/
when:
go "https://scrum-master.de"
report "welcome page"
then:
$("h2").text().startsWith("Herzlich Willkommen bei Scrum-Master.de")
when:
js.exec(deactivateAutoComplete)
$("form").searchword = "Product Owner" + Keys.ENTER
then:
waitFor { $("form#searchForm") }
when:
report "search results"
def searchResultSummary = $("form#searchForm").$("table.searchintro").text()
def numberOfMatches = (searchResultSummary =~ regexNumberOfMatches)[0][1] as int
then:
numberOfMatches > 0
cleanup:
println "Closing browser and WebDriver"
CachingDriverFactory.clearCacheAndQuitDriver()
}
def "Visit Scrum-Master.de download page"() {
when:
go "https://scrum-master.de/Downloads"
report "download page"
then:
$("h2").text().startsWith("Scrum on a Page")
}
}
BTW, I tested this successfully with several browsers on my Windows 10 machine:
HtmlUnit (with activated JavaScript)
PhantomJS
Chrome
Internet Explorer
Edge
Firefox

Is it possible to skip a scenario with Cucumber-JVM at run-time

I want to add a tag @skiponchrome to a scenario; this should skip the scenario when running a Selenium test with the Chrome browser. The reason to do this is that some scenarios work in some environments and not in others. This might not even be browser-testing specific and could be applied in other situations, for example OS platforms.
Example hook:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
// Skip scenario code here
}
}
I know it is possible to define ~@skiponchrome in the Cucumber tags to skip the tag, but I would like to skip a tag at run-time. This way I don't have to think about which tags to skip in advance when starting a test run in a certain environment.
I would like to create a hook that catches the tag and skips the scenario without reporting a fail/error. Is this possible?
I realized that this is a late update to an already answered question, but I want to add one more option directly supported by cucumber-jvm:
@Before // (cucumber one)
public void setup(){
Assume.assumeTrue(weAreInPreProductionEnvironment);
}
"and the scenario will be marked as ignored (but the test will pass) if weAreInPreProductionEnvironment is false."
You will need to add
import org.junit.Assume;
The major difference with the accepted answer is that JUnit assume failures behave just like pending
Important: because of a bug fix you will need cucumber-jvm release 1.2.5, which as of this writing is the latest. For example, the above will generate a failure instead of a pending result in cucumber-java8-1.2.3.jar.
I really prefer to be explicit about which tests are being run, by having separate run configurations defined for each environment. I also like to keep the number of tags I use to a minimum, to keep the number of configurations manageable.
I don't think it's possible to achieve what you want with tags alone. You would need to write a custom JUnit test runner to use in place of @RunWith(Cucumber.class). Take a look at the Cucumber implementation to see how things work. You would need to alter the RuntimeOptions created by the RuntimeOptionsFactory to include/exclude tags depending on the browser, or other runtime conditions.
Alternatively, you could consider writing a small script which invokes your test suite, building up a list of tags to include/exclude dynamically, depending on the environment you're running in. I would consider this to be a more maintainable, cleaner solution.
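As an illustration of that script idea (not from the original answer), a small Java launcher could assemble the tag filters at startup and hand them to Cucumber's CLI. The glue package, feature path and browser system property below are placeholders; the ~@tag exclusion syntax matches the cucumber-jvm 1.2.x line discussed elsewhere in this thread:

import java.util.ArrayList;
import java.util.List;

import cucumber.api.cli.Main;

public class DynamicTagLauncher {
    public static void main(String[] args) throws Throwable {
        List<String> cucumberArgs = new ArrayList<>();
        cucumberArgs.add("--glue");
        cucumberArgs.add("mytest.glue");            // placeholder glue package

        // Decide at launch time which tags to exclude, e.g. based on the target browser.
        String browser = System.getProperty("browser", "chrome");
        if ("chrome".equals(browser)) {
            cucumberArgs.add("--tags");
            cucumberArgs.add("~@skiponchrome");     // exclude scenarios tagged @skiponchrome
        }

        cucumberArgs.add("classpath:features");     // placeholder feature location

        // Delegates to the Cucumber CLI, which runs the features and exits with its status code.
        Main.main(cucumberArgs.toArray(new String[0]));
    }
}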
It's actually really easy. If you dig through the Cucumber-JVM and JUnit 4 source code, you'll find that JUnit makes skipping during runtime very easy (just undocumented).
Take a look at the following source code from JUnit 4's ParentRunner, which Cucumber-JVM's FeatureRunner (used by Cucumber, the default Cucumber runner) extends:
@Override
public void run(final RunNotifier notifier) {
EachTestNotifier testNotifier = new EachTestNotifier(notifier,
getDescription());
try {
Statement statement = classBlock(notifier);
statement.evaluate();
} catch (AssumptionViolatedException e) {
testNotifier.fireTestIgnored();
} catch (StoppedByUserException e) {
throw e;
} catch (Throwable e) {
testNotifier.addFailure(e);
}
}
This is how JUnit decides what result to show. If it's successful it will show a pass, but it's possible to @Ignore in JUnit, so what happens in that case? Well, an AssumptionViolatedException is thrown by the RunNotifier (or the Cucumber FeatureRunner in this case).
So your example becomes:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
throw new AssumptionViolatedException("Not supported on Chrome")
}
}
If you've used vanilla JUnit 4 before, you'll remember that @Ignore takes an optional message that is displayed when a test is ignored by the runner. AssumptionViolatedException carries the message, so you should see it in your test output after a test is skipped this way, without having to write your own custom runner.
I too had the same challenge, where I needed to skip a scenario based on a flag which I obtain from the application dynamically at run-time, telling me whether the feature to be tested is enabled on the application or not.
So this is how I wrote my logic in the step-definitions file, where we have the glue code for each step.
I have used a unique tag '@Feature-01AXX' to mark my scenarios that need to be run only when that feature (code) is available on the application.
So for every scenario, the tag '@Feature-01AXX' is checked first; if it is present, then the check for the availability of the feature is made, and only then will the scenario be picked for running. Otherwise it is simply skipped, and JUnit will not mark this as a failure; instead it is marked as passed. So the final result, if these tests did not run due to the unavailability of the feature, will be pass, and that's cool...
@Before
public void before(final Scenario scenario) throws Exception {
/*
my other pre-setup tasks for each scenario.
*/
// get all the scenario tags from the scenario head.
final ArrayList<String> scenarioTags = new ArrayList<>();
scenarioTags.addAll(scenario.getSourceTagNames());
// check if the feature is enabled on the appliance, so that the tests can be run.
if (checkForSkipScenario(scenarioTags)) {
throw new AssumptionViolatedException("The feature 'Feature-01AXX' is not enabled on this appliance, so skipping");
}
}
private boolean checkForSkipScenario(final ArrayList<String> scenarioTags) {
// I use a tag "#Feature-01AXX" on the scenarios which needs to be run when the feature is enabled on the appliance/application
if (scenarioTags.contains("#Feature-01AXX") && !isTheFeatureEnabled()) { // if feature is not enabled, then we need to skip the scenario.
return true;
}
return false;
}
private boolean isTheFeatureEnabled(){
/*
my logic to check if the feature is available/enabled on the application.
in my case its an REST api call, I parse the JSON and check if the feature is enabled.
if it is enabled return 'true', else return 'false'
*/
}
I've implemented a customized JUnit runner as below. The idea is to add tags at runtime.
Say a scenario needs new users; we tag the scenario as "@requires_new_user". If we then run our tests in an environment (say a production environment which does not allow you to register new users easily), we will figure out that we are not able to get a new user, and "not @requires_new_user" will be added to the Cucumber options to skip the scenario.
This is the cleanest solution I can imagine right now.
public class WebuiCucumberRunner extends ParentRunner<FeatureRunner> {
private final JUnitReporter jUnitReporter;
private final List<FeatureRunner> children = new ArrayList<FeatureRunner>();
private final Runtime runtime;
private final Formatter formatter;
/**
* Constructor called by JUnit.
*
* @param clazz the class with the @RunWith annotation.
* @throws java.io.IOException if there is a problem
* @throws org.junit.runners.model.InitializationError if there is another problem
*/
public WebuiCucumberRunner(Class clazz) throws InitializationError, IOException {
super(clazz);
ClassLoader classLoader = clazz.getClassLoader();
Assertions.assertNoCucumberAnnotatedMethods(clazz);
RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();
addTagFiltersAsPerTestRuntimeEnvironment(runtimeOptions);
ResourceLoader resourceLoader = new MultiLoader(classLoader);
runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);
formatter = runtimeOptions.formatter(classLoader);
final JUnitOptions junitOptions = new JUnitOptions(runtimeOptions.getJunitOptions());
final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader, runtime.getEventBus());
jUnitReporter = new JUnitReporter(runtime.getEventBus(), runtimeOptions.isStrict(), junitOptions);
addChildren(cucumberFeatures);
}
private void addTagFiltersAsPerTestRuntimeEnvironment(RuntimeOptions runtimeOptions)
{
String channel = Configuration.TENANT_NAME.getValue().toLowerCase();
runtimeOptions.getTagFilters().add("@" + channel);
if (!TestEnvironment.getEnvironment().isNewUserAvailable()) {
runtimeOptions.getTagFilters().add("not @requires_new_user");
}
}
...
}
Or you can extend the official Cucumber JUnit test runner cucumber.api.junit.Cucumber and override the method
/**
* Create the Runtime. Can be overridden to customize the runtime or backend.
*
* @param resourceLoader used to load resources
* @param classLoader used to load classes
* @param runtimeOptions configuration
* @return a new runtime
* @throws InitializationError if a JUnit error occurred
* @throws IOException if a class or resource could not be loaded
* @deprecated Neither the runtime nor the backend or any of the classes involved in their construction are part of
* the public API. As such they should not be exposed. The recommended way to observe the cucumber process is to
* listen to events by using a plugin. For example the JSONFormatter.
*/
@Deprecated
protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
RuntimeOptions runtimeOptions) throws InitializationError, IOException {
ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
}
You can manipulate runtimeOptions here as you wish. But the method is marked as deprecated, so use it with caution.
If you're using Maven, you could use a browser profile and then set the appropriate ~ exclude tags there.
Unless you're asking how to run this from the command line, in which case you tag the scenario with @skipchrome and then, when you run Cucumber, set the cucumber options to tags = {"~@skipchrome"}.
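For the runner-class variant, those options sit on the annotated JUnit runner. A minimal sketch against the cucumber-jvm 1.2.x API referenced earlier in this thread (the feature path and glue package are placeholders):

import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features",  // placeholder feature location
        glue = "mytest.glue",             // placeholder glue package
        tags = {"~@skipchrome"}           // old-style exclusion: skip anything tagged @skipchrome
)
public class SkipChromeRunnerIT {
}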
If you wish simply to skip a scenario temporarily (for example, while writing the scenarios), you can comment it out (Ctrl+/ in Eclipse or IntelliJ).