Running a single feature file before and after a test run - karate

I was looking for a solution to run a feature file at the end of the suite.
My workflow (in a parallel run):
1. karate.callSingle('Login.feature'), so at the beginning I do one login and then use the cookies/token for the whole suite
2. Run the tests in parallel
3. Run the Logout.feature file

There is no direct support for this currently; as it happens, no one has ever requested it. If this is important to you, kindly open a feature request.
One workaround is to set a singleton / Java static variable from callSingle, and then in your JUnit / Java parallel runner call the logout feature using the Java API (search the docs for this); you can pass arguments and access the static variable there.
EDIT: just realized that the @AfterClass JUnit annotation may be more than sufficient for your needs.
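For illustration, a rough sketch of that workaround combined with the @AfterClass idea, written against a Karate 0.9.x-style Java API (verify the exact Runner signatures for your version; ParallelRunner, authToken and Logout.feature are placeholder names):

// Sketch: run the suite in parallel, then log out exactly once afterwards.
// Karate 0.9.x-style API; adjust to your version.
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.AfterClass;
import org.junit.Test;
import java.util.Collections;
import static org.junit.Assert.assertTrue;

public class ParallelRunner {

    // set once via karate.callSingle('Login.feature') in karate-config.js,
    // e.g. through a Java helper that stores the token here
    public static String authToken;

    @Test
    public void runParallel() {
        Results results = Runner.parallel(getClass(), 5, "target/surefire-reports");
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }

    @AfterClass
    public static void logout() {
        // runs once, after the whole parallel suite has finished
        Runner.runFeature("classpath:Logout.feature",
                Collections.singletonMap("token", authToken), true);
    }
}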

Related

When running a custom TestEngine can execution of JUnitTestEngine be suppressed?

I have created a custom TestEngine using the JUnit 5 (junit-platform-engine) framework.
The custom TestEngine registers using the ServiceLoader mechanism with an entry in META-INF/services/org.junit.platform.engine.TestEngine.
When I run my tests, this works well, but the tests get run a second time by the built-in JUnitTestEngine.
Is it possible to replace the default TestEngine in this circumstance instead of supplementing it?
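For context, the skeleton of such a custom engine is roughly the following (a sketch only; the id "my-custom-engine" is a placeholder, referenced again below):

// Minimal custom TestEngine sketch; discover() and execute() are the two
// methods the JUnit Platform calls after ServiceLoader registration.
import org.junit.platform.engine.*;
import org.junit.platform.engine.support.descriptor.EngineDescriptor;

public class MyCustomEngine implements TestEngine {

    @Override
    public String getId() {
        return "my-custom-engine"; // the id used by include/exclude engine filters
    }

    @Override
    public TestDescriptor discover(EngineDiscoveryRequest request, UniqueId uniqueId) {
        return new EngineDescriptor(uniqueId, "My Custom Engine");
    }

    @Override
    public void execute(ExecutionRequest request) {
        TestDescriptor root = request.getRootTestDescriptor();
        EngineExecutionListener listener = request.getEngineExecutionListener();
        listener.executionStarted(root);
        // ... run the discovered tests here ...
        listener.executionFinished(root, TestExecutionResult.successful());
    }
}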
After checking the JUnit 5 user guide and the documentation for the Maven Surefire plugin, it seems there is currently no way to filter out certain test engines with Maven :-(.
Using the console launcher, however, does allow you to choose test engines: https://junit.org/junit5/docs/current/user-guide/#running-tests-console-launcher-options. And so does Gradle: https://junit.org/junit5/docs/current/user-guide/#running-tests-build-gradle.
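If you control how the tests are launched, another option is to drive the JUnit Platform Launcher API directly, which can restrict execution to a single engine. A minimal sketch (the engine id and package name are placeholders):

// Launch only the custom engine via the JUnit Platform Launcher API,
// excluding the built-in Jupiter engine by simply not including it.
import org.junit.platform.launcher.EngineFilter;
import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import static org.junit.platform.engine.discovery.DiscoverySelectors.selectPackage;
import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;

public class CustomEngineLauncher {
    public static void main(String[] args) {
        LauncherDiscoveryRequest discoveryRequest = request()
                .selectors(selectPackage("com.example.tests")) // placeholder package
                .filters(EngineFilter.includeEngines("my-custom-engine"))
                .build();
        Launcher launcher = LauncherFactory.create();
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.execute(discoveryRequest, listener);
        listener.getSummary().printTo(new java.io.PrintWriter(System.out));
    }
}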

How to limit the report generation only for specific feature file in karate framework?

I have 3 feature files and I am trying to execute a specific feature in Karate using
@CucumberOptions(features = "classpath:Karate/Karate/APIM_LAYER.feature") on the test runner class.
But on execution, we find reports for all 3 feature files in the "target/surefire-reports" path.
Please let us know if there is a way to resolve this issue.
You should upgrade to v0.6.2; when you run with @RunWith(Karate.class) and Cucumber options, it will run those files sequentially and generate pretty HTML reports for each feature file.
As for the location of the reports, it's usually mentioned in the console/terminal.
So make a TestFolderRunner.java file, add your Cucumber options, and then from the terminal run mvn test -Dtest=TestFolderRunner (a sketch follows below).
All the best.
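For reference, such a runner might look like this (a sketch against the pre-1.0 Karate API, where the runner and CucumberOptions come from cucumber-jvm; the class name TestFolderRunner is just an example):

// Runner that limits execution (and therefore reports) to one feature file.
import com.intuit.karate.junit4.Karate;
import cucumber.api.CucumberOptions;
import org.junit.runner.RunWith;

@RunWith(Karate.class)
@CucumberOptions(features = "classpath:Karate/Karate/APIM_LAYER.feature")
public class TestFolderRunner {
    // no body needed; the annotations drive the run:
    //   mvn test -Dtest=TestFolderRunner
}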
I'm sure you still have the @RunWith(Karate.class) annotation, even though the documentation clearly states that you should NOT use it with the parallel runner. Please confirm.

Connect Jmeter to Redis with Beanshell

I want to connect JMeter to a Redis DB, and I want to do it via Java programming.
I added the jedis-2.2.1.jar file to the lib folder
and created a test plan with only a Beanshell PreProcessor.
I cannot understand what I am seeing: nothing happens and the results tree is blank.
Can someone please advise how to connect to Redis via JMeter (please, without the Redis plugin)?
I have provided a picture of the program; it is a simple program, I just want to connect.
** I am new to Java scripting in JMeter and the only jar I added is jedis.jar; the program is a script from the net. I have not created a thread group in the test plan.
With void main it did not work either.
You need to add a Sampler to your Test Plan. PreProcessors are executed before samplers; a lone PreProcessor won't do any work, as it will simply never be executed. So either add a Sampler to your test plan or convert your PreProcessor into a Sampler.
Since JMeter 3.1 it is recommended to use JSR223 elements and the Groovy language for any form of scripting. The reasons:
- Groovy performance is much better, as it is capable of compiling scripts and caching them
- Groovy fully supports Java syntax; valid Java code will most likely be valid Groovy code, while with Beanshell you are stuck at the Java 5 language level
- Groovy provides many enhancements on top of the Java SDK
See the Apache Groovy - Why and How You Should Use It article for more information, benchmarks, examples of real-life Groovy usage, etc.
The solution is to use a Beanshell Sampler, not a PreProcessor, in order to see a response.
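For illustration, a minimal sampler script along those lines, assuming jedis-2.2.1.jar is in JMeter's lib folder and Redis runs on localhost:6379 (host, port and logged text are placeholders); the Java-style syntax below works in a Beanshell Sampler and, per the advice above, in a JSR223 Sampler as well:

// Connect, ping, and surface the result in the View Results Tree listener.
import redis.clients.jedis.Jedis;

Jedis jedis = new Jedis("localhost", 6379); // placeholder host/port
String pong = jedis.ping();                 // returns "PONG" on success
log.info("Redis ping: " + pong);            // 'log' is provided by JMeter
SampleResult.setResponseData("ping=" + pong, "UTF-8"); // visible in the listener
jedis.disconnect();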
Here is a JMeter file and a Beanshell Sampler script to fetch a set of keys from Redis and put them into variables used by a looping HTTP GET request:
https://bitbucket.org/barryknapp/shared/src/d62f8ebb57ede1d15a3bd7683adfdd02cd039369/jmeter/?at=master

serenity jbehave multiple browsers

I am trying to set up a test project that uses Serenity and JBehave.
I notice that all the examples use a serenity.properties file that defines a browser.
I would like to structure my tests so that the same test can be executed in IE/Firefox/Chrome etc.
How do I do this?
You can pass properties on the command line, so you can rerun the same tests with different browsers by passing different settings for webdriver.driver, e.g.
$ mvn verify -Dwebdriver.driver=firefox
$ mvn verify -Dwebdriver.driver=chrome
etc.
I think you can get this to work by creating multiple JUnit test classes, each with its own driver, and executing them all in a single run.
Every test class can be assigned a specific 'managed' driver (e.g. PhantomJS, Chrome, Firefox). This is documented here: http://www.thucydides.info/docs/serenity/#_serenity_webdriver_support_in_junit
I don't know what impact this would have on the generated report; hopefully you are still able to identify the feature/driver combination.
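For example, a per-browser test class could look roughly like this (a sketch using Serenity's JUnit support; the class name, test and URL are placeholders, and a sibling class would declare driver = "firefox" and so on):

// One test class pinned to Chrome via Serenity's @Managed annotation.
import net.serenitybdd.junit.runners.SerenityRunner;
import net.thucydides.core.annotations.Managed;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.openqa.selenium.WebDriver;

@RunWith(SerenityRunner.class)
public class SearchInChromeTest {

    @Managed(driver = "chrome") // a sibling class could use "firefox", "iexplorer", ...
    WebDriver driver;

    @Test
    public void canOpenHomePage() {
        driver.get("https://example.com"); // placeholder URL
    }
}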

Integration Tests on Geb / Spock + Selenium Grid do not run in parallel

I have the following set-up: an integration tests project which has a suite of tests written in Groovy/Geb + Spock, which are running perfectly both using Selenium WebDriver and Selenium Grid (RemoteWebDriver).
The problem is that no matter how much I try to tweak the "system", I can't get the tests to run in parallel (i.e. although I have 3 slaves [nodes] registered to the hub, only one of the slaves actually receives the requests). I've enforced maxSession=1 on the Selenium nodes and tried different combinations of parallel=classes|methods, threadCount and fork settings in the failsafe plugin configuration (pom.xml file).
I have the feeling that the problem lies somewhere between the maven configuration and selenium grid, probably in relation to Geb/Spock config.
Does any of you have any insight on this issue?
PS: someone suggested that running tests in parallel using Geb / Spock is not possible, because for some reason Geb locks the JUnitRunner (not sure what this means).
Add the following configuration to your build.gradle file:
tasks.withType(Test) {
    maxParallelForks = 3 // three forks will run in parallel
    forkEvery = 1
    include '**/*TestName*.class' // name of your test class
}
There are test frameworks, TestNG for example, that support parallel testing at the method level out of the box.
Spock, as an example to the contrary, does not support it.
But your test framework does not have to implement multithreading for this to work:
you can use your build tool to run test classes in parallel, and both Maven and Gradle support this.
If you are using Maven, this documentation page and its examples might help:
https://maven.apache.org/surefire/maven-surefire-plugin/examples/fork-options-and-parallel-execution.html
Specifically, have a look at "Forked Test Execution".
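As a quick experiment, forked execution can also be switched on from the command line without touching the pom (assuming Surefire/Failsafe 2.14 or later, which introduced the forkCount property):
$ mvn verify -DforkCount=3 -DreuseForks=false
Each fork runs in its own JVM, so each can open its own RemoteWebDriver session against the hub, which is what lets the grid distribute work across the nodes.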
You can run it for sure; the point is that you have to put your tests in threads. Here is the link.