Selenium Cucumber tests taking longer while executing in parallel?

So far I've been able to run cucumber-jvm tests in parallel by using multiple runner classes, but as the project grows and new features keep getting added, it's becoming really difficult to optimise execution time.
So my question is: what is the best approach to optimising execution time?
Is it adding a new runner class for each feature, or limiting the run to a certain number of threads and updating the runner classes with new tags?
So far I'm using a thread count of 8, and I have 8 runner classes.
This approach was fine until now, but one of the features recently had a lot of scenarios added and it's taking much longer to finish :( so how can I optimise execution time here?
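For reference, each runner class in a setup like this typically looks something like the following (class name, tag and cucumber-jvm package names are illustrative and depend on the Cucumber version in use):
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// One runner per feature/tag slice; the build tool (Surefire/Failsafe, Gradle, etc.)
// runs these runner classes on parallel threads.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "steps",
        tags = {"@checkout"} // illustrative tag; each runner picks up a different slice
)
public class CheckoutRunner {
}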
Any help much appreciated!!

This worked for me:
Courgette-JVM
It has added capabilities to run cucumber tests in parallel on a feature level or on a scenario level.
It also provides an option to automatically re-run failed scenarios.
Usage
@RunWith(Courgette.class)
@CourgetteOptions(
    threads = 10,
    runLevel = CourgetteRunLevel.SCENARIO,
    rerunFailedScenarios = true,
    showTestOutput = true,
    cucumberOptions = @CucumberOptions(
        features = "src/test/resources/features",
        glue = "steps",
        tags = {"@regression"},
        plugin = {
            "pretty",
            "json:target/courgette-report/courgette.json",
            "html:target/courgette-report/courgette.html"}
    ))
public class RegressionTestSuite {
}

Related

Karate - Results from two series of tests aren't merged anymore after upgrading version from 0.9.3 to 1.2.0

We are facing an issue with test results after upgrading our Karate project from v0.9.3 to v1.2.0. We are testing an API after the execution of two batches: a first series of tests (Runner 1) is executed against our API after the first batch run, then a second series of tests (Runner 2, on new feature files) is executed after the second batch.
On the version we were using, the test results were merged, but on the upgraded version we cannot get all the results into the same report: the results of the first run are deleted, so we're left with only the results of the second run.
Previously working code :
Results results1 = Runner.parallel(
    Arrays.asList("@tag1,@tag2", "@ignore"),
    Collections.singletonList("classpath:features"),
    5,
    "target/sources-rapports");
int totalFailCount = results1.getFailCount();

Results results2 = Runner.parallel(
    Arrays.asList("@tag3,@tag4", "@ignore"),
    Collections.singletonList("classpath:features"),
    5,
    "target/sources-rapports");
totalFailCount += results2.getFailCount();
generateReport(results2.getReportDir());
The report would contain all the test features of results1 and results2, whereas now each execution seems to remove the previous Karate JSON files before generating the new ones.
New, non-working code with the following syntax:
Runner.path("classpath:features")
    .tags(Arrays.asList("@tag1,@tag2", "@ignore"))
    .outputCucumberJson(true)
    .parallel(5);
I'm looking for help to solve this problem. Do not hesitate to ask for more information if needed.
Try this change:
Runner.path("classpath:features")
    .tags(Arrays.asList("@tag1,@tag2", "@ignore"))
    .outputCucumberJson(true)
    .backupReportDir(false)
    .parallel(5);
For further info: https://stackoverflow.com/a/66685944/143475
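With that flag, the two sequential runs should both leave their JSON in the same report directory, so the original two-runner flow would look roughly like this (a sketch based on the builder API above; reportDir is assumed to accept the original output path, and generateReport is the asker's own helper):
Results results1 = Runner.path("classpath:features")
    .tags(Arrays.asList("@tag1,@tag2", "@ignore"))
    .outputCucumberJson(true)
    .backupReportDir(false)
    .reportDir("target/sources-rapports")
    .parallel(5);
int totalFailCount = results1.getFailCount();

Results results2 = Runner.path("classpath:features")
    .tags(Arrays.asList("@tag3,@tag4", "@ignore"))
    .outputCucumberJson(true)
    .backupReportDir(false)
    .reportDir("target/sources-rapports")
    .parallel(5);
totalFailCount += results2.getFailCount();

generateReport(results2.getReportDir()); // asker's own report-generation helper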

JMeter - Avoid abrupt thread shutdown

I have a test plan with several Transaction Controllers (which I call UserJourneys), each composed of several samplers (JourneySteps).
The problem I'm facing is that once the test duration is over, JMeter kills all the threads and does not take into consideration whether they are in the middle of a UserJourney (Transaction Controller) or not.
In some of these UJs I do important work that needs to be completed before the user logs in again, otherwise the next iterations (new test run) will fail.
The question is: is there a way to tell JMeter to wait for every thread to reach the end of its flow/UJ/Transaction Controller before killing it?
Thanks in advance!
This is not possible out of the box as of version 5.1.1; you can request an enhancement at:
https://jmeter.apache.org/issues.html
A workaround is to add, as the first child of the Thread Group, a Flow Control Action containing a JSR223 PreProcessor.
The JSR223 PreProcessor contains this Groovy code:
import org.apache.jorphan.util.JMeterStopTestException;

long startDate = vars.get("TESTSTART.MS").toLong(); // test start time exposed by JMeter
long now = System.currentTimeMillis();
String testDuration = Parameters; // value of the PreProcessor's Parameters field
if ((now - startDate) >= testDuration.toLong()) {
    log.info("Test duration " + testDuration + " reached");
    throw new JMeterStopTestException("Test duration " + testDuration + " reached");
} else {
    log.info("Test duration " + testDuration + " not reached yet");
}
The PreProcessor's Parameters field should pass in the duration, for example ${__P(testDuration)}.
Finally, you can set the testDuration property in milliseconds on the command line using:
-JtestDuration=3600000
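For example, a non-GUI run could then be started with something like the following (test plan and results file names are illustrative):
jmeter -n -t test-plan.jmx -JtestDuration=3600000 -l results.jtl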

Geb, Spock, Gradle and maxParallelForks

I am having some trouble understanding an issue we are having with our Geb/Spock tests. We are using gradle and we are trying to run our tests in parallel. As I understand it, the maxParallelForks property in gradle will run test classes in separate JVMs.
The issue I am running into: I have 6 test classes and I set maxParallelForks to 4. When the test starts, 4 test classes run in parallel. Awesome! But the final 2 classes are where the problem is. Let's say that out of the first 4 classes running, 2 finish in 1 minute and 2 finish in 5 minutes. Instead of the first 2 finishing and the next 2 classes starting, Gradle seems to wait until the 2 long-running classes finish before spinning up the other forks. This is far from ideal.
Am I misunderstanding something or am I missing a property somewhere? This is what I have in my build.gradle:
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    forkEvery = 1
}
Classes are assigned to forks for execution up front, not on a polling basis. So the first two forks will each get two classes assigned up front and the other two will get one each, regardless of how long each of these classes takes to finish. In the worst case, the two longest-running classes will be assigned to a single fork. This is how it works: classes are split into groups, and then separate test JVMs (forks) are spun up, each with its own list of classes to execute.
On a side note, you don't want forkEvery = 1 - this restarts the test JVM after every test class, slowing your test execution down for no benefit.
Using JUnit suites, you can decide which set of classes is picked up by a particular fork.
import org.junit.runner.RunWith
import org.junit.runners.Suite

@RunWith(Suite.class)
@Suite.SuiteClasses([
    TimeTaking.class,          // class that takes a lot of time
    NotSoMuchTimeTaking.class, // class that is quick
    // Add more test classes which need to be executed in the same fork.
])
public class FirstTestSuite { // keep this empty
}
Similarly, create a SecondTestSuite, and so on.
In addition to the above, include the *TestSuite classes in your build.gradle:
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    forkEvery = 1
    include '**/*TestSuite*.class'
}
This way, you will be able to control your execution and decide which test classes need to be executed in what order.

Elasticsearch throws 'ElasticsearchIllegalStateException' part way through tests

I've got a large Groovy application with a lot of JUnit integration tests (256), most of which use 'com.github.tlrx.elasticsearch-test', version: '1.2.1' to run elasticsearch locally.
Partway through running all of the test classes, all the tests that use Elasticsearch start throwing an 'ElasticsearchIllegalStateException' with the message 'Failed to obtain node lock, is the following location writable?: [./target/elasticsearch-test/data/cluster-test-kiml42s-MacBook-Pro.local]'.
If I run any of these classes alone, it works fine.
This is my initialising code, run in all @Befores:
esSetup = new EsSetup();
CreateIndex createIndex = createIndex(index)
for (int i = 0; i < types.size(); i++) {
    createIndex.withMapping(types[i], fromClassPath(mappings[i]))
}
esSetup.execute(deleteAll(), createIndex)
client = esSetup.client()
And this is my teardown code, run in the @Afters:
client.admin().indices().prepareDelete(index).get()
This problem doesn't seem to happen on our build server, so it's only annoying and inconvenient, not a serious problem, but any help would be much appreciated. Thanks.
This problem seems to have been caused by leaving the test nodes active while JUnit ran through all the tests - eventually it stopped being able to create new nodes. The solution is to call esSetup.terminate() in the @After to destroy the node at the end of each test.
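A minimal sketch of that teardown, reusing the fields from the @Before snippet above:
@After
public void tearDown() {
    client.admin().indices().prepareDelete(index).get();
    // Shut down the embedded test node so later tests can acquire the node lock again
    esSetup.terminate();
}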
Here's an example of it being used correctly: https://gist.github.com/tlrx/4117854

OptaPlanner - can the benchmark aggregator run simultaneously with the benchmark?

I've run the benchmarking successfully without the aggregator, and I've run the aggregator alone.
Can I run the benchmarking and get the aggregator GUI in the same run?
Yes, it's possible, just write a main() that does both in sequence:
PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource(
        "org/optaplanner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
plannerBenchmark.benchmark();

PlannerBenchmarkFactory plannerBenchmarkFactory2 = PlannerBenchmarkFactory.createFromXmlResource(
        "org/optaplanner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
BenchmarkAggregatorFrame.createAndDisplay(plannerBenchmarkFactory2);