Geb, Spock, Gradle and maxParallelForks - selenium

I am having some trouble understanding an issue we are having with our Geb/Spock tests. We are using Gradle and we are trying to run our tests in parallel. As I understand it, the maxParallelForks property in Gradle will run test classes in separate JVMs.
The issue I am running into is this: when I have 6 test classes and I set maxParallelForks to 4, the run starts with 4 test classes executing in parallel. Awesome! But the final 2 classes are where the problem is. Let's say that of the first 4 classes running, 2 finish in 1 minute and 2 finish in 5 minutes. Instead of the first 2 finishing and the next 2 classes starting, it seems to wait until the last 2 long-running classes finish before spinning up the other forks. This is way less than ideal.
Am I misunderstanding something or am I missing a property somewhere? This is what I have in my build.gradle:
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    forkEvery = 1
}

Classes are assigned to forks upfront, not on a polling basis. So in your case the first two forks each get two classes assigned upfront and the other two get one each, regardless of how long each of these classes takes to finish. In the worst case, the two longest-running classes will be assigned to a single fork. This is how it works: classes are split into groups, and then separate test JVMs (forks) are spun up, each with its own list of classes to execute.
On a side note - you don't want forkEvery = 1 - this restarts your test JVM after every test class, slowing your test execution down for no benefit.
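For reference, here is a minimal sketch of the question's config with forkEvery dropped (forkEvery defaults to 0, which means forks are never restarted):
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    // forkEvery is left at its default of 0, so each fork runs its whole
    // batch of classes in a single JVM instead of restarting per class
}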

Using JUnit suites, you can decide which set of classes gets picked up by a particular fork.
import org.junit.runner.RunWith
import org.junit.runners.Suite

@RunWith(Suite.class)
@Suite.SuiteClasses([
    TimeTaking.class,          // class that takes a lot of time
    NotSoMuchTimeTaking.class, // class that is quick
    // add more test classes that need to be executed in the same fork
])
public class FirstTestSuite { // keep this empty
}
Similarly, create a SecondTestSuite, a ThirdTestSuite, and so on.
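For example, a second suite might pair another slow class with a quick one (the class names here are purely illustrative):
import org.junit.runner.RunWith
import org.junit.runners.Suite

@RunWith(Suite.class)
@Suite.SuiteClasses([
    AnotherTimeTaking.class, // hypothetical long-running class
    AnotherQuickOne.class    // hypothetical quick class
])
public class SecondTestSuite { // keep this empty
}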
In addition to the above steps, include the *TestSuite classes in your build.gradle:
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    forkEvery = 1
    include '**/*TestSuite*.class'
}
This way, you can control your execution and decide which test classes are executed together and in what order.

Related

Karate - Multi threaded access requested - issue

I have 100+ tests across 25+ feature files, and my karate-config.js makes 3 karate.callSingle calls, as below.
config.weatherParams = karate.callSingle(
    "file:src/test/java/utils/AvailableForecasts.feature",
    config
);
config.routingParams = karate.callSingle(
    "file:src/test/java/utils/CalculationInput.feature",
    config
);
config.vesselParams = karate.callSingle(
    "file:src/test/java/utils/VesselStatus.feature",
    config
);
The same issue occurs when I use classpath: inside callSingle.
When I run all the tests at once with parallel execution enabled (I tried anywhere from 1 to 100 threads), I get the following error:
org.graalvm.polyglot.PolyglotException: Multi threaded access requested by thread Thread[pool-2-thread-8,5,main] but is not allowed for language(s) js.
- com.oracle.truffle.polyglot.PolyglotEngineException.illegalState(PolyglotEngineException.java:132)
- com.oracle.truffle.polyglot.PolyglotContextImpl.throwDeniedThreadAccess(PolyglotContextImpl.java:727)
- com.oracle.truffle.polyglot.PolyglotContextImpl.checkAllThreadAccesses(PolyglotContextImpl.java:627)
- com.oracle.truffle.polyglot.PolyglotContextImpl.enterThreadChanged(PolyglotContextImpl.java:526)
- com.oracle.truffle.polyglot.PolyglotEngineImpl.enter(PolyglotEngineImpl.java:1857)
- com.oracle.truffle.polyglot.HostToGuestRootNode.execute(HostToGuestRootNode.java:104)
- com.oracle.truffle.polyglot.PolyglotMap.entrySet(PolyglotMap.java:119)
After playing around with multiple combinations, surprisingly, when I have only 2 callSingle calls in karate-config.js (commenting out VesselStatus.feature), it works fine.
All 3 callSingle calls hit 3 different services and set variables the other tests need to run, so all 3 are critical.
Is there a way we can restructure this, or a different approach we can take, to avoid the above issue?
This is a known issue that should be fixed in 1.1.0.RC2
Details here: https://github.com/intuit/karate/issues/1558
It would be good if you can confirm.
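If you want to try that release candidate, bumping the dependency in build.gradle might look like this (a sketch assuming the standard com.intuit.karate coordinates and that the RC is available in your repositories):
dependencies {
    // version taken from the comment above; verify it exists in your repo
    testImplementation 'com.intuit.karate:karate-junit5:1.1.0.RC2'
}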
I faced this issue in my Karate implementation, @peter-thomas. I found an easy workaround, since we know the GraalVM JS engine doesn't support multi-threaded access to karate-config.js.
The workaround is to wait a certain number of milliseconds, generated randomly per thread.
Have a look at the code below, inside karate-config.js:
function fn() {
    // karate-config essential coding
    var random_millis = Math.floor(Math.random() * (5000 - 1000 + 1)) + 1000;
    java.lang.Thread.sleep(random_millis);
    return something; // return your config object as usual
}
With the above piece of code, I ran my 100+ feature files with 20 parallel threads on Karate 1.2.0.RC1 and it worked fantastically fine.
How it works: all 20 threads start together and reach karate-config.js at the same time, but with a random delay of 1 to 5 seconds applied, each thread waits a different amount of time, avoiding the multi-threaded access issue.
Between 1000 and 5000 milliseconds there is still a small chance that two threads pick the same number, but until we get a concrete solution for this issue, I guess we can use this workaround.
Thanks,
Saurabh

Sleep in Spock's Mock waits when run by CompletableFuture

When I run the runAsyncWithMock test separately, it waits for 3 seconds until the mock's execution is finalised, rather than terminating immediately like the other 2 tests.
I was not able to figure out why.
It is interesting that:
When multiple Runnables are executed by CompletableFuture.runAsync in a row in the runAsyncWithMock test, only the first one waits; the others do not.
When having multiple duplicated runAsyncWithMock tests, each and every of them runs for 3s when the whole specification is executed.
When using a plain class instance rather than a Mock, the test finishes immediately.
Any idea what I got wrong?
My configuration:
macOS Mojave 10.14.6
Spock 1.3-groovy-2.4
Groovy 2.4.15
JDK 1.8.0_201
The repo containing the whole Gradle project for reproduction:
https://github.com/lobodpav/CompletableFutureMisbehavingTestInSpock
The problematic test's code:
@Stepwise
class SpockCompletableFutureTest extends Specification {
    def runnable = Stub(Runnable) {
        run() >> {
            println "${Date.newInstance()} BEGIN1 in thread ${Thread.currentThread()}"
            sleep(3000)
            println "${Date.newInstance()} END1 in thread ${Thread.currentThread()}"
        }
    }

    def "runAsyncWithMock"() {
        when:
        CompletableFuture.runAsync(runnable)

        then:
        true
    }

    def "runAsyncWithMockAndClosure"() {
        when:
        CompletableFuture.runAsync({ runnable.run() })

        then:
        true
    }

    def "runAsyncWithClass"() {
        when:
        CompletableFuture.runAsync(new Runnable() {
            void run() {
                println "${Date.newInstance()} BEGIN2 in thread ${Thread.currentThread()}"
                sleep(3000)
                println "${Date.newInstance()} END2 in thread ${Thread.currentThread()}"
            }
        })

        then:
        true
    }
}
This is caused by the synchronized methods in https://github.com/spockframework/spock/blob/master/spock-core/src/main/java/org/spockframework/mock/runtime/MockController.java. When a mock is executed, it delegates through the handle method. The Specification also uses these synchronized methods, in this case probably leaveScope, and is thus blocked by the sleeping Stub method.
Since this is a thread-interleaving problem, I guess the additional closure in runAsyncWithMockAndClosure moves the execution of the stub method behind leaveScope and thus changes the ordering/blocking.
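To illustrate the blocking, here is a simplified Groovy sketch (not Spock's actual classes): two synchronized methods on the same object share one monitor, so while the stubbed run() sleeps inside one of them, the other cannot be entered.
// Simplified stand-ins for MockController.handle() and leaveScope()
class Controller {
    synchronized void handle() {     // the mock invocation path
        sleep(3000)                  // the stubbed Runnable sleeping
    }
    synchronized void leaveScope() { // the specification's cleanup path
        println "leaveScope entered"
    }
}

def c = new Controller()
Thread.start { c.handle() } // the CompletableFuture worker thread
sleep(100)                  // let the worker grab the monitor first
c.leaveScope()              // the test thread blocks here for ~3 seconds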
Oh, just now after writing my last comment I saw a difference:
You use @Stepwise (I didn't when I tried at first), an annotation I almost never use because it creates dependencies between feature methods (bad, bad testing practice). While I cannot say why this has the effect you describe only when running the first method, I can tell you that removing the annotation fixes it.
P.S.: With @Stepwise you cannot even execute the second or third method separately, because the runner will always run the preceding one(s) first - well, the specification is said to be executed step-wise. ;-)
Update: I could briefly reproduce the problem with @Stepwise, but after recompilation it does not happen anymore, with or without that annotation.

JMeter - Avoid threads abrupt shutdown

I have a test plan with several Transaction Controllers (which I call UserJourneys), each composed of some samplers (JourneySteps).
The problem I'm facing is that once the test duration is over, JMeter kills all the threads without taking into consideration whether they are in the middle of a UserJourney (Transaction Controller) or not.
In some of these UJs I do some important stuff that needs to be done before the user logs in again, otherwise the next iterations (new test run) will fail.
The question is: is there a way to tell JMeter to wait for every thread to reach the end of its flow/UJ/Transaction Controller before killing it?
Thanks in advance!
This is not possible natively as of version 5.1.1; you can request an enhancement at:
https://jmeter.apache.org/issues.html
A workaround is to add, as the first child of the Thread Group, a Flow Control Action containing a JSR223 PreProcessor.
The JSR223 PreProcessor will contain this Groovy code:
import org.apache.jorphan.util.JMeterStopTestException;

long startDate = vars.get("TESTSTART.MS").toLong();
long now = System.currentTimeMillis();
String testDuration = Parameters;
if ((now - startDate) >= testDuration.toLong()) {
    log.info("Test duration " + testDuration + " reached");
    throw new JMeterStopTestException("Test duration " + testDuration + " reached");
} else {
    log.info("Test duration " + testDuration + " not reached yet");
}
And be configured with the Parameters field set to ${__P(testDuration)}, so the property value is passed to the script as Parameters.
Finally you can set the property testDuration in millis on command line using:
-JtestDuration=3600000
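For example, a full non-GUI run with that property set might look like this (the plan file name is illustrative):
jmeter -n -t testPlan.jmx -JtestDuration=3600000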

selenium cucumber tests taking longer time while executing in parallel?

So far I'm able to run cucumber-jvm tests in parallel by using multiple runner classes, but as my project grows and new features are added each time, it's getting really difficult to optimise execution time.
So my question is: what is the best approach to optimise execution time?
Is it adding a new runner class for each feature, or limiting to a certain number of threads and updating the runner classes with new tags to execute?
So far I'm using a thread count of 8, and I have 8 runner classes.
This approach was fine until now, but one of the features recently had more scenarios added and it's taking a long time to finish :( so how is it possible to optimise execution time here...
Any help much appreciated!!
This worked for me:
Courgette-JVM
It has added capabilities to run cucumber tests in parallel on a feature level or on a scenario level.
It also provides an option to automatically re-run failed scenarios.
Usage
@RunWith(Courgette.class)
@CourgetteOptions(
    threads = 10,
    runLevel = CourgetteRunLevel.SCENARIO,
    rerunFailedScenarios = true,
    showTestOutput = true,
    cucumberOptions = @CucumberOptions(
        features = "src/test/resources/features",
        glue = "steps",
        tags = {"@regression"},
        plugin = {
            "pretty",
            "json:target/courgette-report/courgette.json",
            "html:target/courgette-report/courgette.html"}
    ))
public class RegressionTestSuite {
}

Time out for test cases in googletest

Is there a way in gtest to have a timeout for individual test cases, or even whole tests?
For example I would like to do something like:
EXPECT_TIMEOUT(5 seconds, myFunction());
I found this googletest issue, marked 'Type: Enhancement', from Dec 09, 2010:
https://code.google.com/p/googletest/issues/detail?id=348
From that post, it looks like there is no built-in gtest way.
I am probably not the first to trying to figure out a way for this.
The only way I can think of is to have a child thread run the function; if it does not return by the time limit, the parent thread kills it and reports a timeout error.
Is there any way where you don't have to use threads?
Or any other ways?
I just came across this situation.
I wanted to add a failing test for my reactor, which never finishes (it has to fail first), but I don't want the test to run forever.
I followed your link, but still no joy there. So I decided to use some of the C++14 features, which make it relatively simple.
I implemented the timeout like this:
TEST(Init, run)
{
    // Step 1: Set up my code to run.
    ThorsAnvil::Async::Reactor reactor;
    std::unique_ptr<ThorsAnvil::Async::Handler> handler(new TestHandler("test/data/input"));
    ThorsAnvil::Async::HandlerId id = reactor.registerHandler(std::move(handler));

    // Step 2: Run the code asynchronously.
    auto asyncFuture = std::async(std::launch::async, [&reactor]() {
        reactor.run(); // The TestHandler should call reactor.shutDown() when it
                       // is finished. If it does not, then the test failed.
    });

    // Step 3: Do your timeout test.
    EXPECT_TRUE(asyncFuture.wait_for(std::chrono::milliseconds(5000)) != std::future_status::timeout);

    // Step 4: Clean up your resources.
    reactor.shutDown(); // This will allow run() to exit, and the thread to die.
}
Now that I have my failing test I can write the code that fixes the test.