I am using Sauce Labs to run a Selenium TestNG Java test in which a single @Test method accepts 250 distinct values as input from a TestNG @DataProvider. Expected: 250 browser sessions spawn in parallel on Sauce Labs and the @Test method executes 250 times in parallel.
Actual: I can see only a maximum of 10-12 sessions at a time; the remaining sessions follow as the running batch completes.
Please find my code below.
pom.xml snippet:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.12.4</version>
    <configuration>
        <parallel>methods</parallel>
        <threadCount>250</threadCount>
        <data-provider-thread-count>250</data-provider-thread-count>
        <redirectTestOutputToFile>false</redirectTestOutputToFile>
    </configuration>
</plugin>
DataProvider code:
@DataProvider(name = "SearchData", parallel = true)
public Object[][] getSearchData() {
    // Returning a 2D array of test data
    Object[][] arrayObject = readFromExcel("C:/Test_Workspace/TestData/ICJ-DataProvider.xls", "Sheet1");
    return arrayObject;
}
@Test(dataProvider = "SearchData")
public void testE2E(String hocn, String username, String password, Method method)
        throws MalformedURLException, InvalidElementStateException, UnexpectedException {
    this.createDriver("chrome", "54.0", "Windows 10", method.getName());
    WebDriver driver = this.getWebDriver();
    Service.visitPage(driver, hocn, username, password);
}
As you can see, I am passing threadCount=250 and data-provider-thread-count=250 from the pom.xml. Still, it works through the 250 data-provider rows in batches of 10.
Image showing only 10 instances at a time instead of 250
Can someone please guide me on getting all 250 sessions up at a time?
The problem has nothing to do with TestNG.
You are being throttled by Sauce Labs.
Quoting the Sauce Labs documentation:
Checking Your Concurrency Limit
Each Sauce Labs account has a set maximum number of concurrent sessions. You can find your concurrency limit on the My Account page (at https://saucelabs.com/beta/users/username). If this number does not match your subscription or invoiced contract, please contact Support.

Subaccounts may have had their concurrency limit lowered by their parent account. To access higher concurrency levels, you will need to ask the person responsible for the parent account to increase your limit.
For more information, please refer to the following documentation on the Sauce Labs portal:
Why am I not getting the parallelism/concurrency I expected?
Understanding Concurrency Limits and Team Accounts
My scenario: in my parallel execution of 2 scenarios, the application is launched in two Chrome browsers and two transactions are initiated. I need both transaction IDs, for which I used transaction.getAttribute("value"), because the transaction IDs are visible when I inspect the element.
I wrote the code below, but only one transaction ID gets fetched. I am using ThreadLocal for the parallel execution, and I have checked that two different threads are assigned to the two tests.
List<WebElement> allTransactionIds = DriverManager.getDriver().findElements(By.id("transaction-id"));
int count = allTransactionIds.size(); // here I am getting size = 1
log.info("Total number of transaction IDs: " + count);
for (WebElement tranId : allTransactionIds) {
    String transactionId = tranId.getAttribute("value");
    log.info("Value is --> " + transactionId);
}
Note: I used driver.findElements because I want to fetch the transaction IDs from both tests. Please assist.
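Note that the thread-local driver returned by a call like `DriverManager.getDriver()` is bound to the current thread only, so a single `findElements` call can only see the one transaction rendered in that thread's own browser; the second ID lives in a different session. One common pattern is to have each test thread record the ID it fetched from its own driver into a shared thread-safe collection and read the combined result afterwards. A minimal sketch of that pattern (plain threads stand in for the two parallel tests; `TransactionCollector` is a hypothetical helper, not part of Selenium or any framework):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class TransactionCollector {
    // Shared, thread-safe store: each parallel test thread contributes the
    // transaction ID it fetched from ITS OWN driver instance.
    private static final Queue<String> ALL_IDS = new ConcurrentLinkedQueue<>();

    public static void record(String transactionId) {
        ALL_IDS.add(transactionId);
    }

    public static Queue<String> all() {
        return ALL_IDS;
    }

    public static void main(String[] args) throws InterruptedException {
        // Two plain threads stand in for the two parallel tests; in the real
        // suite each would call getAttribute("value") on its own driver.
        Thread test1 = new Thread(() -> record("TXN-1001"));
        Thread test2 = new Thread(() -> record("TXN-1002"));
        test1.start();
        test2.start();
        test1.join();
        test2.join();
        System.out.println("Total number of transaction IDs: " + all().size());
    }
}
```

After both tests finish (e.g. in an @AfterSuite-style hook), `all()` holds the IDs from every browser, whereas any single driver only ever reports its own.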
I have a test plan with several Transaction Controllers (which I call UserJourneys), each composed of several samplers (JourneySteps).
The problem I'm facing is that once the test duration is over, JMeter kills all the threads without considering whether they are in the middle of a UserJourney (Transaction Controller) or not.
In some of these UJs I do important work that needs to finish before the user logs in again; otherwise the next iterations (new test run) will fail.
The question is: is there a way to tell JMeter to wait for every thread to reach the end of its flow/UJ/Transaction Controller before killing it?
Thanks in advance!
This is not possible as of version 5.1.1; you should request an enhancement at:
https://jmeter.apache.org/issues.html
The workaround is to add, as the first child of the Thread Group, a Flow Control Action containing a JSR223 PreProcessor.
The JSR223 PreProcessor will contain this Groovy code:
import org.apache.jorphan.util.JMeterStopTestException;

long startDate = vars.get("TESTSTART.MS").toLong();
long now = System.currentTimeMillis();
String testDuration = Parameters;
if ((now - startDate) >= testDuration.toLong()) {
    log.info("Test duration " + testDuration + " reached");
    throw new JMeterStopTestException("Test duration " + testDuration + " reached");
} else {
    log.info("Test duration " + testDuration + " not reached yet");
}
And be configured to receive the duration through its Parameters field, e.g. ${__P(testDuration)}:
Finally, you can set the testDuration property, in milliseconds, on the command line using:
-JtestDuration=3600000
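The check the PreProcessor performs boils down to a simple elapsed-versus-budget comparison at the start of each iteration, so a thread that is mid-journey finishes its current pass before stopping. The same logic as a standalone Java illustration (not JMeter API):

```java
public class DurationGuard {
    // Stop the test only between iterations: true once elapsed >= budget.
    static boolean shouldStop(long startMillis, long nowMillis, long budgetMillis) {
        return (nowMillis - startMillis) >= budgetMillis;
    }

    public static void main(String[] args) {
        long start = 0L;
        // 58 minutes into a 60-minute budget: keep going, finish the journey.
        System.out.println(shouldStop(start, 3_480_000L, 3_600_000L)); // false
        // 61 minutes in: the next iteration is not started.
        System.out.println(shouldStop(start, 3_660_000L, 3_600_000L)); // true
    }
}
```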
So far I'm able to run cucumber-jvm tests in parallel by using multiple runner classes, but as my project grows and new features keep being added, it's getting really difficult to optimise execution time.
So my question is: what is the best approach to optimise execution time?
Is it adding a new runner class for each feature, or limiting the thread count and updating the runner classes with new tags to execute?
So far I'm using a thread count of 8, and I also have 8 runner classes.
This approach was fine until now, but one of the features recently had more scenarios added and it's taking much longer to finish :( So how can I optimise execution time here?
Any help much appreciated!!
This worked for me:
Courgette-JVM
It adds the capability to run Cucumber tests in parallel at the feature level or at the scenario level.
It also provides an option to automatically re-run failed scenarios.
Usage
@RunWith(Courgette.class)
@CourgetteOptions(
        threads = 10,
        runLevel = CourgetteRunLevel.SCENARIO,
        rerunFailedScenarios = true,
        showTestOutput = true,
        cucumberOptions = @CucumberOptions(
                features = "src/test/resources/features",
                glue = "steps",
                tags = {"@regression"},
                plugin = {
                        "pretty",
                        "json:target/courgette-report/courgette.json",
                        "html:target/courgette-report/courgette.html"}))
public class RegressionTestSuite {
}
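One way to see why `runLevel = CourgetteRunLevel.SCENARIO` helps when a single feature dominates: with feature-level parallelism the big feature is one indivisible job, so total wall time can never drop below its length, while scenario-level parallelism lets its scenarios spread across all threads. A small greedy-scheduling sketch (illustrative numbers, not tied to any real suite):

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class Makespan {
    // Greedy longest-job-first scheduling: assign each job to the currently
    // least-loaded worker and report the resulting wall-clock time.
    static long makespan(long[] jobs, int workers) {
        long[] sorted = jobs.clone();
        Arrays.sort(sorted); // ascending; we iterate from the end (largest first)
        PriorityQueue<Long> loads = new PriorityQueue<>();
        for (int i = 0; i < workers; i++) {
            loads.add(0L);
        }
        for (int i = sorted.length - 1; i >= 0; i--) {
            loads.add(loads.poll() + sorted[i]); // give job to least-loaded worker
        }
        long max = 0L;
        for (long load : loads) {
            max = Math.max(max, load);
        }
        return max;
    }

    public static void main(String[] args) {
        // Feature-level parallelism: 8 threads, 8 features, one feature huge
        // (40 minutes of scenarios) - the big feature alone sets the wall time.
        long[] features = {40, 5, 5, 5, 5, 5, 5, 5};
        System.out.println(makespan(features, 8)); // 40

        // Scenario-level parallelism: the 40-minute feature split into eight
        // 5-minute scenarios that can spread across the same 8 threads.
        long[] scenarios = {5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5};
        System.out.println(makespan(scenarios, 8)); // 10
    }
}
```

With the same 8 threads, splitting the straggler feature into scenarios cuts the wall time from 40 to 10 in this toy example, which is exactly the effect scenario-level run levels aim for.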
I am trying to use Redis Sentinel to get/set keys in Redis, and I was stress testing my setup with about 2000 concurrent requests.
I used Sentinel to put a single key in Redis and then executed 1000 concurrent get requests against it.
But the underlying Jedis used by my Sentinel pool blocks in getResource() (pool size is 500), and the overall average response time I am achieving is around 500 ms, while my target was about 10 ms.
I am attaching a sample of a jvisualvm snapshot here:
Method                                                                                  Self time %   Self time        Invocations
redis.clients.jedis.JedisSentinelPool.getResource()                                     98.02 %       40,845,233 ms    4779
redis.clients.jedis.BinaryJedis.get()                                                   1.69 %        703,981 ms       141
org.apache.catalina.core.ApplicationFilterChain.doFilter()                              0.13 %        53,424 ms        6875
org.springframework.core.serializer.support.DeserializingConverter.convert()            0.05 %        19,287 ms        4
redis.clients.jedis.JedisSentinelPool.returnResource()                                  0.04 %        18,520 ms        4
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept()   0.04 %        14,808 ms        11430
Can anyone help me debug this issue further?
From the JedisSentinelPool implementation of getResource() in the Jedis sources (2.6.2):
@Override
public Jedis getResource() {
    while (true) {
        Jedis jedis = super.getResource();
        jedis.setDataSource(this);

        // get a reference because it can change concurrently
        final HostAndPort master = currentHostMaster;
        final HostAndPort connection = new HostAndPort(jedis.getClient().getHost(),
                jedis.getClient().getPort());

        if (master.equals(connection)) {
            // connected to the correct master
            return jedis;
        } else {
            returnBrokenResource(jedis);
        }
    }
}
Note the while (true) and the returnBrokenResource(jedis): the pool hands back an arbitrary Jedis resource, the code checks whether it is actually connected to the current master, and it retries if not. It is a dirty check and also a blocking call.
The super.getResource() call refers to the traditional JedisPool implementation, which is based on Apache Commons Pool (2.0). It does a lot of work to get an object from the pool, and I think it even repairs failed connections. With a lot of contention on your pool, as is likely in your stress test, it can take a long time to get a resource, only to find it is not connected to the correct master, so you call it again, adding contention, slowing resource acquisition further, and so on.
You should check all the Jedis instances in your pool to see if there are a lot of 'bad' connections.
Maybe you should give up using a common pool for your stress test (only create Jedis instances manually connected to the correct node, and close them cleanly), or set up multiple pools to mitigate the cost of checking 'dirty' Jedis resources.
Also, with a pool of 500 Jedis instances you can't emulate 1000 concurrent queries; you need at least 1000.
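To get a feel for that last point, a connection pool can be modelled as a semaphore: with 500 permits and 1000 simultaneous requests, only half obtain a connection immediately and the other half sit queueing. A small standalone sketch (plain Java threads, not the Jedis API):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolContention {
    // Model a connection pool as a semaphore with `poolSize` permits and fire
    // `clients` concurrent requests that hold their connection for the whole
    // burst. Returns {servedImmediately, wouldBlock}.
    static int[] simulate(int poolSize, int clients) throws InterruptedException {
        Semaphore pool = new Semaphore(poolSize);
        AtomicInteger served = new AtomicInteger();
        AtomicInteger blocked = new AtomicInteger();
        Thread[] threads = new Thread[clients];
        for (int i = 0; i < clients; i++) {
            threads[i] = new Thread(() -> {
                if (pool.tryAcquire()) {
                    served.incrementAndGet();   // got a connection right away
                } else {
                    blocked.incrementAndGet();  // would sit waiting in getResource()
                }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        return new int[]{served.get(), blocked.get()};
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = simulate(500, 1000);
        // With 500 permits and 1000 clients, exactly 500 acquire a connection
        // and the other 500 contend: half the load is queueing, not working.
        System.out.println("served=" + r[0] + " blocked=" + r[1]);
    }
}
```

Every queued request adds its full wait time to the measured response time, which is consistent with getResource() dominating the jvisualvm snapshot above.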
I posted this to the SpecFlow Google group but there is little or no activity there, so here we go.
I have a SpecFlow/Selenium/MSBuild project and I am running one simple scenario through the command line, something like this:
SpecRun.exe run Default.srprofile "/filter:@%filter%"
The browser instance fires up, the assert is done, and the browser instance closes. This takes about 5-10 seconds.
However, after this I have to wait about 60 seconds until the SpecRun process closes and gives me the result:
Discovered 1 tests
Thread#0:
0% completed
Thread#0: S
100% completed
Done.
Result: all tests passed
Total: 1
Succeeded: 1
Ignored: 0
Pending: 0
Skipped: 0
Failed: 0
Execution Time: 00:01:01.1724989
I am currently assuming this is because it is writing the test execution report to disk, but I cannot figure out how to turn this off: http://www.specflow.org/documentation/Reporting/
And I cannot figure out why this would take 60 seconds, or how to debug it further.
I have removed the AfterScenario and verified that the Selenium driver quit/close is not what is causing the problem.
Can anyone shed some light on this?
Thank you
Jesus. There was something seriously wrong with my BaseStepDefinitions. Some more debugging showed that BeforeScenario was hit 25 times for one single test: 25 browser instances were launched and closed per scenario. Fixed by starting over with a clean file like:
[Binding]
public class BaseStepDefinitions
{
    public static IWebDriver Driver;

    private static void Setup()
    {
        Driver = new ChromeDriver();
    }

    [BeforeFeature]
    public static void BeforeFeature()
    {
        Setup();
    }

    [AfterFeature]
    public static void AfterFeature()
    {
        Driver.Dispose();
    }
}
I will not post my original file because it is embarrassing.
This is a similar problem that helped me: https://groups.google.com/forum/#!topic/specflow/LSt0PGv2DeY