How to speed up Selenium tests on AWS Device Farm?

I'm using Python for testing on AWS Device Farm. It seems that starting a Selenium session takes very long. This is the code I use:
from time import time
from boto3 import client
from selenium import webdriver

def main():
    start = time()
    device_farm_client = client("devicefarm", region_name='us-west-2')
    test_grid_url_response = device_farm_client.create_test_grid_url(
        expiresInSeconds=666,
        projectArn="arn:aws:devicefarm:us-west-2:..."
    )
    driver = webdriver.Remote(
        command_executor=test_grid_url_response['url'],
        desired_capabilities=webdriver.DesiredCapabilities.CHROME,
    )
    driver.get('https://api.ipify.org')
    print(f"Your IP is: {driver.find_element_by_tag_name('pre').text}")
    driver.quit()
    print(f"took: {time() - start:.2f}s")

if __name__ == '__main__':
    main()
Output:
Your IP is: 100.10.10.111
took: 99.89s
Using existing selenium-hub infrastructure, the IP is obtained in less than 2 seconds!
Is there any way to reduce this time radically?

To reduce the overall execution time of the complete test suite, take advantage of the 50 concurrent sessions given to you by default at no extra cost (see the AWS Device Farm documentation). For example:
Let's assume the following details:
one test suite has 200 Selenium test cases
each test case takes around 10 seconds to execute
one AWS Device Farm Selenium session takes around 60 seconds to start
Then I would divide my 200 test cases across 50 concurrent sessions, running a batch of 4 test cases in each session.
Total execution time = 60 seconds to start each session + 10 seconds to start all 50 concurrent sessions (at a rate of 5 sessions per second) + 4 * 10 seconds to execute the test cases in each session = 60 + 10 + 40 = 110 seconds to finish the complete test suite execution.
WHEREAS
If you are using existing selenium-hub infrastructure, and let's say the following details are assumed:
200 Selenium test cases to execute
2 seconds to start a session
assume at max you can run 10 concurrent sessions
Total execution time = 2 seconds to start each session + 20 * 10 seconds to execute the test cases in each session (20 test cases per session) = 2 + 200 = 202 seconds to finish the complete test suite execution.
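To make the batching approach above concrete, here is a minimal Python sketch (not an official AWS example) that splits a suite into batches and runs each batch in its own Device Farm test-grid session via concurrent.futures. The project ARN is a placeholder, and the test-case callables passed to run_batch are hypothetical; adapt both to your own suite and runner.
from concurrent.futures import ThreadPoolExecutor
from boto3 import client
from selenium import webdriver

PROJECT_ARN = "arn:aws:devicefarm:us-west-2:..."  # placeholder: your project ARN
device_farm_client = client("devicefarm", region_name='us-west-2')

def run_batch(batch):
    # Each worker pays the ~60-second session start-up cost once, then runs its whole batch.
    url = device_farm_client.create_test_grid_url(
        expiresInSeconds=600,
        projectArn=PROJECT_ARN,
    )['url']
    driver = webdriver.Remote(
        command_executor=url,
        desired_capabilities=webdriver.DesiredCapabilities.CHROME,
    )
    try:
        for test_case in batch:
            test_case(driver)  # hypothetical callable that drives one test case
    finally:
        driver.quit()

def run_suite(test_cases, sessions=50):
    # e.g. 200 test cases across 50 sessions -> 4 test cases per session
    batches = [b for b in (test_cases[i::sessions] for i in range(sessions)) if b]
    with ThreadPoolExecutor(max_workers=sessions) as pool:
        list(pool.map(run_batch, batches))  # consume results so any exception surfaces
With this layout each session still pays the ~60-second start-up cost, but the sessions pay it in parallel rather than one after another, which is where the 110-second estimate above comes from.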

Related

Catchpoint pause vs. waitForNoRequest - What's the difference?

I have a test that was alerting because it was taking extra time for an asset to load. We changed from waitForNoRequest to a pause (at Catchpoint's suggestion). That did not seem to have the expected effect of waiting for things to load. We increased the pause from 3000 to 12000 and that helped to allow the page to load and stop the alert. We noticed some more alerts, so I tried to increase the pause to something like 45000 and it would not allow me to pause for that long.
So the main question here is: what functionality does each of these commands provide? What do I gain by pausing instead of waiting, if anything?
Here's the test, with data changed to protect company-specific info. Step 3 is where we had some failures and switched between pause and wait.
// Step - 1
open("https://website.com/")
waitForNoRequest("2000")
click("//*[@id=\"userid\"]")
type("//*[@id=\"userid\"]", "${username}")
setStepName("Step1-Login-")
// Step - 2
clickMouseAndWait("//*[@id=\"continue\"]")
waitForVisible("//*[@id=\"challenge-password\"]")
click("//*[@id=\"challenge-password\"]")
type("//*[@id=\"challenge-password\"]", "${password}")
setStepName("Step2-Login-creds")
// Step - 3
clickMouseAndWait("//*[@id=\"signIn\"]")
setStepName("Step3-dashboard")
waitForTitle("Dashboard")
waitForNoRequest("3000")
click("//*[@id=\"account-header-wrapper\"]")
waitForVisible("//*[@id=\"logout-link\"]")
click("//*[@id=\"logout-link\"]")
// Step - 4
clickAndWait("//*[text()=\"Sign Out\"]")
waitForTitle("Login - ")
verifyTextPresent("You have been logged out.")
setStepName("Step5-Logout")
Rachana here, I'm a member of the Technical Service Team at Catchpoint, and I'll be happy to answer your questions.
Please find the differences below between waitForNoRequest and Pause commands:
Pause
Purpose: This command pauses the script execution for a specified amount of time, whether or not HTTP/S requests are still downloading. The time value is provided in milliseconds and can range between 100 and 30,000 ms.
Explanation: This command is used when the agent needs to wait a set amount of time before proceeding to the next step or command, regardless of how the requests are loading. Only one parameter (the wait time) is required for this action.
WaitForNoRequest
Purpose: This command waits until there have been no HTTP/S requests downloading for a specified amount of time. The wait time parameter can range between 1,000 and 5,000 ms.
Explanation: The only parameter for this action is the wait time. The agent will wait for that specified amount of time before moving on to the next step/command, which in turn allows necessary requests more time to load after document complete.
For instance, when you add waitForNoRequest(5000), the agent initially waits 5000 ms after document complete for any network activity. If there is any network activity during that period, the agent waits another 5000 ms for the next network activity to end, and the process repeats until no other request loads within the specified timeframe (5000 ms).
A pause command with 12000 ms gives the page exactly 12 seconds to load. After 12 seconds the script execution will continue to the next command whether the page has loaded or not.
Since waitForNoRequest has a maximum value of 5000 ms, the most you can do is tell the agent to wait for a 5-second gap with no network activity. In this case, the page had no network activity for 3 seconds, so the agent proceeded to the next action; the page was not loaded completely and the script failed.
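To make the timer-reset behaviour concrete, here is a minimal Python sketch of the waiting logic described above. It is purely illustrative (not Catchpoint's implementation), and has_active_requests is a hypothetical probe for in-flight network activity:
import time

def wait_for_no_request(quiet_period_ms, has_active_requests):
    # Wait until no request has been observed for quiet_period_ms in a row.
    # Every time activity is seen, the quiet-period timer restarts.
    quiet_since = time.monotonic()
    while True:
        if has_active_requests():           # hypothetical network-activity probe
            quiet_since = time.monotonic()  # activity seen: restart the timer
        elif (time.monotonic() - quiet_since) * 1000 >= quiet_period_ms:
            return                          # quiet for the whole period: move on
        time.sleep(0.1)

def pause(duration_ms):
    # pause simply sleeps for the full duration, whether the page has loaded or not
    time.sleep(duration_ms / 1000)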
I tried to increase the pause to something like 45000 and it would not allow me to pause for that long.
We allow a maximum pause time of 30 seconds, hence 45 seconds will not work.
Please reach out to our support team and we’ll be glad to connect you with our scripting SMEs and help you with any scripting needs you might have.

concurrent testing of login functionality with 50 users/threads is not working

I have given the thread count = 50
ramp-up period = 0
For 48 threads it passes; for the other 2 threads there is no failure captured in the Selenium log files.
I am expecting concurrent login of 50 users with a 0 ramp-up period, but I am not able to find out the exact reason for the failure. Please suggest fixes to handle this scenario.
Check the jmeter.log file for any suspicious entries.
Add a View Results Tree listener to your test plan; it will allow you to inspect request and response details.
50 real browsers might be too many for a single machine,
as per the WebDriver Sampler documentation:
From experience, the number of browser (threads) that the reader creates should be limited by the following formula:
C = N + 1
where
C = Number of Cores of the host running the test
and N = Number of Browser (threads).
as per Firefox 62.0 system requirements
512MB of RAM / 2GB of RAM for the 64-bit version
So you will need a machine with 51 cores and 100 GB of RAM in order to ensure there will be no JMeter-side bottleneck. If your machine's hardware specifications are lower, you will have to go for Remote Testing.
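As a rough sanity check, the sizing arithmetic above can be written out in a few lines of Python (the C = N + 1 formula and the 2 GB per browser figure come from the documentation quoted above; treat the result as a planning estimate, not a hard requirement):
def webdriver_host_requirements(browsers, ram_per_browser_gb=2):
    # C = N + 1: one core per browser thread plus one spare core for the OS/JMeter itself
    cores = browsers + 1
    ram_gb = browsers * ram_per_browser_gb
    return cores, ram_gb

print(webdriver_host_requirements(50))  # (51, 100) -> 51 cores and 100 GB of RAM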

How to use assertions for multiple scenarios in Gatling?

Right now I am trying to do performance testing of all my APIs. I have already created one feature file with different scenarios (every scenario has a different tag). Now I want to use assertions on mean response time, with different assertions for different scenarios.
val Performance1 = scenario("Performance1").exec(karateFeature("classpath:mock/Testing1.feature#Performance"))
val Performance2 = scenario("Performance2").exec(karateFeature("classpath:mock/Testing2.feature#v3ContentMeta"))
val v4SearchTest = scenario("SearchTest").group("SearchTesting") {
  exec(karateFeature("classpath:mock/Testing1.feature#Performance"))
}

setUp(
  Performance1.inject(rampUsers(10) over (5 seconds)).protocols(protocol),
  Performance2.inject(rampUsers(10) over (5 seconds)).protocols(protocol)
).assertions(details("SearchTesting").responseTime.mean.lte(680))
You can add Gatling assertions as global asserts. This works perfectly with Karate Gatling. This is a sample setup which we tried:
setUp(
  firstScenario.inject(
    nothingFor(5 seconds),                        // pause for a given duration
    atOnceUsers(10),                              // inject 10 users at once
    constantUsersPerSec(10) during (20 seconds),  // inject 10 new users every second for 20 seconds
    rampUsers(10) over (10 seconds)               // linear ramp-up of 10 users over 10 seconds
  ).protocols(protocol),
  secondScenario.inject(
    nothingFor(10 seconds),                       // pause for a given duration
    atOnceUsers(20),                              // inject 20 users at once
    constantUsersPerSec(10) during (10 seconds)   // inject 10 new users every second for 10 seconds
  ).protocols(protocol),
  thirdScenario.inject(
    nothingFor(15 seconds),                       // pause for a given duration
    rampUsers(20) over (1 minute)                 // linear ramp-up of 20 users over 1 minute
  ).protocols(protocol),
  fourthScenario.inject(
    nothingFor(20 seconds),                       // pause for a given duration
    constantUsersPerSec(10) during (20 seconds)   // inject 10 new users every second for 20 seconds
  ).protocols(protocol)
).assertions(
  global.responseTime.max.between(100, 5000),
  global.failedRequests.percent.is(0),
  global.successfulRequests.percent.gt(90)
).maxDuration(10 minutes) // bound the maximum duration of the simulation; useful when it cannot be predicted
The global asserts will be displayed as a separate section in the Gatling report. This is a useful feature of Karate Gatling. Test-specific failures will also be displayed in the Karate Gatling report. For example, if this is your scenario:
Scenario: My First Sample Scenario
Given url endpointUrl
And header karate-name = 'Feature 1_Scenario3'
When method get
Then status 200
And if the response status code is not 200, this also gets recorded in the Karate Gatling report.
Asserts in Gatling: https://gatling.io/docs/current/general/assertions/#scope

JMeter and apachetop - why do I see different values?

Probably the explanation is simple, but I couldn't find an answer to my question:
I am running a JMeter test from one VM (worker) to another (target). On the worker I have JMeter with 100 threads (100 users). On the target I have an API that runs on Apache. When I run "apachetop -f access_log" on the target, I see only about 7 req/s.
Can someone explain why I don't see 100 req/s on the target?
In the JMeter test results I always see 200 OK, so all requests are hitting the target and the target always responds; I am not dropping any requests here. Network bandwidth between the machines is 1G. What am I missing here?
Thanks,
Daddy
100 users doesn't necessarily mean 100 requests per second; in fact, it is highly unlikely.
According to JMeter glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Roughly, if JMeter is able to get a response from the server in 1 second, you will get 100 requests/second. If the response time is 2 seconds, throughput will be 50 requests/second; with a response time of 4 seconds, 25 requests/second, and so on.
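Put differently, with a fixed thread count the achievable throughput is roughly the number of threads divided by the average response time. A quick illustrative Python calculation (the numbers are examples only):
def approx_throughput(threads, avg_response_time_s):
    # each thread completes roughly 1 / avg_response_time_s requests per second
    return threads / avg_response_time_s

for rt in (1, 2, 4):
    print(f"100 threads at {rt} s per response -> ~{approx_throughput(100, rt):.0f} req/s")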
JMeter configuration also matters. If you don't provide enough loops you may run into a situation where some threads have already finished while others have not even started. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more detailed explanation.
Your target load = 100 threads (you are assuming it should generate 100 req/sec as per your plan).
Your actual load = 7 req/sec = 7 * 3600 = 25,200 requests per hour
Per-thread throughput = 25,200 / 100 threads = 252 iterations/thread/hour
Per-transaction time = 3600 / 252 = 14.2 secs
This means JMeter is actually sending one request roughly every 14.2 secs per thread, i.e. 100 requests every 14.2 secs.
Now, analyze your JMeter summary report for the transaction timers to find out where the remaining 13.2 secs are being spent.
Possible issues are
1. High DNS resolution time (DNS issue)
2. High connection setup time (indicates load balancer issues)
3. High Request send time (indicates n/w or firewall throttling issues)
4. High request receive time (same as #3)
Now, the time that you see in the Apache logs is mostly visible to JMeter as the time to first byte. I am not sure about the machine that you are running your tests from. If your worker supports curl, use curl to find the timing components for a single request.
echo 'request payload for POST' | \
  curl -X POST -H 'User-Agent: myBrowser' -H 'Content-Type: application/json' -d @- -s \
    -w '\nDNS time:\t%{time_namelookup}\nTCP Connect time:\t%{time_connect}\nAppCon Protocol time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' \
    http://mytest.test.com
If the above output indicates no such issues, then the time must be being spent within JMeter itself. You should then tune your JMeter implementation, using options such as Beanshell / JSR223 elements, etc.

Gatling user injection for 50 total users in 1 hour adding 10 users per 5 minutes

I need to set up a Gatling test with a total of 50 concurrent users, but I have a problem because there doesn't seem to be an injection profile that achieves this.
I used rampUsers(10) over (60 minutes), but that gives only 10 concurrent users.
Using constantUsersPerSec(users) during (60 minutes) is too stressful.
Are there any suggestions?
Thanks.
This could be done as follows:
val scn = scenario("Test").during(1 hours) {
  exec(http("test").get("/"))
}

setUp(
  scn.inject(splitUsers(50) into atOnceUsers(10) separatedBy(5 minutes))
    .protocols(httpConf)
)
see http://gatling.io/docs/2.0.3/general/simulation_setup.html:
splitUsers(nbUsers) into(injectionStep) separatedBy(duration): Repeatedly execute the defined injection step separated by a pause of the given duration until reaching nbUsers, the total number of users to inject.