I want to run an automated test developed with Protractor in parallel on 50 instances of Google Chrome, using Selenium Grid on a machine with 8 CPUs and 16 GB of RAM, but the machine becomes very slow and the load average exceeds 40.
Can anyone help me increase the number of Chrome instances this machine can run?
Check the Aerokube solution; it works way faster than the original Selenium Grid:
Selenoid - a Go implementation of the original Selenium hub code. It uses Docker to launch browsers.
GGR - a lightweight load balancer used to create big Selenium clusters.
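For reference, here is a minimal Java sketch of pointing an existing RemoteWebDriver test at Selenoid; it assumes Selenoid is running locally on its default port 4444, and the page under test is a placeholder:

```java
import java.net.URL;

import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class SelenoidSmokeTest {
    public static void main(String[] args) throws Exception {
        // Selenoid exposes the same WebDriver endpoint as a Grid hub,
        // so only the remote URL changes in existing test code.
        ChromeOptions options = new ChromeOptions();

        RemoteWebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"), options);
        try {
            driver.get("https://example.com"); // placeholder page
            System.out.println(driver.getTitle());
        } finally {
            driver.quit(); // frees the browser container for the next session
        }
    }
}
```

Selenoid starts one container per session, so the number of parallel Chrome instances is capped by its configured session limit and by the host's CPU/RAM rather than by a hub JVM.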
Related
I'm trying to use an EC2 instance to run several Selenium ChromeDriver processes in parallel. When I run just one, it works fine. But as soon as I start a second Selenium process, both processes fail (as in, page loads hit max timeouts after a couple of minutes).
I'm using a t3.large instance, which has 8 GB of RAM and 5 Gbps of network bandwidth. It's not a cheap instance and costs about $2 per day. I'm surprised that with these specs it can't handle two concurrent Selenium processes, because my personal laptop has no problem handling 4+ Selenium processes.
Additional info: I'm using pyvirtualdisplay on the EC2 box.
Wondering if I'm missing something here that is causing the poor performance.
I'm trying to run 10 users accessing a website concurrently using Selenium WebDriver in JUnit, and it causes my PC to lag because it opens 10 browsers at the same time. I even tried running JMeter from the command prompt, and it is still laggy. Is there any method to actually run 1000 users concurrently without stressing my PC?
Each browser has its own system requirements; for example, for Firefox 71 they are:
512MB of RAM / 2GB of RAM for the 64-bit version
Pentium 4 or newer processor that supports SSE2
If you want to kick off several browsers - you need to have:
2 GB of RAM per browser instance
1 CPU core per browser instance
For 10 browsers you will need to have 11+ CPU cores and 22+ GB of RAM, for 1000 browsers - proportionally more.
If you have to conduct performance testing using real browsers, you will need to go for Distributed Testing and allocate a sufficient number of machines to act as load generators. Remember that the machines must not be overloaded: if they cannot operate fast enough, you won't get accurate results.
Another option is migrating your Selenium tests to JMeter: you can run your Selenium tests through the JMeter proxy so that JMeter captures the relevant HTTP requests and converts them into HTTP Request samplers, or replay them via the Proxy2JMX Converter module of the Taurus tool. Check out the How to Convert Selenium Scripts into the JMX article for more details.
JMeter's HTTP Request samplers have a very small footprint compared to real browsers, so you will be able to mimic several thousand virtual users from a modern mid-range laptop, provided you follow the JMeter Best Practices.
I am running some time-dependent tests in Selenium. For some reason, when I instantiate a Chrome driver, the load time of the browser window varies. How can I make the load times consistent and stop the Chrome browser windows from loading so slowly?
This has nothing to do with Selenium; you need to get a snapshot of what's going on in your operating system when you launch the browser, for example using Windows Performance Monitor.
Blind shot: the Chrome browser is very memory intensive (you can check how much RAM it consumes using Windows Task Manager). If your machine is short on RAM, it starts using the page file intensively to move memory pages to disk, and since disk is much slower than RAM, you get inconsistent results.
The only effective way of speeding up Selenium tests is parallel execution, via Selenium Grid or by means of your underlying unit testing framework.
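As a rough illustration of that, here is a Java sketch that opens several sessions in parallel against a grid hub using a plain thread pool; the hub URL, session count, and test page are placeholders:

```java
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class ParallelGridRun {
    // Placeholder values - adjust for your own grid and test suite.
    private static final String HUB_URL = "http://localhost:4444/wd/hub";
    private static final int SESSIONS = 5;

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(SESSIONS);
        for (int i = 0; i < SESSIONS; i++) {
            final int id = i;
            pool.submit(() -> {
                RemoteWebDriver driver = null;
                try {
                    // Each task gets its own session, so the hub can spread
                    // the browsers across its registered nodes.
                    driver = new RemoteWebDriver(new URL(HUB_URL), new ChromeOptions());
                    driver.get("https://example.com"); // placeholder page
                    System.out.println("Session " + id + ": " + driver.getTitle());
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    if (driver != null) {
                        driver.quit();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```

In practice you would let your test framework manage the threads (for example JUnit or TestNG parallel execution) rather than a hand-rolled pool, but the grid-side behaviour is the same.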
I am trying to run Selenium Grid.
Currently, I'm using v3.8.1 with one hub on one network and 20+ nodes on different networks registering to that Selenium hub.
Execution is fast when the hub and node are created on the same machine where the application is deployed. Nodes created on other remote machines are comparatively slow.
It is also slow when we try to access a particular node by passing applicationName in the capabilities instead of letting the hub select a node randomly.
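(For context, passing applicationName in the capabilities looks roughly like the following Java sketch; the node name and hub URL are placeholders, and the node must have been registered with a matching applicationName.)

```java
import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class TargetNamedNode {
    public static void main(String[] args) throws Exception {
        // "node-A" and the hub URL are placeholders; applicationName must match
        // the value the target node registered with on the hub.
        DesiredCapabilities caps = DesiredCapabilities.chrome();
        caps.setCapability("applicationName", "node-A");

        RemoteWebDriver driver = new RemoteWebDriver(
                new URL("http://hub-host:4444/wd/hub"), caps);
        try {
            driver.get("https://example.com"); // placeholder page
        } finally {
            driver.quit();
        }
    }
}
```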
More Info:
Windows Server 2008, Ruby gem selenium-webdriver-2.53.4, selenium-server-standalone-3.8.1, Java 8.
Tried Selenium Grid versions - 2.48, 2.49, 2.52, 2.53 and 3.8.1 as per https://github.com/SeleniumHQ/selenium/issues/1565.
Any help on this is appreciated. Thank You.
At last I got it. It's not a Selenium node performance issue.
It's an RDP performance issue, since the RDP sessions share resources.
It's working fine in individual VMs and on the server machine itself.
Thank you.
I'm setting up a test infrastructure using Azure and Docker, with Selenium Hub and Chrome images.
Running the latest version of Ubuntu on Azure.
System Configuration
Ubuntu: 16.x
Docker: Latest Version
RAM: 6GB
SSD: 120GB
I'm able to run automation scripts in the Chrome containers without any issue if the number of containers is <= 10.
When I scale up the number of containers, the entire system freezes, stops responding, and the tests do not run.
PS: I'm also mounting /dev/shm:/dev/shm when creating the containers.
What should be the optimal system configuration to run a minimum of 75 containers?
6 GB of RAM for 75 containers means ~80 MB per container, and you want to run Firefox/Chrome inside each one? The browsers may be running headless/without a display, but that doesn't mean they are not memory hungry.
You would need to reserve about 500 MB of memory per container for such nodes, which for 75 containers means roughly 38+ GB of RAM. You can set a memory limit, but as soon as your container goes above it, poof! The container is dead, and so are your browser and your test. The best approach is to either use Docker Swarm to deploy a self-healing Selenium Grid,
or use https://github.com/zalando/zalenium as mentioned by @Leo Galluci.
PS: I wrote an article on how to set up a Grid on Swarm: http://tarunlalwani.com/post/deploy-selenium-grid-using-docker-swarm-aws/. You can have a look at it to get an idea of how to scale your grid horizontally.