I have an existing Selenium/TestNG Gradle project that is not well written in terms of multithreading, because it uses a static WebDriver.
Fixing this would take too much time, since the project has more than 500 tests across various test classes, and moreover all the page objects refer to the static WebDriver.
While searching for options for parallel test execution, I came across Gradle's forked execution.
As per the official Gradle docs:
Gradle executes tests in a separate ('forked') JVM, isolated from the main build process. This prevents classpath pollution and excessive memory consumption for the build process. It also allows you to run the tests with different JVM arguments than the build is using.
If tests execute in separate JVM instances, does that mean the tests will still run in parallel without overlapping each other, even though the WebDriver is static?
If yes, then what are the disadvantages of this approach?
If no, then why don't variables and objects in separately isolated JVM instances possess separate identities?
I tried adding maxParallelForks = 5 and forkEvery = 5.
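For context, this is roughly where those two settings go; a minimal sketch of a build.gradle.kts (the Groovy DSL is equivalent apart from syntax), assuming the standard test task and TestNG:

```kotlin
// build.gradle.kts -- minimal sketch of the test task configuration
tasks.test {
    useTestNG()        // the project already uses TestNG

    // Up to 5 forked test JVMs at once. Static fields are per-JVM, so each
    // fork has its own copy of the static WebDriver rather than sharing one.
    maxParallelForks = 5

    // Restart a fork after it has executed 5 test classes.
    forkEvery = 5L
}
```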
I am discovering Selenium, and more precisely Selenium Grid, which allows the creation of nodes and the execution of parallel tests on several browsers.
I was wondering what the difference is between these two frameworks, Selenium Grid and TestNG: is there one that performs better than the other?
Thanks
As per Cédric Beust's TestNG Documentation:
TestNG is a testing framework inspired from JUnit and NUnit but
introducing some new functionalities that make it more powerful and
easier to use, such as:
Annotations.
Run your tests in arbitrarily big thread pools with various policies available (all methods in their own thread, one thread per test class,
etc).
Test that your code is multithread safe.
Flexible test configuration.
Support for data-driven testing (with @DataProvider).
Support for parameters.
Powerful execution model (no more TestSuite).
Supported by a variety of tools and plug-ins (Eclipse, IDEA, Maven, etc...).
Embeds BeanShell for further flexibility.
Default JDK functions for runtime and logging (no dependencies).
Dependent methods for application server testing.
TestNG is designed to cover all categories of tests: unit,
functional, end-to-end, integration.
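To make the thread-pool bullet above concrete, here is a small, hypothetical TestNG test class in Kotlin (the class, method names, and counts are made up): with parallel="methods" and a thread-count set in the suite's testng.xml, TestNG runs the @Test methods on separate threads, and threadPoolSize/invocationCount can exercise a single method concurrently.

```kotlin
import org.testng.annotations.Test

// Hypothetical test class. With <suite parallel="methods" thread-count="3">
// in testng.xml, TestNG runs these @Test methods on separate threads.
class CheckoutTests {

    @Test
    fun addsItemToCart() { /* ... */ }

    @Test
    fun appliesDiscountCode() { /* ... */ }

    // threadPoolSize/invocationCount invoke one method concurrently,
    // which is how TestNG lets you "test that your code is multithread safe".
    @Test(threadPoolSize = 3, invocationCount = 6)
    fun priceCalculationIsThreadSafe() { /* ... */ }
}
```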
I started a project and now have about 7 tests in it, and it already takes more than a minute to execute the whole test suite using gradle test.
From the additional output (--info flag) I can see that the whole Quarkus application, and also dependencies like the MongoDB instance, are restarted for every test class and method.
This is the exact opposite of what the Quarkus documentation says on the testing guide page:
So far in all our examples we only start Quarkus once for all tests. Before the first test is run Quarkus will boot, then all tests will run, then Quarkus will shutdown at the end. This makes for a very fast testing experience however it is a bit limited as you can’t test different configurations.
All the tests are annotated with @QuarkusTest and every test just tests a single endpoint.
I use "pure" Kotlin (1.5.21), Quarkus 2.2.2.Final, and Gradle 6.9.
Installed features: cdi, config-yaml, jacoco, kotlin, mongodb-client, mongodb-panache-kotlin, narayana-jta, rest-client, rest-client-jackson, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui
Is that normal behaviour? If so, an application with several hundred tests could easily take ~20 minutes or more to run the entire test suite.
I haven't tried Maven yet, so I can't verify that it's not a Gradle-related issue.
While trying to reproduce it with a fresh project, I think I found the issue with my code:
I also used @QuarkusTestResource with restrictToAnnotatedClass = true on my tests.
This means the configuration and test profiles must be reloaded, and therefore the Quarkus application as well.
Apparently all the DevServices get restarted too (in my case a MongoDB, since I'm using the Panache extension), which explains the long runtimes of the tests.
I reorganized my tests a little so they work with the "global" test resources (a WireMockServer in my case).
Now Quarkus only gets started once before the tests, and the total runtime of the gradle test task is acceptable.
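A minimal Kotlin sketch of the difference (the resource class and test class names are made up for illustration): a resource registered with restrictToAnnotatedClass = true applies only to that class and forces Quarkus, including its DevServices, to restart around it, whereas the same resource registered without the restriction is global, so Quarkus boots once for the whole suite.

```kotlin
import io.quarkus.test.common.QuarkusTestResource
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager
import io.quarkus.test.junit.QuarkusTest

// Placeholder lifecycle manager standing in for e.g. a WireMockServer resource.
class WireMockTestResource : QuarkusTestResourceLifecycleManager {
    override fun start(): Map<String, String> = emptyMap()  // start the server, return config overrides
    override fun stop() {}                                   // stop the server
}

// Scoped to this class only: Quarkus (and DevServices such as MongoDB)
// restarts for these tests, which is where the extra minutes come from.
@QuarkusTest
@QuarkusTestResource(WireMockTestResource::class, restrictToAnnotatedClass = true)
class RestartingEndpointTest { /* ... */ }

// Registered globally (no restriction): Quarkus boots once for the suite.
@QuarkusTest
@QuarkusTestResource(WireMockTestResource::class)
class SharedResourceEndpointTest { /* ... */ }
```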
Is there any way to rerun annotation processors without rebuilding the entire project?
I'm developing an annotation processor which is used in a project that takes ~10 minutes to build from scratch, and it's a bit painful to wait 10 minutes to test a change...
General Advice
This sounds like you don't have a proper testing approach for your annotation processor.
If you always test in an integrated environment, you will always have the problem of long-running tests. This applies to any test environment that depends on heavyweight tasks.
So my general advice is to write lightweight unit tests to check that your code works as expected.
This article by Lukas Eder (the founder of jOOQ, which itself uses Java annotation processors) is about unit testing Java annotation processors: https://blog.jooq.org/2018/12/07/how-to-unit-test-your-annotation-processor-using-joor/
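For a rough idea of that approach, here is a Kotlin sketch of such a unit test. It assumes the org.jooq:joor dependency and the Reflect.compile / CompileOptions API described in that article, plus JUnit 5; the processor and source string are throwaway placeholders.

```kotlin
import javax.annotation.processing.AbstractProcessor
import javax.annotation.processing.RoundEnvironment
import javax.annotation.processing.SupportedAnnotationTypes
import javax.lang.model.element.TypeElement
import org.joor.CompileOptions
import org.joor.Reflect
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test

// Stand-in for the processor under test; it just records that it ran.
@SupportedAnnotationTypes("*")
class RecordingProcessor : AbstractProcessor() {
    var ran = false
    override fun process(annotations: MutableSet<out TypeElement>, roundEnv: RoundEnvironment): Boolean {
        ran = true
        return false
    }
}

class RecordingProcessorTest {

    @Test
    fun `processor runs against a tiny in-memory source`() {
        val processor = RecordingProcessor()

        // Compile a throwaway class in memory with only this processor attached;
        // no Gradle/Maven build and no 10-minute wait.
        Reflect.compile(
            "com.example.Dummy",
            "package com.example; public class Dummy {}",
            CompileOptions().processors(processor)
        )

        assertTrue(processor.ran)
    }
}
```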
Only Run Annotation Processors
IntelliJ IDEA
AFAIK there is no way to do this.
Maven
If your project builds with Maven, you can trigger annotation processors by just executing the generate-sources phase.
Annotation processors cannot be run without the compiler.
If you are not using Maven or Gradle to build the project but are using the IDE's build, invoke the Build | Build Project action. This way the IDE will perform an incremental build that compiles only the changed classes.
We have built an automation framework which uses Selenium WebDriver + SpecFlow + NUnit, and we are using Bamboo as our CI to run our job against every build.
We have written a build.xml to handle our targets (like clean, init, install the latest build, run Selenium scripts, uninstall the build, etc.).
The ant command reads the tag name from the build.xml and runs the respective features/scenarios based on tags (like @smoke, @Regression) with NUnit on the CI machine.
Now our requirement is to use Selenium Grid to divide the scripts across different machines and execute them with the above set-up. Grid has to divide the scripts based on feature files or based on tags. How can we achieve this?
Is there anything that needs to be done under [BeforeFeature] and [BeforeScenario]?
If you could provide detailed steps, or any link which explains them in detail, that would be a great help.
Can anyone please help with this?
Thanks,
Ashok
You have misunderstood the role Grid plays in distributed parallel testing. It does not "divide the scripts", but simply provides a single hub resource through which multiple tests can open concurrent sessions.
It is the role of the test runner (in your case SpecFlow) to divide tests and start multiple threads.
I believe that you require SpecFlow+ (http://www.specflow.org/plus/), but this does have a license cost.
It should be possible to create your own multithreaded test runner for SpecFlow, but this will require programming and technical knowledge.
If you want a free, open-source approach to parallel test execution in .NET, then there is MbUnit (http://code.google.com/p/mb-unit), but this would require you to rewrite your tests.
I would like to run multiple Selenium Tests (on a Jenkins server) at the same time.
It currently runs only a single test at a time because ChromeDriver seems to communicate over a special port. So I guess I somehow have to pass some kind of port setting via Selenium to ChromeDriver to start multiple tests.
Unfortunately, the Selenium website is empty on that topic:
http://docs.seleniumhq.org/docs/04_webdriver_advanced.jsp#parallelizing-your-test-runs
From my point of view it makes no difference whether the test runs locally or on Jenkins; the problem is the same. We need to somehow configure ChromeDriver. The question is just how.
Does anybody have ideas or pointers on where to look and which files are involved to get this done?
You can run multiple instances of ChromeDriver locally quite easily: just instantiate multiple driver objects, and ChromeDriver will keep the profiles separate and find a port to run on all by itself.
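For example, a minimal Kotlin sketch with two independent driver objects in one process (the URLs are placeholders):

```kotlin
import org.openqa.selenium.chrome.ChromeDriver

// Two independent ChromeDriver instances in the same process. Each one starts
// its own chromedriver server on a free port and uses its own temporary
// profile, so the sessions do not interfere with each other.
fun main() {
    val first = ChromeDriver()
    val second = ChromeDriver()
    try {
        first.get("https://example.com/")
        second.get("https://example.org/")
        println("${first.title} / ${second.title}")
    } finally {
        first.quit()
        second.quit()
    }
}
```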
Here is a link to an example that can run multiple tests using TestNG and Maven:
https://github.com/Ardesco/Selenium-Maven-Template
Just clone the above project and run the following in the command line:
mvn verify -Pselenium-tests -Dbrowser=chrome -Dthreads=2
It takes advantage of TestNG's ability to manage the thread pool and will open up multiple instances if specified. You can do the same thing with JUnit, but you'll need to write a custom test runner to fire the tests off into individual threads.
If you decide to use Gradle, it can manage the thread pools for you with both TestNG and JUnit, and a lot of people prefer it to Maven.
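As a rough sketch of what that looks like in a build.gradle.kts (the values are illustrative): TestNG can parallelize methods inside a single JVM via its own options, and Gradle can additionally spread test classes across forked JVMs with maxParallelForks, which also works for plain JUnit projects.

```kotlin
// build.gradle.kts -- illustrative values
tasks.test {
    // TestNG-managed thread pool inside one JVM.
    useTestNG {
        parallel = "methods"
        threadCount = 2
    }

    // Works for TestNG and JUnit alike: run test classes in parallel forked JVMs.
    maxParallelForks = 2
}
```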
This is an old question, but for anyone still reading along, it is very possible to run multiple Selenium WebDriver instances in parallel without using Grid. I have successfully tested this using Chrome, Firefox, and PhantomJS (up to 5). Each WebDriver instance uses an isolated context, so session conflicts should not be an issue. Be wary of server-side conflicts though, depending on the requirements of your website!
For NUnit users, NUnit 3.2.1 now has a TestContext.CurrentContext.WorkerId property that will allow you to isolate one WebDriver instance per NUnit worker.
Running multiple browsers on the same machine will often hinder performance, so be careful not to use too many browser instances, or you may actually increase your testing time!
What you are looking for is Selenium Grid 2.
Grid allows you to:
scale by distributing tests on several machines (parallel execution)
manage multiple environments from a central point, making it easy to run the tests against a vast combination of browsers / OS.
minimize the maintenance time for the grid by allowing you to implement custom hooks to leverage virtual infrastructure, for instance.
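In practice, tests reach the grid through RemoteWebDriver pointed at the hub; a minimal Kotlin sketch (the hub URL is a placeholder for your own hub):

```kotlin
import org.openqa.selenium.chrome.ChromeOptions
import org.openqa.selenium.remote.RemoteWebDriver
import java.net.URL

// Each parallel test opens its own session against the hub,
// which routes it to a free node.
fun main() {
    val driver = RemoteWebDriver(URL("http://localhost:4444/wd/hub"), ChromeOptions())
    try {
        driver.get("https://example.com/")
        println(driver.title)
    } finally {
        driver.quit()
    }
}
```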
I agree: using Grid in combination with Maven's parallelized classes, you can run multiple instances on one PC. Jenkins is also possible when you are using Ant for your build; then you can specify which tests can be run in parallel.
It's quite easy to set up, though ;)