@QuarkusTest unit tests take a long time

I started a project and now have about 7 tests in it, and running the whole test suite with gradle test already takes more than a minute.
From the additional output (--info flag) I can see that the whole Quarkus application, and also dependencies like the MongoDB instance, are restarted for every test class and method.
This is the exact opposite of what the Quarkus documentation says on the testing guide page:
So far in all our examples we only start Quarkus once for all tests. Before the first test is run Quarkus will boot, then all tests will run, then Quarkus will shutdown at the end. This makes for a very fast testing experience however it is a bit limited as you can’t test different configurations.
All the tests are annotated with @QuarkusTest and every test just exercises a single endpoint.
I use "pure" Kotlin (1.5.21), Quarkus 2.2.2.Final and Gradle 6.9.
Installed features: cdi, config-yaml, jacoco, kotlin, mongodb-client, mongodb-panache-kotlin, narayana-jta, rest-client, rest-client-jackson, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui
Is that normal behaviour? If so, an application with several hundred tests could easily take ~20 minutes or more to run the entire test suite.
I haven't tried Maven yet, so I can't verify that it isn't a Gradle-related issue.

While trying to reproduce it with a fresh project, I think I found the issue in my code:
I had also used @QuarkusTestResource with restrictToAnnotatedClass = true on my tests.
This means the configuration and test profiles must be reloaded for those classes, and therefore the Quarkus application as well.
Apparently all the Dev Services get restarted too (in my case a MongoDB instance, since I'm using the Panache extension), which explains the long test runtimes.
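In code terms, each test class looked roughly like this (the class names are illustrative, not from my actual project; WireMockResource is the lifecycle manager sketched further below):

```kotlin
import io.quarkus.test.common.QuarkusTestResource
import io.quarkus.test.junit.QuarkusTest

// restrictToAnnotatedClass = true scopes the resource to this class only,
// which forces Quarkus (and its Dev Services) to restart per test class.
@QuarkusTest
@QuarkusTestResource(WireMockResource::class, restrictToAnnotatedClass = true)
class OrderEndpointTest {
    // ... endpoint tests ...
}
```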
I reorganized my tests a little so that they work with "global" test resources (a WireMockServer in my case), along the lines of the sketch below.
Now Quarkus only gets started once before the tests and the total runtime of the gradle test task is acceptable.
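For reference, a minimal sketch of such a global test resource, assuming WireMock; the class name and the config key are my own illustration:

```kotlin
import com.github.tomakehurst.wiremock.WireMockServer
import com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager

// Without restrictToAnnotatedClass, this resource is started once for the
// whole suite, so Quarkus and the MongoDB Dev Service boot only once.
class WireMockResource : QuarkusTestResourceLifecycleManager {

    private lateinit var server: WireMockServer

    override fun start(): Map<String, String> {
        server = WireMockServer(wireMockConfig().dynamicPort())
        server.start()
        // Illustrative config key: point an assumed REST client at the mock.
        return mapOf("quarkus.rest-client.\"my-api\".url" to server.baseUrl())
    }

    override fun stop() {
        server.stop()
    }
}
```

The test classes then keep their @QuarkusTest annotation and reference the resource with a plain @QuarkusTestResource(WireMockResource::class), so all of them share one running Quarkus instance.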

Related

Running all Tests in Production Template in Shopware 6.3.5.2

We are building a shop for a customer on Shopware 6.3.5.2 and want to
- use tests to ensure that core functionality is not broken by our customizations (static plugins)
- write new tests for new functionality
There is a "Running End-to-End Tests" guide, but it seems to be aimed at core development and uses psh.phar, which is not available in the production template.
How should this be done?
Edit: This question is meant a bit more broadly and also concerns unit tests.
Actually, you can use the E2E tests of the platform project, as Cypress itself doesn't care what it runs the tests against. However, as you already noticed, you cannot use psh commands to run them. You can run the tests through the basic Cypress commands, setting your shop's URL as the baseUrl of the tests, for example via this command:
./node_modules/.bin/cypress run --config baseUrl="<your-url>"
It works with cypress open as well.
The only thing that may become troublesome is the setToInitialState command used in most of the tests, which unfortunately takes care of cleaning up Shopware's database using psh scripts. You may need to adjust it by overriding the command so that it resets the database of the production template instead.
I hope I was able to help a bit. 🙏
There are actually two parts here:
- ensure that core functionality is not broken by our customizations (static plugins)
- write new tests for new functionality
Re 1: For regression tests like this I would suggest end-to-end tests, either through the UI with tools like Selenium or through the HTTP API (I don't know whether the Shopware API is sufficient for extensive regression tests).
Re 2: Since plugins do not run on their own, I would extract all relevant functionality into plain old PHP classes that are independent of Shopware and test those in isolation. Explore whether some of that functionality can be made visible through an API and test the plugin integration through that. Depending on the actual plugin you might have to resort to UI tests again.

What are the advantages of running SpecFlow Selenium tests in a release pipeline instead of a build pipeline?

I was executing SpecFlow Selenium tests in an Azure build pipeline, which was working fine. But someone insisted that I run these tests in the release pipeline instead, consuming the artifacts from the build pipeline.
I am not deploying any application to a server or any other machine; my release pipeline only runs the Selenium tests.
I am wondering why I should create a release pipeline if I can do this in the build pipeline itself.
Running your Selenium tests in the build pipeline has the following disadvantages:
- In most cases Selenium tests are much slower than, say, simple unit tests, which increases the duration of the build pipeline. But you want a fast build pipeline so you can continue with your work.
- If your tests are not stable, you will break the build, which is IMHO a no-go.
But in some cases it makes sense to execute a small set of Selenium tests during the build pipeline (if that functionality is not covered by other tests). This applies if you have a big product or a build pipeline that takes very long: you don't want to wait a few hours for a successful build, only to see every test in your release pipeline fail because some basic functionality does not work.
In continuous integration, the focus is on getting an automated, good build out with basic build verification tests, while continuous deployment focuses heavily on testing and release management flows.
Typically you will run unit tests in your build workflow, and functional tests in your release workflow after your app is deployed (usually to a QA environment).
The official documentation also recommends running Selenium tests in the release pipeline.
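As a rough sketch of that split in a multi-stage YAML pipeline (stage names, project paths, and the deployment step are my own assumptions; a classic release pipeline would model the second stage in the UI instead):

```yaml
# Fast checks in the build stage; slow SpecFlow/Selenium tests after deployment.
stages:
- stage: Build
  jobs:
  - job: UnitTests
    steps:
    - script: dotnet test tests/MyApp.UnitTests        # fast feedback, fails the build
    - publish: $(Build.ArtifactStagingDirectory)       # hand the artifacts to later stages
      artifact: drop
- stage: QA
  dependsOn: Build
  jobs:
  - job: FunctionalTests
    steps:
    - download: current                                # pull the build artifacts
      artifact: drop
    # ... deploy to the QA environment here, then run the slow UI tests ...
    - script: dotnet test tests/MyApp.SpecFlowTests    # Selenium tests against QA
```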

Custom "Custom Run Configurations" in Intellj

I would like to create two custom run configuration runners for TestNG. The first would be the default TestNG runner and the second would start Jetty for integration tests before running them. I use CMD+SHIFT+R and CMD+R a lot to run individual tests or a whole class, but it is hard to use this feature when I cannot start my server before an integration test runs.
Is there a way to set up two configurations, so that when I run a test in a package that matches some pattern it uses one configuration, and otherwise it uses the other?
Maven profiles sound like a good tool for the job, yes.
A simple and very common approach is to split your tests into unit tests (which are plain vanilla Java code) and integration tests (which require other fancy stuff to run).
I see that the maven-surefire-plugin supports TestNG, so you are fine there.
Now, to set up Jetty: the second pom at this link describes how to start and stop Jetty in the Maven pre-integration-test and post-integration-test phases.
Then, after you bind the relevant tests to the Maven integration-test phase, you can execute everything (start Jetty -> integration tests -> stop Jetty) via this command:
mvn verify
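A rough sketch of that pom.xml wiring, using the Eclipse Jetty plugin's start/stop goals; note that I'm using the maven-failsafe-plugin here to bind the tests to the integration-test phase (it supports TestNG as well), and plugin versions are omitted:

```xml
<build>
  <plugins>
    <!-- start Jetty before the integration tests, stop it afterwards -->
    <plugin>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>start-jetty</id>
          <phase>pre-integration-test</phase>
          <goals><goal>start</goal></goals>
        </execution>
        <execution>
          <id>stop-jetty</id>
          <phase>post-integration-test</phase>
          <goals><goal>stop</goal></goals>
        </execution>
      </executions>
    </plugin>
    <!-- run the *IT test classes during the integration-test phase -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```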
There are other ways to do it but this is a good starting point.
Good luck.

Hudson CI: cannot test grails-app

I've tried every way I can think of to test a grails-app using Hudson. I've tried testing with Maven, with the Grails plugin, and with a shell builder; it seems that building via shell is the only thing that works.
Every time I get the same error:
org.hibernate.HibernateException: contains is not valid without active transaction
But if I go to a shell and type
grails test-app
everything runs fine.
Does anyone have any idea on what's going on?
I'm using CentOS with Java 1.6, no slaves, just a simple Hudson deployment on Tomcat 6.
I've tried both the Maven and the Grails builder; both fail.
Edit: it seems that if I run both unit and integration tests in the same command (either with grails or with mvn), the integration tests always fail.
Hudson/Jenkins usually just uses the command line to execute Grails commands (you should be able to confirm that from the build output). You could probably add a pre-build step that dumps the environment, so you can see whether anything there (or in your own shell) causes it to be fundamentally different.
Otherwise, try logging in as the Hudson user, finding the Hudson workspace, and repeating the process manually. That has been the easiest way to debug hard problems like this.
Regards

Maven Multi-Module builds not honoring failsafe-maven-plugin?

I recently discovered that Hudson was not the problem. In actuality it was Maven itself: the multi-module build was causing the build failure, not Hudson. I just hadn't noticed where the issue actually existed.
Leaving the original question here.
I'm using the failsafe-maven-plugin to run some integration tests. The difference between failsafe and surefire is that failsafe allows failures and does not fail the build.
On my nightly builds there are occasions when a service the integration tests use might be down. In normal builds the failsafe plugin would let the build continue, since the integration tests are allowed to fail. However, Hudson does not seem to respect this: it stops the build and produces rain (the stormy-weather build icon).
I tried to turn the failsafe tests off on nightly builds using -DskipITs. This appears to fail since I'm in a multi-module build.
Any ideas on how to get Maven to respect that these tests can fail even though they're part of a specific module?
The project structure is as follows:
parent
 \- jar
 \- jar (where the integration tests run)
 \- war
 \- ear
You can use profiles to make builds slightly different for different environments (nightly builds, releases, normal developer builds and so on).
I'd also try updating the Maven version; there have recently been a few fixes related to multi-module builds.
I don't believe your original assumption that failsafe-maven doesn't fail the build is correct. A failed test does not stop the integration-test phase from completing (unlike with the surefire plugin that runs unit tests), which allows the post-integration-test phase to run so the test environment can be torn down (app server shut down, etc.).
After this, the verify phase runs and looks at the results of the integration tests. If one of those tests has failed, Maven will return a build failure, which Hudson will rightly pick up so your build can be flagged as broken.
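For illustration, this behaviour comes from the plugin's standard two-goal binding; a typical configuration looks roughly like this:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- runs the tests in the integration-test phase; failures are recorded, not fatal -->
        <goal>integration-test</goal>
        <!-- runs in the verify phase and fails the build if any failure was recorded -->
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```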
Use a Maven profile to turn the verify goal of the maven-failsafe-plugin on or off, for example as sketched below.
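A minimal sketch of that, assuming the failsafe plugin version in use honors the standard skipITs user property (the profile id is illustrative):

```xml
<!-- in the parent pom; activate on nightly builds with: mvn verify -Pnightly -->
<profiles>
  <profile>
    <id>nightly</id>
    <properties>
      <!-- standard property read by maven-failsafe-plugin; skips both
           the integration-test and verify goals -->
      <skipITs>true</skipITs>
    </properties>
  </profile>
</profiles>
```

Putting the property in the parent pom makes it apply to every module, which may sidestep the multi-module problem you hit when passing -DskipITs.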