Improving Jenkins reliability with randomly failing tests

I have a set of 800+ Selenium tests, and because of problems in our application (mostly asynchronicity bugs that we are aware of but cannot fix right now) they sometimes fail at random.
We use Jenkins for CI and I'd like to know if there is a way to configure Jenkins to look at past builds and warn about a failing test only if the same test failed in the previous builds. This, albeit not a perfect solution, would help mitigate the randomness of the tests and let us spend analysis effort only on the failures that are real bugs.
Any ideas?
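There are Jenkins plugins in this space (the Test Stability plugin, for example, annotates each test result with its failure history over recent builds). A complementary, Jenkins-independent mitigation is to retry a failing test a couple of times before reporting it as broken. Below is a minimal JUnit 4 sketch in Kotlin; the rule name and the retry count of 3 are illustrative, and note that a retried pass still hides a real flake, so use this sparingly:

    import org.junit.Rule
    import org.junit.rules.TestRule
    import org.junit.runner.Description
    import org.junit.runners.model.Statement

    // Reruns a failing test up to maxAttempts times and only reports
    // the last failure if every attempt fails.
    class RetryRule(private val maxAttempts: Int) : TestRule {
        override fun apply(base: Statement, description: Description): Statement =
            object : Statement() {
                override fun evaluate() {
                    var lastFailure: Throwable? = null
                    repeat(maxAttempts) {
                        try {
                            base.evaluate()
                            return // test passed on this attempt
                        } catch (t: Throwable) {
                            lastFailure = t
                        }
                    }
                    throw lastFailure!!
                }
            }
    }

    class SomeFlakySeleniumTest {
        @get:Rule
        val retry = RetryRule(3) // illustrative retry count

        // @Test methods as usual...
    }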

Related

@QuarkusTest unit tests take a long time

I started a project and have about 7 tests in it so far, and it already takes more than a minute to execute the whole test suite using gradle test.
From the additional output (the --info flag) I can see that the whole Quarkus application, and dependencies like the MongoDB instance, are restarted for every test class and method.
This is the exact opposite of what the Quarkus documentation says on the testing guide page:
So far in all our examples we only start Quarkus once for all tests. Before the first test is run Quarkus will boot, then all tests will run, then Quarkus will shutdown at the end. This makes for a very fast testing experience however it is a bit limited as you can’t test different configurations.
All the tests are annotated with @QuarkusTest and every test exercises just a single endpoint.
I use "pure" Kotlin (1.5.21), Quarkus 2.2.2.Final and Gradle 6.9.
Installed features: cdi, config-yaml, jacoco, kotlin, mongodb-client, mongodb-panache-kotlin, narayana-jta, rest-client, rest-client-jackson, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui
Is that normal behaviour? If so, an application with several hundred tests could easily take ~20 minutes or more to run the entire test suite.
I haven't tried Maven yet, so I can't verify that it isn't a Gradle-related issue.
While trying to reproduce it with a fresh project, I think I found the issue with my code:
I had also used @QuarkusTestResource with restrictToAnnotatedClass = true on my tests.
This means the configuration and test profiles must be reloaded for each annotated class, and therefore the Quarkus application as well.
Apparently all the DevServices get restarted too (in my case a MongoDB, since I'm using the Panache extension), which explains the long runtimes of the tests.
I reorganized my tests a little so that they work with "global" test resources (a WireMockServer in my case).
Now Quarkus only gets started once before the tests, and the total runtime of the gradle test task is acceptable.
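For reference, here is a minimal sketch of the "global" variant in Kotlin. When restrictToAnnotatedClass is left at its default (false), the resource applies to the whole test run and Quarkus can stay up across test classes. The WireMockResource class name and the my.service.url config key are made up for illustration:

    import com.github.tomakehurst.wiremock.WireMockServer
    import com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig
    import io.quarkus.test.common.QuarkusTestResource
    import io.quarkus.test.common.QuarkusTestResourceLifecycleManager

    // Started once for the whole test run because restrictToAnnotatedClass
    // defaults to false, so Quarkus (and its DevServices) boot only once.
    class WireMockResource : QuarkusTestResourceLifecycleManager {
        private lateinit var server: WireMockServer

        override fun start(): Map<String, String> {
            server = WireMockServer(wireMockConfig().dynamicPort())
            server.start()
            // Hypothetical config key: point the application at the mock server.
            return mapOf("my.service.url" to server.baseUrl())
        }

        override fun stop() {
            server.stop()
        }
    }

    @QuarkusTestResource(WireMockResource::class)
    class SomeEndpointTest { /* @QuarkusTest methods as usual */ }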

Should the QA team be doing unit testing?

I come from a configuration management background using tools like Chef, and I have done quite a bit of code testing. Recently I have been entrusted with the responsibility of working on the CI/CD of various applications, but I am noticing a culture that's not quite in line with my philosophy.
Let's say we have 4 environments in the CI/CD pipeline: Dev, Test, Stage and Prod. The Dev environment is used by the developers to deploy and test the app before rolling it out to the next stage (Test). The Test environment is for the QA team to run their tests, and Stage is another environment for a second layer of testing by the QA folks before the code goes to Prod.
Now, does it make sense for QA or the CI process to run unit tests while promoting/after deploying the code to Test or Stage? I agree that unit tests should be part of the automated build, but unit tests are mainly for the developers, for testing their own code. QA should focus on functional testing, load testing, etc., using their own frameworks, which may be Selenium/Protractor or LISA. Why should they be focusing on JUnit or NUnit?
From my experience, the developers do the unit testing.
Where I currently work we follow TDD (Test-Driven Development), so developers write the unit tests for their code first and then write the actual implementation. This work happens in the Dev environment.
The QA people then do the acceptance tests and review things.
I hope this helps.
Unit tests are short code fragments created by programmers or occasionally by white box testers during the development process. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended.
Hope you got the answer.
Your concern is absolutely right. QA does not necessarily have to be doing unit testing (if that's what you mean). That testing is done by the developers while they write the code (whether they work in a TDD environment or even a BDD one).
However, if QA has used JUnit to write their automated Selenium test cases, then they will keep using JUnit to update and change those test cases.
It's a little harder to automate Selenium test cases using JUnit; another framework like TestNG is often a better option for writing Selenium automation, as in the sketch below.
I hope this answers your question.
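To make that concrete, here is a minimal sketch of a Selenium test written with TestNG in Kotlin, assuming selenium-java and testng on the classpath and a chromedriver binary on the PATH; the URL and expected title are illustrative:

    import org.openqa.selenium.WebDriver
    import org.openqa.selenium.chrome.ChromeDriver
    import org.testng.Assert.assertEquals
    import org.testng.annotations.AfterClass
    import org.testng.annotations.BeforeClass
    import org.testng.annotations.Test

    class LoginPageTest {
        private lateinit var driver: WebDriver

        // TestNG lifecycle methods may be non-static, unlike JUnit 4's
        // @BeforeClass, which is one of the ergonomic differences.
        @BeforeClass
        fun setUp() {
            driver = ChromeDriver()
        }

        @Test
        fun titleIsShown() {
            driver.get("https://example.com/login") // illustrative URL
            assertEquals(driver.title, "Login")     // TestNG order: actual, expected
        }

        @AfterClass
        fun tearDown() {
            driver.quit()
        }
    }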

Jenkins Test Result Management

I work with a Jenkins system, and I've recently started trying to optimize the tests. There are almost a thousand Selenium tests and twice as many unit and integration tests.
I'm wondering if there is some way to find which tests are the most prone to failure, so we can attack the worst offenders first. It would also be nice if there were an integrated way to track who is working on which test and, if a test has been fixed, what was done to fix it. I'm new to Jenkins, so please point me to documentation or a plugin I can install that would help with this.
I think what you are describing is JIRA, which can tie SCM commits to issues, releases, and developers.
There is also Jenkins-to-JIRA integration for creating JIRA issues out of Jenkins.
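For finding the worst offenders specifically, the JUnit plugin already stores per-test results for every build and exposes them over Jenkins' JSON API, so you can aggregate failure counts yourself. A rough sketch against the .../testReport/api/json endpoint; the job URL, build-number range, and the crude regex "parsing" are all illustrative assumptions (a real script would use a JSON library and handle authentication):

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    fun main() {
        val jobUrl = "https://jenkins.example.com/job/my-job" // assumed job URL
        val builds = 100..120                                 // assumed build range
        val client = HttpClient.newHttpClient()               // authentication omitted
        val failures = mutableMapOf<String, Int>()

        // Crude extraction of failing case names from the report JSON.
        val failedCase =
            Regex(""""name"\s*:\s*"([^"]+)"[^{}]*"status"\s*:\s*"(?:FAILED|REGRESSION)""")

        for (build in builds) {
            val request = HttpRequest.newBuilder(
                URI.create("$jobUrl/$build/testReport/api/json")
            ).build()
            val response = client.send(request, HttpResponse.BodyHandlers.ofString())
            if (response.statusCode() != 200) continue // build may have no test report
            for (m in failedCase.findAll(response.body())) {
                failures.merge(m.groupValues[1], 1, Int::plus)
            }
        }

        // Print the worst offenders first.
        failures.entries.sortedByDescending { it.value }
            .take(20)
            .forEach { (test, count) -> println("$count\t$test") }
    }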

Automatically start jetty when running acceptance tests from IntelliJ IDEA

I have a bunch of acceptance tests that need the application to be running. It all works fine when I test from the command line (thanks to some Gradle magic), but I would like to be able to run these tests from IntelliJ IDEA without worrying about starting up Jetty.
Is there any clever way to achieve that automation? I do not even know where to begin.
Thank you very much.
You can do it with Maven/Ant and other Run configurations, but not with Gradle at the moment, at least until this feature request is implemented.
For testing purposes it's generally a good idea to use Jetty embedded. That way you can fully automate the start/stop of Jetty, and it works completely independently of build tools and the IDE.
It's really simple: with a few lines of code you have a full-featured Jetty configured and running for your tests.
This is one of the most beloved features of Jetty. Have a look at this:
http://www.eclipse.org/jetty/documentation/current/advanced-embedding.html
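Along those lines, here is a minimal sketch of embedded Jetty in a test base class, assuming Jetty 9 (jetty-server and jetty-servlet) and JUnit 4 on the test classpath; PingServlet is a stand-in for the real application:

    import javax.servlet.http.HttpServlet
    import javax.servlet.http.HttpServletRequest
    import javax.servlet.http.HttpServletResponse
    import org.eclipse.jetty.server.Server
    import org.eclipse.jetty.servlet.ServletContextHandler
    import org.junit.AfterClass
    import org.junit.BeforeClass

    // Stand-in for the real application; replace with your own servlet
    // or a WebAppContext pointing at the application's war.
    class PingServlet : HttpServlet() {
        override fun doGet(req: HttpServletRequest, resp: HttpServletResponse) {
            resp.writer.write("pong")
        }
    }

    open class AcceptanceTestBase {
        companion object {
            private lateinit var server: Server

            @BeforeClass @JvmStatic
            fun startJetty() {
                server = Server(8080) // 0 would pick a random free port
                val context = ServletContextHandler()
                context.contextPath = "/"
                context.addServlet(PingServlet::class.java, "/*")
                server.handler = context
                server.start()
            }

            @AfterClass @JvmStatic
            fun stopJetty() {
                server.stop()
            }
        }
    }

Because the server runs in the same JVM as the tests, running them from IDEA needs no extra run configuration.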

How to run only relevant tests on jenkins?

Disclaimer: I'm not at all a skilled Jenkins user.
I was looking for a plugin that would let me run only the tests affected by the latest commits in Maven-built projects, but I could find neither a Maven plugin smart enough to not run all tests again, nor a Jenkins plugin that does this. Is there a way to achieve this?
Say, for example, I have a project consisting of A.java, testA.java, B.java and testB.java. I have Jenkins configured to run a build and test for each commit. I have just committed a change to A.java but not to B.java, and there is no dependency from testB.java or B.java to A.java. I don't want Jenkins to run testB.java when it checks out fresh sources. If there was a dependency from B.java or testB.java to A.java, I'd like Jenkins to figure this out and run testB.java too.
Background: running all tests in the project is too time-consuming, running only a fixed set of tests leaves holes for bugs, and running by hand all the tests that might be broken by a change is error-prone.
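I'm not aware of a standard Maven plugin that does the dependency-aware selection described above (research tools such as Ekstazi implement this kind of regression test selection for Maven/JUnit), but the naive name-based half is easy to script. A rough sketch in Kotlin, assuming a Foo.java/FooTest.java naming convention and Surefire's -Dtest flag; it deliberately ignores transitive dependencies, which is exactly the hard part the question asks about:

    import java.io.File

    fun main() {
        // Files touched by the most recent commit.
        val changed = ProcessBuilder("git", "diff", "--name-only", "HEAD~1", "HEAD")
            .redirectErrorStream(true)
            .start()
            .inputStream.bufferedReader().readLines()

        // Map each changed source file to its test class (assumed convention).
        val tests = changed
            .filter { it.endsWith(".java") }
            .map { File(it).nameWithoutExtension }
            .map { name -> if (name.endsWith("Test")) name else "${name}Test" }
            .distinct()

        if (tests.isEmpty()) {
            println("No Java changes; skipping tests.")
            return
        }

        // Surefire accepts a comma-separated list of test classes.
        ProcessBuilder("mvn", "test", "-Dtest=${tests.joinToString(",")}")
            .inheritIO()
            .start()
            .waitFor()
    }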