I have a suite of Cucumber tests run periodically by Jenkins. Most runs do not generate a JSON report; more specifically, a zero-sized JSON file is created. I'm using version 4.3.1 of cucumber-java, cucumber-java8, and cucumber-junit, with Java 1.8.
My test setup is a little convoluted. Jenkins triggers a job every two hours to run the tests. The job runs in its own Docker container (a Linux image) in which a fresh clone of the test repo is created. Jenkins then executes Gradle to build and run the tests.
In the Jenkins console output I can see that Gradle starts the tests and presumably executes some, but never completes them all. There is no error or exception from Gradle; it simply stops running. Nor is there any message about the JVM exiting with a non-zero status.
Occasionally a run will produce a non-empty JSON report. This tends to coincide with all of the tests passing, but not always.
Unfortunately I am not able to post the Jenkinsfile, build.gradle or anything else. If you need further details I might be able to provide small snippets.
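For reference, with cucumber-junit 4.x the JSON report is typically wired up in a JUnit runner class like the sketch below (the package, feature path, and glue package are hypothetical placeholders; the real values depend on the project). One detail worth noting: the JSON formatter generally writes the report only when the run completes, so a JVM that dies mid-run can leave behind exactly this kind of zero-sized file.

```java
package com.example.tests; // hypothetical package

import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// JUnit runner for cucumber-junit 4.x. The "json:" plugin creates the
// report file at startup but only fills it in when the run finishes,
// so an abrupt JVM exit tends to leave an empty file behind.
@RunWith(Cucumber.class)
@CucumberOptions(
        plugin = {"pretty", "json:build/cucumber-reports/cucumber.json"},
        features = "src/test/resources/features", // hypothetical path
        glue = "com.example.tests.steps")         // hypothetical glue package
public class RunCucumberTest {
}
```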
I started a project and have about 7 tests in it now, and it already takes more than a minute to execute the whole test suite using gradle test.
From the additional output (--info flag) I can see that the whole Quarkus application, and also dependencies like the MongoDB instance, are restarted for every test class and method.
This is the exact opposite of what the Quarkus documentation says on the testing guide page:
So far in all our examples we only start Quarkus once for all tests. Before the first test is run Quarkus will boot, then all tests will run, then Quarkus will shutdown at the end. This makes for a very fast testing experience however it is a bit limited as you can’t test different configurations.
All the tests are annotated with @QuarkusTest, and every test just exercises a single endpoint.
I use "pure" Kotlin (1.5.21), Quarkus 2.2.2.Final, and Gradle 6.9.
Installed features: cdi, config-yaml, jacoco, kotlin, mongodb-client, mongodb-panache-kotlin, narayana-jta, rest-client, rest-client-jackson, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui
Is that normal behaviour? If so, an application with several hundred tests could easily take ~20 minutes or more to run the entire test suite.
I haven't tried Maven yet, so I can't verify that it isn't a Gradle-related issue.
While trying to reproduce it with a fresh project, I think I found the issue with my code:
I had also used @QuarkusTestResource with restrictToAnnotatedClass = true on my tests.
This means the configuration and test profiles must be reloaded for each annotated class, and therefore the Quarkus application must be restarted as well.
Apparently all the Dev Services get restarted too (in my case a MongoDB, since I'm using the Panache extension), which explains the long test runtimes.
I reorganized my tests a bit so that they work with "global" test resources (a WireMockServer in my case).
Now Quarkus is started only once before the tests, and the total runtime of the Gradle test task is acceptable.
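For anyone hitting the same thing, here is a minimal Java sketch of the two registration styles (the resource class, config key, and test class are all hypothetical; the same annotations work identically from Kotlin):

```java
import java.util.Collections;
import java.util.Map;

import io.quarkus.test.common.QuarkusTestResource;
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
import io.quarkus.test.junit.QuarkusTest;

// Hypothetical WireMock-backed resource; start/stop bodies elided.
public class WireMockResource implements QuarkusTestResourceLifecycleManager {

    @Override
    public Map<String, String> start() {
        // start the WireMockServer here and expose its URL as config
        // ("wiremock.url" is a made-up key for illustration)
        return Collections.singletonMap("wiremock.url", "http://localhost:8089");
    }

    @Override
    public void stop() {
        // stop the WireMockServer here
    }
}

// This was the slow pattern: restricting the resource to the annotated
// class forces a separate Quarkus boot (and Dev Services restart) for it:
//
//   @QuarkusTest
//   @QuarkusTestResource(value = WireMockResource.class, restrictToAnnotatedClass = true)
//   class OrderEndpointTest { ... }

// Registered without restrictToAnnotatedClass, the resource is global:
// Quarkus boots once and all @QuarkusTest classes share that instance.
@QuarkusTest
@QuarkusTestResource(WireMockResource.class)
class OrderEndpointTest {
    // endpoint tests ...
}
```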
I'm new to Jenkins, so please go easy!
We are developing a web application, and we've started automating our release process using Jenkins.
I also have a standalone Selenium WebDriver script that performs a smoke test on our web app. We usually run it manually after each new deployment.
I heard Jenkins can trigger Selenium tests automatically. So I created a batch file that calls the Selenium script, and added a Build Step that invokes this batch file.
What happens now is that Jenkins first builds the WAR file, then executes the batch file (for Selenium), and only then deploys to the target Tomcat.
But I was wondering if I could change the order to: build the WAR --> deploy to Tomcat --> call the batch file that executes the Selenium test. As things stand, Jenkins tests before deploying, which means my Selenium test still runs against the old build. That makes little sense; I would rather run the Selenium test against the new build.
In short, is there a way to execute the batch file as a Post-Build Step rather than a Build Step?
Thank you, Würgspaß!
I solved my problem by creating a separate Selenium job that is triggered automatically when my build is successful. This way, I can have any number of downstream jobs triggered by a successful build.
We are developing a project using Git, Gerrit and TeamCity.
We had a test that was failing, and TeamCity reported it correctly.
Then two commits were submitted at the same time; one of them fixed the test.
The tests started running for both commits. The first commit passed all of the tests. The second commit, because it had not pulled the fix yet, passed all of the tests except the broken one. Nevertheless, TeamCity didn't report this and just hid the test. It even updated Gerrit as if the tests had passed.
The test was not muted in TeamCity, and when I checked the test run area I could see that the test ran and failed; it just wasn't reported.
Is this a bug or just a strange feature?
I can understand the logic behind such behavior, but not when the failing test goes unreported.
I would like to run a local Selenium test script, written in Java, via Jenkins/Hudson. Is it possible to run scripts from my local Windows machine? So far I have written some simple Selenium tests in Java, which run perfectly when I execute them via the Eclipse IDE. I would be thankful for an in-depth explanation.
Selenium tests in Java: the first assumption is that they are laid out as unit tests (JUnit or TestNG); the second is that the project is built with either Ant or Maven, so there is some test (or rather integration-test) target or phase present that runs smoothly when invoked from the IDE.
When such tests are launched, they hit a running Selenium server (remote control), which in turn launches a browser and works its magic. There are two options here: the Selenium server might be running in the background (and always be available), or it might be started right before the tests and shut down afterwards. The latter is the common case with Maven: the pre-integration-test phase is configured to launch Selenium RC, the integration-test phase runs the tests against it, and the post-integration-test phase shuts Selenium RC down.
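To make that concrete, a JUnit test that talks to a running Selenium RC server typically looks something like this sketch (host, port, browser string, application URL, and the asserted text are all placeholders):

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import com.thoughtworks.selenium.DefaultSelenium;

// Minimal JUnit test that assumes a Selenium RC server is already
// listening on localhost:4444; all concrete values are placeholders.
public class SmokeTest {

    private DefaultSelenium selenium;

    @Before
    public void setUp() {
        selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                "http://localhost:8080/");
        selenium.start(); // asks the RC server to launch the browser
    }

    @Test
    public void homePageLoads() {
        selenium.open("/");
        selenium.waitForPageToLoad("30000");
        assertTrue(selenium.isTextPresent("Welcome")); // placeholder assertion
    }

    @After
    public void tearDown() {
        selenium.stop(); // closes the browser session
    }
}
```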
So up to this point we want to learn which targets (Ant) or phases/goals (Maven) your IDE invokes when it launches your tests successfully (and also which variables it sets or which profiles it enables).
If you invoke the same command from cmd (like 'mvn clean integration-test') and it runs successfully (same as in the IDE), then just instruct Jenkins to run the same goals/targets (I assume Jenkins is running on the same machine, in the same user session).
If cmd doesn't do the trick (and you've looked closely into what the IDE does for you when it launches your tests), then I'd ask for more details.
So, the involved participants are: 0. Ant/Maven, which runs your JUnit tests; 1. Selenium RC, which should be running in the background or be launched by Ant/Maven first; 2. the browser (path to the browser executable); 3. Jenkins (which is assumed to be running in the same environment).
If any of these assumptions are false, please come back with more details of your setup.
Disclaimer: I'm not at all a skilled Jenkins user.
I was looking for a plugin that would let me run only the tests affected by the latest commits in Maven-built projects, but I could find neither a Maven plugin smart enough to avoid rerunning all the tests nor a Jenkins plugin that does this. Is there a way to achieve this?
Say, for example, I have a project consisting of A.java, testA.java, B.java, and testB.java. I have Jenkins configured to run a build and test for each commit. I have just committed a change to A.java but not to B.java, and there is no dependency from testB.java or B.java to A.java. I don't want Jenkins to run testB.java when it checks out fresh sources. If there were a dependency from B.java or testB.java to A.java, I'd like Jenkins to figure this out and run testB.java too.
Background: running all the tests in a project is too time-consuming, running only a fixed set of tests leaves holes for bugs, and picking by hand all the tests that might be broken by a change is error-prone.