I have a Jenkins project that performs a sanity check on a couple of independent documents. The check results are written in JUnit XML format.
When a test for one document fails, the entire build fails. Jenkins can easily be configured to send an email to the committer in this situation, but I want to notify committers only when a new test fails or a previously failing test is fixed by their commit. They are not interested in failing tests for documents they have not edited. The email should contain only information about the changes in the test results, not the full test report. Is it possible to send this kind of notification with any currently available Jenkins plugins? What would be the simplest way to achieve this?
I had the same question today: I wanted to configure Jenkins to send notifications only when new tests fail.
What I did was install the email-ext plugin.
It provides a special trigger called Regression ("An email will be sent any time there is a regression. A build is considered to regress whenever it has more failures than the previous build.").
Regarding fixed tests, there is the Improvement trigger ("An email will be sent any time there is an improvement. A build is considered to have improved whenever it has fewer failures than the previous build.").
I guess this is what you are looking for. Hope it helps.
There's the email-ext plugin. I don't think it does exactly what you want (e.g. sending emails only to committers who changed a file responsible for a failure), but you might be able to work around that or extend the plugin.
Also have a look at the new Emailer, which talks about new email functionality in core Hudson that is based on the aforementioned plugin.
SonarQube: Enterprise Edition Version 9.2.4 (build 50792)
Sonar client: 4.7.0.2747
The scan is launched for a merge request in GitLab, and I am requesting coverage for the pull request.
Immediately after the scan (using the scanner client) finishes, I try to get the coverage with the following call:
http://<host>/api/measures/component?metricKeys=coverage&component=<component-key>&pullRequest=<pull-request-id>
I am getting:
404: {"errors":[{"msg":"Component '<component-key>' of pull request '<pull-request-id>' not found"}]}
Interestingly, if I put a short sleep (1 second) after the scan finishes and before I make the call to get coverage, everything is fine.
It seems to have something to do with the fact that it's a new pull request: even though the scan is finished and a link to the results is generated, it still takes some time before the API call I mentioned can return coverage. Also, if I retry the operation (scan and get results) on an already existing pull request, there are no issues like this.
Could you please elaborate on this issue: is such behavior expected, or is there some other way I can get coverage right after the scan is finished without adding any sleeps?
As a side observation, under the same circumstances, if I scan a new pull request and call another API (/issues/search) to get the list of detected issues, it works without any additional sleeps.
Thank you.
After the call from the scanner client completes, SonarQube executes a "background task" in the project that finalizes the computations of measures. When the background task is complete, your measures will be available. This is why adding a "sleep" appears to work for you. In reality, it's just luck that you're sleeping long enough. The proper way to do this is to either manually check the status of the background task, or use tools that check for the background task completion under the covers.
If you're using Jenkins pipelines, and you have the "webhook" properly configured in SonarQube to notify completion of the background task, then the "waitForQualityGate" pipeline step does this, first checking to see if the task is already complete, and if not, going into a polling loop waiting for it to complete.
The machinery uses the "report-task.txt" file that should be written by the scanner. This is in the form of a Java properties file, but there's only one property in the file that you care about, which is the "ceTaskId" property. That is the id of the background task. You can then make an API call to "/api/ce/task?id=<task-id>", which returns a block that tells you whether the background task is complete or not.
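For illustration, here is a minimal polling sketch in TypeScript (Node 18+ for the built-in fetch, run as an ES module). The host, token, component key, pull request number, and report-task.txt path are assumptions for the example; the endpoints are the /api/ce/task and /api/measures/component calls described above.

import { readFileSync } from "node:fs";
import { setTimeout as sleep } from "node:timers/promises";

const host = "https://sonarqube.example.com";          // assumption: your SonarQube base URL
const token = process.env.SONAR_TOKEN ?? "";           // assumption: a token with access to the project
const auth = { Authorization: "Basic " + Buffer.from(token + ":").toString("base64") };

// 1. Read the ceTaskId from the report-task.txt written by the scanner.
const report = readFileSync(".scannerwork/report-task.txt", "utf8");
const taskId = report.match(/^ceTaskId=(.+)$/m)?.[1]?.trim();
if (!taskId) throw new Error("ceTaskId not found in report-task.txt");

// 2. Poll /api/ce/task until the background task reaches a terminal state.
let status = "PENDING";
while (status === "PENDING" || status === "IN_PROGRESS") {
  await sleep(2000);
  const res = await fetch(`${host}/api/ce/task?id=${taskId}`, { headers: auth });
  status = (await res.json()).task.status;             // SUCCESS, FAILED or CANCELED
}
if (status !== "SUCCESS") throw new Error(`Background task ended with status ${status}`);

// 3. Only now ask for the measures of the pull request.
const measures = await fetch(
  `${host}/api/measures/component?metricKeys=coverage&component=my-project&pullRequest=42`,
  { headers: auth }
);
console.log(await measures.json());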
With the system under test, when a new user is added, their initial password is emailed to them.
I could split my test into multiple sections with manual intervention (to retrieve the password from the email), but this is less than ideal.
I'd appreciate any suggestions on how to proceed using TestCafe, as I'm sure others have encountered this as well.
If you run a full integration test with a real email server, then you can use libraries like "mail-receive" to connect to that server and verify the email.
You can also run your backend/server logic in mock mode and then verify on the mock that the send event happened, by calling some test-specific REST endpoint from your TestCafe test.
Alternatively, you could use something like "smtp-receiver" to start your own mock email server in a Node.js context and receive an event when an email arrives. However, you will need to configure your app server/backend to point to this mocked email server.
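To sketch the last option: the example below uses the smtp-server package from the Nodemailer project (a substitution on my part for illustration; the "smtp-receiver" package mentioned above follows a similar pattern). The port and the idea of polling the captured messages from the TestCafe test are assumptions.

import { SMTPServer } from "smtp-server";      // npm install smtp-server

// Collect every message the application sends so the test can inspect it.
const received: string[] = [];

const server = new SMTPServer({
  authOptional: true,                           // accept mail without credentials
  onData(stream, session, callback) {
    let raw = "";
    stream.on("data", (chunk: Buffer) => (raw += chunk.toString()));
    stream.on("end", () => {
      received.push(raw);                       // store the raw message source
      callback();
    });
  },
});

// Assumption: the app under test is configured to send mail to localhost:2525.
server.listen(2525);

// A TestCafe test could then poll the `received` array (or a small HTTP endpoint
// wrapping it) and extract the initial password with a regular expression.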
I have a few NiFi process groups which I want to run integration tests on before promoting to production. The issue is that I can't seem to find any documentation on how to do so.
Data Provenance seems like a promising tool to accomplish what I want; however, over the course of the flowfile's lifecycle, data is published to/from Kafka or the file system. As a result, the flowfile UUID changes, so I cannot query for it using the nifi-api.
Additionally, I know that NiFi offers a TestRunner library for running tests; however, this seems to be only for processors/process groups constructed in code, not via the UI.
Does anyone know of a tool, framework, or pattern for integration and unit testing NiFi process groups? Ideally this would be a solution where you can programmatically compare the input/output of the processor/process group without modifying the existing workflow.
With the introduction of the Apache NiFi Registry, we have seen users promote flows from a development/sandbox environment to a test/QE environment where there are existing "test harness" flows surrounding the "flow under test", so that they can send repeatable and deterministic data (or an anonymized sample of real production data) through the flow and compare the results to an expected value.
As you point out, there is a TestRunner class and a whole testing framework provided for unit tests. While it can be difficult to manually translate a UI-constructed flow to the programmatic construction, you could also create something like a translator to accept a flow template or flow.xml.gz file and convert it into something processable by the test framework.
Maybe plumber will help you with flow testing.
We also wanted to test whole NiFi flows, not just a single processor, so we created this library and decided to open-source it.
Simple example in Scala:
// read flow previously exported from NiFi
val template = TemplateDeserializer.deserialize(this.getClass.getClassLoader.getResourceAsStream("exported-flow.xml"))
val flow = NifiTemplateFlowFactory(template).create()
// enqueue some data to any processor
flow.enqueueByName("csv row,12,another value,true", "CsvParserProcessor")
// run entire flow once
flow.run(1)
// get the results from any processor
val records = flow.resultsFromProcessorRelation("LastProcessorInFlow", "successRelation")
records should have size 1
This library is still under development, so improvements and ideas are welcome! :)
I was tasked with creating a health check for our production site. It is a .NET MVC web application. There are a lot of dependencies and therefore points of failure, e.g. a document repository, Java web services, a SiteMinder policy server, etc.
Management wants us to be the first to know if any point ever fails. Currently we are playing catch-up when a problem arises, because it is the client that informs us. I have written a suite of simple Selenium WebDriver based integration tests that test the sign-in and a few light operations, e.g. retrieving documents via the document API. I am happy with the result, but I need to be able to run them in a loop and notify IT when any of them fails.
We have a TFS build server, but I'm not sure if it is the right tool for the job. I don't want to continuously build the tests, just run them. Also, it looks like I can't define a build schedule more frequent than daily.
I would appreciate any ideas on how best to achieve this. Thanks in advance.
What you want to do is called a suite of "Smoke Tests". Smoke Tests are basically very short and sweet, independent tests that test various pieces of the app to make sure it's production ready, just as you say.
I am unfamiliar with TFS, but I'm sure the information I can provide will still be useful and transferable.
When you say "I don't want to build the tests, just run them": any CI tool you use needs to build them in order to run them. "Building" here basically equates to "compiling"; for your CI to actually run the tests, it first has to compile them.
As far as running them goes, if the TFS build system is worth anything, it will have a periodic build option. In Jenkins, I can specify a cron schedule. For example:
0 0 * * *
means "run at 00:00 every day (midnight)"
or,
30 5 * * 1-5
which means "run at 5:30 every weekday".
Since you are writing smoke tests, it's important to remember to keep them short and sweet. Smoke tests should test one thing at a time. For example:
testLogin()
testLogout()
testAddSomething()
testRemoveSomething()
A web application health check is a very important feature. Smoke tests can be very useful for working out whether your website is running or not, and they can be automated to run at intervals to notify you that there is something wrong with your site, preferably before the customer notices.
However, where smoke tests fall short is that they only tell you that the website does not work; they do not tell you why. Because you are making external calls just as the client would, you cannot see the internals of the application: is the database down, is it a network issue or disk space, is a remote endpoint not functioning correctly?
Now, some of these things should be identifiable from other monitoring, and you should definitely have an error log, but sometimes you want to hear it from the horse's mouth, and the best thing to tell you how your application is behaving is the application itself. That is why a number of applications have a baked-in health check that can be called on demand.
Health Check as a Service
The health check services I have implemented in the past are all very similar and they do the following:
Expose an endpoint that can be called on demand, e.g. /api/healthcheck. Normally this is private and is not accessible externally.
It returns a JSON response (sketched after this list) containing:
the overall state
the host that returned the result (if behind a load balancer)
the application version
a set of sub-system states (these indicate which component is not performing)
The service should be resilient: any exception thrown while checking should still end with a health check result being returned.
Some sort of aggregate that can present a number of health check endpoints in one view
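As a rough illustration of that response shape only (this is not the HealthNet implementation; the Express framework, port, and check names below are assumptions), a minimal endpoint could look like this in TypeScript:

import express from "express";                   // npm install express
import os from "node:os";

type CheckResult = { name: string; healthy: boolean; message?: string };

// Hypothetical sub-system check; each check swallows its own exceptions so the
// endpoint always returns a result.
async function checkDatabase(): Promise<CheckResult> {
  try {
    // e.g. run a trivial query against the database here
    return { name: "database", healthy: true };
  } catch (err) {
    return { name: "database", healthy: false, message: String(err) };
  }
}

const app = express();

app.get("/api/healthcheck", async (_req, res) => {
  const checks = await Promise.all([checkDatabase() /*, checkDocumentRepository(), ... */]);
  res.json({
    healthy: checks.every((c) => c.healthy),      // the overall state
    host: os.hostname(),                          // which node answered, behind a load balancer
    version: process.env.APP_VERSION ?? "dev",    // the application version
    checks,                                       // the sub-system states
  });
});

app.listen(3000);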
Here is one I made earlier
After doing this a number of times, I have started a library to take care of the main wire-up of the health check and expose it as a service. Feel free to use it as an example or use the NuGet packages.
https://github.com/bronumski/HealthNet
https://www.nuget.org/packages/HealthNet.WebApi
https://www.nuget.org/packages/HealthNet.Owin
https://www.nuget.org/packages/HealthNet.Nancy
I was wondering if there was a way to do this in Hudson (or with any of the various plugins). My IDEAL scenario:
I want to trigger a build of a job through a REST-like API and have it return a job ID. Afterwards, I would like to poll this ID to see its status. When it is done, I would like to see the status and the build number.
Now, since I can't seem to get that working, here is my current solution that I have yet to implement:
When you make a REST call to trigger a build, it's not very RESTful. It simply returns HTML, and I would have to do some parsing to get the job ID. Alternatively, I can make a REST call for the history listing all the builds, and the latest one would be the one I just triggered. Once I have that, I can poll the console output for the output of the build.
Anyone know a way I can implement my "ideal" solution?
Yes, you can use the Hudson Remote API for this (as @Dan mentioned). Specifically, you need to configure your job to accept remote triggers (Job Configuration -> Build Triggers -> Trigger builds remotely) and then you can fire off a build with a simple HTTP GET to the right URL.
(You may need to jump through a couple additional hoops if your Hudson requires authentication.)
I'm able to start a Hudson job with wget:
wget --auth-no-challenge --http-user=test --http-password=test "http://localhost:8080/job/My job/build?token=test"
This returns a bunch of HTML containing the build number (e.g. #20), which you could parse. The build number can then be used to query whether the job is done/successful.
You can examine the Hudson Remote API right from your browser for most of the Hudson web pages that you normally access by appending /api (or /api/xml to see the actual XML output), e.g. http://your-hudson/job/My job/api/.
Update: I see from your question that you probably know much of what I wrote. It is worth exploring the built-in Hudson API documentation a bit. I just discovered this tidbit that might help.
You can get the build number of the latest build (as plain text) from the URL: http://your-hudson/job/My job/lastBuild/buildNumber
Once you have the build number, I think the polling and job status is straightforward once you understand the API.
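For example, here is a minimal polling sketch in TypeScript (Node 18+ for the built-in fetch, run as an ES module). The host, job name, credentials, token, and polling interval are assumptions; the endpoints are the ones described above.

const base = "http://your-hudson/job/My%20job";               // assumption: your job URL
const auth = { Authorization: "Basic " + Buffer.from("test:test").toString("base64") };

// Trigger the build (the job must have "Trigger builds remotely" enabled).
await fetch(`${base}/build?token=test`, { headers: auth });

// Read the latest build number from the plain-text endpoint mentioned above.
// Caveat (see the follow-up below): lastBuild is not guaranteed to be the build
// you just triggered if other builds are queued or running.
const numberRes = await fetch(`${base}/lastBuild/buildNumber`, { headers: auth });
const buildNumber = (await numberRes.text()).trim();

// Poll the build's JSON API until it stops building, then read the result.
let info: { building: boolean; result: string | null };
do {
  await new Promise((resolve) => setTimeout(resolve, 5000));
  const res = await fetch(`${base}/${buildNumber}/api/json`, { headers: auth });
  info = await res.json();
} while (info.building);

console.log(`Build #${buildNumber} finished with result ${info.result}`);  // e.g. SUCCESS or FAILURE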
And what if you don't want the latest build number, but rather the build number of the build that was triggered by hitting the build URL?
As far as I can tell, hitting that URL returns a 302 that redirects you to the job's main page, with no indication whatsoever of the build number of the build you triggered.