When I run unit tests with coverage, I can see that the lines are covered. But when I commit, SonarQube shows those lines as uncovered. How can I configure SonarQube to measure coverage for unit tests written with PowerMockito?
First of all, read the SonarJava docs - everything you need to know is in there :D
Short outline:
you need to generate a coverage report, e.g. with JaCoCo or Cobertura
you need to provide a property with the path to those reports, e.g. for JaCoCo: sonar.jacoco.reportPaths=<path>
you run the analysis and Sonar will use those reports
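As a sketch of the property step, for a typical Maven project the analysis properties might look like this (the paths are common Maven defaults, not taken from the question - adjust them to your build):

```
# sonar-project.properties (illustrative paths)
sonar.sources=src/main/java
sonar.tests=src/test/java
sonar.java.binaries=target/classes
sonar.jacoco.reportPaths=target/jacoco.exec
```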
Related
Is it possible to run JUnit code coverage on a TestSuite in IntelliJ? I am using IntelliJ 2020.1.
I am able to run code coverage on JUnit tests when running them from IntelliJ against the test directory - just right-click and select 'Run Tests in xxxx with Coverage'. That works fine, and you can see the output on the right-hand side of the screenshot below.
When I run the TestSuite, however, I don't see any code coverage stats and can't see how to generate them. The screenshot below shows the Run Configuration form for the TestSuite, including the middle tab for Code Coverage. Does this form tab need to be configured?
Please see @Olga's comment above - this functionality should work using the same approach as for a normal JUnit test class, i.e. right-click the test suite name in the Project Folders pane and select the 'Run XXX with Code Coverage' option.
Is it possible to run a specific test within my selenium-side-runner test suite? For example, within a test suite, my first test logs me into a website, and the other tests then exercise specific areas of the site. Each of those tests first inherits the login test to authenticate the "user". But when I run the suite, it runs the tests in order, so it first runs the login test and then reruns it within each of my other tests. Hope this makes sense. So essentially I want to be able to specify which tests to run within my test suite. Thanks in advance.
You may use the filter to run tests that have a common name:
Filter tests
You also have the option to run a targeted subset of your tests with the --filter target command flag (where target is a regular expression value). Test names that contain the given search criteria will be the only ones run.
[example] selenium-side-runner --filter smoke
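Since target is a regular expression, you should also be able to anchor it to match one test exactly; the test name below is hypothetical:

```
# run only the test whose name matches the anchored regex (hypothetical name)
selenium-side-runner --filter "^login$"
```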
I have a Bamboo job for MuleSoft code, with steps for code checkout, build, test (which generates the coverage report), and deploy.
The test step fails with the message 'Failing task since test cases were expected but none were found.' while the coverage report is generated in a specific path.
When I remove the specific path, the job succeeds but coverage is not generated.
I tried enabling 'maven return code', which makes the job succeed, but then I can't see the coverage report.
Give the path **/target/*, enable maven return code - job succeeds, coverage not generated.
Give the path **/target/*, disable maven return code - job fails, coverage generated.
I found the fix: I had to upgrade the MUnit plugin to a higher version. This fixed the issue!
This may be a very late answer, but I got the same issue with Java + Maven + TestNG + Bamboo and resolved it in the respective Bamboo task.
The option below should be unchecked:
The build will produce test results - uncheck
Maven task configuration
I'm cleaning up an Android project that has several hundred Espresso tests of which several are flaky. Getting 100% instrumented test success on a regular basis isn't realistic in this context.
My Gradle task for Jacoco code coverage works perfectly (!) when all the Espresso tests pass (achievable by skipping the flaky ones). When there are test failures, however, the Jacoco report shows 0% coverage even though the Gradle test report has the correct data. It looks like the generated coverage.ec file is 0 bytes.
The connected<flavor>DebugAndroidTest task has been configured with ignoreFailures = true, so the build no longer halts when there are failing tests.
Is there any way to get the test task to correctly generate the coverage.ec file in spite of failures?
I am trying to use go test -cover to measure the test coverage of a service I am building. It is a REST API and I test it by spinning it up, making test HTTP requests, and reviewing the HTTP responses. These tests are not part of the service's packages, and go tool cover reports 0% test coverage. Is there a way to get the actual coverage? I would expect a best-case test of a given endpoint to cover at least 30-50% of the code for that endpoint's handler, and adding more tests for common errors to improve this further.
I was pointed at the -coverpkg flag, which does what I need - it measures test coverage in a particular package even for tests that use this package but are not part of it. For example:
$ go test -cover -coverpkg mypackage ./src/api/...
ok /api 0.190s coverage: 50.8% of statements in mypackage
ok /api/mypackage 0.022s coverage: 0.7% of statements in mypackage
compared to
$ go test -cover ./src/api/...
ok /api 0.191s coverage: 71.0% of statements
ok /api/mypackage 0.023s coverage: 0.7% of statements
In the example above, I have tests in main_test.go, which is in package main and uses package mypackage. I am mostly interested in the coverage of package mypackage, since it contains 99% of the business logic in the project.
I am quite new to Go, so it is quite possible that this is not the best way to measure test coverage via integration tests.
You can run go test in a way that creates coverage HTML pages, like this:
go test -v -coverprofile cover.out ./...
go tool cover -html=cover.out -o cover.html
open cover.html
As far as I know, if you want coverage you need to run go test -cover.
However, it is easy enough to add a flag that enables these extra tests, so you can make them part of your test suite without running them by default.
Add a command-line flag in your whatever_test.go:
var integrationTest = flag.Bool("integration-test", false, "Run the integration tests")
Then in each test do something like this:
func TestSomething(t *testing.T) {
    if !*integrationTest {
        t.Skip("Not running integration test")
    }
    // Do some integration testing
}
Then, to run the integration tests:
go test -cover -integration-test