I am trying to use go test -cover to measure the test coverage of a service I am building. It is a REST API, and I test it by spinning it up, making HTTP requests against it and checking the responses. These tests are not part of the service's packages, so go test -cover reports 0% coverage. Is there a way to get the actual coverage? I would expect a best-case test of a given endpoint to cover at least 30-50% of that endpoint's handler code, with tests for common errors improving this further.
I was pointed at the -coverpkg flag, which does what I need: it measures the coverage of a particular package even when the tests that exercise it are not part of that package. For example:
$ go test -cover -coverpkg mypackage ./src/api/...
ok /api 0.190s coverage: 50.8% of statements in mypackage
ok /api/mypackage 0.022s coverage: 0.7% of statements in mypackage
compared to
$ go test -cover ./src/api/...
ok /api 0.191s coverage: 71.0% of statements
ok /api/mypackage 0.023s coverage: 0.7% of statements
In the example above, the tests live in main_test.go, which is in package main and uses package mypackage. I am mostly interested in the coverage of mypackage, since it contains 99% of the business logic in the project.
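Concretely, the setup looks roughly like this (a stripped-down sketch of the idea; the import path and handler name here are made up):
// main_test.go, in package main -- hypothetical sketch of the setup described above
package main

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"myproject/mypackage" // assumed import path
)

func TestUsersEndpoint(t *testing.T) {
	// Spin the API up in-process so the handler code actually executes under test.
	srv := httptest.NewServer(http.HandlerFunc(mypackage.UsersHandler)) // assumed handler
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/users")
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("got status %d, want 200", resp.StatusCode)
	}
}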
I am quite new to Go, so it is quite possible that this is not the best way to measure test coverage via integration tests.
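Edit: newer Go versions also accept package patterns in -coverpkg, so you can ask for coverage across every package at once; I believe this form works:
$ go test -coverpkg=./... ./src/api/...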
You can run go test in a way that produces an HTML coverage report, like this:
go test -v -coverprofile cover.out ./...
go tool cover -html=cover.out -o cover.html
open cover.html
As far as I know, if you want coverage you need to run go test -cover.
However, it is easy enough to add a flag that enables these extra tests, so you can make them part of your test suite without running them by default.
Add a command line flag in your whatever_test.go:
var integrationTest = flag.Bool("integration-test", false, "Run the integration tests")
Then in each test do something like this:
func TestSomething(t *testing.T) {
	if !*integrationTest {
		t.Skip("Not running integration test")
	}
	// Do some integration testing
}
Then, to run the integration tests:
go test -cover -integration-test
I have manually instrumented my code using:
istanbul instrument src --o temp --es-modules --config=.istanbul.yml
This is my .istanbul.yml:
instrumentation:
  excludes: ['*.spec.js']
  extensions: ['.js', '.jsx']
Once it is instrumented, I run the e2e tests with Selenium inside IntelliJ, using the "run with coverage" button.
The tests pass, but at the end it only gives me coverage information for the *.e2e.js files and not for the actual *.jsx files that the e2e tests exercise.
Any ideas?
The JavaScript is executed in the browser, not by the test runner, so only the code executed by the test runner itself shows up in the coverage. You need to instrument the front-end code, serve it to the browser, and collect the coverage from the browser.
Here is how it could work with Istanbul and Selenium:
1. Instrument your front-end code with the istanbul instrument command. (As far as I know, istanbul instrument writes the instrumented code out to disk, whereas istanbul cover does everything in memory.)
2. Instead of sending the original JS code to the browser, send the instrumented JS code. The really nice thing about Istanbul is that you don't have to manually modify your source code at all to make this work; Istanbul does almost all of the work in the browser, automatically.
3. Run your Selenium-based tests, and for each individual driver in your tests, run a hook that sends the coverage results from the browser to the backend test process.
4. Once you have the coverage data in the test process, you can do whatever you want with it. In this case, we HTTP POST the data to a server which can interpret and display the coverage results (a sketch of such a collector follows below).
And that's it!
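Step 4 needs an endpoint that accepts the POSTed coverage JSON. The article runs its own collector, but any small HTTP handler will do; here is a hypothetical sketch in Go (the path, port, and filename are made up):
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Receive the coverage JSON the browser POSTs after each test run and
	// append it to a file for later processing (e.g. with istanbul report).
	http.HandleFunc("/coverage", func(w http.ResponseWriter, r *http.Request) {
		f, err := os.OpenFile("coverage.json", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer f.Close()
		if _, err := io.Copy(f, r.Body); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":8090", nil))
}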
Read the full article: https://medium.com/@the1mills/front-end-javascript-test-coverage-with-istanbul-selenium-4b2be44e3e98
The article goes over all the details how to set it up.
I would like to separate my integration tests from the unit tests. I have read that I can do this by including build tags in the test file:
// +build integration
On the other hand, I select all the packages in my project using the wildcard ./...
Unfortunately, I am having problems: the tags seem to be ignored when combined with the wildcard.
go test ./... -tags=integration
or
go test -tags=integration ./...
Do you have any solution or alternative to it?
Within your integration tests you can use:
func Test_SomeIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping test")
	}
	...
}
And then pass -short flag to the go test command to skip integration tests:
go test -short ./...
Update 2021
This Just Works now! But I did run into an issue where the tags are ignored if there is a TestMain function in the file, so if you're having this issue now, check for that. For the record, I run with:
go test -tags="all your tags here" -v -count=1 ./...
Hopefully this helps someone in the future.
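For reference, a tagged test file looks like this (since Go 1.17 the //go:build form is preferred, and gofmt keeps the older // +build line in sync; the test name below is made up):
//go:build integration
// +build integration

package mypackage_test

import "testing"

// Compiled and run only when -tags=integration is passed to go test.
func TestDatabaseRoundTrip(t *testing.T) {
	// ...integration test body...
}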
Is it expected that benchmarks don't run unless all tests in the package have passed?
I've looked at the testing package doc and the testing flags and I can't find it documented that benchmarks run only after all tests pass.
Is there a way to force benchmark functions to run even when some tests in the package have failed?
You can skip the failing tests using the -run flag, or choose to run none at all:
go test -bench . -run NONE
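To illustrate: with a file like the one below, go test -bench . -run NONE runs BenchmarkParse even though TestParse would fail (names here are made up):
package mypackage

import "testing"

func TestParse(t *testing.T) {
	t.Fatal("failing test that would normally block the benchmarks")
}

// Still runs under: go test -bench . -run NONE
func BenchmarkParse(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = len("some input") // stand-in for the real work being benchmarked
	}
}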
I am using goconvey and other tools to get code coverage.
This produces a coverage report, but it only shows coverage for the test code itself.
The API is hosted on a Golang server.
I would like to know how much of the server-side code is covered by my tests (unit, integration, and system tests).
How should I do this?
Here's what I do:
godep go test -coverprofile cover.out `go list ./... | grep -v vendor`
go tool cover -html=cover.out
That generates a coverage report and then opens a browser window to view it.
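Note that godep predates Go modules; in a module-aware project ./... already excludes vendor/, so the modern equivalent is simply:
go test -coverprofile=cover.out ./...
go tool cover -html=cover.out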
Is it possible to quickly run a single integration test, or all integration tests in a class, in Grails? test-app comes with the heavy baggage of clearing all compiled files and generating Cobertura reports, so even if we run a single integration test, the entire code base is compiled, instrumented, and the Cobertura report is generated. For our application this takes more than 2 minutes.
If it were possible to quickly run one integration test and get rapid feedback, it would be immensely helpful.
Also, is it important to clean up all the compiled files once the test is complete? This cleaning is fine if we run the entire set of integration tests, but if we are only running one or two tests in a class, this cleaning and re-compiling seems to be a big bottleneck for quick developer feedback.
Thanks
If you have an integration test class
class SimpleControllerTests extends GrailsUnitTestCase {
	public void testLogin() {}
	public void testLogin2() {}
	public void testLogin3() {}
}
You can run just one test in this class using:
grails test-app integration: SimpleController.testLogin
However, you will still incur the time penalty required for integration testing (loading config, connecting to the DB, instantiating Spring beans, etc.).
If you want your tests to run quickly, then try to write unit tests rather than integration tests.
It is the intention of an integration test to do this whole cycle of compiling, database creation, server starting, and so on, because the tests should run in an integrated environment, as the name implies.
Maybe you can extract some tests into unit tests; those you can run in Eclipse.
You can switch off Cobertura by placing the following code in your grails-app/conf/BuildConfig.groovy:
coverage {
	enabledByDefault = false
}
Like you stated, the majority of the time goes into setting up the application environment, injecting beans, and doing the dynamic class annotations. You can speed up your integration test cycle by only loading this once: run your tests in the Grails REPL.
However, the tradeoff is that there are dynamic reloading issues in the REPL. If you see random weirdness, exit the REPL and reload.
$> ./grailsw --plain-output
|Loading Grails 2.5.3
|Configuring classpath
|Enter a script name to run. Use TAB for completion:
grails> test-app -integration
... (loads some things)
...
grails> test-app -integration
... (faster loading)
And to reply to the other commenters: integration tests are useful as well; there is some code that cannot be tested with a unit test (for instance, HQL or SQL queries).