I have manually instrumented my code using:
istanbul instrument src --o temp --es-modules --config=.istanbul.yml.
This is my .istanbul.yml:
instrumentation:
  excludes: ['*.spec.js']
  extensions: ['.js', '.jsx']
Once it is instrumented, I run the e2e tests using Selenium inside IntelliJ, using the Run with Coverage button.
The tests pass, but at the end it only gives me coverage information for the *.e2e.js files and not the actual *.jsx files that the e2e tests exercise.
Any ideas?
The JavaScript is executed in the browser, not by the test runner, so only the code that runs in the test-runner process is included in the coverage. You need to instrument the front-end code, send the instrumented code to the browser, and collect the coverage from the browser.
Here is how it could work with istanbul and Selenium:
1. Instrument your front-end code with the istanbul instrument command. (As far as I know, istanbul instrument writes the instrumented code out to disk, whereas istanbul cover does everything in memory.)
2. Instead of serving the original JS code to the browser, serve the instrumented JS code (see the first sketch below). The really nice thing here with Istanbul is that you don't have to manually modify your source code at all to make this work; Istanbul does almost all of the work for us in the browser, automatically.
3. Run your Selenium-based tests, and for each individual driver in your tests, run a hook that sends the coverage results from the browser back to the test process (see the second sketch below).
4. Once you have the coverage data in the test process, you can do whatever you want with it. In this case, we HTTP POST the data to a server which can interpret and display the coverage results.
And that's it!
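For step 2, one way to do it, if a small Node/Express dev server serves your JS during the e2e run, is to point the static route at the instrumented output instead of the original sources. This is only a sketch: the temp directory matches the instrument command in the question, and the E2E_COVERAGE variable is a made-up toggle.

// sketch: serve the instrumented build instead of the original sources during e2e runs
const express = require('express');
const app = express();

const jsRoot = process.env.E2E_COVERAGE ? 'temp' : 'src'; // hypothetical env toggle
app.use('/js', express.static(jsRoot));

app.listen(3000, () => console.log('serving ' + jsRoot + ' on port 3000'));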
Read the full article: https://medium.com/@the1mills/front-end-javascript-test-coverage-with-istanbul-selenium-4b2be44e3e98
The article goes over all the details of how to set it up.
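For a rough idea of the hook in step 3, here is a sketch using selenium-webdriver. The /coverage endpoint, port, and app URL are made up; istanbul's instrumented code exposes its counters on window.__coverage__.

// sketch: pull istanbul's coverage object out of the browser and POST it to a backend
const { Builder } = require('selenium-webdriver');
const http = require('http');

async function collectCoverage(driver) {
  // instrumented code keeps its counters on window.__coverage__
  const coverage = await driver.executeScript('return window.__coverage__;');
  if (!coverage) return;

  const body = JSON.stringify(coverage);
  const req = http.request({
    host: 'localhost',
    port: 8080,          // hypothetical coverage server
    path: '/coverage',   // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(body) }
  });
  req.on('error', console.error);
  req.end(body);
}

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://localhost:3000'); // placeholder app URL
    // ... run your e2e assertions here ...
  } finally {
    await collectCoverage(driver); // the per-driver hook
    await driver.quit();
  }
})();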
First of all, my terminology may be incorrect for some of these terms, as I am very new to jest, so if that is the case I would love to be corrected to help me learn.
In case I am using the jest terminology incorrectly, here is what I mean:
Test Suite - The entire group of test files I am attempting to run
Test File - The actual .js test file that is being run
Test - The individual 'it' code blocks in each test file.
Currently, I am using a group of around 20 jest test files to test my API endpoints (EPs) for my SQL Server and its corresponding linked server.
To do this, I run an npm command like so in the terminal.
npm run test:file ape/linked -- --env=monke.env
With how jest currently works, if one of the tests in the 20 test files fails, it quits out of the test suite entirely.
I would like it to just fail out of whatever test file it is in, then continue on to the next test file.
I know jest currently has the --bail flag, but enabling it still continues running the rest of the same test file after a failure, which I can't have happen due to the nature of how my linked server relates to my actual SQL Server.
Any help would be greatly appreciated, and since I am new to all of this, let me know if more info needs to be included.
This will be running on various macOS versions as well as an Ubuntu server.
I am using the Karate framework to do API testing. As part of our CI efforts, we send an email at the end of test execution listing a summary of the test results. There is a need to include a screenshot of the test execution counts from the 'overview-feature.html' file.
I did so through the TestRunner.java file - I launched Chrome using Chrome.start() and then used it to take the screenshot. It all works well locally on Windows.
However, when executing on the CI server, which is a Unix box, the chrome executable is not present in the default location (/usr/bin/google-chrome) and hence the connection to localhost fails.
Is there a way we can change the default location of the chrome executable?
PS: Apologies if this was too trivial to be asked.
Yes, Chrome on CI is hard to get right, refer: https://stackoverflow.com/a/62325328/143475 - note that CI boxes are typically "headless" and a browser may not even be installed.
I think the best thing for you is to ZIP the HTML and send it. But I really think you need to work with some CI experts, because the report generation and e-mailing business is normally handled by things like Jenkins. What you are doing is certainly not normal or best practice.
If you really want, there is a Karate Docker container that can give you a proper Chrome instance (see docs) but that is overkill for what you need.
EDIT: The Chrome Java API allows for customization of the executable path and this is in the docs: https://github.com/intuit/karate/tree/master/karate-core#chrome-java-api
It should be something like this:
Chrome.start("/opt/blah/chrome");
This seems like something quite basic, but it doesn't seem to be available in jest automatically.
When I run my jest tests, during the run I get the logging info from the code itself while the tests execute,
and only after the tests are finished do I receive a summary of the spec results (which tests passed and which didn't).
What I'm trying to achieve is to have extra lines in my console that fill in while the tests are running, telling me which test has just finished and what its result was. This would make log debugging so much simpler.
Thanks!
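For reference, this is roughly the kind of per-test reporting I mean - something like what jest's verbose mode produces. A minimal sketch (jest.config.js is just the usual default file name, adjust to your setup):

// jest.config.js - sketch only: verbose reports each individual test as the run progresses
module.exports = {
  verbose: true,
};

The same thing can be passed on the command line as --verbose.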
I am trying to use go test -cover to measure the test coverage of a service I am building. It is a REST API and I am testing it by spinning it up, making test HTTP requests and reviewing the HTTP responses. These tests are not part of the service's packages, and go tool cover returns 0% test coverage. Is there a way to get the actual test coverage? I would expect a best-case test of a given endpoint to cover at least 30-50% of the code in that endpoint's handler, and adding more tests for common errors would improve this further.
I was pointed at the -coverpkg directive, which does what I need - it measures the test coverage in a particular package, even if the tests that exercise that package are not part of it. For example:
$ go test -cover -coverpkg mypackage ./src/api/...
ok /api 0.190s coverage: 50.8% of statements in mypackage
ok /api/mypackage 0.022s coverage: 0.7% of statements in mypackage
compared to
$ go test -cover ./src/api/...
ok /api 0.191s coverage: 71.0% of statements
ok /api/mypackage 0.023s coverage: 0.7% of statements
In the example above, I have tests in main_test.go which is in package main that is using package mypackage. I am mostly interested in the coverage of package mypackage since it contains 99% of the business logic in the project.
I am quite new to Go, so it is quite possible that this is not the best way to measure test coverage via integration tests.
You can run go test in a way that creates HTML coverage pages, like this:
go test -v -coverprofile cover.out ./...
go tool cover -html=cover.out -o cover.html
open cover.html
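Combining this with the -coverpkg approach above, an integration-style run could look something like the following (the package path and directories are the ones from the question, adjust to your layout):

go test -coverpkg mypackage -coverprofile cover.out ./src/api/...
go tool cover -html=cover.out -o cover.html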
As far as I know, if you want coverage you need to run go test -cover.
However, it is easy enough to add a flag which you can pass in to enable these extra tests, so you can make them part of your test suite but not run them normally.
So add a command-line flag in your whatever_test.go:
var integrationTest = flag.Bool("integration-test", false, "Run the integration tests")
Then in each test do something like this:
func TestSomething(t *testing.T) {
	if !*integrationTest {
		t.Skip("Not running integration test")
	}
	// Do some integration testing
}
Then to run the integration tests:
go test -cover -integration-test
I have a dependency on Intern, where we have to spin up a Selenium server and use PhantomJS for our tests. We use Jenkins and may need some more inspection/debug output in the console, but the console.log output from the test files gets suppressed from the terminal/command line.
Is console.log to terminal/command-line supported yet?
How console.log works with intern-runner depends on where your test code is running. Unit tests (specified with suites) run in the browser, so that's where console.log output ends up. There isn't currently a way to get console output out of a browser for unit tests.
Functional tests (specified with functionalSuites) control a browser, but actually run in Node.js, so output from console.log statements in functional tests generally goes to intern's stdout. The exceptions are log statements in execute and executeAsync blocks; since those blocks run in the browser, that's where the log output ends up. You can retrieve browser logs in functional tests using getLogsFor('browser'), but WebDriver log support is inconsistent between browsers.
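For example, a functional test can pull the browser log back into the Node.js process and print it there, roughly like this (an Intern 3 style AMD sketch; the suite name and page URL are placeholders):

// sketch: functional test that forwards browser console entries to intern's stdout
define(function (require) {
  var registerSuite = require('intern!object');

  registerSuite({
    name: 'console demo',

    'grab browser logs': function () {
      return this.remote
        .get('http://localhost:9000/page.html') // placeholder URL
        .execute(function () {
          // this runs inside the browser, so it goes to the browser console
          console.log('hello from the page');
        })
        .getLogsFor('browser')
        .then(function (entries) {
          // this callback runs in Node.js, so these lines do reach the terminal
          entries.forEach(function (entry) {
            console.log('[browser]', entry.message);
          });
        });
    }
  });
});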