Is it expected that benchmarks don't run unless all tests in the package have passed?
I've looked at the testing package docs and the testing flags, and I can't find it documented that benchmarks run only after all tests pass.
Is there a way to force benchmark functions to run even when some tests in the package have failed?
You can skip the failing tests using the -run flag, or choose to run none at all:
go test -bench . -run NONE
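As an illustration (package and function names here are hypothetical), this is a _test.go file where the failing test would normally prevent the benchmark from running; with -run NONE no test name matches, so the filter leaves only the benchmark:

package mypkg

import "testing"

// Fails, so a plain "go test -bench ." would report the failure and not run the benchmark.
func TestAlwaysFails(t *testing.T) {
	t.Fatal("intentionally failing")
}

func BenchmarkWork(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = make([]byte, 1024)
	}
}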
I am working on a Node.js application and would like to know whether there is a way to run all the unit tests from all the sub-modules even when there are some test failures, so I can see how many tests are failing in total and start putting in fixes for them. We use Mocha for our back-end tests and Jest for the UI.
Thanks.
The default behavior for mocha is to run all the tests. If it is exiting after the first test failure, that would suggest that you are using the "bail" option, typically enabled on the command line with either --bail or -b.
Relevant docs: https://mochajs.org/#-bail-b
It can also be caused by passing the option { bail: true } to mocha.setup(). Look in your test runner and in your package.json.
Lastly, the least likely of these possibilities is that it could also be caused by using this.bail() somewhere in the Mocha test runner.
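As a quick sanity check (assuming mocha is installed locally and your specs live in the default test/ directory), you can run the suite straight from the command line with and without the flag and compare how far each run gets:
npx mocha
npx mocha --bail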
WebStorm has a feature that lets you right-click on an it() and run just that test. I use it often in my workflow.
When I choose 'mocha', it runs with the command shown below and is basically instantaneous; Jest takes over 20 seconds, presumably because it's scanning all my files to find a pattern match.
Is there any way to make this faster? There is no question that running all of our tests is faster through Jest... but it's terrible for running individual tests, like when you're debugging.
/usr/local/bin/node /Users/blake/Documents/git/handle/node_modules/mocha/bin/mocha --ui bdd --reporter /Applications/WebStorm.app/Contents/plugins/NodeJS/js/mocha-intellij/lib/mochaIntellijReporter.js /Users/blake/Documents/git/handle/lib/test/helpers/state-abbr-helper.spec.js --grep "^#state-abbr-helper fake test$"
this test did nothing at all...
/usr/local/bin/node --require /Applications/WebStorm.app/Contents/plugins/JavaScriptLanguage/helpers/jest-intellij/lib/jest-intellij-stdin-fix.js /Users/blake/Documents/git/handle/node_modules/jest/bin/jest.js --colors --reporters /Applications/WebStorm.app/Contents/plugins/JavaScriptLanguage/helpers/jest-intellij/lib/jest-intellij-reporter.js --verbose "--testNamePattern=^#state-abbr-helper fake test$" --runTestsByPath /Users/blake/Documents/git/handle/lib/test/helpers/state-abbr-helper.spec.js
console.log lib/test/helpers/state-abbr-helper.spec.js:7
this test did nothing at all...
Check your jest.config.js to see if it's doing anything heavy at test startup, such as something like:
setupFilesAfterEnv: ['<rootDir>/_ui/test/setupTest.js'],
In my case it was coverage that was slowing it down. I solved it by adding
--collectCoverage=false
to the run configuration, overriding the file configuration.
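For reference, the same override works from a plain command line as well; the spec path here is just a placeholder:
npx jest --collectCoverage=false path/to/your.spec.js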
Is there a way to execute testng.xml in dry-run mode so that I can figure out which methods get qualified for the test run? I am using Eclipse and intend to run the tests via testng.xml. How do I configure Run Configurations for this?
I'm a newbie to Selenium/TestNG and Eclipse.
I tried providing -Dtestng.mode.dryrun=true in Run Configurations -> Arguments tab, under both Program arguments and VM arguments.
The run configuration had no effect on the execution; the tests were executed in the normal fashion. I expected the configuration to just list the test methods in the console.
You are going to see all tests pass with no failures. That's what you should expect when you run TestNG with that argument.
To check that the dry-run argument works, make one of your tests fail, then run it with -Dtestng.mode.dryrun=true. Add the argument under "VM arguments".
Also, check that your TestNG version is 6.14 or higher.
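If you want to verify the property outside Eclipse, the same JVM system property can be passed when launching TestNG directly from the command line; the classpath below is only a placeholder:
java -Dtestng.mode.dryrun=true -cp "bin:lib/*" org.testng.TestNG testng.xml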
I need to run Python unit test cases as part of a Bamboo build step, and the build needs to fail if the unit tests fail.
For this, I have a Script step in the Bamboo build and I am trying to run the following in it:
python -m unittest discover /test
Here, the /test folder has all the unit tests.
The output of the above script is:
Ran 0 tests
So the problem is that Bamboo isn't able to discover these tests. The Bamboo agent is Linux.
Wondering if anyone has done such a thing before and has any suggestions.
The following worked. I used the -p (pattern) option to discover and run the unit tests in Bamboo (Unix build agent):
python -m unittest discover -s test -p "T*.py"
Note: 1. All my test files start with "T", e.g. Test_check.py.
2. "test" is the package where all my test cases are.
If you haven't figured it out yet, it's likely because on Windows filenames aren't case sensitive but on Linux they are...
And your test file named Test_xxxx.py doesn't match test*.py, the default pattern that discovery is trying to use...
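Putting the two answers together: unittest's default discovery pattern is test*.py, which on a case-sensitive Linux agent will never match files named Test_*.py, so you either rename the files or override the pattern:
python -m unittest discover -s test              # default pattern test*.py finds nothing here
python -m unittest discover -s test -p "T*.py"   # matches Test_check.py and similar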
I am trying to use go test -cover to measure the test coverage of a service I am building. It is a REST API, and I am testing it by spinning it up, making test HTTP requests, and reviewing the HTTP responses. These tests are not part of the service's packages, so go tool cover reports 0% test coverage. Is there a way to get the actual test coverage? I would expect a best-case test of a given endpoint to cover at least 30-50% of the code for that endpoint's handler, and adding more tests for common errors to improve this further.
I was pointed at the -coverpkg flag, which does what I need: it measures the test coverage of a particular package, even when the tests that use this package are not part of it. For example:
$ go test -cover -coverpkg mypackage ./src/api/...
ok /api 0.190s coverage: 50.8% of statements in mypackage
ok /api/mypackage 0.022s coverage: 0.7% of statements in mypackage
compared to
$ go test -cover ./src/api/...
ok /api 0.191s coverage: 71.0% of statements
ok /api/mypackage 0.023s coverage: 0.7% of statements
In the example above, I have tests in main_test.go, which is in package main and uses package mypackage. I am mostly interested in the coverage of package mypackage, since it contains 99% of the business logic in the project.
I am quite new to Go, so it is quite possible that this is not the best way to measure test coverage via integration tests.
You can run go test in a way that creates an HTML coverage report, like this:
go test -v -coverprofile cover.out ./...
go tool cover -html=cover.out -o cover.html
open cover.html
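If you want the per-package -coverpkg view from the previous answer rendered as HTML as well, the flags can be combined; the package and path below just mirror the example above:
go test -coverpkg mypackage -coverprofile=cover.out ./src/api/...
go tool cover -html=cover.out -o cover.html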
As far as I know, if you want coverage you need to run go test -cover.
However, it is easy enough to add a flag that enables these extra tests, so you can keep them in your test suite without running them by default.
So add a command-line flag in your whatever_test.go:
var integrationTest = flag.Bool("integration-test", false, "Run the integration tests")
Then in each test do something like this:
func TestSomething(t *testing.T) {
	if !*integrationTest {
		t.Skip("Not running integration test")
	}
	// Do some integration testing
}
Then to run the integration tests (note that the custom flag must come after the package list so it is passed to the test binary rather than to the go command):
go test -cover ./... -integration-test
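If you prefer not to worry about flag ordering, go test's documented -args flag passes everything after it to the test binary unchanged, so this form is equivalent:
go test -cover ./... -args -integration-test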