How to run Clojure code before each "lein test"?

How can I run some Clojure code before tests in test files are run?
I'd like some piece of Clojure code to be run either before all the tests (say, by running lein test at the root of my Leiningen project) or before individual tests. I don't want to duplicate that piece of code in several .clj files.
How can I do that? Is there some "special" .clj file that can be run before any test is run?

You probably want to use test fixtures. This question has a good answer that can get you started.
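For reference, a minimal sketch of what a clojure.test fixture looks like (the namespace and the println bodies here are placeholders):

(ns com.myproject.some-test
  (:require [clojure.test :refer [deftest is use-fixtures]]))

;; A fixture is just a function that receives the test run as a callback.
(defn with-setup [f]
  (println "runs before this namespace's tests")
  (f)                                  ; run the tests
  (println "runs after this namespace's tests"))

;; :once wraps the namespace's whole test run; :each would wrap every deftest.
(use-fixtures :once with-setup)

(deftest sample-test
  (is (= 1 1)))

Note that :once fixtures are registered per namespace, so code that must run exactly once before all tests still has to be wired into each test namespace (or into a custom test runner).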

Related

How to cleanup files created by GTest tests run with CMake

I have some tests written with gtest and I run them with ctest from CMake.
As a result of the tests, files are created that should be removed before the next run.
There are two approaches I see:
Remove the files as part of test execution, from the test binary itself (calling system() from the fixture doesn't feel right)
Remove the files as part of the make test run (which I have no idea how to do properly)
So which approach is better and what is the best way to do it?

Why is the Karate runner file not run before the feature files when running a JUnit 4 *Test.java file?

Karate suggests that, to run all tests in a CI environment, a *Test.java file should be added above the feature files in the directory hierarchy and then run using the mvn test command.
I am using my Runner.java file to create test data before the tests run and then to clean up afterwards. I run this runner file in the IDE and everything works fine: the data is created, all feature files in the same package run, and then the clean-up is performed. The reason I used the Runner file to create the data is that I am using Karate itself to create the test data, and the Runner file passes some information about the created data on to the feature files that run the API tests. I had earlier posted a question about how to achieve this; please refer to this answer: https://stackoverflow.com/a/55931786/4741035
So now I have a *Test.java file in my project, which I run using mvn test. This runs all the feature files, and the tests fail because Runner.java is not executed at all.
Why doesn't Karate run the Runner file first, before the feature files, when it is present?
Help is much appreciated.
If you are trying to run something "once" before all your tests, use karate.callSingle(), documented here: https://github.com/intuit/karate#hooks
var result = karate.callSingle('classpath:demo/headers/common-noheaders.feature', config);
And in the above feature (or JS) you can call Java code using Java interop.
By the way, given the approaches above, I don't agree with the answer you linked.

How to run Clojure tests from clj (not from lein or boot)?

In a project with the traditional lein project structure, how can I use simply clj to run the tests in the test folder?
Update:
After the REPL was mentioned, I would like to clarify that I'm trying to do this from the system shell with the clj command, not from the REPL, and not with lein or boot.
Check out the test-runner from Cognitect. https://github.com/cognitect-labs/test-runner
Once you add the alias, you should be able to run the tests via:
clj -Atest
If you need to configure the test directory:
clj -Atest -d path/to/tests
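For completeness, here is a sketch of the deps.edn alias this assumes (the git sha is a placeholder; take the current coordinates and main namespace from the test-runner README):

;; deps.edn at the project root
{:aliases
 {:test {:extra-paths ["test"]
         :extra-deps  {com.cognitect/test-runner
                       {:git/url "https://github.com/cognitect-labs/test-runner"
                        :sha     "<commit sha from the README>"}}
         :main-opts   ["-m" "cognitect.test-runner"]}}}

With that alias in place, clj -Atest picks up the :main-opts and runs the test-runner over the test directory.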
Alternatively, you can run tests from the REPL:
; all tests
(clojure.test/run-all-tests)
; all tests in one file
(clojure.test/run-tests 'com.myproject.test.routes.api_test)
; one particular test
(clojure.test/test-vars [#'com.myproject.test.routes.api_test/id-test])
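Note that the test namespaces must be loaded before any of the above will find them; a minimal sketch, reusing the example namespace from above:

; load (or reload) the test namespace first
(require 'com.myproject.test.routes.api_test :reload)
; run-all-tests also accepts a regex to restrict which namespaces run
(clojure.test/run-all-tests #"com\.myproject\..*")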
You can find some additional information in the official documentation.
It's also very convenient to run tests with the Cursive plugin in IntelliJ IDEA.

Webstorm: How to Run Test Setup for Whole Suite AND Individual Tests?

WebStorm has great test-running support, which I can use to start my test suite by telling it to "run the file testStart.js". I can then define testStart.js to do the setup for my test environment (e.g. creating a Sinon sandbox) and to bring in all the tests themselves.
That works great when I run the whole suite. But WebStorm has a feature that lets you re-run just a single failing test, and when I try to use that feature I run into a problem: my test setup code doesn't get run, because the individual test file doesn't invoke the setup code.
So, I'm looking for a solution. The only options I see so far are:
instead of having a separate testStart.js file, I could move the setup code into a testSetup.js file and make every test require it. DOWNSIDE: I have to remember to import the setup file in every single test file (vs. never having to import it in my current scheme)
use Mocha's --require option to run a testSetup.js. DOWNSIDE: code require-d this way doesn't have access to Mocha's globals, so I'm not sure how I can call beforeEach/afterEach
use some other Mocha or Webstorm option that I don't know about to run the test setup code. DOWNSIDE: Not sure if such an option even exists
If anyone else has run into this problem, I'd love to hear whether any of the above solutions can be made to work (or if there's another solution I haven't considered).
I wound up just importing testSetup.js into every test file. It was a pain and violated the DRY principle, but it worked.
If anyone else has a better solution though I'll happily accept it.

Test executable failing only when run in ctest

When I use the CTest interface to CMake (add_test(...)) and run the make target make test, several of my tests fail. When I run each test directly at the command line from the binary build folder, they all pass.
What can I use to debug this?
To debug, you can first run ctest directly instead of make test.
Then you can add the -V option to ctest to get verbose output.
A third neat trick from a CMake developer is to have ctest launch an xterm shell. Add
add_test(run_xterm xterm)
at the end of your CMakeLists.txt file. Then run make test and it will open an xterm. See if you can reproduce the failure by running the test from that xterm. If it does fail, check your environment (i.e. run env > xterm.env from the xterm, then run env > regular.env from your normal session, and diff the outputs).
I discovered that my tests were written to look for external files via a path relative to the top of the CMake binary output folder (i.e. the one where you type make test). However, when you run a test through ctest, the current working directory is the binary folder for that particular subdirectory, and so the test failed.
In other words:
This worked
test/mytest
but this didn't work
cd test; ./mytest
I had to fix the unit tests to use an absolute path to the configuration files they needed, instead of a relative path like ../../../testvector/foo.txt.
The problem with combining CTest and googletest is that CTest assumes it runs one command per test case, whereas a single test executable will potentially run a lot of different test cases. So when you use add_test with a Google Test executable, CTest reports one single failure whether the actual number of failed test cases is 1 or 1000.
Since you say that running your test cases in isolation makes them pass, my first suspicion is that your tests are somehow coupled. You can quickly check this by randomizing the test execution order using --gtest_shuffle and seeing whether you get the same failures.
I think the best approach to debugging your failing test cases is not to use CTest, but to run the test executable directly, using its command-line options to filter which test cases actually run. I would start by running only the first test that fails, together with the test that runs immediately before it when the whole suite is run.
Other useful tools for debugging your test cases are SCOPED_TRACE and extending your assertion messages with additional information.