How to clean up files created by GTest tests run with CMake

I have some tests written with gtest and I run them with ctest from CMake.
Running the tests creates files that should be removed before the next run.
There are two approaches I see:
Remove the files as part of the test execution, from the test binary itself (calling system() from the fixture doesn't feel right; see the sketch below)
Remove the files during the make test run (which I have no idea how to do properly)
So which approach is better and what is the best way to do it?
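For the first approach, the fixture can delete the files directly instead of shelling out. A minimal sketch (the fixture and file names are hypothetical), assuming C++17's std::filesystem is available:

#include <filesystem>
#include <gtest/gtest.h>

class OutputFileTest : public ::testing::Test {
protected:
    // Hypothetical: the file this test produces.
    const std::filesystem::path out_file{"result.txt"};

    void SetUp() override {
        // Remove leftovers from a previous run.
        std::filesystem::remove(out_file);
    }

    void TearDown() override {
        // Clean up what this run created.
        std::filesystem::remove(out_file);
    }
};

For the second approach, one possibility (assuming CMake 3.7+, with the test and file names again hypothetical) is to register a cleanup command through the FIXTURES_CLEANUP/FIXTURES_REQUIRED test properties:

add_test(NAME cleanup COMMAND ${CMAKE_COMMAND} -E remove -f result.txt)
set_tests_properties(cleanup PROPERTIES FIXTURES_CLEANUP output_files)
set_tests_properties(mytest PROPERTIES FIXTURES_REQUIRED output_files)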

Related

run single unit tests for a module efficiently

I wish all of you a happy new year! I want to write more unit tests in the new year, but how can I make running them easier? I found out that tests can be run while updating a module via the --test-enable and --stop-after-init command-line parameters. I have also read about the --test-file parameter, but it does not work. This parameter is also not described in the docs.
How would you do TDD (test-driven development)? To do that, you have to be able to run tests quickly. Having to test the whole module with all its dependencies makes it impractical to write tests frequently. How do you run a single unit test case?
Edited my own question from 'run single unit test case'. This command works for me:
python ./odoo.py -i module_to_test --log-level=test -d minimal_database --test-enable --stop-after-init
This is quite similar to what danidee answered.
However, the solution seems to be not to use the --test-file parameter, since it unexpectedly runs all the tests of all dependent modules (and who knows what else), which takes far too long.
Another part of the solution is to use a minimal database where only the module to be tested (plus its dependencies, of course) is installed.
Now the above command takes only a few seconds on my machine, even when the tested code uses objects from dependent modules. If only I could prevent the module from being updated on every test run, it would be even faster and more efficient...
It's really difficult to do TDD with Odoo because most modules depend on other modules that in turn depend on other modules, and so on...
What works for me is to have a custom module that installs all the other custom modules I've created.
This same module also contains all the tests for "anything custom" I've done on Odoo. You can place each module's tests within the module itself, but I've had problems where some tests didn't run for no apparent reason, so I decided to place all of them in one module.
I run the tests whenever I push a new commit. This is the command I use (assuming my module is named all_modules):
python odoo.py --addons=addons_path -i all_modules --log-level=test -d Test_Database --test-enable --xmlrpc-port=80xx --stop-after-init
Before doing this, I already have a database (Test_Database) that is a clone of my production environment, so I can run the tests against real data.
For deployment i just use rsync and copy the files over.
My tests are reasonably fast (~5 min) because I'm testing against a cloned DB and because I'm only running tests for the custom modules I've built.
I know this isn't standard TDD, but Odoo doesn't conform to a lot of patterns in software development, and most times you just have to strike a balance and find out what works for you.
Personally, I've found run-bot too complicated and resource-hungry.
PS: It's also helpful to have Selenium tests.

Test executable failing only when run in ctest

When I use the ctest interface to CMake (add_test(...)) and run the make target make test, several of my tests fail. When I run each test directly at the command line from the binary build folder, they all pass.
What can I use to debug this?
To debug, you can first run ctest directly instead of make test.
Then you can add the -V option to ctest to get verbose output.
A third neat trick from a cmake developer is to have ctest launch an xterm shell. So add
add_test(run_xterm xterm)
in your CMakeLists.txt file at the end. Then run make test and it will open an xterm. See if you can reproduce the failure by running the test from that xterm. If it does fail, check your environment: run env > xterm.env from inside the xterm, run env > regular.env from your normal session, and diff the two files.
I discovered that my tests were written to look for external files via a path relative to the top of the binary CMake output folder (i.e. the one where you type make test). However, when you run a test through ctest, the current working directory is the binary folder of that particular subdirectory, so the test failed.
In other words:
This worked
test/mytest
but this didn't work
cd test; ./mytest
I had to fix the unit tests to use an absolute path to the configuration files they needed, instead of a path like ../../../testvector/foo.txt.
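Alternatively, CMake can pin the directory a test runs in, which avoids touching the test code. A minimal sketch (the test name is hypothetical), using the WORKING_DIRECTORY test property:

add_test(NAME mytest COMMAND mytest)
# Run the test from the top of the build tree, so relative paths resolve
# the same way as when invoking test/mytest by hand.
set_tests_properties(mytest PROPERTIES WORKING_DIRECTORY ${CMAKE_BINARY_DIR})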
The problem with ctest and googletest is that ctest assumes one command per test case, while a single test executable will potentially run a lot of different test cases. So when you use add_test with a Google Test executable, CTest reports a single failure whether the actual number of failed test cases is 1 or 1000.
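As an aside, recent CMake versions ship a GoogleTest module that can split a test binary into individual CTest entries; a minimal sketch, assuming CMake 3.10+ and a test target named mytest:

include(GoogleTest)
# Queries the binary for its test list and registers
# one CTest test per Google Test case.
gtest_discover_tests(mytest)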
Since you say that running your test cases isolated makes them pass, my first suspicion is that your tests are somehow coupled. You can quickly check this by randomizing the test execution order using --gtest_shuffle, and see if you get the same failures.
I think the best approach to debugging your failing test cases is not to go through CTest, but to run the test executable directly, using its command-line options to filter which test cases actually run. I would start by running the first failing test together with the test that runs immediately before it in the full suite.
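For example (the test names here are hypothetical):

./mytest --gtest_shuffle
./mytest --gtest_filter=MySuite.TestBefore:MySuite.FailingTest

The first command randomizes execution order to expose coupling; the second runs just the suspected pair in isolation.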
Other useful tools to debug your test cases can be SCOPED_TRACE and extending your assertion messages with additional information.

How to run Clojure code before each "lein test"?

How can I run some Clojure code before tests in test files are run?
I'd like to have some piece of Clojure code be called either before running all the tests (say, by doing lein test at the root of my lein project) or before running individual tests. I don't want to duplicate that piece of code in several .clj files.
How can I do that? Is there some "special" .clj file that can be run before any test is run?
You probably want to use test fixtures. This question has a good answer on it that can get you started.
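For a rough idea of what that looks like (the namespace and names are hypothetical), clojure.test lets you wrap a namespace's tests with use-fixtures:

(ns my.app-test
  (:require [clojure.test :refer [deftest is use-fixtures]]))

(defn with-setup [f]
  ;; code to run before the tests in this namespace
  (println "setting up")
  (f) ; run the tests
  ;; code to run afterwards
  (println "tearing down"))

;; :once wraps the whole namespace; :each wraps every deftest
(use-fixtures :once with-setup)

(deftest sanity
  (is (= 1 1)))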

Organizing Haskell Tests

So I'm trying to follow the suggested structure of a Haskell project, and I'm having a couple of problems organizing my tests.
For simplicity, let's start with:
src/Clue/Cards.hs # defines Clue.Cards module
testsuite/tests/Clue/Cards.hs # tests Clue.Cards module
For one, I'm not sure what to name the module in testsuite/tests/Clue/Cards.hs that contains the test code, and for another, I'm not sure how to compile my test code so that it can link against my source:
% ghc -c testsuite/tests/Clue/Cards.hs -L src
testsuite/tests/Clue/Cards.hs:5:0:
Failed to load interface for `Clue.Cards':
Use -v to see a list of the files searched for.
I myself use the approach taken by the Snap Framework for its test suites, which basically boils down to:
Use a test-framework such as haskell-test-framework or HTF
Name the modules containing tests by appending .Tests to the name of the module containing the IUT (implementation under test), e.g.:
module Clue.Cards where ... -- module containing IUT
module Clue.Cards.Tests where ... -- module containing tests for IUT
By using separate namespaces, you can put your tests in a separate source folder tests/. You can then use a separate Cabal build target (see also the test-suite build-target support in recent Cabal versions) which includes the additional source folder in its hs-source-dirs setting, e.g.:
Executable clue
hs-source-dirs: src
...
Executable clue-testsuite
hs-source-dirs: src tests
...
This works, since there's no namespace collision between the modules in your IUT and the test-suite anymore.
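With the test-suite support mentioned above, the second target could also be declared as a proper test suite; a sketch (the stanza name and Main module are hypothetical):

Test-Suite clue-tests
  type:           exitcode-stdio-1.0
  main-is:        TestMain.hs
  hs-source-dirs: src tests
  build-depends:  base

cabal test then builds and runs it.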
Here's another way:
Each module's unit tests are defined as an HUnit TestList at the end of the module, with some consistent naming scheme such as tests_Path_To_Module. I think this helps me write tests, since I don't have to search for another module far away in the source tree, nor keep two parallel file hierarchies in sync.
A module's test list also includes the tests of any sub-modules. HUnit's runTestTT runner is built into the app, and accessible via a test command. This means a user can run the tests at any time without special setup. Or, if you don't like shipping tests in the production app, use CPP and Cabal flags to include them only in dev builds, or in a separate test-runner executable.
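A minimal sketch of that layout (the module contents are hypothetical):

module Clue.Cards where

import Test.HUnit

deckSize :: Int
deckSize = 52

-- tests live at the bottom of the module they exercise,
-- named after its path per the scheme above
tests_Clue_Cards :: Test
tests_Clue_Cards = TestList
  [ "deck has 52 cards" ~: deckSize ~?= 52 ]

and the app's test command simply calls runTestTT tests_Clue_Cards (a parent module's list would include this one).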
There are also functional tests, one or more per file in the tests/ directory, run with shelltestrunner, and some dev-process-related tests based in the Makefile.
Personally I feel that an extra ./src/ directory doesn't make much sense for small Haskell projects. Of course there's source; I downloaded the source code.
Either way (with or without src), I'd suggest you refactor and have a Clue directory and a Test directory:
./Clue/Cards.hs -- module Clue.Cards where ...
./Test/Cards.hs -- module Test.Cards where ...
This allows GHCi + Test.Cards to see Clue.Cards without any extra args or using cabal. On that note, if you don't use cabal + flags to optionally build your test modules, you should look into it.
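A rough sketch of the flag approach (the flag and target names are hypothetical):

flag tests
  description: Build the test runner
  default:     False

Executable run-tests
  main-is:        RunTests.hs
  hs-source-dirs: . Test
  if !flag(tests)
    buildable: False

so the test executable is only built when you configure with -ftests.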
Another option, which I use in many of my projects, is to have:
./Some/Module/Hierarchy/File.hs
./tests/someTests.hs
And I cabal install the package, then run the tests/someTests.hs stuff. I guess this would be annoying if my packages were particularly large and took a long time to install.
For completeness' sake, it's worth mentioning a very easy approach for small projects: ghci -i. For example, in your case,
>ghci -isrc:testsuite
ghci>:l Clue.Cards
ghci>:l tests.Clue.Cards

Disable testing in cmake

The tests of a project that I'm trying to build fail to build (missing libraries). The tests themselves are not important, but they prevent me from building and installing the essential files, so I want to disable them as a quick fix.
How do I turn off building the tests in a CMake project? Should I edit the CMakeLists.txt file in the root or the one in the tests subdirectory? And how should I edit it?
There is one occurrence of the ENABLE_TESTING() command. I tried commenting it out, but that didn't help. I also tried renaming the tests subdirectory; that did not help either. (That was only true for a special case where "implicit" tests were being built.)
OK, it worked: editing the CMakeLists.txt to turn off all the flags BUILD_TESTING, BUILD_TESTING_STATIC and BUILD_TESTING_SHARED did the trick.
This can most easily be accomplished with this sed command:
sed --in-place=.BACKUP 's/\(BUILD_TESTING[A-Z_]*\) ON/\1 OFF/' CMakeLists.txt
(In one case there were "implicit" tests that had to be taken care of in a more elaborate way.)
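If those flags are ordinary CMake cache options, editing the file may not even be necessary; they can usually be overridden on the command line (whether this particular project honors them that way is an assumption):

cmake -DBUILD_TESTING=OFF -DBUILD_TESTING_STATIC=OFF -DBUILD_TESTING_SHARED=OFF <path-to-source>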