Organizing Haskell Tests

So I'm trying to follow the suggested structure of a Haskell project, and I'm having a couple of problems organizing my tests.
For simplicity, let's start with:
src/Clue/Cards.hs # defines Clue.Cards module
testsuite/tests/Clue/Cards.hs # tests Clue.Cards module
For one, I'm not sure what to name the module in testsuite/tests/Clue/Cards.hs that contains the test code, and for another, I'm not sure how to compile my test code so that I can link it against my source:
% ghc -c testsuite/tests/Clue/Cards.hs -L src
testsuite/tests/Clue/Cards.hs:5:0:
Failed to load interface for `Clue.Cards':
Use -v to see a list of the files searched for.

I myself use the approach taken by the Snap Framework for their test suites, which basically boils down to:
Use a test-framework such as haskell-test-framework or HTF
Name the modules containing tests by appending .Tests to the name of the module containing the IUT (implementation under test), e.g.:
module Clue.Cards where ... -- module containing IUT
module Clue.Cards.Tests where ... -- module containing tests for IUT
By using separate namespaces, you can put your tests in a separate source folder tests/. You can then use a separate Cabal build target (see also the cabal test build-target support in recent Cabal versions) for the test suite, one that includes the additional source folder in its hs-source-dirs setting, e.g.:
Executable clue
hs-source-dirs: src
...
Executable clue-testsuite
hs-source-dirs: src tests
...
This works because there is no longer a namespace collision between the modules of the IUT and the test suite.
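With recent Cabal versions, the same idea can be expressed as a dedicated test-suite target instead of a second Executable; a minimal sketch, where the target name, main module, and dependencies are illustrative:
Test-Suite clue-testsuite
  type:           exitcode-stdio-1.0
  main-is:        TestMain.hs
  hs-source-dirs: src tests
  build-depends:  base, HUnit
cabal test then builds and runs it.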

Here's another way:
Each module's unit tests are defined as an HUnit TestList at the end of the module, with a consistent naming scheme such as tests_Path_To_Module. I think this helps me write tests, since I don't have to search for another module far away in the source tree, nor keep two parallel file hierarchies in sync.
A module's test list also includes the tests of any sub-modules. HUnit's runTestTT runner is built into the app and accessible via a test command, so a user can run the tests at any time without special setup. If you'd rather not ship tests in the production app, use CPP and Cabal flags to include them only in dev builds, or in a separate test-runner executable.
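A sketch of that layout, assuming HUnit (the module contents and the test itself are illustrative):
module Clue.Cards where

import Test.HUnit

-- ... the module's real definitions ...

-- unit tests kept at the end of the module; sub-module test
-- lists would be included alongside the local cases
tests_Clue_Cards :: Test
tests_Clue_Cards = TestList
  [ "an example assertion" ~: (1 + 1 :: Int) ~?= 2
  ]
The app's test command then simply calls runTestTT tests_Clue_Cards.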
There are also functional tests, one or more per file in the tests/ directory, run with shelltestrunner, plus some dev-process-related tests driven by the Makefile.

Personally, I feel that an extra ./src/ directory doesn't make much sense for small Haskell projects. Of course there's source; I downloaded the source code.
Either way (with or without src), I'd suggest you refactor and have a Clue directory and a Test directory:
./Clue/Cards.hs -- module Clue.Cards where ...
./Test/Cards.hs -- module Test.Cards where ...
This allows GHCi, when loading Test.Cards, to see Clue.Cards without any extra arguments or the use of cabal. On that note, if you don't use cabal + flags to optionally build your test modules, you should look into it.
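For instance (a minimal sketch; the module names are taken from the question):
-- ./Test/Cards.hs
module Test.Cards where

import Clue.Cards
Starting ghci Test/Cards.hs from the project root loads both modules, since GHCi's search path includes the current directory by default.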
Another option, which I use in many of my projects, is to have:
./Some/Module/Hierarchy/File.hs
./tests/someTests.hs
And I cabal install the package, then run the tests/someTests.hs stuff. I guess this would be annoying if my packages were particularly large and took a long time to install.

For completeness' sake, it's worth mentioning a very easy approach for small projects: ghci -i. For example, in your case:
$ ghci -isrc:testsuite
ghci> :l Clue.Cards
ghci> :l tests.Clue.Cards

Related

How can you use multiple modules in a Raku project, if the modules are defined in the project?

I'm playing around with writing modules in Raku, and it made sense to break a piece of functionality off into another .rakumod file. How can I link these files together when I compile?
I tried to pull the other module to my main module via:
use MyProject::OtherModule;
But I get an error that it can't find this module, even though they're side by side in the directory. I tried looking at some OSS projects in the Raku world; most of them are one file. The Rakudo compiler seems to use multiple module files, but I can't figure out how they're linked.
Do I have to publish this module every time I want to run my project? How do I structure this if my project gets huge? Surely the best solution isn't to have it all in one file?
Edit: I should also note that I used this at the top of my new module too:
unit module MyProject::OtherModule;
When running locally, if you have your META6.json declared, you can use
raku -I. script.raku
and it will use the uninstalled versions; you don't need to add any use lib in the script.
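For reference, a minimal META6.json that declares the module might look like this (the version, description, and exact field set are placeholders; the key part is the provides mapping):
{
    "name": "MyProject",
    "version": "0.0.1",
    "description": "An example project",
    "perl": "6.d",
    "provides": {
        "MyProject::OtherModule": "lib/MyProject/OtherModule.rakumod"
    }
}
With that in place, raku -I. resolves use MyProject::OtherModule against the local lib/ directory without installing anything.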

run single unit tests for a module efficiently

I wish all of you a happy new year! I want to write more unit tests in the new year, but how can I handle them more easily? I found out that I can run tests while updating a module via the --test-enable and --stop-after-init command-line parameters. I have also read about the --test-file parameter, but it does not work, and it is not described in the docs either.
How would you do TDD (test-driven development)? To do that, you have to be able to run tests quickly; having to test the whole module with all its dependencies makes it impractical to write tests frequently. How do you run a single unit test case?
Edited my own question from 'run single unit test case'. This command works for me:
python ./odoo.py -i module_to_test --log-level=test -d minimal_database --test-enable --stop-after-init
This is quite similar to what danidee answered.
However, the solution seems to be not to use the --test-file parameter, since it unexpectedly runs all tests of all dependent modules as well, which takes far too long.
Another part of the solution is to use a minimal database on which just the module to be tested (plus its dependencies, of course) is installed.
Now the above command takes only a few seconds to execute on my machine, even when the tested code uses objects from dependent modules. If only I could prevent the module from being updated each time the tests run, to make it even faster and more efficient ...
It's really difficult to do TDD with Odoo because most modules depend on other modules that in turn depend on yet other modules, and so on...
But what works for me is to have a custom module that installs all the other custom modules I've created.
This same module also contains all the tests for anything custom I've done in Odoo. You can place each module's tests within the module itself, but I've had problems where some tests didn't run for no apparent reason, so I decided to place all of them in one module.
So whenever I push a new commit, this is the command I use to run tests (assuming my module is named all_modules):
python odoo.py --addons=addons_path -i all_modules --log-level=test -d Test_Database --test-enable --xmlrpc-port=80xx --stop-after-init
Before doing this, I already have a database (Test_Database) that's a clone of my production environment, so I can run the tests against real data.
For deployment I just use rsync and copy the files over.
My tests are reasonably fast (~5 min) because I'm testing against a cloned DB and only running tests for the custom modules I've built.
I know this isn't standard TDD, but Odoo doesn't conform to a lot of patterns in software development, and most times you just have to strike a balance and find out what works for you.
Personally, I've found runbot too complicated and resource-hungry.
PS: It's also helpful to have Selenium tests.
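For reference, the tests such a run picks up are ordinary test classes inside a module's tests/ package, imported from tests/__init__.py. A minimal sketch (the model and names are illustrative; on older Odoo versions the import namespace is openerp rather than odoo):
# all_modules/tests/test_partner.py
from odoo.tests.common import TransactionCase

class TestPartner(TransactionCase):
    def test_create_partner(self):
        # each test runs in a transaction that is rolled back afterwards
        partner = self.env['res.partner'].create({'name': 'Alice'})
        self.assertEqual(partner.name, 'Alice')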

Perl equivalent to python's `setup.py develop`

Is there a Perl equivalent to the Python setup.py develop convention for installing a module that can also be actively developed?
If not, what is the best practice for active development on a Perl module that is also installed into a local library path (for example a path setup using local::lib)?
I am just starting to make a module, so I will be developing the installation package (Makefile.PL etc.) alongside the meat of the module, and I am wondering what is the best way to set up the development environment. There are many good tutorials about making a module using h2xs or other tools, but I have not seen this question addressed.
The blib core module sets up the include paths to use the blib directory structure that's created when you do make in a standard module directory. That way you can easily use your new code in scripts by running them with perl -Mblib foo.pl.
Another, arguably better, way is to write your test code as standard test scripts while developing, and run them via the prove script. See the documentation for Test::Simple for how to get started on that.
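A minimal sketch of that workflow (the module name is illustrative; Test::More is the usual superset of Test::Simple):
# t/basic.t
use strict;
use warnings;
use Test::More tests => 1;
use_ok('My::Module');

$ prove -b t/   # -b adds blib/lib and blib/arch, like -Mblib
$ prove -l t/   # -l adds lib/, fine for pure-Perl modules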
If you are not doing XS stuff where you have to actually rebuild things and produce build artifacts, you can use perl -Ilib .... This assumes that your lib directory is structured as it will be installed (but then, if it's not, you're rebuilding to produce build artifacts :).
If you are playing with this module in development from outside its directory structure:
$ perl -I/path/to/module/repo/lib ...
Or many of the other ways to set @INC:
$ export PERL5LIB=/path/to/module/repo/lib
$ perl some_program.pl
I typically don't use prove or make while developing because I'm concentrating on one test file at a time:
$ perl -Ilib t/some_test.t
When I think I've fixed the issue, I then try it against all the tests:
$ make test

Running tests for modules inside a crate

I'm writing a crate which consists of multiple modules spread across multiple files. These modules are interdependent, i.e. some of the modules use other modules inside this crate.
Is it possible to run tests in such modules separately from the other modules in the crate? Running rust test some_module.rs does not work if some_module.rs contains references to other modules in this crate. Running rust test my_crate.rc does work, but it runs tests from all of the crate's modules, which is not what I want.
It is possible to run a subset of the tests:
> rustc --test my_crate.rc
> ./my_crate some_module
... test output ...
This will run any test function whose full path contains some_module. There is a fairly detailed help page for unit testing on the wiki, including this use case.
Note that rust test doesn't support this (yet!), so you have to compile the test runner and invoke it by hand (or, write a Makefile/script to do it).
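A sketch of why the filter works (in current syntax; the names are made up): test functions keep their module path, so filtering on some_module selects exactly that module's tests:
mod some_module {
    pub fn double(x: i32) -> i32 { x * 2 }

    #[test]
    fn double_doubles() {
        // the full test name is some_module::double_doubles,
        // which the "some_module" filter matches
        assert_eq!(double(2), 4);
    }
}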

Determine all of the file dependencies in a build process that uses makefiles and ant scripts

I'm trying to understand the build process of a codebase. The project uses both autoconf (configure scripts that generate makefiles) and Maven.
I would like to be able to identify all of the file dependencies in the project, so that for any output file that ends up being generated by a build, I can identify how it was actually produced. Ultimately, I'd like to generate a diagram using something like graphviz to visualize the dependencies, but for now I just want to extract them.
Is there any automated way to do this? In other words, given some makefiles and Maven or ant XML files, and the name of the top-level target, is there a way to identify all of the files that will be generated, the programs used to generate them, and the input files associated with those programs?
Electric Accelerator and ClearCase are two systems that do this by running the build and watching what it does (presumably by intercepting operating-system calls). This has the advantage of working with any tool and of being unaffected by buggy makefiles (hint: they're all buggy).
That's probably the only reliable way for non-trivial makefiles, since they all do things like generating new make rules on the fly, or have behaviour that depends on the existence of files on disk that are not explicitly listed in rules.
I don't know about the Maven side, but once you've ./configured the project, you could grep through the output of make -pn (make --print-data-base --dry-run) to find the dependencies. This will probably be more annoying if the build is based on recursive make, but it's still manageable.
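For example, a rough first pass (the grep is only a heuristic for rule lines; it skips comments, recipe lines, and variable assignments):
$ ./configure
$ make --print-data-base --dry-run > make.db
$ grep -E '^[^#[:space:]][^=]*:([^=]|$)' make.db
Each surviving line is a target followed by its prerequisites, which you could then feed into graphviz.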
Note that if you're using automake, it computes detailed dependencies as a side-effect of compilation, so you won't get all the dependencies on #included headers until you do a full build.