Running tests for modules inside a crate

I'm writing a crate which consists of multiple modules spread across multiple files. These modules are interdependent, i.e. some of the modules use other modules inside this crate.
Is it possible to run tests in such modules separately from other modules in the crate? Running rust test some_module.rs does not work if some_module.rs contains references to other modules in this crate. Running rust test my_crate.rc does work, but it runs tests from all of the crate's modules, which is not what I want.

It is possible to run a subset of the tests:
> rustc --test my_crate.rc
> ./my_crate some_module
... test output ...
This will run any function for which the full path contains some_module. There is a fairly detailed help page for unit testing on the wiki, including this use case.
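For example, a minimal sketch of how the filter matches (module and test names here are illustrative):

// in my_crate.rc
mod some_module {
    #[test]
    fn parses_input() {
        assert!(1 + 1 == 2);
    }
}

mod other_module {
    #[test]
    fn unrelated_test() {
        assert!(true);
    }
}

Running ./my_crate some_module executes only some_module::parses_input, because that is the only test whose full path contains the string some_module.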
Note that rust test doesn't support this (yet!), so you have to compile the test runner and invoke it by hand (or, write a Makefile/script to do it).

Related

How can you use multiple modules in a Raku project, if the modules are defined in the project?

I'm playing around with writing modules in Raku, and it made sense for me to break a piece of functionality off into another .rakumod file. How can I link these files together when I compile?
I tried to pull the other module to my main module via:
use MyProject::OtherModule;
But I get an error that it can't find this module, even though they're side by side in the directory. I tried looking at some OSS projects in the Raku world; most of them are one file. The Rakudo compiler seems to use multiple module files, but I can't figure out how they're linked.
Do I have to publish this module every time I want to run my project? How do I structure this if my project gets huge? Surely the best solution isn't to have it all in one file?
Edit: I should also note that I used this at the top of my new module too:
unit module MyProject::OtherModule;
When running locally, if you have your META6.json declared, you can use
raku -I. script.raku
and it will use the uninstalled versions; you don't need to add any use lib in the script.
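For a project that isn't installed yet, a minimal sketch of the layout and META6.json this relies on (names and version are illustrative):

lib/MyProject.rakumod
lib/MyProject/OtherModule.rakumod
script.raku
META6.json

META6.json:
{
    "name": "MyProject",
    "version": "0.0.1",
    "description": "An example project",
    "provides": {
        "MyProject": "lib/MyProject.rakumod",
        "MyProject::OtherModule": "lib/MyProject/OtherModule.rakumod"
    }
}

With the provides entries in place, use MyProject::OtherModule; in script.raku resolves against the local, uninstalled files when you run raku -I. script.raku.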

Run single unit tests for a module efficiently

I wish all of you a happy new year! I want to write more unit tests in the new year, but how can I handle them more easily? I found out how to run tests while updating a module via the --test-enable and --stop-after-init command-line parameters. I have also read about the --test-file parameter, but it does not work, and it is not described in the docs.
How would you do TDD (test-driven development)? To do that, you have to be able to run tests quickly. Having to test the whole module with all its dependencies makes it impractical to run tests frequently. How can I run a single unit test case?
Edited my own question from 'run single unit test case'. This command works for me:
python ./odoo.py -i module_to_test --log-level=test -d minimal_database --test-enable --stop-after-init
This is quite similar to what danidee answered.
However, part of the solution is not to use the --test-file parameter, since it unexpectedly runs all tests of all dependent modules (and whatever else), which takes too long to execute.
Another part of the solution is to use a minimal database where only the module to be tested (plus its dependencies, of course) is installed.
Now the above command takes only a few seconds to execute on my machine, even when the tested code uses objects from dependent modules. If only I could prevent the module from being updated each time the tests run, to make it even faster and more efficient ...
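For reference, a minimal sketch of a test case that such a run picks up (model values and class names are hypothetical; the openerp import matches the odoo.py-era releases, newer versions import from odoo.tests.common instead):

# module_to_test/tests/test_partner.py
from openerp.tests.common import TransactionCase

class TestPartner(TransactionCase):

    def test_create_partner(self):
        # each test runs in a transaction that is rolled back afterwards
        partner = self.env['res.partner'].create({'name': 'Test Partner'})
        self.assertEqual(partner.name, 'Test Partner')

Remember that the module's tests/__init__.py has to import this file, or --test-enable will not discover it.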
It's really difficult to do TDD with Odoo, because most modules depend on other modules that in turn depend on other modules, and so on...
But what works for me is to have a custom module that installs all the other custom modules I've created.
This same module also contains all the tests for "anything custom" I've done on Odoo. You can place each module's tests within the module itself, but I've had some problems where tests didn't run for no apparent reason, so I decided to place all of them in one module.
Whenever I push a new commit, this is the command I use to run tests (assuming my module is named all_modules):
python odoo.py --addons=addons_path -i all_modules --log-level=test -d Test_Database --test-enable --xmlrpc-port=80xx --stop-after-init
Before doing this, I already have a database (Test_Database) that's a clone of my production environment, so the tests run against real data.
For deployment I just use rsync and copy the files over.
My tests are reasonably fast (~5 min) because I'm testing against a cloned DB and because I'm only running tests for the custom modules I've built.
I know this isn't standard TDD, but Odoo doesn't conform to a lot of patterns in software development, and most times you just have to strike a balance and find what works for you.
Personally, I've found runbot too complicated and resource-hungry.
PS: It's also helpful to have Selenium tests.

Accessing NUnit Console include parameter name inside tests

I am using Specflow and firing the nunit-console.exe in TeamCity to run tests as follows:
"C:\Program Files (x86)\NUnit 2.6.4\bin\nunit-console.exe" /labels /include:regression out=TestResultRegression.txt /xml=TestResultRegression.xml /framework=net-4.0 .\MyTests.dll
How can I access the NUnit include tag (/include:regression) so that I can call certain methods or properties for test setup (e.g. if include = regression, then pull certain test case IDs from the app.config file where the key is "regression")?
There is no way in NUnit for you to know what runner is running you or how it is doing it. This separation of concerns is by design. You could, of course, access the command line that ran the tests and examine it, but I think that again forces the tests to know too much about their environment.
The best solution is to organize tests hierarchically, so that all tests requiring a certain setup are in a namespace or fixture where that type of setup is performed.
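A sketch of that organization (fixture, namespace, and config key names are illustrative; TestFixtureSetUp is the NUnit 2.x attribute, NUnit 3 calls it OneTimeSetUp):

using System.Configuration;
using NUnit.Framework;

namespace MyTests.Regression
{
    [TestFixture, Category("regression")]
    public class RegressionFixture
    {
        private string _testCaseIds;

        [TestFixtureSetUp]
        public void FixtureSetUp()
        {
            // The setup the regression run needs lives with the tests,
            // not with the runner's command line.
            _testCaseIds = ConfigurationManager.AppSettings["regression"];
        }

        [Test]
        public void RegressionCase()
        {
            Assert.IsNotNull(_testCaseIds);
        }
    }
}

Because the fixture carries [Category("regression")], running nunit-console with /include:regression selects it, and the matching setup runs along with it.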

Running GWTTestCase on Already Compiled Module

I have an automated build script which involves unit testing of some GWT Modules in production mode. It appears that when these tests are run, they recompile the GWT module.
However, earlier in the build script I have already compiled the modules. This is an obvious waste of effort. Does anybody know of a way to run a GWTTestCase in production mode against modules that were already compiled?
I don't mind losing stack traces or other information, because the build server only informs developers of which tests fail and expects them to debug in their own environment.
This will be helpful for you:
The main class in the test infrastructure is JUnitShell. To control aspects of how your tests execute, you must pass arguments to this class. Arguments cannot be passed directly through the command-line because normal command-line arguments go directly to the JUnit runner. Instead, define the system property gwt.args to pass arguments to JUnitShell.
For example, to run tests in production mode (that is, run the tests after they have been compiled into JavaScript), declare -Dgwt.args="-prod" as a JVM argument when invoking JUnit. To get a full list of supported options, declare -Dgwt.args="-help" (instead of running the test, help is printed to the console).
Running your test in Production Mode
When using the webAppCreator tool, you get the ability to launch your tests in either development mode or production mode. Make sure you test in both modes - although rare, there are some differences between Java and JavaScript that could cause your code to produce different results when deployed.
If you instead decide to run the JUnit TestRunner from the command line, you must add some additional arguments to get your unit tests running in production mode. By default, tests run in development mode as normal Java bytecode in a JVM. To override this default behavior, you need to pass arguments to JUnitShell:
-Dgwt.args="-prod"
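For instance, a sketch of invoking the JUnit runner directly with that system property set (classpath entries and the test class name are placeholders for your own):

java -Dgwt.args="-prod" \
     -cp gwt-user.jar:gwt-dev.jar:junit.jar:src:war/WEB-INF/classes \
     org.junit.runner.JUnitCore com.example.client.MyModuleTest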
Well, I found a solution, but it is not elegant. I modified the JUnitShell.maybeCompileForWebMode method so that adding the VM argument -Dcompile=false prevents compilation while running unit tests. You can get the modified version of JUnitShell from here.

Organizing Haskell Tests

So I'm trying to follow the suggested structure of a Haskell project, and I'm having a couple of problems organizing my tests.
For simplicity, let's start with:
src/Clue/Cards.hs # defines Clue.Cards module
testsuite/tests/Clue/Cards.hs # tests Clue.Cards module
For one, I'm not sure what to name the module in testsuite/tests/Clue/Cards.hs that contains the test code, and for another, I'm not sure how to compile my test code so that it can link against my source:
% ghc -c testsuite/tests/Clue/Cards.hs -L src
testsuite/tests/Clue/Cards.hs:5:0:
Failed to load interface for `Clue.Cards':
Use -v to see a list of the files searched for.
I myself use the approach taken by the Snap Framework for its test suites, which basically boils down to:
Use a test framework such as haskell-test-framework or HTF
Name the modules containing tests by appending .Tests to the name of the module containing the IUT (implementation under test), e.g.:
module Clue.Cards where ... -- module containing IUT
module Clue.Cards.Tests where ... -- module containing tests for IUT
By using separate namespaces, you can put your tests in a separate source-folder tests/, you can then use a separate Cabal build-target (see also cabal test-build-target support in recent Cabal versions) for the test-suite which includes the additional source folder in its hs-source-dirs setting, e.g.:
Executable clue
hs-source-dirs: src
...
Executable clue-testsuite
hs-source-dirs: src tests
...
This works, since there's no namespace collision between the modules in your IUT and the test-suite anymore.
Here's another way:
Each module's unit tests are defined as an HUnit TestList at the end of the module, with some consistent naming scheme such as tests_Path_To_Module. I think this helps me write tests, since I don't have to search for another module far away in the source tree, nor keep two parallel file hierarchies in sync.
A module's test list also includes the tests of any sub-modules. HUnit's runTestTT runner is built into the app and accessible via a test command. This means a user can run the tests at any time without special setup. Or, if you don't like shipping tests in the production app, use CPP and Cabal flags to include them only in dev builds, or in a separate test runner executable.
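A minimal sketch of that convention (module, function, and test names are illustrative):

module Clue.Cards where

import Test.HUnit

shuffle :: [a] -> [a]
shuffle = id  -- placeholder implementation

-- unit tests live at the bottom of the module they exercise
tests_Clue_Cards :: Test
tests_Clue_Cards = TestList
  [ "shuffle preserves length" ~: length (shuffle [1, 2, 3 :: Int]) ~?= 3
  ]

The app's test command then just calls runTestTT tests_Clue_Cards, after folding in any sub-modules' lists.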
There are also functional tests, one or more per file in the tests/ directory, run with shelltestrunner, plus some dev-process-related tests that live in the Makefile.
Personally, I feel that an extra ./src/ directory doesn't make much sense for small Haskell projects. Of course there's source; I downloaded the source code.
Either way (with or without src), I'd suggest you refactor and have a Clue directory and a Test directory:
./Clue/Cards.hs -- module Clue.Cards where ...
./Test/Cards.hs -- module Test.Cards where ...
This allows GHCi + Test.Cards to see Clue.Cards without any extra args or using cabal. On that note, if you don't use cabal + flags for optionally building your test modules, then you should look into it (a sketch follows below).
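A minimal sketch of such a flag-guarded test target in a .cabal file (target, flag, and dependency names are illustrative):

flag tests
  description: Build the test executable
  default:     False

executable clue-tests
  hs-source-dirs: ., Test
  main-is:        TestMain.hs
  build-depends:  base, HUnit
  if !flag(tests)
    buildable: False

cabal configure -ftests (or cabal install -ftests) then builds the test executable; a plain build skips it.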
Another option, which I use in many of my projects, is to have:
./Some/Module/Hierarchy/File.hs
./tests/someTests.hs
And I cabal install the package, then run the tests/someTests.hs stuff. I guess this would be annoying if my packages were particularly large and took a long time to install.
For completeness' sake, it's worth mentioning a very easy approach for small projects through ghci -i. For example, in your case:
>ghci -isrc:testsuite
ghci>:l Clue.Cards
ghci>:l tests.Clue.Cards