Run single unit tests for a module efficiently - Odoo

I wish all of you a happy new year! I want to write more unit tests in the new year, but how can I handle this more easily? I found out that I can run tests while updating a module via the --test-enable and --stop-after-init command-line parameters. I have also read about the --test-file parameter, but it does not work, and it is not described in the docs either.
How would you do TDD (test-driven development)? To do that, you have to be able to run tests quickly. Having to test the whole module with all its dependencies makes it impractical to run tests frequently. How can I run a single unit test case?

I edited my own question (originally titled 'run single unit test case'). This command works for me:
python ./odoo.py -i module_to_test --log-level=test -d minimal_database --test-enable --stop-after-init
This is quite similar to what danidee answered.
However, part of the solution seems to be to avoid the --test-file parameter, since it unexpectedly runs the tests of all dependent modules as well, which makes it take far too long.
Another part of the solution is to use a minimal database where only the module under test (plus its dependencies, of course) is installed.
Now the above command takes only a few seconds on my machine, even when the tested code uses objects from dependent modules. If only I could prevent the module from being updated each time the tests run, to make it even faster and more efficient ...
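For reference, a minimal sketch of such a test (model and field names are made up; the openerp import matches the odoo.py generation used above):
# module_to_test/tests/test_cards.py -- names here are hypothetical
from openerp.tests.common import TransactionCase

class TestCards(TransactionCase):
    # each test runs in a transaction that is rolled back afterwards
    def test_create_card(self):
        # create a record and check that the value round-trips
        card = self.env['my_module.card'].create({'name': 'Ace'})
        self.assertEqual(card.name, 'Ace')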

It's really difficult to do TDD with Odoo because most modules depend on other modules, which in turn depend on yet other modules, and so on...
But what works for me is to have a custom module that installs all the other custom modules I've created.
This same module also contains all the tests for "anything custom" I've done on Odoo. You can place each module's tests within the module itself, but I've had problems with some tests not running for no apparent reason, so I decided to place all of them in one module.
Whenever I push a new commit, this is the command I use to run the tests (assuming my module is named all_modules):
python odoo.py --addons=addons_path -i all_modules --log-level=test -d Test_Database --test-enable --xmlrpc-port=80xx --stop-after-init
Before doing this, I already have a database (Test_Database) that's a clone of my production environment, so the tests run against real data.
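One way to create such a clone with PostgreSQL (the production database name here is a placeholder, and it only works while nothing is connected to the source database):
$ createdb -T production_db Test_Database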
For deployment I just use rsync to copy the files over.
My tests are reasonably fast (~5 min) because I'm testing against a cloned DB and only running tests for the custom modules I've built.
I know this isn't standard TDD, but Odoo doesn't conform to a lot of patterns in software development, and most of the time you just have to strike a balance and find out what works for you.
Personally, I've found runbot too complicated and resource-hungry.
PS: It's also helpful to have Selenium tests.

Related

VS Code - Using the Test Explorer UI, how do I manually exclude/include test files

I am currently working on an AWS SAM deployment project that uses Python for the Lambda functions. I created tests using pytest, and they run great in my terminal, but the output is hard to read there. I would like a testing experience like Visual Studio 2019's Test features, where everything is clean, neat, and easy to review.
Using VS Code (as I am working with Python files), I installed the Test Explorer UI and support for Python tests. As soon as I open it, it loads a ton of tests, including the tests of the third-party libraries in my deployment, which clutters my Test Explorer. I do not want any of these tests, but I do not know how to exclude them.
I would also like to include only specified test files manually (if that is possible). I have no use for the tons of tests auto-detected by the Test Explorer.
I know it's a late reply, but still, there is a solution. Since you're using pytest, I will give details for that test framework.
Python Test Explorer is aware of pytest arguments, and most of them can be used to modify test discovery and execution in the same way as when pytest is run from the command line. So, for example, if you want to exclude some folder, you can use the --ignore=relative/path/to/some/folder argument. See the pytest documentation on the --ignore option.
It works much the same if you only want to include certain tests or folders. There is no special option for that; just list the files and folders you want to include, for example, relative/path/to/some/folder or relative/path/to/some/test_file.py. See the pytest documentation on selecting tests.
Now, you have to tell Python Test Explorer what tests you want to include/exclude. This can be done with python.testing.pytestArgs option in settings.json. For example,
"python.testing.pytestArgs": ["--ignore=relative/path/to/some/folder"]
or
"python.testing.pytestArgs": [
"relative/path/to/some/folder",
"relative/path/to/some/test_file.py"
]
Full settings.json for the last example:
{
    "python.pythonPath": "python",
    "python.testing.pytestEnabled": true,
    "python.testing.pytestArgs": [
        "relative/path/to/some/folder",
        "relative/path/to/some/test_file.py"
    ]
}
Note: these settings can also be set in pytest.ini or another pytest configuration file; in that case, there is no need to modify settings.json.
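For example, a minimal pytest.ini equivalent of the --ignore example above (the path is a placeholder):
[pytest]
addopts = --ignore=relative/path/to/some/folder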

Webstorm: How to Run Test Setup for Whole Suite AND Individual Tests?

Webstorm has great test running support, which I can use to start my test suite by telling it "run the file testStart.js". I can then define testStart.js to do the setup for my test environment (e.g. creating a Sinon sandbox) and to bring in all the tests themselves.
That works great when I run the whole suite. But Webstorm has a feature that lets you re-run just a single failing test, and when I try to use that feature I run into a problem: my test setup code doesn't get run, because the individual test file doesn't invoke the setup code.
So, I'm looking for a solution. The only options I see so far are:
instead of having a separate testStart.js file, I could move the setup code into a testSetup.js file and make every test require it. DOWNSIDE: I have to remember to import the setup file in every single test file (vs. never having to import it in my current scheme)
use Mocha's --require option to run a testSetup.js. DOWNSIDE: Code require-ed in this way doesn't have access to the Mocha code, so I'm not sure how I can call beforeEach/afterEach
use some other Mocha or Webstorm option that I don't know about to run the test setup code. DOWNSIDE: Not sure if such an option even exists
If anyone else has run into this problem, I'd love to hear whether any of the above solutions can be made to work (or if there's another solution I haven't considered).
I wound up just importing testSetup.js into every test file. It was a pain and violated the DRY principle, but it worked.
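For illustration, a sketch of what such a shared testSetup.js might contain (this assumes a Sinon version that provides createSandbox; older versions used sinon.sandbox.create):
// testSetup.js -- require()d from every test file (a sketch)
const sinon = require('sinon');
// beforeEach/afterEach are Mocha globals; they are in scope here because
// Mocha loads this file via a test file while the test context is active
beforeEach(function () {
  this.sandbox = sinon.createSandbox();
});
afterEach(function () {
  this.sandbox.restore();
});
Each test file then begins with require('./testSetup');.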
If anyone else has a better solution though I'll happily accept it.

Perl equivalent to python's `setup.py develop`

Is there a Perl equivalent to Python's setup.py develop convention for installing a module that can also be actively developed?
If not, what is the best practice for active development on a Perl module that is also installed into a local library path (for example, a path set up using local::lib)?
I am just starting to make a module, so I will be developing the installation packaging (Makefile.PL etc.) alongside the meat of the module, and I am wondering what the best way is to set up the development environment. There are many good tutorials about making a module using h2xs or other tools, but I have not seen this question addressed.
The blib core module sets up the include paths to use the blib directory structure that's created when you do make in a standard module directory. That way you can easily use your new code in scripts by running them with perl -Mblib foo.pl.
Another, arguably better, way is to write your test code while developing as standard test scripts and run them via the prove script. See the documentation on Test::Simple for how to get started on that.
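For example, a couple of typical invocations (the -l switch adds lib/ to @INC; the test paths are placeholders):
$ prove -l t/                 # run the whole test suite
$ prove -lv t/some_test.t     # run a single test file verbosely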
If you are not doing XS stuff where you have to actually rebuild things and produce build artifacts, you can use perl -Ilib .... This assumes that your lib directory is structured as it will be installed (but then, if it's not, you're rebuilding to produce build artifacts :).
If you are playing with this module in development from outside its directory structure:
$ perl -I/path/to/module/repo/lib ...
Or use any of the many other ways to set @INC:
$ export PERL5LIB=/path/to/module/repo/lib
$ perl some_program.pl
I typically don't use prove or make while developing because I'm concentrating on one test file at a time:
$ perl -Ilib t/some_test.t
When I think I've fixed the issue, I then try it against all the tests:
$ make test

Running GWTTestCase on Already Compiled Module

I have an automated build script which involves unit testing of some GWT Modules in production mode. It appears that when these tests are run, they recompile the GWT module.
However, earlier in the build script I have already compiled the modules, so this is an obvious waste of effort. Does anybody know of a way to run a GWTTestCase in production mode against modules that were already compiled?
I don't mind losing stack traces or other information, because the build server only informs developers of which tests fail and expects them to debug in their own environments.
This should be helpful for you:
The main class in the test infrastructure is JUnitShell. To control aspects of how your tests execute, you must pass arguments to this class. Arguments cannot be passed directly through the command-line because normal command-line arguments go directly to the JUnit runner. Instead, define the system property gwt.args to pass arguments to JUnitShell.
For example, to run tests in production mode (that is, run the tests after they have been compiled into JavaScript), declare -Dgwt.args="-prod" as a JVM argument when invoking JUnit. To get a full list of supported options, declare -Dgwt.args="-help" (instead of running the test, help is printed to the console).
Running your test in Production Mode
When using the webAppCreator tool, you get the ability to launch your tests in either development mode or production mode. Make sure you test in both modes - although rare, there are some differences between Java and JavaScript that could cause your code to produce different results when deployed.
If you instead decide to run the JUnit TestRunner from the command line, you must add some additional arguments to get your unit tests running in production mode. By default, tests run in development mode as normal Java bytecode in a JVM. To override this default behavior, you need to pass arguments to JUnitShell:
-Dgwt.args="-prod"
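For instance, a command-line invocation might look like this (the classpath and test class name are placeholders):
$ java -Dgwt.args="-prod" -cp <classpath> org.junit.runner.JUnitCore com.example.MyModuleTest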
Well, I found a solution, but it is not elegant. I modified the JUnitShell.maybeCompileForWebMode method; with the modified version you can add the VM argument -Dcompile=false to prevent compilation while running unit tests. You can get the modified version of JUnitShell from here.

Organizing Haskell Tests

So I'm trying to follow the suggested structure of a Haskell project, and I'm having a couple of problems organizing my tests.
For simplicity, let's start with:
src/Clue/Cards.hs # defines Clue.Cards module
testsuite/tests/Clue/Cards.hs # tests Clue.Cards module
For one, I'm not sure what to name the module in testsuite/tests/Clue/Cards.hs that contains the test code, and for another, I'm not sure how to compile my test code so that I can link to my source:
% ghc -c testsuite/tests/Clue/Cards.hs -L src
testsuite/tests/Clue/Cards.hs:5:0:
Failed to load interface for `Clue.Cards':
Use -v to see a list of the files searched for.
I myself use the approach taken by the Snap Framework for its test suites, which basically boils down to:
Use a test framework such as haskell-test-framework or HTF
Name the modules containing tests by appending .Tests to the module name of the IUT, e.g.:
module Clue.Cards where ... -- module containing IUT
module Clue.Cards.Tests where ... -- module containing tests for IUT
By using separate namespaces, you can put your tests in a separate source folder tests/. You can then use a separate Cabal build target for the test suite (see also the test-suite build-target support in recent Cabal versions) that includes the additional source folder in its hs-source-dirs setting, e.g.:
Executable clue
  hs-source-dirs: src
  ...
Executable clue-testsuite
  hs-source-dirs: src tests
  ...
This works, since there's no namespace collision between the modules in your IUT and the test-suite anymore.
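With the test-suite support mentioned above, the test build target could instead be declared along these lines (the target name, main module, and dependencies are placeholders):
Test-Suite clue-tests
  type:           exitcode-stdio-1.0
  main-is:        TestMain.hs
  hs-source-dirs: src tests
  build-depends:  base, HUnit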
Here's another way:
Each module's unit tests are defined as an HUnit TestList at the end of the module, with a consistent naming scheme such as "tests_Path_To_Module". I think this helps me write tests, since I don't have to search for another module far away in the source tree, nor keep two parallel file hierarchies in sync.
A module's test list also includes the tests of any sub-modules. HUnit's runTestTT runner is built into the app and accessible via a test command, which means a user can run the tests at any time without special setup. Or, if you don't like shipping tests in the production app, use CPP and Cabal flags to include them only in dev builds, or in a separate test runner executable.
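A minimal sketch of that scheme (the module contents are made up):
module Clue.Cards where
import Test.HUnit
-- regular module code would go here; a placeholder value for the demo:
deck :: [Int]
deck = [1 .. 52]
-- consistent naming scheme: tests_<Path_To_Module>;
-- runnable from the app or GHCi with: runTestTT tests_Clue_Cards
tests_Clue_Cards :: Test
tests_Clue_Cards = TestList
  [ "deck has 52 cards" ~: length deck ~?= 52 ]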
There are also functional tests, one or more per file in the tests/ directory, run with shelltestrunner, and some dev-process-related tests based in the Makefile.
Personally, I feel that an extra ./src/ directory doesn't make much sense for small Haskell projects. Of course there's source; I downloaded the source code.
Either way (with or without src), I'd suggest you refactor and have a Clue directory and a Test directory:
./Clue/Cards.hs -- module Clue.Cards where ...
./Test/Cards.hs -- module Test.Cards where ...
This allows GHCi + Test.Cards to see Clue.Cards without any extra args or using cabal. On that note, if you don't use cabal + flags for optionally building your test modules then you should look into it.
Another option, which I use in many of my projects, is to have:
./Some/Module/Hierarchy/File.hs
./tests/someTests.hs
And I cabal install the package, then run the tests/someTests.hs stuff. I guess this would be annoying if my packages were particularly large and took a long time to install.
For completeness' sake, it's worth mentioning a very easy approach for small projects: ghci -i. For example, in your case:
$ ghci -isrc:testsuite
ghci> :l Clue.Cards
ghci> :l testsuite/tests/Clue/Cards.hs