I have an automated build script which involves unit testing of some GWT Modules in production mode. It appears that when these tests are run, they recompile the GWT module.
However, earlier in the build script, I have already compiled the modules, so this is an obvious waste of effort. Does anybody know of a way to run a GWTTestCase in production mode against modules that have already been compiled?
I don't mind losing stack traces or other information, because the build server only informs developers of which tests fail and expects them to debug in their own environment.
This should be helpful for you:
The main class in the test infrastructure is JUnitShell. To control aspects of how your tests execute, you must pass arguments to this class. Arguments cannot be passed directly through the command-line because normal command-line arguments go directly to the JUnit runner. Instead, define the system property gwt.args to pass arguments to JUnitShell.
For example, to run tests in production mode (that is, run the tests after they have been compiled into JavaScript), declare -Dgwt.args="-prod" as a JVM argument when invoking JUnit. To get a full list of supported options, declare -Dgwt.args="-help" (instead of running the test, help is printed to the console).
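For example, a direct JUnit invocation might look like this (a sketch only; the classpath entries and test class name are placeholders for your own project):
java -Dgwt.args="-prod" -cp "src:war/WEB-INF/classes:gwt-user.jar:gwt-dev.jar:junit.jar" org.junit.runner.JUnitCore com.example.client.MyGwtTestCase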
Running your test in Production Mode
When using the webAppCreator tool, you get the ability to launch your tests in either development mode or production mode. Make sure you test in both modes - although rare, there are some differences between Java and JavaScript that could cause your code to produce different results when deployed.
If you instead decide to run the JUnit TestRunner from the command line, you must add some additional arguments to get your unit tests running in production mode. By default, tests run in development mode as normal Java bytecode in a JVM. To override this default behavior, you need to pass arguments to JUnitShell:
-Dgwt.args="-prod"
Well, I found a solution, but it is not elegant. I modified the JUnitShell.maybeCompileForWebMode method so that adding the VM argument -Dcompile=false prevents compilation while running unit tests. You can get the modified version of JUnitShell from here.
Related
I wish all of you a happy new year! I want to write more unit tests in the new year, but how can I handle them more easily? I found out that I can run tests while updating a module via the --test-enable and --stop-after-init command-line parameters. I have also read about the --test-file parameter, but it does not work, and it is not described in the docs either.
How would you do TDD (test-driven development)? To do that you have to be able to run tests quickly, and having to test the whole module with all its dependencies makes it impractical to write tests frequently. How can I run a single unit test case?
I edited my own question from 'run single unit test case'. This command works for me:
python ./odoo.py -i module_to_test --log-level=test -d minimal_database --test-enable --stop-after-init
This is quite similar to what danidee answered.
However, the solution seems to be not to use the --test-file parameter, since it unexpectedly runs all the tests of all dependent modules (and who knows what else), which takes far too long to execute.
Another part of the solution is to use a minimal database in which only the module to be tested (plus its dependencies, of course) is installed.
Now the above command takes only a few seconds to execute on my machine, even if the tested code uses objects from dependent modules. If only I could prevent the module from being updated each time the tests run, it would be even faster and more efficient...
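For reference, a minimal test picked up by such a run might look like this (just a sketch: the model and assertion are illustrative, and on newer Odoo versions the import path is odoo.tests.common rather than openerp.tests.common):

from openerp.tests.common import TransactionCase

class TestPartnerCreation(TransactionCase):
    def test_create_partner(self):
        # TransactionCase runs each test in a transaction that is rolled back afterwards
        partner = self.env['res.partner'].create({'name': 'Unit Test Partner'})
        self.assertEqual(partner.name, 'Unit Test Partner')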
It's really difficult to do TDD with Odoo because most modules depend on other modules, which in turn depend on other modules, and so on...
But what works for me is to have a custom module that installs all the other custom modules I've created. This same module also contains all the tests for "anything custom" I've done on Odoo. You can place each module's tests within the module itself, but I've had problems where some tests didn't run for no apparent reason, so I decided to place all of them in one module.
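The wrapper module itself can be little more than a manifest that depends on every custom module (a sketch; the manifest file is __openerp__.py on older versions and __manifest__.py on newer ones, and the module names under 'depends' are placeholders):

{
    'name': 'all_modules',
    'version': '1.0',
    'category': 'Tests',
    'depends': ['custom_sales', 'custom_stock', 'custom_reports'],
    'data': [],
    'installable': True,
}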
So whenever I push a new commit, this is the command I use to run the tests (assuming my module is named all_modules):
python odoo.py --addons=addons_path -i all_modules --log-level=test -d Test_Database --test-enable --xmlrpc-port=80xx --stop-after-init
Before doing this, I already have a database (Test_Database) that's a clone of my production environment, so I can run the tests against real data.
For deployment I just use rsync to copy the files over.
My tests are reasonably fast (~5 min) because I'm testing against a cloned DB and also because I'm only running tests for the custom modules I've built.
I know this isn't standard TDD, but Odoo doesn't conform to a lot of patterns in software development, and most times you just have to strike a balance and find out what works for you.
Personally, I've found runbot too complicated and resource-hungry.
PS: It's also helpful to have Selenium tests.
Webstorm has great test running support, which I can use to start my test suite by telling it "run the file testStart.js". I can then define testStart.js to do the setup for my test environment (eg. creating a Sinon sandbox) and to bring in all the tests themselves.
That works great when I run the whole suite. But Webstorm has a feature that lets you re-run just a single failing test, and when I try to use that feature I run into a problem: my test setup code doesn't get run, because the individual test file doesn't invoke the setup code.
So, I'm looking for a solution. The only options I see so far are:
instead of having a separate testStart.js file, I could move the setup code into a testSetup.js file and make every test require it. DOWNSIDE: I have to remember to import the setup file in every single test file (vs. never having to import it in my current scheme)
use Mocha's --require option to run a testSetup.js. DOWNSIDE: code required in this way doesn't have access to the Mocha code, so I'm not sure how I can call beforeEach/afterEach
use some other Mocha or Webstorm option that I don't know about to run the test setup code. DOWNSIDE: Not sure if such an option even exists
If anyone else has run into this problem, I'd love to hear if any of the above solutions can be made to work (or if there's another solution I hadn't considered).
I wound up just importing testSetup.js into every test file. It was a pain and violated the DRY principle, but it worked.
If anyone else has a better solution though I'll happily accept it.
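For anyone in the same spot, the shared setup file boiled down to something like this (a sketch assuming Sinon; on newer Sinon versions the call is sinon.createSandbox() instead of sinon.sandbox.create()):

var sinon = require('sinon');

// beforeEach/afterEach are Mocha globals, available here because this file
// is required from a test file that Mocha has loaded
beforeEach(function () {
    // fresh sandbox for every test
    this.sandbox = sinon.sandbox.create();
});

afterEach(function () {
    // undo every stub/spy created through the sandbox
    this.sandbox.restore();
});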
I am using SpecFlow and running nunit-console.exe from TeamCity to run tests as follows:
"C:\Program Files (x86)\NUnit 2.6.4\bin\nunit-console.exe" /labels /include:regression out=TestResultRegression.txt /xml=TestResultRegression.xml /framework=net-4.0 .\MyTests.dll
How can I access the NUnit include tag (/include:regression) so that I can call certain methods or properties for test setup (e.g. if include = regression, then pull certain test case IDs from the app.config file where the key is "regression")?
There is no way in NUnit for you to know what runner is running you or how it is doing it. This separation of concerns is by design. You could, of course, access the command line that ran the tests and examine it, but I think that again forces the tests to know too much about their environment.
The best solution is to organize tests hierarchically, so that all tests requiring a certain setup are in a namespace or fixture where that type of setup is performed.
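For example, with NUnit 2.x you can scope a SetUpFixture to a namespace so that every test inside it shares the same setup (a sketch; the namespace, config key, and static field are illustrative, and in NUnit 3 the attribute would be [OneTimeSetUp]):

using System.Configuration;
using NUnit.Framework;

namespace MyTests.Regression
{
    [SetUpFixture]
    public class RegressionSetup
    {
        // whatever the setup loads can be exposed to the tests in this namespace
        public static string RegressionTestCaseIds;

        [SetUp]  // runs once before any test in the MyTests.Regression namespace
        public void BeforeAnyRegressionTest()
        {
            // e.g. read the regression-specific test case ids from app.config
            RegressionTestCaseIds = ConfigurationManager.AppSettings["regression"];
        }
    }
}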
When I use the ctest interface to cmake (add_test(...)), and run the make target make test, several of my tests fail. When I run each test directly at the command line from the binary build folder, they all work.
What can I use to debug this?
To debug, you can first run ctest directly instead of make test.
Then you can add the -V option to ctest to get verbose output.
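For example, to get verbose output for just the failing test (the name pattern is a placeholder):
ctest -V -R mytest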
A third neat trick from a cmake developer is to have ctest launch an xterm shell. So add
add_test(run_xterm xterm)
in your CMakeLists.txt file at the end. Then run make test and it will open up an xterm. See if you can replicate the test failure by running it from that xterm. If it does fail, check your environment (i.e. run env > xterm.env from the xterm, then run env > regular.env from your normal session, and diff the outputs).
I discovered that my tests were wired to look for external files via a path relative to the top of the binary CMake output folder (i.e. the one where you type make test). However, when you run the tests through ctest, the current working directory is the binary folder for that particular subdirectory, and so the tests failed.
In other words:
This worked
test/mytest
but this didn't work
cd test; ./mytest
I had to fix the unit tests to use absolute paths to the configuration files they needed, instead of a path like ../../../testvector/foo.txt.
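An alternative sketch (assuming a CMake version that supports the NAME/COMMAND signature of add_test; the test and target names are placeholders) is to pin the working directory in CMakeLists.txt instead of hard-coding absolute paths in the test itself:

add_test(NAME mytest
         COMMAND mytest
         WORKING_DIRECTORY ${CMAKE_BINARY_DIR})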
The problem with CTest and googletest is that CTest assumes one command per test case, whereas a single test executable will potentially run a lot of different test cases. So when you use add_test with a Google Test executable, CTest reports one single failure whether the actual number of failed test cases is 1 or 1000.
Since you say that running your test cases in isolation makes them pass, my first suspicion is that your tests are somehow coupled. You can quickly check this by randomizing the test execution order using --gtest_shuffle and seeing if you get the same failures.
I think the best approach to debugging your failing test cases is not to use CTest, but to run the test executable directly, using its command-line options to filter which test cases actually get run. I would start by running only the first test that fails, together with the test that runs immediately before it when the whole suite is executed.
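For example (the executable and test names are placeholders):
./mytest --gtest_shuffle
./mytest --gtest_filter=MySuite.FailingTest:MySuite.PrecedingTest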
Other useful tools for debugging your test cases are SCOPED_TRACE and extending your assertion messages with additional information.
I'd like to write one suite of SpecFlow tests that test my web application (using Selenium) in various environments.
So I have a test written like this:
Given that I am on the login page
which in turn leads to a step definition that boils down to:
driver.Navigate().GoToUrl("http://www.myapp.com/login.aspx");
However, I want my test to be able to run against "http://localhost" or "http://test.myapp.com" as well, without having to recompile. The best idea I've come up with is to place these sorts of settings in the App.config file, but that has its problems as well.
Does anyone have suggestions on how best to achieve this? Basically I want to pass in environment settings for my tests at runtime.
You can do this by changing the config file through the build process using transforms, and there are tools that let you run the transform outside the build process from the command line (so you don't have to change it manually and you avoid a build). This has already been discussed on SO:
Web.Config transforms outside of Microsoft MSBuild?
For example using PowerShell.
I would still question whether you might be better off starting a local instance of the service that you wish to test, rather than connecting to something that is outside the test's explicit control. You could instead use a method similar to self-hosting a Web API or hosting a WCF service to do this for you. This way you can inject mocks, modify and reset the database, or perform any other action you want.
If that still isn't what you need, an alternative to config files would be to set up environment variables that can be read at run time; see How to pass Command line argument to specflow test scenario.
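As a sketch of that last approach (the variable name, config key, and helper class are made up for illustration), the step definitions can resolve the base URL at run time and fall back to App.config:

using System;
using System.Configuration;

public static class TestEnvironment
{
    public static string BaseUrl
    {
        get
        {
            // prefer an environment variable set by the build/CI job,
            // falling back to the value in App.config
            return Environment.GetEnvironmentVariable("TEST_BASE_URL")
                   ?? ConfigurationManager.AppSettings["baseUrl"];
        }
    }
}

The step definition then becomes driver.Navigate().GoToUrl(TestEnvironment.BaseUrl + "/login.aspx"), and each environment (local, test, production) simply sets TEST_BASE_URL differently.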