Meson: How to run tests that depend on an external process?

I'm writing a meson.build file where I run several tests. These tests need a server running on a port in order to run successfully.
I started with this:
exe = executable('tests-client', 'tests-client.c')
test('test-client', exe, suite: 'foo')
To run the tests I do:
$ meson test --suite foo
To launch the server, I run a script before the test call:
exe = executable('tests-client', 'tests-client.c')
run_command('start-server.sh')
test('test-client', exe, suite: 'foo')
However, this doesn't work because run_command() runs when Meson configures the project, not when the tests are run. I also tried to run the server as if it were a test, and although that may work, it's semantically incorrect.

One possibility is to write a little script that starts the server and then runs the test command it is given as arguments. You can then register it with Meson's add_test_setup() and the kwarg is_default: true:
add_test_setup('server',
  exe_wrapper: find_program('start-server-before-test.sh'),
  is_default: true,
)
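
A minimal sketch of such a wrapper (start-server-before-test.sh is assumed to receive the test command as its arguments; the sleep is a crude stand-in for real readiness polling):

#!/bin/sh
# Hypothetical wrapper: start the server, run the wrapped test, then clean up.
./start-server.sh &     # assumption: this backgrounds the server on the expected port
server_pid=$!
sleep 1                 # crude readiness wait; a real script should poll the port
"$@"                    # run the test command Meson passes as arguments
status=$?
kill "$server_pid"
exit "$status"

With is_default: true, a plain meson test --suite foo then runs every test through this wrapper.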

Related

Limiting run to only one file does not work

I just installed Cypress and was test running it.
Running npm run cy-run will run all test files which takes quite a lot of time and can become confusing.
Note that I have not added a single test of mine. The tests are the default examples coming from Cypress installation.
When attempting to limit to a single file I found several sources - including this question - that all seem to agree that the following would limit the run to just one single file:
npm run cy-run --spec cypress/integration/2-advanced-examples/viewport.spec.js
But Cypress does not care and goes on to pick up all tests and run them:
Instead of trying to limit the run from the command line, you can, while writing and running your tests, chain .only onto the test in question.
Example, change this:
it("should do stuff", () => ...);
to this:
it.only("should do stuff", () => ...);
You can use describe.only as well if you want to run a whole suite, or in your case a file, alone.
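For example (hypothetical suite name):

describe.only("viewport", () => {
  it("should do stuff", () => { /* ... */ });
});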
Another Option:
If you'd like to run only the tests that you've written, you can either remove the example files or change describe to xdescribe (or it to xit), and Cypress will skip the tests marked that way.
Command Line Solution:
You're missing the --. Add that in and the command from your question works. It should be written like this:
npm run cy-run -- --spec cypress/integration/2-advanced-examples/viewport.spec.js
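
The -- matters because npm run forwards only the arguments that come after -- to the underlying script. Assuming a package.json script along these lines (hypothetical):

{
  "scripts": {
    "cy-run": "cypress run"
  }
}

the command above expands to cypress run --spec cypress/integration/2-advanced-examples/viewport.spec.js; without the --, npm consumes the flag itself instead of passing it to Cypress.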

Execute TestNG.xml in dry run mode via eclipse

Is there a way to execute TestNG.xml in dry-run mode so that I can figure out which methods get qualified for the test run? I am using Eclipse and intend to run the tests via testng.xml. How do I configure Run Configurations for this?
Newbie to Selenium-TestNG and Eclipse
I tried to provide -Dtestng.mode.dryrun=true in Run Configurations -> Arguments tab under both Program argument and VM arguments
The run configurations had no effect on the execution; the tests were executed in the normal fashion. I expected the configuration to just list the test methods in the console.
You are going to see all tests pass with no failures. That is what you should expect when you run TestNG with that argument.
To check that the dry-run argument works, make a test fail, then run it with -Dtestng.mode.dryrun=true, adding the argument under "VM arguments".
Also, check that your TestNG version is 6.14 or higher.
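
For reference, the same system property can be passed when launching TestNG from the command line; a sketch (the classpath entries are placeholders):

java -Dtestng.mode.dryrun=true -cp "bin:lib/*" org.testng.TestNG testng.xml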

How to run CTest test in debugger

Suppose in CMakeLists.txt I have
add_executable(mytarget main.cpp)
enable_testing()
add_test(mytarget_test0 mytarget -option0)
Is there an easy way to run mytarget in GDB with all the command-line options from some particular CTest test, other than searching for the test in CMakeLists.txt and copy-pasting the add_test parameters to the command line manually?
Real life scenario: I run all tests using ctest, one fails, I want to open it in debugger quickly.
In other build systems there are command-line parameters for this; for example, Meson has meson test --gdb testname and Bazel has bazel test --run_under=gdbserver. I did not find anything similar for CTest.
It is possible to get the test command with its arguments (-N lists matching tests without running them, -V prints their command lines):
ctest -R $regex_matching_test -V -N
As output you will get something like:
Test project ../cmake-build-debug-gcc7
Constructing a list of tests
Done constructing a list of tests
1: Test command: ../cmake-build-debug-gcc7/my_tool "-v" "test0"
Test #1: my_tool_test0
Total Tests: 1
Then, using a regular expression, you can grab the command line with its arguments and hand it to gdb.
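A rough shell sketch of that extraction (assuming a POSIX sed; the test name is a placeholder):

ctest -R my_tool_test0 -N -V | sed -n 's/^[0-9]*: Test command: //p'
# prints e.g. ../cmake-build-debug-gcc7/my_tool "-v" "test0"; then run:
# gdb --args ../cmake-build-debug-gcc7/my_tool -v test0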
I use the following procedure:
make clean
make # Build target and all unit tests
make test # Run all unit tests
You will get a result list and some output, depending on the unit test framework. Fix the issue and re-run the failed test within gdb until it succeeds, then restart the whole procedure.
In case you are using an IDE (e.g. QtCreator), it is easy to debug through the failed test case.

Running python unit tests as part of bamboo build

I need to run Python unit test cases as part of a Bamboo build step, and the build needs to fail if the unit tests fail.
For this, I have a Script step in the Bamboo build and I am trying to run the following in it:
python -m unittest discover /test
Here, /test folder has all the unit tests.
The output of the above command is:
Ran 0 tests
So the problem is that bamboo isn't able to discover these tests. Bamboo agent is linux.
Wondering if anyone has done such a thing before and has any suggestions.
The following worked. I used the -p (pattern) option to discover/run the unit tests in Bamboo (Unix build agent):
python -m unittest discover -s test -p "T*.py"
Note:
1. All my test cases start with "T", e.g. Test_check.py.
2. "test" is the package where all my test cases are.
If you haven't figured it out: it's likely because on Windows filenames aren't case sensitive but on Linux they are, and your test file named Test_xxxx.py isn't the same as test_xxxx.py, which is the pattern that discovery is trying to use.
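
For reference, unittest discovery's default pattern is test*.py, so on a case-sensitive filesystem a file named Test_check.py is silently skipped unless you override the pattern:

python -m unittest discover -s test                 # default pattern test*.py; misses Test_check.py on Linux
python -m unittest discover -s test -p "Test*.py"   # explicitly matches the capitalized files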

Is there a way to run benchmarks with failing tests?

Is it expected that benchmarks don't run unless all tests in the package have passed? I've looked at the testing package docs and the testing flags, and I can't find it documented that benchmarks run only after all tests pass.
Is there a way to force benchmark functions to run even when some tests in the package have failed?
You can skip the failing tests using the -run flag, or choose to run none at all:
go test -bench . -run NONE
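
The -run flag takes a regular expression matched against test names; NONE works here simply because no test name matches it. A pattern that can never match, such as ^$, does the same, so only the benchmarks execute:

go test -bench . -run '^$'   # match no test names; run benchmarks only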