Is it possible to run multiple FitNesse suite tests at once?
I am aware of the -Dslim.port option and setting its value to 0, in which case FitNesse will supposedly choose an available port. However, I wasn't able to see this in action; I always ran into this error: java.net.BindException: Address already in use (Bind failed).
I've read some documentation here: http://fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.WritingAcceptanceTests.SliM.SlimProtocol.PortManagement.
My use case for this is the FitNesse UI, not the command line.
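For reference, slim.port is a JVM system property, so it can be set once when the wiki server is started and should then apply to suites launched from the UI as well. A minimal sketch, assuming the standalone jar and a wiki port of 8080:
java -Dslim.port=0 -jar fitnesse-standalone.jar -p 8080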
I believe we found and resolved the issue. It was caused by a fixture that is not safe for concurrent use (fitnesse.fixtures.SetUp from FitNesse's own test set).
The take-away: build your own test suites directly under the FitNesseRoot directory, not below FitNesseRoot/FitNesse/SuiteAcceptanceTests/SuiteSlimTests.
Related
Because of resource exhaustion, I need to run test cases serially, without threads (these are integration tests for CUDA code). I went through the source code (e.g. tweaking GetThreadCount()) and tried other ways to make the gmock/gtest framework run tests serially, but found no way out.
At first I could not find any command-line arguments that influence this. It feels like the only options are to create many binaries or to write a script that uses --gtest_filter. I would rather not resort to hidden synchronization primitives between test cases.
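One way to approximate serial, isolated execution without touching the test code is a small wrapper that lists the tests and runs each one in its own process, one after another. A rough sketch, assuming the binary is ./my_cuda_tests (a hypothetical name) and that its --gtest_list_tests output contains only suite lines at column zero with test names indented below them:
./my_cuda_tests --gtest_list_tests |
  awk '/^[^ ]/ { suite = $1 } /^  / { print suite $1 }' |
  while read -r test; do
    # run exactly one test case per process, strictly one after another
    ./my_cuda_tests --gtest_filter="$test"
  done
Each test then runs in a fresh process, at the cost of repeating any per-process setup for every test.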
I'm just curious whether there is any known unwanted effect of the --browser-test flag on automation, or whether it can make my tests less valid.
I'm currently running tests with this flag and it doesn't seem to hurt anything. Is it just overlooked?
https://peter.sh/experiments/chromium-command-line-switches/#browser-test
https://github.com/GoogleChrome/puppeteer/blob/master/lib/Launcher.js#L38
The --browser-test flag activates an internal test used by the Chromium developers that concerns canvas repaints.
Some older code in the repository gives this hint:
Tells Content Shell that it's running as a content_browsertest.
And this issue in the Chromium repository contains more information:
We need a test that checks canvas capture happens for N times when there are N repaints. This test is not appropriate for webkit layout tests as it is slow and there are mock streams involved.
Looks like they added a special flag for this test.
Therefore, you should not activate this flag: it exists for the developer team's internal browser tests, not for testing websites.
We currently use Behat 3 to automate BDD tests for our website.
The current setup uses Jenkins to run Selenium which attaches to Firefox and uses XVFB to render (this allows us to save screenshots when anything goes wrong).
This is great for testing that the site (including JavaScript) works and that a user can perform each documented task successfully.
I am looking to expand our testing facilities, and one thing I would like to add is the ability to check multiple browsers. This is very important as we get occasional quirks that can break functionality.
Since the tests currently take slightly over an hour to run (and we have 4 suites for that site on Jenkins), I would prefer to run all the browsers at the same time. If I can't find a way to do it concurrently, I will likely just set up multiple Behat profiles and run each one in series.
One thing I've been looking at as a possible solution is Ghostlab. This would allow us to test across multiple browsers and multiple devices, including mobile, at the same time. The problem is that I can't find a way to connect it to Behat in a meaningful way.
I could run one browser connected to Ghostlab, which would cause the same actions to be taken across all connected browsers. However, if a browser other than the one controlled by Selenium were to break, I do not know how we would capture that information.
TL;DR: Is there any way for me to run BDD (preferably Behat) tests across multiple browsers in parallel, and capture information from any browser that fails?
This is what multi-configuration jobs (or matrix jobs) are designed for in Jenkins.
You specify your job configuration once, but add one or more variables that should change each time, building a matrix of combinations (in your case, the matrix has one dimension: browser).
Jenkins then runs one main build with multiple sub-builds in parallel — one for each combination in the matrix. You can then clearly see the results for each combination.
This requires that your test job can be parameterised, i.e. you can choose at runtime which browser should be run, rather than running all tests together in a single job.
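As a sketch of what that parameterisation might look like (assuming one Behat profile per browser and a matrix axis named BROWSER, both of which are example names), the build step boils down to:
behat --profile="$BROWSER"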
The Jenkins wiki has minimal documentation on this feature, but there are a few good blog posts (and Stack Overflow questions) out there on how to set it up.
A matrix job will use all available "executors" in Jenkins to run builds in parallel as much as possible.
In a default Jenkins installation, there are two executors available, but you can change this, or extend Jenkins by adding further build machines.
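Outside of Jenkins, a quick way to try the per-profile approach from the question concurrently is to background one behat process per profile and wait for them all to finish. A rough sketch, assuming profiles named firefox and chrome exist in behat.yml:
behat --profile=firefox > firefox.log 2>&1 &
behat --profile=chrome > chrome.log 2>&1 &
wait
Each process keeps its own log, so a failure in any browser is still captured, but you lose the per-combination reporting that a matrix job gives you.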
I'd like to make some changes to my copy of the rust standard library, then run the tests in the source files I changed. I do not need to test the compiler itself. How can I do this without testing a lot of things I am not changing and do not care about?
Here are some things I've already tried. A note: the specific file I want to play around with is libstd/io/net/pipes.rs in Rust 0.12.0.
I tried rustc --test pipes.rs - the imports and options are not set up properly, it seems, and a multitude of errors is the result.
Following the rust test suite documentation, I tried make check-stage1-std NO_REBUILD=1, but this failed with "can't find crate for `green`". A person on the #rust-internals irc channel informed me that "make check-stage1 breaks semi-often as it's not the 'official way' to run tests."
Another person on that channel suggested make check-stage0-std, which seems to check libstd, but doesn't restrict testing to the file I changed, even if I use the TESTNAME flag as specified in the rust test suite documentation.
As of 2022, the way to run the test suite for the Rust standard library is documented at https://rustc-dev-guide.rust-lang.org/tests/running.html .
While the documentation mentions the ability to test a single file, that appears to malfunction:
$ ./x.py test library/core/tests/str.rs
Updating only changed submodules
Submodules updated in 0.01 seconds
Building rustbuild
Finished dev [unoptimized] target(s) in 0.09s
thread 'main' panicked at 'error: no rules matched library/core/tests/str.rs', src/bootstrap/builder.rs:286:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Build completed unsuccessfully in 0:00:00
Later on it mentions a different syntax, like ./x.py test library/alloc/ --test-args str, which appears to successfully run the unit tests in library/alloc/tests/str.rs.
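The value passed via --test-args goes to the libtest runner as a name filter, so it can be narrowed further to an individual test. A sketch with a hypothetical test name:
./x.py test library/alloc/ --test-args str::test_split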
make check-stage1-std NO_REBUILD=1 or ... check-stage2-std ... should work, if you've done a full build previously. They just build the test runner directly without doing the rest of the bootstrap.
In any case, the full std test runner is always built, since, as you noticed, the imports etc. are set up for the full crate. TESTNAME is the correct way to restrict which tests are run, but there's no way to restrict which tests are built.
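For the file in the question, that would look roughly like this (a sketch; the module path is inferred from libstd/io/net/pipes.rs, and TESTNAME matches test names by substring):
make check-stage1-std NO_REBUILD=1 TESTNAME=io::net::pipes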
Another option is to pull the relevant test code into an external file, and yet another is to build the test runner by running rustc manually on libstd/lib.rs: rustc --test lib.rs. You could edit the rest of the crate to remove the tests/code you're not interested in.
There are a few tests in my testing solution that must be run first or else later tests will fail. I want a way to ensure these are run first and in a specific order. Is there any way of doing this other than using a .orderedtest file?
Some problems with the .orderedtest:
Certain tests should be run in a random order after the "set up" tests are finished
Ordered test does not seem to call the ClassInitialize method
Isn't an orderedtest a form of test list, which is deprecated in VS/TFS 2012?
My advice would be to fix your tests to remove the dependencies (i.e. make them proper "unit" tests) - otherwise they are bound to cause problems later, e.g.:
causing a simple failure to cascade so that hundreds of tests fail and make it hard to find the root cause
failing unexpectedly because someone has inadvertently modified the execution order
reporting passes when in fact they should be failing, just because the initial state is not what they require
You could try approaches like:
keep the tests separate but make each of them set up and tear down the test environment that they require. (A shared class to provide the initial state would be helpful here)
merge the related tests into a single one, so that you can control the setup, execution, and close-down in a robust way.