How to test the Rust standard library?

I'd like to make some changes to my copy of the rust standard library, then run the tests in the source files I changed. I do not need to test the compiler itself. How can I do this without testing a lot of things I am not changing and do not care about?
Here are some things I've already tried. A note: the specific file I want to play around with is libstd/io/net/pipes.rs in Rust 0.12.0.
I tried rustc --test pipes.rs - the imports and options are not set up properly, it seems, and a multitude of errors is the result.
Following the rust test suite documentation, I tried make check-stage1-std NO_REBUILD=1, but this failed with "can't find crate for `green`". A person on the #rust-internals irc channel informed me that "make check-stage1 breaks semi-often as it's not the 'official way' to run tests."
Another person on that channel suggested make check-stage0-std, which seems to check libstd, but doesn't restrict testing to the file I changed, even if I use the TESTNAME flag as specified in the rust test suite documentation.

As of 2022, the way to run the test suite for the Rust standard library is documented at https://rustc-dev-guide.rust-lang.org/tests/running.html .
While the documentation mentions the ability to test a single file, that appears to malfunction:
$ ./x.py test library/core/tests/str.rs
Updating only changed submodules
Submodules updated in 0.01 seconds
Building rustbuild
Finished dev [unoptimized] target(s) in 0.09s
thread 'main' panicked at 'error: no rules matched library/core/tests/str.rs', src/bootstrap/builder.rs:286:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Build completed unsuccessfully in 0:00:00
Later on it mentions a different syntax, like ./x.py test library/alloc/ --test-args str, which appears to successfully run the unit tests in library/alloc/tests/str.rs.
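By analogy, filtering the core string tests by name presumably works the same way (the exact filter string is an assumption):
$ ./x.py test library/core --test-args str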

make check-stage1-std NO_REBUILD=1 or ... check-stage2-std ... should work, if you've done a full build previously. They just build the test runner directly without doing the rest of the bootstrap.
In any case, the full std test runner is always built, since, as you noticed, the imports etc. are set up for the full crate. TESTNAME is the correct way to restrict which tests are run, but there's no way to restrict which tests are built.
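For the pipes example, assuming TESTNAME is matched as a substring of the test name, something like this should limit which tests run (the exact filter string is an assumption):
make check-stage1-std NO_REBUILD=1 TESTNAME=io::net::pipes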
Another option is to pull the relevant test code into an external file; yet another is to build the test runner by running rustc manually on libstd/lib.rs: rustc --test lib.rs. You could then edit the rest of the crate to remove the tests/code you're not interested in.
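A rough sketch of that manual route (the paths and the filter are assumptions; libtest binaries accept a name filter as their first argument):
cd src/libstd
rustc --test lib.rs -o std-tests
./std-tests io::net::pipes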

Related

LLVM lit testing: is it possible to configure number of threads via `lit.cfg`?

I wonder if it's possible to configure the number of threads for testing in a lit.cfg file.
lit offers a command-line flag for specifying the number of threads:
llvm/utils/lit/lit.py -j1 <test directory>
However, I'm not sure how to do this in a lit.cfg file. I want to force all tests in a subdirectory to be run with -j1 - not sure if this is possible.
Edit: for reference, I'm working on the Swift codebase which has a large test suite (4000+ tests) with multiple test subdirectories.
I want to run just one subdirectory with -j1 and the rest with the default number of threads (-j12 for my machine).
I was wondering about that too a while back, but I don't think there is such an option, because of this line here. Usually, the main project's compilation time dwarfs the lit tests' execution time.
It is easy to change, but I'd suggest using your build configuration for this (e.g. make or cmake). So, make test could execute something like lit -j $(nproc) underneath.
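A minimal sketch of that idea in a Makefile (the paths are assumptions; note the doubled $$ so make passes $(nproc) through to the shell):
# Run the lit suite with one worker per CPU
test:
	llvm/utils/lit/lit.py -j $$(nproc) test/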
Edit (after OP update):
I haven't worked with the Swift repo, but maybe you could hack your way around it. One thing I could see is that you could influence the LIT_ARGS cmake variable with the options you want by appending to it.
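For instance, appending to it from CMake might look like this (a sketch; the variable name is taken from the suggestion above and may differ in the actual Swift build):
set(LIT_ARGS "${LIT_ARGS} -j1")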
Now to force a single process execution for a specific directory, you may add a lit.local.cfg that sets the singleProcess flag. This seems to override multi-thread execution:
config.singleProcess = True

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to not have any name conflicts if you use more namespaces in your tests. So long as you have only one, it's fine.
Using lein lets you specify unit and integration tests and easily run them in groups using the test-selectors feature.
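For example, test selectors are declared in project.clj roughly like this (a sketch, assuming integration tests carry :integration metadata), after which lein test runs the default group and lein test :integration runs the rest:
;; in project.clj
:test-selectors {:default (complement :integration)
                 :integration :integration}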
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
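That profile entry might look roughly like this (a sketch; the version number is only illustrative):
;; ~/.lein/profiles.clj
{:dev {:dependencies [[org.clojure/tools.namespace "0.2.11"]]}}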
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
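For instance (the namespace and test name are hypothetical):
(require 'my.namespace-test :reload)
(my.namespace-test/parses-input-test) ; a deftest var is a no-argument function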
I am so far impressed with lein-midje
$ lein midje :autotest
Starts a Clojure process watching src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.

Serial execution of package tests

I have implemented several packages for a web API, each with their own test cases. When each package is tested using go test ./api/pkgname the tests pass. If I want to run all tests at once with go test ./api/... test cases always fail.
In each test case, I recreate the entire schema using DROP SCHEMA public CASCADE followed by CREATE SCHEMA public and apply all migrations. The test suite reports errors back at random, saying a relation/table does not exist, so I guess each test suite (per package) is run in parallel somehow, thus messing up the DB state.
I tried to pass along some test flags like go test -cpu 1 -parallel 0 ./src/api/... with no success.
Could the problem here be tests running in parallel, and if yes, how can I force serial execution?
Update:
Currently I use this workaround to run the tests, but I still wonder if there's a better solution
find <dir> -type d -exec go test {} \;
As others have pointed out, -parallel doesn't do the job (it only works within packages). However, you can use the flag -p=1 to run through the package tests in series. This is documented here:
http://golang.org/src/cmd/go/testflag.go
but (afaict) not on the command line, go help, etc. I'm not sure it is meant to stick around (although I'd argue that if it is removed, -parallel should be fixed.)
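So for the layout in the question, this should run the package test binaries one at a time:
go test -p 1 ./api/...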
The go tool is provided to make running unit tests easier, using the convention that *_test.go files contain the unit tests. Because it assumes they are unit tests, it also assumes they are hermetic. It sounds like your tests either aren't unit tests, or they are but violate the assumptions that a unit test should fulfill.
If these tests are meant to be unit tests, then you probably need a mock database. A mock of your database, preferably in memory, will ensure that each unit test is hermetic and can't be interfered with by other unit tests.
If these tests are meant to be integration tests, you are probably better off not using the go tool for them. What you probably want is to create a separate test binary whose execution you can control, and write your integration test scripts in there.
The good news is that creating a mock in Go is insanely easy. Change your code to take an interface with the methods you care about for the database, then write an in-memory implementation of that interface for testing purposes and pass it into the application code you want to test.
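A minimal sketch of that approach (all names here are illustrative, not from the original code):
package api

import "fmt"

// The application code depends on this interface instead of a concrete DB handle.
type UserStore interface {
	CreateUser(name string) (int, error)
	GetUser(id int) (string, error)
}

// memStore is an in-memory implementation used only by the tests.
type memStore struct {
	users map[int]string
	next  int
}

func newMemStore() *memStore {
	return &memStore{users: map[int]string{}}
}

func (m *memStore) CreateUser(name string) (int, error) {
	m.next++
	m.users[m.next] = name
	return m.next, nil
}

func (m *memStore) GetUser(id int) (string, error) {
	name, ok := m.users[id]
	if !ok {
		return "", fmt.Errorf("no user with id %d", id)
	}
	return name, nil
}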
Just to clarify, @Jeremy's answer is still the accepted one:
Since my integration tests were only run on one package (api), I removed the separate test binary in the end and created a pattern to separate test types by:
Unit tests use the normal TestX name
Integration tests use Test_X
I created shell scripts (utest.sh/itest.sh) to run either of those.
For unit tests: go test -run="^(Test|Benchmark)[^_](.*)"
For integration tests: go test -run="^(Test|Benchmark)_(.*)"
Run both using the normal go test
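The two scripts then boil down to something like this (a sketch; the package path is taken from the question):
#!/bin/sh
# utest.sh - run only the unit tests (Test/Benchmark names without an underscore)
go test -run="^(Test|Benchmark)[^_](.*)" ./api/...

#!/bin/sh
# itest.sh - run only the integration tests (Test_/Benchmark_ names)
go test -run="^(Test|Benchmark)_(.*)" ./api/...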

Any way to run code in SenTest only on a success?

In my Mac Cocoa unit tests, I would like to output some files as part of the testing process, and delete them when the test is done, but only when there are no failures. How can this be done (and/or what's the cleanest way to do so)?
Your question made me curious so I looked into it!
I guess I would override the failWithException: method in the class SenTestCase (the class your tests run in inherits from this), and set a "keep output files" flag or something before calling the super's method.
Here's what SenTestCase.h says about that method:
/*"Failing a test, used by all macros"*/
- (void) failWithException:(NSException *) anException;
So, provided you only use the SenTest macros to test and/or fail (and chances are this is true in your case), that should cover any test failure.
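A sketch of that override (keepOutputFiles is a hypothetical property you would check in -tearDown before deleting anything):
- (void)failWithException:(NSException *)anException {
    self.keepOutputFiles = YES; // hypothetical flag: skip file cleanup for this test
    [super failWithException:anException];
}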
I've never dug into the scripts for this, but it seems like you could customize how you call the script that actually runs your tests to do this. In Xcode 4, look at the last step in the Build Phases tab of your test target. Mine contains this:
# Run the unit tests in this test bundle.
"${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"
I haven't pored through the contents of this script or the many subscripts it pulls in on my machine, but presumably they call otest or some other test rig executable and the test results would be returned to that script. After a little time familiarizing yourself with those scripts you would likely be able to find a straightforward way to conditionally remove the output files based on the test results.

Running Scala tests automatically either after test change or tested class change

I'm wondering if there is any solution to let Scala tests run automatically upon a change to either the test class itself or the class under test (just automatically testing Class <---> ClassTest pairs would be a good start).
sbt can help you with this. After you set up the project, just run
~test
~ means continuous execution, so sbt will watch for file system changes; when changes are detected, it recompiles the changed classes and tests your code. ~testQuick can be even more suitable for you, because it runs only the tests affected by a change (including the test class and all its transitive dependencies). You can read more about this here:
http://code.google.com/p/simple-build-tool/wiki/TriggeredExecution
http://php.jglobal.com/blog/?p=363
By the way, ~ also works with other tasks like ~run.
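If you want to narrow things down to a single Class <---> ClassTest pair, combining ~ with testOnly should also work (a sketch; the class name is illustrative):
~testOnly my.namespace.MyClassTest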