There are a lot of similar posts from the past, but they all seem outdated or irrelevant.
My question is: how do you write regression tests for Rust binaries?
I could set them up as "unit tests" in my src/main.rs file, but that's annoying. Ideally, it would be set up as
root
|--- src
|    |--- main.rs
|    |--- foo.rs
|    |--- bar.rs
|--- tests
     |--- regress1.rs
     |--- regress2.rs
Two options:

1. Split your code into a library and a binary: src/lib.rs and src/main.rs. Then you can write tests/ tests that load the library part. This option is best if you specifically want to take advantage of the fact that tests/ tests ("integration tests") are individual binaries of their own (e.g. if the code you want to test uses global variables or system calls that affect global state).

2. Write #[test] tests in your binary's code without putting them directly in your src/main.rs file. Just write mod tests; or mod tests { mod regress1; } and put your tests in src/tests/regress1.rs, and in that file write #[test] functions as usual, as sketched below. (Or, if you really want them in a different directory, use the #[path] attribute on the mod declaration.) This option allows faster test execution, because the tests aren't separate binaries and will be run in parallel on threads by the Rust test harness.
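A minimal sketch of option 2, assuming the src/tests layout from the question (function and file names are illustrative):

// src/main.rs
fn main() {
    println!("{}", helper());
}

fn helper() -> i32 {
    42
}

// Compiled only under `cargo test`; `regress1` is looked up at
// src/tests/regress1.rs because `tests` is declared inline here.
#[cfg(test)]
mod tests {
    mod regress1;
}

// src/tests/regress1.rs
#[test]
fn helper_still_returns_42() {
    assert_eq!(crate::helper(), 42);
}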
Related
Long ago, when Kotlin 1.3.20 was released (https://blog.jetbrains.com/kotlin/2019/01/kotlin-1-3-20-released/), the ability to build in parallel using Gradle Workers was added. Simply adding the kotlin.parallel.tasks.in.project = true setting does not give any gain in build speed. As far as I understand, this parameter can be useful only if I have several folders with classes independent of each other within the same project. I saw this setting used when building Gradle itself, but nowhere did I see separate source sets being created for each folder.
Could you provide examples of how to correctly describe the build process in build.gradle.kts so that the mentioned option is actually used and gives an increase in build speed when there are several processor cores?
As of now, there's no simple way to parallelize compilation of a single source set containing Kotlin code (such as just the main sources), because the compiler has to analyze all of the sources together and resolve cross-references within the source set.
By default, without any additional options, Gradle runs compilation of Kotlin sources in parallel only in different subprojects. The option kotlin.parallel.tasks.in.project also allows Gradle to run parallel compilation tasks in one project, but that only works for different source sets (that don't depend on each other!), or different targets.
For example, in multiplatform projects, if you have several targets, kotlin.parallel.tasks.in.project allows Gradle to build the compilation outputs (JVM/Android classes, *.js, Kotlin/Native *.klibs and binaries) in parallel. In Android projects, if you build multiple product variants, this option also allows parallel Kotlin compilation for those variants.
In simpler project layouts, where you only have main and test source sets and a single target, there's no way to improve Kotlin compilation speed by using multiple processors, unless you split one project into several projects.
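For illustration, a minimal multiplatform sketch where the option can pay off, since each target's compilation is an independent Gradle task (the plugin version and target names are assumptions):

// gradle.properties:
//   kotlin.parallel.tasks.in.project=true

// build.gradle.kts
plugins {
    kotlin("multiplatform") version "1.3.20"
}

kotlin {
    jvm()              // JVM class files
    js()               // *.js output
    linuxX64("native") // Kotlin/Native *.klib and binaries
}

// With several cores available, Gradle may now run the jvm, js, and
// native compilation tasks of this single project in parallel.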
I am wondering what code will be compiled into the Go binary when you compile it using go build ./... . This produces a binary that contains a CLI program. For this CLI program, I have test code and non-test code. I currently have several flavours of test code:
foo_test.go in package foo_test
foo_internal_test.go in package foo
testutil.go in package testutil that provides test utility functions
No test code is actually referenced in the non-test code. The testutil functions are only imported in the test files.
If the test code is in fact compiled into the binary, how much of a problem is this?
A Go binary only includes code reachable from its main() entry point. For test binaries, main() is the test runner.
As to "how much of a problem" it is if it were included... none. It would increase the binary size and compilation time somewhat but otherwise have no impact - code that isn't executed, by definition, does nothing.
I believe that if you have an init() function in an otherwise unreachable file, it will still be linked into the executable.
_test.go files would be still excluded.
This bit us when we had some test helper code that was not in _test.go files. One file had an init() function, which ran at executable startup.
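A sketch of that pitfall (package and names are hypothetical): a helper file that is not a _test.go file ships with its package, so its init() runs in any binary that imports the package.

// helpers.go -- NOT a _test.go file, so it is compiled into every
// binary that imports package foo, even if only tests call FakeClock.
package foo

import "log"

func init() {
	// Runs at startup of production binaries too.
	log.Println("test helper init")
}

// FakeClock is only referenced from tests, but still gets linked in.
func FakeClock() int64 { return 0 }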
I created a new binary using Cargo:
cargo new my_binary --bin
A function in my_binary/src/main.rs can be used for a test:
fn function_from_main() {
    println!("Test OK");
}

#[test]
fn my_test() {
    function_from_main();
}
And cargo test -- --nocapture runs the test as expected.
What's the most straightforward way to move this test into a separate file, (keeping function_from_main in my_binary/src/main.rs)?
I tried to do this but am not sure how to make my_test call function_from_main from a separate file.
The Rust Programming Language has a chapter dedicated to testing which you should read to gain a baseline understanding.
It's common to put unit tests (tests that are allowed to access the internals of your code) into a test module in each specific file:
fn function_from_main() {
    println!("Test OK");
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn my_test() {
        function_from_main();
    }
}
Modules can be moved to new files, although this is uncommon for the unit test module:
main.rs
fn function_from_main() {
    println!("Test OK");
}

#[cfg(test)]
mod test;
test.rs
use super::*;

#[test]
fn my_test() {
    function_from_main();
}
See Separating Modules into Different Files for detailed information on how files and modules map to each other.
The more common case for tests in a separate file is integration tests. These are also covered in the book by a section devoted to tests outside of the crate. These types of tests are well-suited for exercising the code as a consumer of your code would.
That section of the documentation includes an introductory example and descriptive text:
We create a tests directory at the top level of our project directory,
next to src. Cargo knows to look for integration test files in this
directory. We can then make as many test files as we want to in this
directory, and Cargo will compile each of the files as an individual
crate.
Let’s create an integration test. With the code in Listing 11-12 still
in the src/lib.rs file, make a tests directory, create a new file
named tests/integration_test.rs, and enter the code in Listing 11-13:
Filename: tests/integration_test.rs
use adder;

#[test]
fn it_adds_two() {
    assert_eq!(4, adder::add_two(2));
}
Listing 11-13: An integration test of a function in the adder crate
We’ve added use adder at the top of the code, which we didn’t need in
the unit tests. The reason is that each test in the tests directory is
a separate crate, so we need to bring our library into each test
crate’s scope.
Note that the function is called as adder::add_two. Further details about Rust's module system can be found in the Packages, Crates, and Modules chapter.
Since these tests exercise your crate as a user would, if you want to test a binary, you should be executing the binary. Crates like assert_cmd can help reduce the pain of this type of test.
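For example, a minimal sketch using assert_cmd (added as a dev-dependency; the binary name my_binary is taken from the question):

// tests/cli.rs
use assert_cmd::Command;

#[test]
fn binary_exits_successfully() {
    // cargo_bin locates this package's compiled binary by name.
    let mut cmd = Command::cargo_bin("my_binary").expect("binary should build");
    cmd.assert().success();
}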
In other cases, you should break your large binary into a large library and a small binary. You can then write integration tests for the public API of your library.
See also:
Rust package with both a library and a binary?
If you have a module foo.rs and want to place your unit tests next to it in a file called foo_test.rs, you'll find that this isn't always the place that Rust will look for a child module.
You can use the #[path] attribute to specify the location of the file corresponding to the module:
#[cfg(test)]
#[path = "./foo_test.rs"]
mod foo_test;
This is explained in the blog post Better location for unit tests in Rust.
You're right; function_from_main is inaccessible outside of main.rs.
You need to create a src/lib.rs and move the functions you want to test into it piecemeal. Then you'll be able to use extern crate my_binary; from your test module, and have your functions appear under the my_binary namespace.
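A minimal sketch of that split (crate name my_binary from the question; on the 2018 edition, a plain use my_binary::...; replaces extern crate):

// src/lib.rs
pub fn function_from_main() {
    println!("Test OK");
}

// src/main.rs
fn main() {
    my_binary::function_from_main();
}

// tests/regress.rs
#[test]
fn my_test() {
    my_binary::function_from_main();
}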
I do believe you should follow the advice of reading the Rust book chapter on testing; however, that still won't quite answer your question of how to separate test and source files.
So say you have a lib.rs source file and want a test_lib.rs file. To do this, all you need is:
In your lib.rs:

#[cfg(test)]
mod test_lib;
// rest of source

Then in your test_lib.rs:

use super::*;

#[test]
fn test1() {
    // test logic
}
// more tests

Note that #[cfg(test)] goes on the mod declaration, so the whole test module is only compiled for test builds.
Then, both you and cargo test should be happy.
To embellish* @mmai's answer:
#[rustfmt::skip]
#[cfg(test)]
#[path = "./foo_test.rs"]
mod foo_test;
cargo fmt seems to be unhappy following the path to the new test file. If anyone has a better solution I'd be happy to hear it, because I wish cargo fmt would format my test files too!
*Sorry: I'd add this just as a comment on the existing answer but I don't have enough reputation.
I've spent some time researching this, and though I've found some relevant info, none of it has answered the question satisfactorily. Here's what I've found:
SO question: "What is the clojure equivalent of the Python idiom if __name__ == '__main__'?"
Some techniques at RosettaCode
A few discussions in the Clojure Google Group, most from 2009
My Clojure source code file defines a namespace and a bunch of functions. There’s also a function which I want to be invoked when the source file is run as a script, but never when it’s imported as a library.
So: now that it’s 2012, is there a way to do this yet, without AOT compilation? If so, please enlighten me!
I'm assuming by run as a script you mean via clojure.main as follows:
java -cp clojure.jar clojure.main /path/to/myscript.clj
If so then there is a simple technique: put all the library functions in a separate namespace like mylibrary.clj. Then myscript.clj can use/require this library, as can your other code. But the specific functions in myscript.clj will only get called when it is run as a script.
As a bonus, this also gives you a good project structure, as you don't want script-specific code mixed in with your general library functions.
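A minimal sketch of that split (file and function names are illustrative):

;; mylibrary.clj -- shared functions; safe to require from anywhere
(ns mylibrary)

(defn greet [name]
  (str "Hello, " name "!"))

;; myscript.clj -- run with: java -cp clojure.jar:. clojure.main myscript.clj
(ns myscript
  (:require [mylibrary]))

;; Top-level forms run when this file is loaded; since nothing else
;; requires myscript, they act as the script-only entry point.
(println (mylibrary/greet "world"))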
EDIT:
I don't think there is a robust way within Clojure itself to determine whether a single file was launched as a script or loaded as a library; from Clojure's perspective, there is no difference between the two (it all gets loaded the same way via Compiler.load(...) in the Clojure source, for anyone interested).
Options if you really want to detect the manner of the launch:
Write a main class in Java which sets a static flag and then launches the Clojure script. You can easily test this flag from Clojure.
Use AOT compilation to implement a Clojure main class which sets a flag
Use *command-line-args* to indicate script usage. You'll need to pass an extra parameter like "script" on the command line (see the sketch after this list).
Use a platform-specific method to determine the command line (e.g. from the environment variables in Windows)
Use the --eval option in the clojure.main command line to load your clj file and launch a specific function that represents your script. This function can then set a script-specific flag if needed
Use one of the methods for detecting the Java main class at runtime
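A sketch of the *command-line-args* option (the "script" argument value is just the convention suggested above):

;; myscript.clj -- run as:
;;   java -cp clojure.jar clojure.main myscript.clj script
(ns myscript)

(def running-as-script?
  ;; clojure.main binds *command-line-args* to the arguments
  ;; that follow the script file name (nil when there are none)
  (= (first *command-line-args*) "script"))

(when running-as-script?
  (println "invoked as a script"))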
I’ve come up with an approach which, while deeply flawed, seems to work.
I hard-code the set of namespaces that are loaded when my program runs as a script, and compare its size to the number of namespaces actually loaded at runtime. The idea is that if the file is being used as a lib, there should be at least one more namespace present than in the script case.
Of course, this is extremely hacky and brittle, but it does seem to work:
(defn running-as-script
  "This is hacky and brittle but it seems to work. I’d love a better
  way to do this; see http://stackoverflow.com/q/9027265"
  []
  (let [known-namespaces
        #{"clojure.set"
          "user"
          "clojure.main"
          "clj-time.format"
          "clojure.core"
          "rollup"
          "clj-time.core"
          "clojure.java.io"
          "clojure.string"
          "clojure.core.protocols"}]
    (= (count (all-ns)) (count known-namespaces))))
This might be helpful: the GitHub project lein-oneoff describes itself as "dependency management for one-off, single-file clojure programs."
This lets you define everything in one file, but you do need the oneoff plugin installed in order to run it from the command line.
I'm writing tests for an OCaml module. Some of the functions in the module are not meant to be publicly visible, and so they're not included in the signature (.mli file).
I can't call these functions from my tests, because they're not visible outside of the module. So I'm having a hard time testing them. Is there a good way to get around this? For example, a way to tell ocamlc not to read the signature from the .mli file when it's compiling tests?
Some ideas:
Actually export the test functions, but use ocamldoc's stop-comment (**/**) feature to avoid displaying the exports in the documentation (see the sketch after this list).
Put all of your tests entirely in another module. However, this is difficult if you have abstract types because your tests may very well need access to the internal implementation.
Create a submodule Test, where all your tests go. That way it is clear what functions are just for testing. Possibly combine this with the (**/**) feature to also hide the sub-module from documentation.
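A minimal sketch of ideas 1 and 3 combined (module and function names are hypothetical):

(* foo.mli *)
val double : int -> int  (* the public API *)

(**/**)
(* ocamldoc omits everything after the stop comment from the docs. *)
module Test : sig
  val helper : int -> int  (* internal; exported only so tests can call it *)
end

(* foo.ml *)
let helper x = x + x
let double x = helper x

module Test = struct
  let helper = helper
end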
I've heard that people sometimes separate their .mli files from their .ml files (in a different directory) so that they can compile with or without them (by telling ocamlc to look in the separate directory or not). I just tried a few experiments with this. I think it can be made to work, but it seems a little bit error prone to me. Maybe you could put the tests of the internal functions into the module. Exporting the test functions might not violate the modularity too badly. (Though of course it clutters up the module.)