I am building some tests that do timestamp conversions in Rust using the chrono crate. I need to make sure they take into account the local time zone but the tests will be run in multiple time zones and so will fail for most testers. How can I force Rust or chrono within the code to use a specific time zone when running tests?
I know about setting the environment variable TZ=CST or similar. Since I cannot control that part of the execution environment for everyone running cargo test, I don't think this works for us.
If all tests should run in the same timezone, you can use std::sync::Once to initialize the TZ environment variable, as pointed out in the comments. Technically, since every test would set the same value, there is no race condition, so each test could simply perform the initialization itself.
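A minimal sketch of that approach (the helper name init_tz and the CET zone are illustrative assumptions):

use std::sync::Once;

static INIT: Once = Once::new();

// Call this at the top of every test; the closure runs at most once per process.
fn init_tz() {
    INIT.call_once(|| {
        std::env::set_var("TZ", "CET");
    });
}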
If tests need to set their own timezone, valid only for that one test, it's probably safest to still modify the timezone for the entire process (chrono will pick it up, but somewhere down in the libc dragons the TZ variable may be read as well). As you pointed out yourself, multiple tests then need to synchronize over their shared environment. You can do that with a lazy_static:
#[macro_use]
extern crate lazy_static;

lazy_static! {
    // A single global lock serializing every test that touches TZ.
    static ref TZ_LOCK: std::sync::Mutex<()> = std::sync::Mutex::new(());
}

fn with_tz<E, F: FnOnce() -> Result<(), E>>(tz: &str, f: F) -> Result<(), E> {
    // Hold the guard for the duration of `f` so no other test can change TZ.
    let _guard = TZ_LOCK.lock().unwrap();
    std::env::set_var("TZ", tz);
    f()
}

#[test]
fn foobar() -> Result<(), ()> {
    with_tz("CET", || {
        // assertions that rely on the CET timezone go here
        Ok(())
    })
}
You can get fancier with this by using a more complex TZ_LOCK where all tests that currently want to run under the same timezone get to run simultaneously.
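For illustration, a sketch of that fancier gate (the TzGate name is hypothetical; it assumes a single shared instance, e.g. in a lazy_static, and that closures don't panic while registered):

use std::sync::{Condvar, Mutex};

// Tracks which timezone is active and how many tests currently run under it.
struct TzGate {
    state: Mutex<(Option<String>, usize)>,
    cond: Condvar,
}

impl TzGate {
    fn run<T>(&self, tz: &str, f: impl FnOnce() -> T) -> T {
        let mut st = self.state.lock().unwrap();
        // Wait while a *different* timezone is active.
        while st.0.as_deref().map_or(false, |active| active != tz) {
            st = self.cond.wait(st).unwrap();
        }
        if st.1 == 0 {
            std::env::set_var("TZ", tz);
            st.0 = Some(tz.to_string());
        }
        st.1 += 1;
        drop(st);

        let out = f();

        let mut st = self.state.lock().unwrap();
        st.1 -= 1;
        if st.1 == 0 {
            // Last test under this timezone: let waiters pick a new one.
            st.0 = None;
            self.cond.notify_all();
        }
        out
    }
}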
I am trying to participate in online Codeforces contests using Kotlin.
My understanding is I should use Kotlin script if my code is contained within a single file.
If I run the following file locally (version 1.6.10) with kotlin just_main.main.kts:
// just_main.main.kts
fun main() {
    println("Hello World")
}
Nothing happens. I need to add an explicit call for it to actually execute main:
// top_level_call.main.kts
fun main() {
    println("Hello World")
}
main()
So far, so normal. The problem occurs when I try to submit my solution to the Codeforces online judge. The judge expects no top-level code and runs the main function instead. So just_main runs fine, but top_level_call produces a compilation error:
Can't compile file:
program.kt:43:1: error: expecting a top level declaration
main()
^
This leads to the awkward situation of me having to add the main() call when I want to try my solution locally, but having to remove it every time I upload an attempt.
Is there a way to have my local Kotlin behave the same as the online judge, meaning implicitly running any main functions (meaning just_main would produce output)?
I haven't found a way to do this with Kotlin script files, but you can also use normal .kt files without declaring any classes in them (my understanding is that Kotlin wraps the top-level declarations of a.kt into a generated class named AKt in the bytecode):
kotlinc a.kt && kotlin AKt < a.in
This "runs" a.kt with standard input from a.in.
(And yes, I only found this after I had already written the question.)
Before I begin, let me note that I think there have been many related questions and answers, all of which proved useful to me. However, I couldn't find anyone who thoroughly described a method to do everything I wanted, so I thought I would document the problem and my solution and ask if there were better approaches.
Let's suppose that I have some slow--but definitely correct--code to perform a certain task that I keep in my project to test a faster implementation. For concreteness, define:
pub fn fast(x: T) -> U {
    // does stuff and eventually returns a value `out`
    // ...
    debug_assert_eq!(out, slow(x));
    out
}
#[cfg(any(test, debug_assertions))]
pub fn slow(x: T) -> U { ... }
This is all fine and good. However, now suppose that I would like to add some benchmarks to demonstrate how good my fast implementation is...
Attempt 1: Criterion
I think that a standard way to set up benchmarking is to put a benches/ directory in the project, add a [[bench]] to Cargo.toml with the harness disabled, and use the criterion crate. However, if I understand correctly, the benchmark is then compiled as a separate crate and, like any outside user of the library, cannot access items that exist only during testing. Thus, slow will not be resolved and cargo bench will fail.
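For reference, a minimal Criterion benchmark might look like this (a sketch; the file name, the crate name my_crate, and the u64 input are assumptions):

// benches/my_bench.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_fast(c: &mut Criterion) {
    // `black_box` keeps the optimizer from constant-folding the input away.
    c.bench_function("fast", |b| b.iter(|| my_crate::fast(black_box(42u64))));
}

criterion_group!(benches, bench_fast);
criterion_main!(benches);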
A Quick Aside: Another thing that derailed me for a while is that I kept wanting to use bench as a cfg flag but couldn't find anything about this. As it turns out, I think that test also covers the benching case. (I think all the seasoned rustaceans will be laughing at me, but this seems like a useful thing to note for anyone in a similar situation).
Attempt 2: The Nightly Test Crate
Since the previous method didn't seem fruitful, another popular option seemed to be to use the unstable test crate. This results in a project structure that looks like:
Cargo.toml
src/
    lib.rs
    bench.rs
Our original file is then revised to be:
// Include the unstable feature
#![feature(test)]
pub fn fast(x: T) -> U { ... }
#[cfg(any(test, debug_assertions))]
pub fn slow(x: T) -> U { ... }
#[cfg(test)]
mod bench;
And then bench.rs should look something like:
extern crate test;
use test::Bencher;

#[bench]
fn bench_it(b: &mut Bencher) {
    b.iter(|| {}) // gotta go fast
}
This seemed to do everything I wanted upon running cargo +nightly bench. However, it is also super desirable for the project to be compilable outside of testing without the use of nightly or extra feature flags. That is, I still want to be able to run cargo build and cargo test and not get yelled at for requesting unstable features on a stable channel.
Attempt 2.5: Enter Build Scripts
(Once again, each of the parts is well-documented in other questions, I'm just collecting everything here for fun). Using a bunch of other posts, I learned that we can check for nightly and conditionally enable features by way of a build script. Our project now looks like this:
Cargo.toml
build.rs
src/
    lib.rs
    bench.rs
And we need to add rustc_version to our [build-dependencies] in Cargo.toml. We then add the following build script:
use rustc_version::{version_meta, Channel};

fn main() {
    // Set cfg flags based on the detected compiler channel
    match version_meta().unwrap().channel {
        Channel::Stable => {
            println!("cargo:rustc-cfg=RUSTC_IS_STABLE");
        }
        Channel::Beta => {
            println!("cargo:rustc-cfg=RUSTC_IS_BETA");
        }
        Channel::Nightly => {
            println!("cargo:rustc-cfg=RUSTC_IS_NIGHTLY");
        }
        Channel::Dev => {
            println!("cargo:rustc-cfg=RUSTC_IS_DEV");
        }
    }
}
Finally, if we update lib.rs to be the following:
// Enable the unstable feature only when the build script detected nightly
#![cfg_attr(RUSTC_IS_NIGHTLY, feature(test))] // <-- Note the change here!
pub fn fast(x: T) -> U { ... }
#[cfg(any(test, debug_assertions))]
pub fn slow(x: T) -> U { ... }
#[cfg(all(RUSTC_IS_NIGHTLY, test))] // <-- Note the change here!
mod bench;
I think we get everything we want.
So... thanks for joining me on this adventure. Would appreciate commentary on whether or not this was the right approach. Also, you might ask "why keep the benchmark around once we know it's slower?" I suppose this might be fair, but perhaps the test could be changed or I'd like to prove the new implementation is faster to a third party that won't just trust me.
panic! allows the setting of a custom (albeit global) hook. Is there anything comparable for early returns with the ? operator? I have a function that needs to close some resources in a special way before exiting. I could write a function ok_or_close() that closes the resources before returning the error:
fn opens_resources() -> Result<(), MyError> {
    // Opens some stuff.
    // Now a bunch of functions that might raise errors.
    ok_or_close(foo(), /* local variables */)?;
    ok_or_close(bar(), /* local variables */)?;
    ok_or_close(baz(), /* local variables */)?;
    ok_or_close(Ok(()), /* local variables */)
}
But that seems verbose. What I'd really like to do is this:
fn opens_resources() -> Result<(), MyError> {
    // Opens some stuff.
    // Now a bunch of functions that might raise errors.
    foo()?;
    bar()?;
    baz()?;

    on_err:
    // Closes some stuff. Would prefer not to make
    // this a function; it uses many local variables.

    Ok(())
}
Is there a way to do this or a pattern of programming that gets around this?
The closest thing to this would be the Try trait, which lets you implement how ? affects a specific type, but sadly it is still a nightly experiment, as stated here.
If you're interested in this feature, I'd recommend giving a +1 at this issue.
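Until that stabilizes, a pattern that works on stable is to run the fallible steps inside a closure and perform the cleanup once before propagating the error. A sketch reusing the question's names (foo, bar, baz, and MyError are assumed):

fn opens_resources() -> Result<(), MyError> {
    // Opens some stuff.
    let result = (|| {
        foo()?;
        bar()?;
        baz()?;
        Ok(())
    })();
    if result.is_err() {
        // Closes some stuff; all local variables are still in scope here.
    }
    result
}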
I need to redirect output of a spawned child process. This is what I tried but it doesn't work:
Command::new(cmd)
    .args(&["--testnet",
            "--fast",
            format!("&>> {}", log_path).as_str()])
    .stdin(Stdio::piped())
    .stdout(Stdio::inherit())
    .spawn()
You can't redirect output with > when starting another program like that. Operators like >, >>, | and similar ones are interpreted by your shell and are not a native functionality of starting programs. Since the Command API doesn't emulate a shell, this won't work. So instead of passing it in args you have to use other methods of the process API to achieve what you want.
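(As an aside: if you really do need shell semantics, you can ask a shell to perform the redirection for you. A sketch, assuming a Unix sh and an illustrative command and log path:)

use std::process::Command;

// The shell, not Rust, interprets the `>>` redirection here.
let child = Command::new("sh")
    .arg("-c")
    .arg("ping -c 10 google.com >> ping.log 2>&1")
    .spawn()?;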
Short lived program
If the program you start usually finishes immediately, you might just want to wait until it's done and collect its output afterwards. Then you can simply use Command::output():
use std::process::Command;
use std::fs::File;
use std::io::Write;

let output = Command::new("rustc")
    .args(&["-V", "--verbose"])
    .output()?;

let mut f = File::create("rustc.log")?;
f.write_all(&output.stdout)?;
(Playground)
Note: the code above has to be in a function that returns a Result in order for the ? operator to work (it just passes errors up).
Long lived program
But maybe your program is not short lived and you don't want to wait until it's finished before doing anything with the output. In that case you should capture stdout and call Command::spawn(). Then you can access the ChildStdout which implements Read:
use std::process::{Command, Stdio};
use std::fs::File;
use std::io;

let child = Command::new("ping")
    .args(&["-c", "10", "google.com"])
    .stdout(Stdio::piped())
    .spawn()?;

let mut f = File::create("ping.log")?;
io::copy(&mut child.stdout.unwrap(), &mut f)?;
(Playground)
That way, ping.log is written on the fly every time the command outputs new data.
Since June 2017 (https://github.com/rust-lang/rust/pull/42133), std::process::Stdio::from also accepts a File, and it works on both Windows and Unix systems.
Example:
use std::fs::File;
use std::process::{Command, Stdio};

fn main() {
    let f = File::create("path/to/some/log.log").unwrap();

    // The child's stdout is connected directly to the file.
    let mut child = Command::new("some command")
        .stdout(Stdio::from(f))
        .spawn()
        .unwrap();

    child.wait().unwrap();
}
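If you also want stderr in the same file, one option (a sketch; the duplicated handle comes from File::try_clone) is:

use std::fs::File;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    let f = File::create("log.log")?;
    // Duplicate the handle so stdout and stderr can each own one.
    let f_err = f.try_clone()?;

    Command::new("some command")
        .stdout(Stdio::from(f))
        .stderr(Stdio::from(f_err))
        .spawn()?
        .wait()?;
    Ok(())
}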
To directly use a file as output without intermediate copying from a pipe, you have to pass the file descriptor. The code is platform-specific, but with conditional compilation you can make it work on Windows too.
let f = File::create("foo.log").unwrap();
let fd = f.as_raw_fd();
// from_raw_fd is only considered unsafe if the file is used for mmap
let out = unsafe {Stdio::from_raw_fd(fd)};
let child = Command::new("foo")
.stdout(out)
.spawn().unwrap();
I'm writing some Rust code that manipulates raw pointers. These raw pointers are then exposed to users through structures that use ContravariantLifetime to tie the lifetime of the struct to my object.
I'd like to be able to write tests that validate that the user-facing structures cannot live longer than my object. I have code like the following:
fn element_cannot_outlive_parts() {
    let mut z = {
        let p = Package::new();
        p.create() // returns an object that cannot live longer than p
    };
}
This fails to compile, which is exactly what I want. However, I'd like to have some automated check that this behavior is true even after whatever refactoring I do to the code.
My best idea at the moment is to write one-off Rust files with this code and rig up bash scripts to attempt to compile them and look for specific error messages, which all feels pretty hacky.
The Rust project has a special set of tests called "compile-fail" tests that do exactly what you want.
The compiletest crate is an extraction of this idea that allows other libraries to do the same thing:
fn main() {
    let x: (u64, bool) = (true, 42u64);
    //~^ ERROR mismatched types
    //~^^ ERROR mismatched types
}
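A minimal harness for such tests might look like this (a sketch following the compiletest_rs README; the tests/compile-fail layout is an assumption):

// tests/compiletest.rs
extern crate compiletest_rs as compiletest;

use std::path::PathBuf;

fn run_mode(mode: &'static str) {
    let mut config = compiletest::Config::default();
    config.mode = mode.parse().expect("invalid mode");
    // The .rs files to compile live in tests/compile-fail/
    config.src_base = PathBuf::from(format!("tests/{}", mode));
    compiletest::run_tests(&config);
}

#[test]
fn compile_fail() {
    run_mode("compile-fail");
}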
One idea that gets halfway there is to use Cargo's "features".
Specify tests with a feature flag:
#[test]
#[cfg(feature = "compile_failure")]
fn bogus_test() {}
Add this to Cargo.toml:
[features]
compile_failure = []
And run tests as
cargo test --features compile_failure
The obvious thing missing from this is the automatic checking of "was it the right failure". If nothing else, this allows me to have tests that are semi-living in my codebase.
You are able to annotate a test that you expect to fail.
#[should_fail]
As such, you can write a test that attempts to breach the lifetime it should have, and thus fail, which would actually count as a pass.
For an example of a test for 'index out of bounds', see below (pulled from the Rust guides):
#[test]
#[should_fail]
fn test_out_of_bounds_failure() {
    let v: &[int] = [];
    v[0];
}
I believe that this example would be a compilation error, so it would stand to reason your compile lifetime violation error would be caught by this too.