Go testing customization with testing.TB

I'm trying to customize testing.T with my own assert method to reduce the number of lines I'm writing. I tried the following and ended up with the error: "wrong signature for TestCustom, must be: func TestCustom(t *testing.T)".
How can I make TestCustom use CustomTester interface with a new method, assert?
I don't want to use a 3rd-party framework.
custom_testing.go
type CustomTester struct {
    testing.TB
}

func (t *CustomTester) assert(exp interface{}, act interface{}) {
    if exp != act {
        t.Errorf("expected: %v. got: %v\n", exp, act)
    }
}

// I want the testing package to inject testing.T here,
// but through my own wrapper: a CustomTester struct with
// its own assert method, so I can stop passing t as an argument
// in each assert like: assert(t, exp, act)
func TestCustom(t *testing.TB) {
    t.assert(3, len(foo))
}
NOTE: I also tried the following. It works, but I don't want to pass t each time I'm testing:
working_not_wanted.go
func assert(t testing.TB, exp interface{}, act interface{}) {
    if exp != act {
        t.Errorf("expected: %v. got: %v\n", exp, act)
    }
}

func TestCustom(t *testing.T) {
    assert(t, 3, len(foo))
}

The Go testing framework executes test functions of a specific signature, and that signature takes a *testing.T. If you want to use the stdlib testing system, your test functions have to have the required signature.
You could wrap it with one line in every test function:
func TestCustom(stdt *testing.T) {
    // This line wraps the standard *testing.T in your custom type:
    t := &CustomTester{stdt}
    t.assert(3, len(foo))
    t.Error("An error done happened")
}
There are other ways to do it, but there is no way to have a testing function, run by go test, using the stdlib testing package, that takes anything other than *testing.T as its sole parameter.
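For completeness, here is a minimal runnable sketch of that pattern under those constraints (the wrap helper and the literal "foo" value are invented for illustration; t.Helper() just makes failures point at the caller):

package mypkg

import "testing"

// CustomTester embeds testing.TB, so it keeps all the usual T methods.
type CustomTester struct {
    testing.TB
}

// assert fails the test when exp and act differ.
func (t *CustomTester) assert(exp interface{}, act interface{}) {
    t.Helper() // report the failure at the caller's line, not here
    if exp != act {
        t.Errorf("expected: %v. got: %v", exp, act)
    }
}

// wrap is the hypothetical one-liner used at the top of each test.
func wrap(t *testing.T) *CustomTester {
    return &CustomTester{t}
}

func TestCustom(t *testing.T) {
    c := wrap(t)
    c.assert(3, len("foo")) // stand-in for the original len(foo)
}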

Related

Why doesn't Rust's main function allow a return value, and how can I return one anyway?

In Rust the main function is defined like this:
fn main() {
}
This function does not allow for a return value though. Why would a language not allow for a return value and is there a way to return something anyway? Would I be able to safely use the C exit(int) function, or will this cause leaks and whatnot?
As of Rust 1.26, main can return a Result:
use std::fs::File;

fn main() -> Result<(), std::io::Error> {
    let f = File::open("bar.txt")?;
    Ok(())
}
The returned error code in this case is 1 in case of an error. With File::open("bar.txt").expect("file not found"); instead, an error value of 101 is returned (at least on my machine).
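For comparison, a tiny sketch of the expect variant mentioned above (assuming bar.txt does not exist, the panic makes the process exit with status 101):

use std::fs::File;

fn main() {
    // A panic (here triggered by expect) exits the process with code 101.
    let _f = File::open("bar.txt").expect("file not found");
}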
Also, if you want to return a more generic error, use:
use std::error::Error;
...
fn main() -> Result<(), Box<dyn Error>> {
...
}
std::process::exit(code: i32) is the way to exit with a code.
Rust does it this way so that there is a consistent explicit interface for returning a value from a program, wherever it is set from. If main starts a series of tasks then any of these can set the return value, even if main has exited.
Rust does have a way to write a main function that returns a value, however it is normally abstracted within stdlib. See the documentation on writing an executable without stdlib for details.
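A minimal sketch of exiting with an explicit status code, as described above:

use std::process;

fn main() {
    // Do whatever work is needed, then exit explicitly.
    // Note: process::exit does not run destructors for values still in scope.
    process::exit(2);
}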
As was noted by others, std::process::exit(code: i32) is the way to go here
More information about why is given in RFC 1011: Process Exit. Discussion about the RFC is in the pull request of the RFC.
The reddit thread on this has a "why" explanation:
Rust certainly could be designed to do this. It used to, in fact.
But because of the task model Rust uses, the fn main task could start a bunch of other tasks and then exit! But one of those other tasks may want to set the OS exit code after main has gone away.
Calling set_exit_status is explicit, easy, and doesn't require you to always put a 0 at the bottom of main when you otherwise don't care.
Try:
use std::process::ExitCode;
fn main() -> ExitCode {
    ExitCode::from(2)
}
Take a look at the ExitCode docs.
or:
use std::process::{ExitCode, Termination};
pub enum LinuxExitCode { E_OK, E_ERR(u8) }

impl Termination for LinuxExitCode {
    fn report(self) -> ExitCode {
        match self {
            LinuxExitCode::E_OK => ExitCode::SUCCESS,
            LinuxExitCode::E_ERR(v) => ExitCode::from(v),
        }
    }
}

fn main() -> LinuxExitCode {
    LinuxExitCode::E_ERR(3)
}
You can set the return value with std::os::set_exit_status (a pre-1.0 API that has since been removed).

Testing log.Fatalf in Go?

I'd like to achieve 100% test coverage in Go code. I am not able to cover the following example; can anyone help me with that?
package example
import (
    "io/ioutil"
    "log"
)

func checkIfReadable(filename string) (string, error) {
    _, err := ioutil.ReadFile(filename)
    if err != nil {
        log.Fatalf("Cannot read the file... how to add coverage test for this line ?!?")
    }
    return "", nil
}

func main() {
    checkIfReadable("dummy.txt")
}
Some dummy test for that:
package example
import (
    "fmt"
    "testing"
)

func TestCheckIfReadable(t *testing.T) {
    someResult, err := checkIfReadable("dummy.txt")
    if len(someResult) > 0 {
        fmt.Println("this will not print")
        t.Fail()
    }
    if err != nil {
        fmt.Println("this will not print")
        t.Fail()
    }
}

func TestMain(t *testing.T) {
    ...
}
The issue is that log.Fatalf calls os.Exit and go engine dies.
I could modify the code and replace the built-in library with my own, which makes the tests less reliable.
I could modify the code and create a proxy and a wrapper and a .... in other words a very complex mechanism to change all calls to log.Fatalf.
I could stop using the built-in log package... which is equal to asking "how much is the Go built-in worth?"
I could live with not having 100% coverage.
I could replace log.Fatalf with something else - but then what is the point of the built-in log.Fatalf?
I can try to mangle with system memory and, depending on my OS, replace the memory address of the function (...) - something obscure and dirty.
Any other ideas?
Use log.Print instead of log.Fatal and return the error value that you declared in the signature of the function checkIfReadable. Or don't handle the error there at all and return it to some place that knows better how to handle it.
The function log.Fatal is strictly for reporting your program's final breath.
Calling log.Fatal is a bit worse than calling panic (there is also log.Panic), because it does not execute deferred calls. Remember that overusing panic in Go is considered bad style.
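A minimal sketch of that suggestion, with the fatal call replaced by logging plus returning the error, so a test can cover both branches (the file name is just an example):

package example

import (
    "io/ioutil"
    "log"
)

// checkIfReadable reports the problem instead of killing the process,
// so a test can pass a missing file and assert that an error comes back.
func checkIfReadable(filename string) (string, error) {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        log.Printf("cannot read %s: %v", filename, err)
        return "", err
    }
    return string(data), nil
}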
A good way to get 100% test coverage and not fail at the same time is to use recover() to catch a panic. Note that log.Fatalf() calls os.Exit and cannot be recovered, so for this to work the code has to panic instead, e.g. via log.Panicf().
Here are the docs for recover. I think it fits your use case nicely.
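A rough sketch of that approach, assuming the code is changed to panic via log.Panicf (the function below is invented for illustration):

package example

import (
    "log"
    "testing"
)

// mustBeReadable panics instead of exiting, so a test can recover it.
func mustBeReadable(ok bool) {
    if !ok {
        log.Panicf("cannot read the file")
    }
}

func TestMustBeReadable(t *testing.T) {
    defer func() {
        if r := recover(); r == nil {
            t.Fatal("expected a panic, got none")
        }
    }()
    mustBeReadable(false)
}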

Golang test mock functions best practices

I am developing some tests for my code (using the testing package), and I am wondering what's the best way to mock functions inside the tested function:
Should I pass the function as parameter?
In that case, what if that function calls another function? Should I pass both the first and second function as parameters in the tested one?
Note: some of the functions are called on objects (i.e. someObj.Create()) and use HTTP API calls.
UPDATE for clarification:
Example: functions
func f1() error {
    ... // some API call
}

func (s *SomeStruct) f2() error {
    return f1()
}

func f3() error {
    return nil
}

func f4() error {
    ...
    err = obj.f2()
    ...
    err = f3()
    ...
}
For the above: if I want to test f4, what's the best way to mock f2 and f3?
If I pass f2 and f3 to f4 as parameters it would work, but then what for the f2 test? Should I pass f1 to f2 as parameter?
And if that's it, should then f4 have f1 as well in the parameters?
As a general guideline, functions aren't very mockable, so it's in our best interest to mock structs that implement a certain interface which can be passed into functions to test the different branches of code. See below for a basic example.
package a

import "fmt"

type DoSomethingInterface interface {
    DoSomething() error
}

func DoSomething(a DoSomethingInterface) {
    if err := a.DoSomething(); err != nil {
        fmt.Println("error occurred")
        return
    }
    fmt.Println("no error occurred")
    return
}

package a_test

import (
    "errors"
    "testing"

    "<path to a>/a"
)

type simpleMock struct {
    err error
}

func (m *simpleMock) DoSomething() error {
    return m.err
}

func TestDoSomething(t *testing.T) {
    errorMock := &simpleMock{errors.New("some error")}
    a.DoSomething(errorMock)
    // test that "an error occurred" is logged

    regularMock := &simpleMock{}
    a.DoSomething(regularMock)
    // test "no error occurred" is logged
}
In the above example, you would test the DoSomething function and the branches it can take, e.g. you would create an instance of the mock with an error for one test case and another instance of the mock without the error to test the other case. The respective cases assert that a certain string has been logged to standard out: "error occurred" when simpleMock is instantiated with an error and "no error occurred" when simpleMock is instantiated without one.
This can of course be expanded to other cases, e.g. the DoSomething function actually returns some kind of value and you want to make an assertion on that value.
Edit:
I updated the code with the concern that the interface lives in another package. Note that the new updated code has a package a that contains the interface and the function under test and a package a_test that is merely a template of how to approach testing a.DoSomething.
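To illustrate the earlier point about asserting on a returned value, here is a sketch of a variant that returns something the test can check (DoSomethingAndReport and its messages are invented for the example; in a real project the test would live in a _test.go file):

package a

import (
    "errors"
    "fmt"
    "testing"
)

type DoSomethingInterface interface {
    DoSomething() error
}

// DoSomethingAndReport returns a value instead of only printing,
// so the caller (and the test) can make assertions on it.
func DoSomethingAndReport(d DoSomethingInterface) (string, error) {
    if err := d.DoSomething(); err != nil {
        return "", fmt.Errorf("error occurred: %w", err)
    }
    return "no error occurred", nil
}

type simpleMock struct{ err error }

func (m *simpleMock) DoSomething() error { return m.err }

func TestDoSomethingAndReport(t *testing.T) {
    if _, err := DoSomethingAndReport(&simpleMock{errors.New("boom")}); err == nil {
        t.Error("expected an error")
    }
    if msg, err := DoSomethingAndReport(&simpleMock{}); err != nil || msg != "no error occurred" {
        t.Errorf("unexpected result: %q, %v", msg, err)
    }
}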
I'm not sure what you're trying to do here but I'll explain how testing should be done in Go.
Let's say we have an application with the following directory hierarchy:
root/
    pack1/
        pack1.go
        pack1_test.go
    pack2/
        pack2.go
        pack2_test.go
    main.go
    main_test.go
We'll assume that pack2.go has the functions you want to test:
package pack2

func f1() error {
    ... // some API call
}

func (s *SomeStruct) f2() error {
    return f1()
}

func f3() error {
    return nil
}

func f4() error {
    ...
    err = obj.f2()
    ...
    err = f3()
    ...
}
Looks good so far. Now if you want to test the functions in pack2, you would create a file called pack2_test.go. All test files in Go are named similarly (packagename_test.go). Now let's see the inside of a typical test for a package (pack2_test.go in this example):
package pack2

import (
    "testing"
)

func TestF1(t *testing.T) {
    // set up whatever the test needs
    f1() // This tests f1 from the package pack2
}

func TestF2(t *testing.T) {
    y := new(SomeStruct)
    y.f2() // tests f2 from package pack2
}

func TestF3(t *testing.T) {
    // some code
    f3() // tests f3
}

func TestF4(t *testing.T) {
    // code
    f4() // you get the gist
}
Let me explain. Notice how in pack2_test.go, the first line says that the package is pack2. In a nutshell, this means that we're in the "scope" of the package pack2 and thus all the functions found in pack2 can be called as if you're within pack2. That's why, within the TestF* functions, we could call the functions from pack2. Another thing to note is the imported package "testing". This helps with two things:
First, it provides some functionality for running tests. I won't go into that.
Second, it helps identify the functions that go test should run.
Now to the functions. Any function within a test package that has the prefix "Test" and the parameters "t *testing.T" (you can use "*testing.T" when you don't need to use the testing functionality) will be executed when you run go test. You use the variable t to reference the testing functionality I mentioned. You can also declare functions without the prefix and call them within the prefixed functions.
So, if I go to my terminal and run go test, it will execute the functions you want to test, specified in pack2_test.go
You can learn more about testing here and here

Are there any conventions for aggregating multiple errors as the causes of another error?

I'm writing a function that iterates over a vector of Result and returns success if they all were successful, or an error if any failed. Limitations in error::Error are frustrating me and I'm not sure how to work around them. Currently I have something like:
let mut errors = Vec::new();
for result in results {
    match result {
        Err(err) => errors.push(err),
        Ok(success) => { ... }
    }
}

if errors.is_empty() {
    return Ok(());
} else {
    return Err(MyErrorType(errors));
}
The problem with my current approach is that I can only set one error to be the cause of MyErrorType, and my error's description needs to be a static String so I can't include the descriptions of each of the triggering failures. All of the failures are potentially relevant to the caller.
There is no convention that I know of, and indeed I have never had the issue of attempting to report multiple errors at once...
... that being said, there are two points that may help you:
There is no limitation that the description be a 'static String, you are likely confusing &'static str and &str. In fn description(&self) -> &str, the lifetime of str is linked to the lifetime of self (lifetime elision) and therefore an embedded String satisfies the constraints
Error is an interface to deal with errors uniformly. In this case, indeed, only a single cause was foreseen, however it does not preclude a more specific type to aggregate multiple causes and since Error allows downcasting (Error::is, Error::downcast, ...) the more specific type can be retrieved by the handler and queried in full
As such, I would suggest that you create a new concrete type solely dedicated to holding multiple errors (in a Vec<Box<Error>>), and implementing the Error interface. It's up to you to decide on the description and cause it will expose.
A single type will let your clients test more easily for downcasting than having an unknown (and potentially growing as time goes) number of potential downcast targets.
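A minimal sketch of such an aggregate type, written against today's std::error::Error trait (the MultiError name is made up for illustration):

use std::error::Error;
use std::fmt;

// A concrete type whose only job is to hold every underlying failure.
#[derive(Debug)]
pub struct MultiError {
    pub errors: Vec<Box<dyn Error>>,
}

impl fmt::Display for MultiError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        writeln!(f, "{} error(s) occurred:", self.errors.len())?;
        for e in &self.errors {
            writeln!(f, "  - {}", e)?;
        }
        Ok(())
    }
}

impl Error for MultiError {}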
Expanding a bit on point 1 of Matthieu's good answer.
The point where you're likely running into trouble (I know I did when I tried to implement Error) is that you want to have a dynamic description().
use std::error;
use std::fmt;

// my own error type
#[derive(Debug)]
struct MyError { value: u8 }

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "bummer! Got a {}", self.value)
    }
}

// I am now tempted to add the problematic value dynamically
// into the description, but I run into trouble with lifetimes.
// This DOES NOT COMPILE because the String I'm building
// goes out of scope and I can't return a reference to it
impl error::Error for MyError {
    fn description(&self) -> &str {
        &format!("can't put a {} here!", self.value)
    }
}
solution 1
Don't dynamically build description(). Just use a static str. This is what most implementations of Error on github seem to do.
If you need to retrieve and display (or log) the value you can always access it from your MyError type. Plus Display (that you must implement for all Error impls) does allow you to create dynamic strings.
I created a contrived example on the playground that shows how to track multiple errors.
solution 2
(what Matthieu is suggesting) you can store the error message in the error itself.
#[derive(Debug)]
struct MyError { value: u8, msg: String }

impl MyError {
    fn new(value: u8) -> MyError {
        MyError { value: value, msg: format!("I don't like value {}", value) }
    }
}

// now this works because the returned &str has the same lifetime
// as self
impl error::Error for MyError {
    fn description(&self) -> &str {
        &self.msg
    }
}

std::error::FromError idiomatic usage

I'm trying to use the std::error::FromError trait as widely as possible in my projects to take advantage of the try! macro. However, I'm a little lost with these error conversions between different mods.
For example, I have mod (or crate) a, which handles errors with its own Error type and implements an error conversion for io::Error:
mod a {
    use std::io;
    use std::io::Write;
    use std::error::FromError;

    #[derive(Debug)]
    pub struct Error(pub String);

    impl FromError<io::Error> for Error {
        fn from_error(err: io::Error) -> Error {
            Error(format!("{}", err))
        }
    }

    pub fn func() -> Result<(), Error> {
        try!(writeln!(&mut io::stdout(), "Hello, world!"));
        Ok(())
    }
}
I also have mod b in the same situation, but it implements error conversion for num::ParseIntError:
mod b {
    use std::str::FromStr;
    use std::error::FromError;
    use std::num::ParseIntError;

    #[derive(Debug)]
    pub struct Error(pub String);

    impl FromError<ParseIntError> for Error {
        fn from_error(err: ParseIntError) -> Error {
            Error(format!("{}", err))
        }
    }

    pub fn func() -> Result<usize, Error> {
        Ok(try!(FromStr::from_str("14")))
    }
}
Now I'm in my current mod super, which has its own Error type, and my goal is to write a procedure like this:
#[derive(Debug)]
struct Error(String);

fn func() -> Result<(), Error> {
    println!("a::func() -> {:?}", try!(a::func()));
    println!("b::func() -> {:?}", try!(b::func()));
    Ok(())
}
So I definitely need to implement conversions from a::Error and b::Error for my Error type:
impl FromError<a::Error> for Error {
    fn from_error(a::Error(contents): a::Error) -> Error {
        Error(contents)
    }
}

impl FromError<b::Error> for Error {
    fn from_error(b::Error(contents): b::Error) -> Error {
        Error(contents)
    }
}
OK, it works up to this point. And now I need to write something like this:
fn another_func() -> Result<(), Error> {
    let _ = try!(<usize as std::str::FromStr>::from_str("14"));
    Ok(())
}
And here a problem arises, because there is no conversion from num::ParseIntError to Error. So it seems that I have to implement it again. But why should I? There is already a conversion from num::ParseIntError to b::Error, and there is also a conversion from b::Error to Error. So definitely there is a clean way for Rust to convert one type to another without my explicit help.
So, I removed my impl FromError<b::Error> block and tried this blanket impl instead:
impl<E> FromError<E> for Error where b::Error: FromError<E> {
    fn from_error(err: E) -> Error {
        let b::Error(contents) = <b::Error as FromError<E>>::from_error(err);
        Error(contents)
    }
}
And it even worked! However, I didn't succeed in repeating this trick with a::Error, because rustc started complaining about conflicting implementations:
experiment.rs:57:1: 62:2 error: conflicting implementations for trait `core::error::FromError` [E0119]
experiment.rs:57 impl<E> FromError<E> for Error where a::Error: FromError<E> {
experiment.rs:58 fn from_error(err: E) -> Error {
experiment.rs:59 let a::Error(contents) = <a::Error as FromError<E>>::from_error(err);
experiment.rs:60 Error(contents)
experiment.rs:61 }
experiment.rs:62 }
experiment.rs:64:1: 69:2 note: note conflicting implementation here
experiment.rs:64 impl<E> FromError<E> for Error where b::Error: FromError<E> {
experiment.rs:65 fn from_error(err: E) -> Error {
experiment.rs:66 let b::Error(contents) = <b::Error as FromError<E>>::from_error(err);
experiment.rs:67 Error(contents)
experiment.rs:68 }
experiment.rs:69 }
I can even understand the origin of problem (one type FromError<E> can be implemented both for a::Error and b::Error), but I can't get how to fix it.
Theoretically, maybe this is the wrong way and there is another solution to my problem? Or do I still have to manually repeat all the error conversions in every new module?
there is no conversion from num::ParseIntError to Error
It does seem like you're doing the wrong thing, conceptually. When a library generates an io::Error, like your first example, then it should be up to that library to decide how to handle that error. However, from your question, it sounds like you are generating io::Errors somewhere else and then wanting to treat them as the first library would.
This seems very strange. I wouldn't expect to hand an error generated by library B to library A and say "wrap this error as if you made it". Maybe the thing you are doing should be a part of the appropriate library? Then it can handle the errors as it normally would. Perhaps you could just accept a closure and call the error-conversion as appropriate.
So definitely there is a clean way for Rust to convert one type to another without my explicit help.
(Emphasis mine). That seems really scary to me. How many steps should be allowed in an implicit conversion? What if there are multiple paths, or even if there are cycles? Having those as explicit steps seems reasonable to me.
I can even understand the origin of problem [...], but I can't get how to fix it.
I don't think it is possible to fix this. If you could implement a trait for the same type in multiple different ways, there's simply no way to pick between them, and so the code is ambiguous and rejected by the compiler.
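For reference, FromError was later replaced by std::convert::From (which try! and the ? operator call through From::from), so the explicit, per-type conversions described here look like this in current Rust; a minimal sketch:

use std::num::ParseIntError;

#[derive(Debug)]
struct Error(String);

// One explicit conversion per source error type; `?` invokes it via From.
impl From<ParseIntError> for Error {
    fn from(err: ParseIntError) -> Error {
        Error(format!("{}", err))
    }
}

fn another_func() -> Result<(), Error> {
    let _n: usize = "14".parse()?; // ParseIntError converted by the impl above
    Ok(())
}

fn main() {
    println!("{:?}", another_func());
}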