How to run Go examples that don't have output comments?

Testable Go examples look awesome.
func ExampleReverse() {
    fmt.Println(stringutil.Reverse("hello"))
    // Output: olleh
}
The above, for example, is equivalent to a unit test that asserts:
stringutil.Reverse("hello") == "olleh"
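Spelled out as a regular test function, that is roughly the following (a sketch; the import path for stringutil is assumed to be the one from the Go example repository):

package stringutil_test

import (
    "testing"

    "github.com/golang/example/stringutil"
)

func TestReverse(t *testing.T) {
    // Hand-written equivalent of the ExampleReverse check above.
    if got := stringutil.Reverse("hello"); got != "olleh" {
        t.Errorf("Reverse(%q) = %q, want %q", "hello", got, "olleh")
    }
}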
According to the Go blog, we can write examples that don't have an output comment, but then the go test and go test -run ExampleReverse commands only compile the example and don't run it:
If we remove the output comment entirely then the example function is compiled but not executed. Examples without output comments are useful for demonstrating code that cannot run as unit tests, such as that which accesses the network, while guaranteeing the example at least compiles.
The output of such examples, although not testable, could still be useful for the user to produce and read, and the examples themselves would be useful to run on their own computer.
So is there a way or a tool that can run example functions in *_test.go files from the terminal?

You can call the Example* functions from a regular Test* function.
func ExampleOutput() {
    fmt.Println("HEELLO")
}

func TestExampleOutput(t *testing.T) {
    if !testing.Verbose() {
        return
    }
    ExampleOutput()
}
The body of this example will show up under Output in the docs, and because the wrapper test returns early unless testing.Verbose() is true, the example only runs when you pass the -v flag.
Specifically, to run only the example you're interested in you can either:
go test path/to/pkg -run TestExampleOutput -v
Or to compile once and run multiple times:
go test path/to/pkg -c
./pkg.test -test.run TestExampleOutput -test.v
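If you'd rather not tie example execution to the -v flag, a variant of the same idea (not part of the answer above; RUN_EXAMPLES is a made-up name) is to guard on an environment variable and skip otherwise:

package mypkg

import (
    "fmt"
    "os"
    "testing"
)

func Example_network() {
    fmt.Println("pretend this talks to the network")
}

// TestExampleNetwork runs the example only when RUN_EXAMPLES is set,
// so ordinary test runs stay quiet and fast.
func TestExampleNetwork(t *testing.T) {
    if os.Getenv("RUN_EXAMPLES") == "" {
        t.Skip("set RUN_EXAMPLES=1 to run example functions")
    }
    Example_network()
}

Then invoke it explicitly:

RUN_EXAMPLES=1 go test path/to/pkg -run TestExampleNetwork -v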

Related

Ctest get number of tests passed/failed in script

Is there a straightforward way when using ctest to get the number of tests passed (and/or failed) within a script, e.g., BASH, without grep-ping through a generated output file?
a straightforward way ... without grep-ping
No, I believe there is not.
You can also "grep" the count the lines Test failed. and Test passed. from CMake the_build_dir/Testing/Temporary/LastTest.log.
You could potentially generate ctest XML report to a dashboard and then parse the XML reports (instead of sending them). It's nowhere as straightforward, as ctest script has to be written that configures, builds and tests the project and then separate XML tool needs to parse the result.
You can also run a cdash server and let that ctest script upload the results to cdash and then query cdash server with simple curl 'https://your.cdash.server/api/v1/index.php?project=TheProjectName' | jq '.buildgroups[] | select(.id == 2).builds[] | { "pass": .test.pass, "fail": .test.fail, }. The querying is simple, but.. it needs to run a cdash server and also test with ctest script, it's not near straightforward..
Btw, it's easy to get the number of failed tests - it's just wc -l the_build_dir/Testing/Temporary/LastTestsFailed.log.
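If you do end up counting marker lines, here is a minimal sketch of that idea in Go instead of grep; the "Test passed." / "Test failed." strings and the log path are assumptions taken from the answer above, so check them against your own LastTest.log:

package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "strings"
)

func main() {
    // Default ctest log location under the build tree (assumed).
    f, err := os.Open("the_build_dir/Testing/Temporary/LastTest.log")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    var passed, failed int
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        switch {
        case strings.Contains(sc.Text(), "Test passed."):
            passed++
        case strings.Contains(sc.Text(), "Test failed."):
            failed++
        }
    }
    if err := sc.Err(); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("passed: %d, failed: %d\n", passed, failed)
}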

ctest won't output stdout from printf if test fails

In a quite simple test case, the output of printf() is not shown if the test fails. I use µunit as a framework, and the test routine itself is trivial:
static MunitResult test(...)
{
    // Some variable initialisation
    printf("Test running...\n");
    // Do the test
    bool bResult = tested_method();
    munit_assert(bResult == true);
}
If I comment out the assertion, i.e. the test succeeds, the printf output is shown. It isn't shown if the test fails. Running other test routines works as expected and shows their output from printf() correctly.
I invoke ctest like this to run the test:
ctest -V --output-on-failure -R '.*nameoftest.*'
The whole thing is running inside a Docker container on Windows 10.
How can I make ctest display all output the test-routine sends on stdout?
Thanks for your help and have a nice day!
The solution, in my case, was to call the generated ELF executable directly rather than via ctest. It seems that ctest adds another layer of output redirection which I wasn't able to circumvent. By calling the binary directly, I could get all the output and logs I desired.
This is not a direct solution to the problem, but a workaround I found acceptable.

How to test "main()" routine from "go test"?

I want to lock the user-facing command line API of my Go program by writing a few anti-regression tests that would focus on testing my binary as a whole. Testing the "binary as a whole" means that go test should:
be able to feed STDIN to my binary
be able to check that my binary produces correct STDOUT
be able to ensure that error cases are handled properly by binary
However, it is not obvious to me what the best practice for doing this in Go is. If there is a good go test example, could you point me to it?
P.S. In the past I have been using autotools, and I am looking for something similar to AT_CHECK, for example:
AT_CHECK([echo "XXX" | my_binary -e arg1 -f arg2], [1], [],
[-f and -e can't be used together])
Just make your main() a single line:

package main

import "myapp"

func main() {
    myapp.Start()
}
And test the myapp package properly.
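For instance, Start might look like this (a purely hypothetical sketch; the point is only that the program logic lives in an importable package):

package myapp

import (
    "fmt"
    "os"
)

// Start holds the real program logic. main() only delegates here, so
// tests can import myapp and exercise Start (or smaller pieces of it)
// without building and running the binary.
func Start() {
    if len(os.Args) > 1 {
        fmt.Println("hello,", os.Args[1])
        return
    }
    fmt.Println("hello, world")
}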
EDIT:
For example, the popular etcd conf server uses this technique: https://github.com/coreos/etcd/blob/master/main.go
I think you're trying too hard: I just tried the following
func TestMainProgram(t *testing.T) {
    os.Args = []string{"sherlock",
        "--debug",
        "--add", "zero",
        "--ruleset", "../scripts/ceph-log-filters/ceph.rules",
        "../scripts/ceph-log-filters/ceph.log"}
    main()
}
and it worked fine. I can make a normal tabular test or a GoConvey BDD test from it pretty easily...
If you really want to do this type of testing in Go, you can use the os/exec package (https://golang.org/pkg/os/exec/) to execute your binary and test it as a whole - for example, by executing a go run main.go command. Essentially it would be the equivalent of a shell script done in Go. You can use StdinPipe (https://golang.org/pkg/os/exec/#Cmd.StdinPipe) and StdoutPipe/StderrPipe (https://golang.org/pkg/os/exec/#Cmd.StdoutPipe and https://golang.org/pkg/os/exec/#Cmd.StderrPipe) to feed the desired input and verify the output. The examples on the package documentation page should give you a good starting point.
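As a concrete illustration, here is a sketch of such a test, modelled on the AT_CHECK call from the question; the binary name, flags, and expected error message are the hypothetical ones used there:

package main_test

import (
    "os/exec"
    "strings"
    "testing"
)

func TestFlagConflict(t *testing.T) {
    // "./my_binary" is assumed to have been built beforehand.
    cmd := exec.Command("./my_binary", "-e", "arg1", "-f", "arg2")
    cmd.Stdin = strings.NewReader("XXX\n") // feed STDIN

    out, err := cmd.CombinedOutput() // capture STDOUT and STDERR together
    if err == nil {
        t.Fatalf("expected a non-zero exit status, got success (output %q)", out)
    }
    if !strings.Contains(string(out), "-f and -e can't be used together") {
        t.Errorf("unexpected output: %q", out)
    }
}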
However, testing compiled programs goes beyond unit testing, so it is worth considering other tools (not necessarily Go-based) that are more typically used for functional / acceptance testing, such as Cucumber (http://cucumber.io).

I am trying to write a simple script in Go but get bad interpreter: Permission denied error

I am trying to write a script in Go but I get this error:
bad interpreter: Permission denied
My super simple script is as follow:
#!/usr/local/Cellar/go/1.0.2/bin
fmt.Println("Hello World")
I don't know if this is possible, but I would really like to write scripts in Go since I like the language a lot.
Go isn't a scripting language. As in C, you have to compile your source code to make an executable.
From the "Getting Started" guide:
Create a file named hello.go and put the following program in it:
package main

import "fmt"

func main() {
    fmt.Printf("hello, world\n")
}
Then run it with the go tool:
$ go run hello.go
hello, world
In the spirit of Python, there are attempts to make Go scripts kinda possible. Here's, for example, what you can do with gorun:
#!/usr/bin/gorun
package main

func main() {
    println("Hello world!")
}
But that's not really the logic of Go, and it's not nearly as simple as what you typed in your question.

How do I use a Unix script to select a Verilog test file?

I have to do the verification of DPRAM.
Each test case is written in a different file, named test1.v, test2.v, etc.
I want to write a (Unix) script such that when I type run test1.v, only that test case runs.
Note: test1.v contains only tasks, which include read asserts, write asserts, etc.
The testbench is a separate file which includes the clock and component instantiation.
When run test1.v is invoked, it should link the test1.v tasks to the testbench and produce the output.
I have done the coding in Verilog.
How do I do this?
So, as far as I can make out, your different tests, or 'testcases', are in files named test<n>.v, and I'll assume that each of these testcases has a task with the same name in all files, say run_testcase. This means that your testbench (testbench.v, say) must look something like:
module testbench();
  ...
  `include "test.v" // <- problem is this line
  ...
  initial begin
    // Some setup
    run_testcase();
    //
    $finish;
  end
endmodule
So your problem is the `include line - a different file needs to be included depending on the testcase. I can think of two ways of solving this. The first, as toolic suggested, is to use a symbolic link to 'rename' the testcase file. So an example wrapper script (run_sim1) to launch your sim might look a bit like:
#! /usr/bin/env sh
testcase=$1
ln -sf ${testcase} test.v
my_simulator testbench.v
Another way is to use a macro, and define this in the wrapper script for your simulation. Your testbench would be modified to look like:
...
`include `TESTCASE
...
And the wrapper script (run_sim2):
#! /usr/bin/env sh
testcase=$1
my_simulator testbench.v +define+TESTCASE=\"${testcase}\"
The quotes are important here, as the Verilog `include directive expects them. Unfortunately, we can't leave the quotes in the testbench because the filename would then look like a string to Verilog, and the TESTCASE macro wouldn't be expanded.
One way to do it is to have the testbench file include a test file with a generic name:
`include "test.v"
Then, have your script create a symbolic link to the test you want to run. For example, in a shell script or Makefile, to run test1.v:
ln -sf test1.v test.v
run_sim
To run test2.v, your script would substitute test2 for test1, etc.