Golang testing: "no test files" - testing

I'm creating a simple test within my package directory, named reverseTest.go:
package main

import "testing"

func TestReverse(t *testing.T) {
    cases := []struct {
        in, want string
    }{
        {"Hello, world", "dlrow ,olleH"},
        {"Hello, 世界", "界世 ,olleH"},
        {"", ""},
    }
    for _, c := range cases {
        got := Reverse(c.in)
        if got != c.want {
            t.Errorf("Reverse(%q) == %q, want %q", c.in, got, c.want)
        }
    }
}
Whenever I try to run it, the output is:
exampleFolder [no test files]
This is my go env:
GOARCH="amd64"
GOBIN=""
GOCHAR="6"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/juan/go"
GORACE=""
GOROOT="/usr/lib/go"
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
TERM="dumb"
CC="gcc"
GOGCCFLAGS="-g -O2 -fPIC -m64 -pthread"
CXX="g++"
CGO_ENABLED="1"
Any help will be greatly appreciated. Thanks!!

Files containing tests should be named with a _test.go suffix (for example reverse_test.go) and should sit alongside the code they are testing.
To run the tests recursively, call go test -v ./...
From How to Write Go Code:
You write a test by creating a file with a name ending in _test.go that contains functions named TestXXX with signature func (t *testing.T). The test framework runs each such function; if the function calls a failure function such as t.Error or t.Fail, the test is considered to have failed.
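For example, with the file renamed to reverse_test.go, the package only needs the Reverse function alongside it. The question doesn't show that function, so the version below is just a minimal sketch (the usual rune-swapping implementation) to make the layout concrete:

// reverse.go -- sits in the same directory as reverse_test.go
package main

// Reverse returns s with its runes in reverse order.
func Reverse(s string) string {
    r := []rune(s)
    for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
        r[i], r[j] = r[j], r[i]
    }
    return string(r)
}

With reverse.go and reverse_test.go side by side in the package directory, go test picks up TestReverse automatically.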

It's possible you don't have any test files in the root package; running go test -v tests only the root package, not sub-packages.
For example
.
├── Dockerfile
├── Makefile
├── README.md
├── auth/
│   ├── jwt.go
│   ├── jwt_test.go
├── main.go
As you can see, there are no test files in the root package, only main.go, so you will get "no test files."
The solution is to test all packages within the current working directory, recursively:
go test -v ./...
Or if you use govendor
govendor test +local
Or you can specify which package (directory) to test
go test -v ./packagename
Or test a package recursively
go test -v ./packagename/...

Your test function within your _test.go file must start with the "Test" prefix.
GOOD:
func TestName (
BAD:
func NameTest (
A function named like the latter will not be executed as a test and results in the reported "no test files" error.

To run all the tests, use the command below:
> go test ./...
//For verbose output use -v flag
> go test -v ./...

I faced the same problem.
In addition to the previous answers, I found that it is impossible to run tests if your package's folder is named testing.
Terminal demonstration of the issue below.
With the folder named testing:
~/go/src/testing$ go test
?   testing [no test files]
Without the testing folder name:
~/go/src/testing_someothername$ go test
PASS
ok  testing_someothername 0.089s
In my case renaming the folder was what helped.

I faced the same problem. I fixed it by listing the packages explicitly:
go test -v ./ ./2ndpackage ./3rdpackage ./4thpackages
This solved the issue.
I also added an underscore between the Test keyword and the function name:
Test_FuncName

"no test files" means you need to rename your test file to reflect the file you want to test.
Example
main.go
main_test.go
Here main.go is the file containing your code and main_test.go is the file containing your test code.

Related

Avoiding absolute paths in included cmake files [duplicate]

Suppose my project's CMakeLists.txt includes foo.cmake:
include(foo)
In foo.cmake, I want to know the path of foo.cmake.
How can I do that?
Note that CMAKE_CURRENT_LIST_DIR gives the directory of the including CMakeLists.txt, not that of the included foo.cmake, and is thus not what I want.
Of course, foo.cmake might be included by several projects (i.e., by several CMakeLists.txt files).
People have reported seemingly contradictory facts about how CMAKE_CURRENT_LIST_DIR behaves. Now I know the reason for the confusion:
First, in my Linux environment:
$ cd /path/to/home
$ mkdir cmake-test
$ cd cmake-test
$ mkdir source
$ mkdir source/subdirectory
$ mkdir build
I create these two files:
$ cat source/CMakeLists.txt
include(subdirectory/foo.cmake)
$ cat source/subdirectory/foo.cmake
message("CMAKE_CURRENT_LIST_DIR is ${CMAKE_CURRENT_LIST_DIR}")
CMake works as reported by Fraser and Robert Dailey:
$ cd build
$ cmake ../source
CMAKE_CURRENT_LIST_DIR is /path/to/home/cmake-test/source/subdirectory
[...]
However, I add a function to foo.cmake, which I call from CMakeLists.txt:
$ cat ../source/subdirectory/foo.cmake
message("CMAKE_CURRENT_LIST_DIR is ${CMAKE_CURRENT_LIST_DIR}")
function(bar)
    message("CMAKE_CURRENT_LIST_DIR in bar() is ${CMAKE_CURRENT_LIST_DIR}")
endfunction()
$ cat ../source/CMakeLists.txt
include(subdirectory/foo.cmake)
bar()
Then:
$ cmake ../source
CMAKE_CURRENT_LIST_DIR is /path/to/home/cmake-test/source/subdirectory
CMAKE_CURRENT_LIST_DIR in bar() is /path/to/home/cmake-test/source
[...]
So the value of CMAKE_CURRENT_LIST_DIR in foo.cmake is not the same at the time foo.cmake is included as it is when bar() is called. This is in line with the specification of CMAKE_CURRENT_LIST_DIR.
Here is one possible solution for accessing the directory of foo.cmake from within bar():
$ cat ../source/subdirectory/foo.cmake
set(DIR_OF_FOO_CMAKE ${CMAKE_CURRENT_LIST_DIR})
function(bar)
    message("DIR_OF_FOO_CMAKE in bar() is ${DIR_OF_FOO_CMAKE}")
endfunction()
after which I get the behavior I was looking for:
$ cmake ../source
DIR_OF_FOO_CMAKE in bar() is /path/to/home/cmake-test/source/subdirectory
[...]
In CMake 3.17, you have a new variable available, called CMAKE_CURRENT_FUNCTION_LIST_DIR, which can be used inside a function. It is undefined outside of a function definition.
function(foo)
    configure_file(
        "${CMAKE_CURRENT_FUNCTION_LIST_DIR}/some.template.in"
        some.output
    )
endfunction()
Prior to CMake 3.17, CMAKE_CURRENT_FUNCTION_LIST_DIR functionality has to be approximated with CMAKE_CURRENT_LIST_DIR by the following workaround, taken from CMake documentation:
set(_THIS_MODULE_BASE_DIR "${CMAKE_CURRENT_LIST_DIR}")

function(foo)
    configure_file(
        "${_THIS_MODULE_BASE_DIR}/some.template.in"
        some.output
    )
endfunction()
See CMAKE_CURRENT_LIST_DIR:
Full directory of the listfile currently being processed. As CMake processes the listfiles in your project this variable will always be set to the directory where the listfile which is currently being processed (CMAKE_CURRENT_LIST_FILE) is located. The value has dynamic scope. When CMake starts processing commands in a source file it sets this variable to the directory where this file is located. When CMake finishes processing commands from the file it restores the previous value. Therefore the value of the variable inside a macro or function is the directory of the file invoking the bottom-most entry on the call stack, not the directory of the file containing the macro or function definition.
Example
I have the following structure:
C:\Work\cmake-test\CMakeLists.txt
C:\Work\cmake-test\subfolder\test.cmake
In my CMakeLists.txt:
include( subfolder/test.cmake )
In my test.cmake:
message( "Current dir: ${CMAKE_CURRENT_LIST_DIR}" )
The result I get when I run CMake from C:\Work\cmake-test is:
Current dir: C:/Work/cmake-test/subfolder
The include() command searches for modules in ${CMAKE_MODULE_PATH} first and then in CMake's own Modules directory.
So you can check for the file's presence with if(EXISTS ${CMAKE_MODULE_PATH}/foo.cmake) and if(EXISTS ${CMAKE_ROOT}/Modules/foo.cmake).
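Put into code, a minimal sketch of that check (assuming CMAKE_MODULE_PATH contains a single directory; with several entries you would loop over them) could look like this:

# Sketch: find the directory include(foo) would load foo.cmake from.
if(EXISTS "${CMAKE_MODULE_PATH}/foo.cmake")
    set(DIR_OF_FOO_CMAKE "${CMAKE_MODULE_PATH}")
elseif(EXISTS "${CMAKE_ROOT}/Modules/foo.cmake")
    set(DIR_OF_FOO_CMAKE "${CMAKE_ROOT}/Modules")
endif()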

Command to check if there is any range versions in the dependencies section of the package.json

Basically I want CI to fail if the dependencies section of the package.json contains any range operator. devDependencies could contain anything, though. Some CLI command would be perfect. Any suggestions?
Short answer: Unfortunately, there is no existing built-in npm command/feature to achieve this. However, you can utilize your own custom nodejs script. The nodejs script can then be invoked via a command if you define it in the scripts section of your package.json.
The following describes how to achieve this.
Solution
check-deps.js
Create a nodejs script as follows. Let's name the script check-deps.js and save it somewhere in your project directory.
const isSemverRange = require('is-semver-range');

const pkgPath = './path/to/your/package.json';
const pkgData = require(pkgPath);

function hasSemverRange({ dependencies = {} }) {
  return Object.values(dependencies).some(semver => isSemverRange(semver));
}

if (hasSemverRange(pkgData)) {
  console.log(`Semver range(s) found in dependencies section of ${pkgPath}`);
  process.exit(1);
}
Explanation of check-deps.js:
Firstly we require the is-semver-range package, which we'll use to help check for any semver ranges. To install this package; cd to your project directory and run the following command:
npm i -D is-semver-range
We then define a path to the package.json file (i.e. the file we want to check), and subsequently we require its contents.
const pkgPath = './path/to/your/package.json'; // <-- Redefine path.
const pkgData = require(pkgPath);
Note: you'll need to redefine your path to package.json as necessary.
The hasSemverRange function parameter definition utilizes object destructuring to unpack the dependencies object, and assigns an empty object as a default value to avoid errors occurring if the dependencies section is missing from package.json.
In the function body we pass in the dependencies object to the Object.values method, and utilize the Array.some() method to test whether at least one of the values is a semver range.
This function returns true if the value of any property/key of the dependencies object is a semver range; otherwise it returns false.
Finally, in the if statement condition we invoke the hasSemverRange function, passing it the parsed contents of package.json. If the condition is truthy, we log an error message to the console and exit the script with a non-zero exit code, i.e. process.exit(1).
package.json
In the scripts section of your package.json define a script as follows. Let's name the script check-deps:
"scripts": {
"check-deps": "node path/to/check-deps.js",
...
}
Note: you'll need to redefine your path to check-deps.js as necessary.
Running the npm script
Run the following command via your CLI to invoke the check-deps script:
npm run check-deps
If the value of any property defined in the dependencies section of your package.json is a semver range you'll see something like the following error logged to your console:
Semver range(s) found in dependencies section of ./path/to/package.json
Integrating the check with your CI tool.
It's unclear from your question which CI tool you're using. However, typically CI tools provide a feature which allows you to invoke an npm script.
For example, if you're utilizing Travis CI you can define the script to run in your .travis.yml file as follows:
.travis.yml
script:
  - npm run check-deps
Additional note:
You could also invoke the check-deps npm script via an existing test script which you may have already defined in your package.json, by utilizing the && operator. For instance:
"scripts": {
"check-deps": "node path/to/check-deps.js",
"test": "yourCurrentTestcommands && npm run check-deps"
...
}
Note: In the test script above the yourCurrentTestcommands part should be replaced with any commands that you may currently be running.

Golang test coverage with black box _test coverage

My problem is simple, but the answer remains elusive. Suppose I have a package
package mypackage
func DoTheThing() int {
    return 5
}
Now suppose I have a test in the mypackage_test package
package mypackage_test
import "testing"
import . "mypackage"
func TestDoTheThing(t *testing.T) {
    if DoTheThing() != 5 {
        t.Error("there was a problem")
    }
}
Now I want to know the code coverage of the package mypackage.
$ go test -cover
PASS
coverage: 0.0% of statements
ok /my/path/mypackage 0.002s
It should be 100%. I have also tried
$ go test -v -cover -coverpkg ./... ./...
=== RUN TestDoTheThing
--- PASS: TestDoTheThing (0.00s)
PASS
coverage: 0.0% of statements in ./...
ok /my/path/mypackage 0.002s coverage: 0.0% of statements in ./...
It is not a possibility for me to include the test in mypackage, so I need to know the code coverage of mypackage in this setup.
Thanks for your time.
After much keyboard mashing, I discovered it has to do with the fact that I was symlinking my project directory into $GOPATH/src/.
Copying the project into $GOPATH/src/mypackage and running go test -cover correctly returns 100% coverage.
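Expressed as commands (just a sketch with illustrative paths, not the exact session), the fix amounts to replacing the symlink under GOPATH with a real copy and running the coverage from there:

# Illustrative paths; adjust to your setup.
rm "$GOPATH/src/mypackage"                         # remove the old symlink
cp -r /my/path/mypackage "$GOPATH/src/mypackage"   # real copy under GOPATH
cd "$GOPATH/src/mypackage"
go test -cover                                     # coverage is now attributed to mypackage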

Run additional tests by using a feature flag to "cargo test"

I have some tests that I would like to ignore when using cargo test and only run when explicitly passed a feature flag. I know this can be done by using #[ignore] and cargo test -- --ignored, but I'd like to have multiple sets of ignored tests for other reasons.
I have tried this:
#[test]
#[cfg_attr(not(feature = "online_tests"), ignore)]
fn get_github_sample() {}
This is ignored when I run cargo test as desired, but I can't get it to run.
I have tried multiple ways of running Cargo but the tests continue to be ignored:
cargo test --features "online_tests"
cargo test --all-features
I then added the feature definition into my Cargo.toml as per this page, but they continue to be ignored.
I am using workspaces in Cargo. I tried adding the feature definition in both Cargo.toml files with no difference.
Without a workspace
Cargo.toml
[package]
name = "feature-tests"
version = "0.1.0"
authors = ["An Devloper <an.devloper#example.com>"]
[features]
network = []
filesystem = []
[dependencies]
src/lib.rs
#[test]
#[cfg_attr(not(feature = "network"), ignore)]
fn network() {
    panic!("Touched the network");
}

#[test]
#[cfg_attr(not(feature = "filesystem"), ignore)]
fn filesystem() {
    panic!("Touched the filesystem");
}
Output
$ cargo test
running 2 tests
test filesystem ... ignored
test network ... ignored
$ cargo test --features network
running 2 tests
test filesystem ... ignored
test network ... FAILED
$ cargo test --features filesystem
running 2 tests
test network ... ignored
test filesystem ... FAILED
(some output removed to better show effects)
With a workspace
Layout
.
├── Cargo.toml
├── feature-tests
│   ├── Cargo.toml
│   ├── src
│   │   └── lib.rs
├── src
│   └── lib.rs
feature-tests contains the files from the first section above.
Cargo.toml
[package]
name = "workspace"
version = "0.1.0"
authors = ["An Devloper <an.devloper#example.com>"]
[features]
filesystem = ["feature-tests/filesystem"]
network = ["feature-tests/network"]
[workspace]
[dependencies]
feature-tests = { path = "feature-tests" }
Output
$ cargo test --all
running 2 tests
test filesystem ... ignored
test network ... ignored
$ cargo test --all --features=network
running 2 tests
test filesystem ... ignored
test network ... FAILED
(some output removed to better show effects)
With a virtual workspace
Virtual workspaces do not support specifying features (Cargo issue #4942). You will need to run the tests from within the sub-project or specify the path to the appropriate Cargo.toml.
Layout
.
├── Cargo.toml
└── feature-tests
    ├── Cargo.toml
    └── src
        └── lib.rs
feature-tests contains the files from the first section above.
Cargo.toml
[workspace]
members = ["feature-tests"]
Output
$ cargo test --all --manifest-path feature-tests/Cargo.toml --features=network
running 2 tests
test filesystem ... ignored
test network ... FAILED
$ cargo test --all --manifest-path feature-tests/Cargo.toml
running 2 tests
test filesystem ... ignored
test network ... ignored
(some output removed to better show effects)

npm babel ES2015: Getting command lines in the converted/output JS file

I've installed npm (v4.4.4) and babel (v6.24.0) and babel preset 2015.
All running OK when converting ES6 JS to ES5...except a couple of oddities. Maybe someone can see what this newbie is doing wrong.
1) I run babel from npm (see below), which runs OK. I added some script entries into package.json to make it work.
But, UNWANTED oddity: npm inserts the commands into the output JS file (see below). Is there an npm option to say: don't put the command in the output file?
Yet, if I copy input.js to the folder with babel.cmd and run it there, I get a clean output.js. So it looks like npm is inserting the command lines into the output.js file.
How do I prevent the npm commands from being written to output.js? (Obviously I don't want my JS files to have to share a folder with the .bin files.)
2) When I type > babel on the command line in my project folder, I get:
babel: not a command.
I EXPECT THIS. After all, I have not added node_modules/.bin to my PATH env var. Yet in every YouTube video I watch about npm and babel, it works. How? No one seems to edit the PATH env var. Am I missing something?
Thanks
Milton.
INPUT JS FILE (input.js)
class House {
    constructor(v) {
        this.name = v;
    }
}
OUTPUT JS (TRANSPILED) FILE (output.js). Note the first two lines below...
> milton#1.0.0 babel C:\Projects1\01InstallReact4Dev
> babel.cmd "--presets" "es2015" "input.js"
"use strict";
function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } }

var House = function House(v) {
    _classCallCheck(this, House);
    this.name = v;
};
PACKAGE.JSON
"scripts": {
"babel": "babel.cmd",
"babelv": "babel.cmd -V",
"babelh": "babel.cmd -help"
}
COMMAND
> npm run babel -- --presets es2015 input.js > output.js
Thanks Again.
Milton.
You're redirecting stdout to the file output.js, and this includes everything that is displayed. Instead of using the stdout output of babel, you can use the --out-file or -o option. This will write the output to the specified file instead of printing it to stdout (see Compile Files).
Your command would be:
npm run babel -- --presets es2015 input.js --out-file output.js
When I type > babel on the command line in my project folder, I get: babel: not a command.
You don't have node_modules/.bin/ in your shell's PATH. You could add it, or run it directly with ./node_modules/.bin/babel. But this is not necessary if you do it in an npm script, because npm automatically looks into node_modules/.bin/ without it being in your PATH. In this case you could define the following script:
"scripts": {
"build": "babel --presets es2015 input.js --out-file output.js"
}
And then you can simply run:
npm run build
If you'd like to transpile more than one file, you should use --out-dir instead of --out-file; otherwise they will be concatenated into one file. See also Compile Directories.
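As a rough sketch (the src and lib directory names are only examples, not taken from the question), a directory-based build script could look like this:

"scripts": {
  "build": "babel src --presets es2015 --out-dir lib"
}

Running npm run build would then compile every file under src into a matching file under lib, instead of concatenating them.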