How do I set the RUST_TEST_TASKS environment variable?

I am attempting to write tests for my Rust program. Normally, these tests are run in parallel, but I want to run them sequentially. I looked around and found that I can set the environment variable RUST_TEST_TASKS=1, but I am not sure where to do that.

The environment variable is actually RUST_TEST_THREADS.

I think what they mean is setting the environment variable in the shell the test runner runs in, using the corrected name from above, such as:
RUST_TEST_THREADS=1 ./my-test-runner
or exporting it:
export RUST_TEST_THREADS=1
./my-test-runner
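If you run the tests through Cargo rather than invoking the test binary directly (an assumption; the question shows a bare runner), the same variable applies, and current toolchains also accept an equivalent flag passed through to the test harness:
RUST_TEST_THREADS=1 cargo test
cargo test -- --test-threads=1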

Related

How can I pass in a URL from command line into my Gherkin test?

I'm currently running my Gherkin tests with a hardcoded URL: browser.url("http://www.x.com");
And I run my tests via gherkin start [url]
Is there a way to take whatever URL I enter on the command line when starting my tests and have it take the place of the hardcoded URL in my browser.url call? I am using Selenium WebDriver and Docker to execute the tests, which are written in Gherkin.
The short answer is NO, unfortunately. As you pointed out, you have a hardcoded value in your URL.
But there are easy workarounds for your problem, requiring only minor changes, that can get you to your desired outcome.
1. Since you're using WebdriverIO, which is basically a Selenium bindings implementation for Node.js, we can make use of Node's built-in process.env object.
First, in your test, go ahead and substitute the URL in browser.url("http://www.x.com"); with a variable, like so: browser.url(myUrl);.
Then, declare that variable in your program header: var myUrl = process.env.URL ? process.env.URL : "http://www.x.com". This tells Node: "Look, if you have this process environment variable set, then return it; else, just give me what you had before (your hardcoded value)".
Finally, when running your test from the command line, just add the variable with your desired value, like so:
$ URL="yourUrlHere" gherkin start
2. Since you're using WebdriverIO, I would strongly advise using their complete tool suite to leverage a better testing experience. *looks in sadness towards that unused wdio.conf.js file*
Thus, I would recommend taking a look at their Testrunner and going through the training sections. If you're keen on using Gherkin, don't worry! WebdriverIO integrates beautifully with Cucumber (as well as Mocha or Jasmine), which uses Gherkin.
Your issue would then turn into a simple config change via the wdio.conf.js file.
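As a sketch of what that config change could look like (baseUrl is the standard WebdriverIO option, and the URL environment variable is the one from point 1):
// wdio.conf.js
exports.config = {
    // relative browser.url("/") calls resolve against baseUrl
    baseUrl: process.env.URL || "http://www.x.com"
};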
Let me know if this helps. I can probably muster some other solutions. Cheers!

Bamboo custom deployment-project variables

In Bamboo, I want to take a build release and deploy it on a target host. The target host should be variable.
As far as I know, it is not possible to run deployment projects with customized deployment variables (the way plan variables can be overridden on custom builds). My question is: is that true, and if so, what is the best way to achieve what I want?
Here are some thoughts I had during research regarding this issue:
I could use a plan variable "host" in my build job and customize it as needed. Then I would write this variable into a file that is declared as a build artifact, and in my deployment tasks use the "Inject Bamboo variables configuration" task to read the variable back. This solution has the disadvantage that I always have to run a build, even if the artifacts do not change.
Global variables are not feasible because they are not build-dependent: another build could overwrite them, so I cannot use them for this task.
Are there any better solutions/thoughts on this task?
target host should be variable
No, each host is a separate environment. Otherwise the notion of "promoting an environment" breaks apart. This may be a lot of work to set up by hand, so I strongly advise using Bamboo Specs (in Java).
it is not possible to run deployment-projects with customized deployment-variables
I confirm: it's not possible. And again, it would break the notion of environment promotion. Rule of thumb: your environment setup should be immutable, with no variable variance. The only difference between runs is the artifacts that are to be deployed.
You can set variables under 'Other environment settings' in the deployment project while configuring the environment. This way you will not have to run a build when the artifacts don't change; just change the variable value before deploying the artifact.

How to put variables in IntelliJ's run configuration?

Say I have a variable named repo_path and I want to set some environment variables or VM options based on it, such as SOME_VAR=${repo_path}/some_sub_path or -DsomeProperty=${repo_path}, so that I don't have to type out repo_path every time I use it. What is the correct way to achieve this, other than typing the full path everywhere?
Why don't you define a regular system environment variable REPO_PATH and use it in the IntelliJ run configuration :)
-DsomeProperty=$REPO_PATH
It works well under Mac OS, so I assume it will work under Ubuntu as well.
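For completeness, a minimal sketch of how both values arrive inside a Java program (someProperty and SOME_VAR are the hypothetical names from the question):
public class Main {
    public static void main(String[] args) {
        // -DsomeProperty=$REPO_PATH arrives as a JVM system property
        System.out.println(System.getProperty("someProperty"));
        // SOME_VAR=$REPO_PATH/some_sub_path arrives as an environment variable
        System.out.println(System.getenv("SOME_VAR"));
    }
}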

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change, I rerun the tests:
user=>(rt)
That has been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using :reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to not have any name conflicts if you use more namespaces in your tests. So long as you have just the one, it's OK.
using lein lets you specify unit and integration tests and easily run them in groups using the test-selectors feature.
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
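A sketch of that setup (the profile entry follows the description above, and the version shown is just an example; pick a current one):
;; ~/.lein/profiles.clj
{:dev {:dependencies [[org.clojure/tools.namespace "0.2.11"]]}}
;; then, in the REPL:
(require '[clojure.tools.namespace.repl :refer [refresh]])
(refresh)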
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke them as such.
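For example (a sketch; my.namespace-test and my-test are placeholder names):
(require 'my.namespace-test)
;; deftest defines a zero-argument function, so you can call it directly:
(my.namespace-test/my-test)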
I am so far impressed with lein-midje
$ lein midje :autotest
This starts a Clojure process that watches src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.

Serial execution of package tests

I have implemented several packages for a web API, each with their own test cases. When each package is tested using go test ./api/pkgname the tests pass. If I want to run all tests at once with go test ./api/... test cases always fail.
In each test case, I recreate the entire schema using DROP SCHEMA public CASCADE followed by CREATE SCHEMA public and apply all migrations. The test suite reports errors back at random, saying a relation/table does not exist, so I guess each test suite (per package) is run in parallel somehow, thus messing up the DB state.
I tried to pass along some test flags like go test -cpu 1 -parallel 0 ./src/api/... with no success.
Could the problem here be tests running in parallel, and if yes, how can I force serial execution?
Update:
Currently I use this workaround to run the tests, but I still wonder if there's a better solution
find <dir> -type d -exec go test {} \;
As others have pointed out, -parallel doesn't do the job (it only controls parallelism within a package). However, you can use the flag -p=1 to run the package tests in series. This is documented here:
http://golang.org/src/cmd/go/testflag.go
but (as far as I can tell) not in go help and the other command-line documentation. I'm not sure it is meant to stick around (although I'd argue that if it is removed, -parallel should be fixed).
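So, for the layout in the question, something like the following should run each package's tests one at a time (the ./api/... path is the one from the question):
go test -p=1 ./api/...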
The go tool is provided to make running unit tests easier, using the convention that *_test.go files contain unit tests. Because it assumes they are unit tests, it also assumes they are hermetic. It sounds like your tests either aren't unit tests, or they are but violate the assumptions that a unit test should fulfill.
In the case that you mean for these tests to be unit tests, you probably need a mock database. A mock of your database, preferably in memory, will ensure that each unit test is hermetic and can't be interfered with by other unit tests.
In the case that you mean for these tests to be integration tests, you are probably better off not using the go tool for them. What you probably want is to create a separate test binary whose execution you can control, and to write your integration test scripts there.
The good news is that creating a mock in Go is insanely easy. Change your code to take an interface with the methods you care about for the database, then write an in-memory implementation of that interface for testing purposes and pass it into the application code that you want to test.
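A minimal sketch of that pattern, all in one *_test.go file for brevity (the names here are invented for illustration):
package api

import (
	"errors"
	"testing"
)

// Store is the narrow interface the application code depends on
// instead of a concrete database handle.
type Store interface {
	GetUser(id int) (string, error)
}

// UserName is example application code that depends only on Store.
func UserName(s Store, id int) (string, error) {
	return s.GetUser(id)
}

// memStore is a hermetic, in-memory implementation for unit tests.
type memStore struct {
	users map[int]string
}

func (m *memStore) GetUser(id int) (string, error) {
	name, ok := m.users[id]
	if !ok {
		return "", errors.New("user not found")
	}
	return name, nil
}

func TestUserName(t *testing.T) {
	store := &memStore{users: map[int]string{1: "alice"}}
	name, err := UserName(store, 1)
	if err != nil || name != "alice" {
		t.Fatalf("got %q, %v; want %q, nil", name, err, "alice")
	}
}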
Just to clarify, @Jeremy's answer is still the accepted one:
Since my integration tests were only run on one package (api), I removed the separate test binary in the end and created a pattern to separate test types by:
Unit tests use the normal TestX name
Integration tests use Test_X
I created shell scripts (utest.sh/itest.sh) to run either of those.
For unit tests: go test -run="^(Test|Benchmark)[^_](.*)"
For integration tests: go test -run="^(Test|Benchmark)_(.*)"
Run both using the normal go test
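A sketch of what those two scripts might contain (the ./api/... path is the one from the question):
# utest.sh: run unit tests only (names like TestX, no underscore)
go test -run="^(Test|Benchmark)[^_](.*)" ./api/...
# itest.sh: run integration tests only (names like Test_X)
go test -run="^(Test|Benchmark)_(.*)" ./api/...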