What is the proper way to define custom attributes for conditional compilation?

I'd like to be able to pass a flag to cargo test to enable logging in my tests, when I need to debug them.
I've come up with something like:
// An internal module where I define some helpers to configure logging.
// I use `tracing` internally.
#[cfg(logging)]
use crate::logging;

#[test]
fn mytest() {
    #[cfg(logging)]
    logging::enable();
    // ..
    assert!(true);
}
Then I can enable the logs with
RUSTFLAGS="--cfg logging" cargo test
It works, but it feels like I'm abusing the rustc flag system. It also has the side effect of recompiling all the crates with my logging flag, which (besides the fact that it takes ages) may be an issue if this flag is ever used by one of my dependencies.
Is there a better way to define and use custom attributes? I could add a feature to my Cargo manifest, but this is not really a feature, since it's just for the tests.

You wouldn't usually recompile your application; there's no need to. You can use an environment variable instead. For example:
if std::env::var("MY_LOG").is_ok() {
    logging::enable();
}
You can then dynamically decide to log by calling your application with
MY_LOG=true cargo run
or, when already compiled,
MY_LOG=true myapp
You would usually configure more than just whether logging is on, for example the log level or the log destination. Here's a real example: https://github.com/Canop/broot/blob/master/src/main.rs#L20
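If the helper itself is of interest: here is a minimal sketch of what enable() could look like, assuming the `tracing` / `tracing-subscriber` crates mentioned in the question (the MY_LOG variable name is arbitrary):

pub mod logging {
    /// Install a default `tracing` subscriber; safe to call from several tests.
    pub fn enable() {
        tracing_subscriber::fmt()
            .with_test_writer() // keep output attached to the test that produced it
            .try_init()
            .ok(); // ignore the error if a subscriber is already installed
    }
}

#[test]
fn mytest() {
    if std::env::var("MY_LOG").is_ok() {
        logging::enable();
    }
    // ..
}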

Related

Karate Standalone as Mock Server with multiple Feature Files

I'm trying to set up an integration/API test suite with Karate and am considering using Karate Netty for mocking required services. For the test setup, the system under test A (a Spring Boot app) is started up completely. The Karate tests are then executed by a Maven test run against this instance.
Service A depends on multiple other services; these need to be mocked away for the tests. To do so, my idea was to configure a running Karate Netty standalone instance as an HTTP proxy (done via JVM args of service A).
Now my idea was to create one test feature file: xyz-test.feature
And the required mocks for this file are defined in an associated mock feature file: xyz-mock.feature
(The test scenarios are rather complex and the responses of the external services could vary)
This means for a full test run I need to load up a couple of mock feature files. So:
What is the matching strategy for multiple mock feature files? Which scenario wins, so to speak?
Is there any way to ensure, that the right mock file is used for the associated test file?
(Clearly I can reconfigure the running standalone instance and advise it to use xyz-mock.feature next.
But this would stop me from using parallel execution for my API tests, right?)
I already thought about reusing the Correlation-Id, which I can send in for each test and then match against in the mock file (it is also sent to all called services). But:
Is there a way to define a global matcher per mock file?
It sounds like you need only one mock file. You could boot 2 on different ports if you wanted, but there is no way to "merge" them into one port - if that is what you were looking for.
In my experience, you will be able to have a single mock take care of all your edge cases. This is because Karate's approach is unconventional: you pretty much write a stateful server. By keeping variables in memory and using some clever JSON-path, you can simulate CRUD with very few lines of code: https://github.com/intuit/karate/tree/master/karate-netty#background
You can use only one mock file at a time, by design.
Given the above limitation, here's an interesting idea: add something like an extra pathMatches('/__test/reset') scenario that cleans up your state and resets the Background variables to things like * def cats = []. Now, in each feature, just call the special "reset" URL at the start. The good thing is that Karate is thread-safe. Another idea, as you said, is to maintain two or three different variables and use some logic to "route" based on a header - again, very easy IMO. Use a map of maps, e.g.:
* def data = { cats1: {}, cats2: {}, cats3: {} }
And you can get the header, e.g. if it is mode: cats1
* def mode = karate.get('requestHeaders.mode[0]')
* def cats = data[mode]
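Putting the two ideas together, a single mock feature could look something like the sketch below (the /cats route and the mode header are illustrative, not from the original question):

Feature: stateful mock with reset and header-based routing

Background:
* def data = { cats1: {}, cats2: {}, cats3: {} }

Scenario: pathMatches('/__test/reset')
* def data = { cats1: {}, cats2: {}, cats3: {} }
* def response = { reset: true }

Scenario: pathMatches('/cats') && methodIs('get')
* def mode = karate.get('requestHeaders.mode[0]')
* def response = data[mode]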
Not sure if this answers your question, but if the last Scenario has an "empty" description, it is a "catch all" and can in theory delegate to another server (or mock): https://github.com/intuit/karate/tree/develop/karate-netty#proxy-mode
Your question is a little confusing, so you may have to edit and re-word it if I haven't understood.
EDIT: using multiple mock files should be possible from 1.1.0 onwards: https://github.com/intuit/karate/issues/1566

WDIO - Selenium - Run Specific Test Before All Others - With Specific Capabilities

I'm using WebDriverIO and I want to do the following:
Run a single test before any test is run (createNewUsers)
Use specific capabilities (proxy settings) for that first test
Once done, use a default set of capabilities for everything else
So I can't seem to figure this out:
I've tried adding a second set of capabilities and using the exclude argument to ensure it only applies to that specific spec. However, I don't know whether this is actually possible, nor how to call that specific test in my before block. So in my capabilities I use:
exclude: [ './newUserCreationStage/newStageUsers.js' ],
But then, in my before block, how do I say "run that" (if it's even possible)?
before: function (capabilities, specs) {
    expect = require('chai').expect;
    // RUN THIS: './newUserCreationStage/newStageUsers.js'
},
I would say that your setup needs a somewhat different approach. First, take a look at the xUnit Fixture Setup Patterns. This createNewUsers step can actually be achieved via a SuiteFixture Setup, Prebuilt Fixture, Setup Decorator, or Creation Method. This will put the SUT in the desired state and remove the need to
Run a single test before any test is run
Even better: if you have access, you can seed a DB or call APIs to load all users, data, and whatever else you need before the test run. This is also known as Back Door Manipulation. Let your CI server take care of all this as a dedicated step, for example with a hook like the sketch below.
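A minimal sketch of that dedicated step in WebDriverIO itself (onPrepare is a standard wdio config hook; the createNewUsers helper and its seeding API are hypothetical):

// wdio.conf.js
const { createNewUsers } = require('./helpers/seedUsers'); // hypothetical helper

exports.config = {
    specs: ['./test/specs/**/*.js'],
    capabilities: [{ browserName: 'chrome' }], // one default set, no proxy needed
    before: function (capabilities, specs) {
        expect = require('chai').expect;
    },
    // Back Door Manipulation: seed the users once, before any worker starts,
    // instead of running a special "first test" with proxy capabilities.
    onPrepare: async function () {
        await createNewUsers();
    },
};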
Since you are using Mocha, you can utilize tags and organise your suites and specs in a better way. This will allow you to switch drivers with capabilities according to your needs (read: test needs; some may need a proxy, others not). I would also suggest considering mocha-tags. A good fit here is the Strategy pattern: it allows you to have many related subclasses which differ only in behavior.

GO Unit testing structured REST API projects

I am trying to write nice unit tests for my already created REST API. I have this simple structure:
ROOT/
    config/
    handlers/
    lib/
    models/
    router/
    main.go
config contains configuration in JSON and one simple config.go that reads and parses the JSON file and fills the Config struct. handlers contains controllers (i.e. handlers of the respective METHOD+URL combinations described in router/routes.go). lib contains some DB, request responder and logger logic. models contains structs and their funcs to be mapped from/to JSON and the DB. Finally, router contains the router and routes definition.
Now, I was searching and reading a lot about unit testing REST APIs in Go and found more or less satisfying articles about how to set up a testing server, define routes, and test my requests. All fine. BUT only if you want to test a single file!
My problem now is how to set up the testing environment (server, routes, DB connection) for all handlers. With the approach found here (which I find very easy to understand and implement) I have one problem: either I have to run tests separately for each handler, or I have to write the test suites for all handlers in just one test file. I believe you understand that neither case is very happy (the first because I want a plain go test run to still execute all the tests, the second because having one test file cover all handler funcs would become unmaintainable).
So far I have succeeded (according to the linked article) only by putting all testing and initializing code into just one func per XYZhandler_test.go file, but I don't like this approach either.
What I would like to achieve is some kind of setUp() or init() that runs once when the first test is triggered, making all required variables globally visible and initialized, so that all subsequent tests can use them without instantiating them again, while making sure that this setup file is compiled only for tests...
I am not sure if this is completely clear or if some code example is required for this kind of question (other than what is already linked in the article, but I will add anything that you think is required, just tell me!)
Test packages, not files!
Since you're testing handlers/endpoints it would make sense to put all your _test files in either the handlers or the router package. (e.g. one file per endpoint/handler).
Also, don't use init() to set up your tests. The testing package specifies a function with the following signature:
func TestMain(m *testing.M)
The generated test will call TestMain(m) instead of running the tests directly. TestMain runs in the main goroutine and can do whatever setup and teardown is necessary around a call to m.Run. It should then call os.Exit with the result of m.Run.
Inside the TestMain function you can do whatever setup you need in order to run your tests. If you have global variables, this is the place to declare and initialize them. You only need to do this once per package, so it makes sense to put the TestMain code in a separate _test file. For example:
package router

import (
    "net/http/httptest"
    "os"
    "testing"
)

var (
    testServer *httptest.Server
)

func TestMain(m *testing.M) {
    // set up the test server once for the whole package
    router := ConfigureRouter()
    testServer = httptest.NewServer(router)

    // run the tests and exit with their status
    os.Exit(m.Run())
}
Finally run the tests with go test my/package/router.
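With that in place, each endpoint's test file in the same package can use the shared server directly. A minimal sketch, assuming for illustration that ConfigureRouter registers a GET /users route:

package router

import (
    "net/http"
    "testing"
)

func TestGetUsers(t *testing.T) {
    // testServer is the package-level variable initialized in TestMain
    resp, err := http.Get(testServer.URL + "/users")
    if err != nil {
        t.Fatalf("GET /users failed: %v", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        t.Errorf("expected status 200, got %d", resp.StatusCode)
    }
}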
Perhaps you could put the setup code that you want to use from multiple unit test files into a separate package that only the unit tests use?
Or you could put the setup code into the normal package and just use it from the unit tests.
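A sketch of the separate-package idea (the testutil package name and NewAPIServer helper are hypothetical); since only _test files import it, it is never linked into the normal binary:

// Package testutil holds shared test setup; only _test files import it.
package testutil

import (
    "net/http"
    "net/http/httptest"
)

// NewAPIServer wraps the application's router in a disposable test server.
func NewAPIServer(handler http.Handler) *httptest.Server {
    return httptest.NewServer(handler)
}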
It's been asked before, but the Go authors have chosen not to implicitly supply a test tag that could be used to selectively enable function compilation within the normal package files.

How to pass an argument (e.g. the hostname) to the testrunner

I'm creating a unittest- and Selenium-based test suite for a web application. It is reachable via several hostnames, e.g. implying different languages; but of course I want to be able to test e.g. my development instances as well, without changing the code (and without fiddling with the hosts file, which doesn't work for me anymore, because of network security considerations, I suppose).
Thus, I'd like to be able to specify the hostname by commandline arguments.
The test runner does argument parsing itself, e.g. for choosing the tests to execute.
What is the recommended method to handle this situation?
The solution I finally came up with is:
Have a module for the tests which fixes the global data, including the hostname, and provides my TestCase class (I added an assertLoadsOk method to simply check for the HTTP status code).
This module does commandline processing as well:
It checks for its own options
and removes them from the argument vector (sys.argv).
When it finds an "unknown" option, it stops processing the options and leaves the rest to the testrunner.
The commandline processing happens on import, before my TestCase class is initialized.
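A minimal sketch of such a module (the module name, the --hostname option, and the default host are illustrative):

# testbase.py
import sys
import unittest
import urllib.request

HOSTNAME = "www.example.com"  # default host under test


def _consume_own_args():
    """Strip our own options from sys.argv; leave the rest to the testrunner."""
    global HOSTNAME
    args = sys.argv[1:]
    while args:
        if args[0] == "--hostname" and len(args) >= 2:
            args.pop(0)
            HOSTNAME = args.pop(0)
        else:
            break  # first "unknown" option: stop, the testrunner parses the rest
    sys.argv[1:] = args


_consume_own_args()  # runs on import, before unittest.main() sees sys.argv


class HostTestCase(unittest.TestCase):
    def assertLoadsOk(self, path):
        # simply check for the HTTP status code
        status = urllib.request.urlopen("http://%s%s" % (HOSTNAME, path)).getcode()
        self.assertEqual(200, status)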
It works well for me ...

How can I test command-line applications using maven?

I work on a complex, multi-module Maven project. One of the modules is an executable jar that implements a command-line application. I need to integration-test this application: run it several times, with several different command lines, and validate the exit status and stdout/stderr. However, I can't find a Maven plugin that claims to support this, and I also can't track down a JUnit library that supports testing command-line applications.
Before you say 'don't test the main method - instead do bla': in this case I really do mean to test the main method, not some subsidiary functionality. The whole point is to run the application as a user would, in its own VM and environment, and validate that it is behaving itself - parsing command-line options correctly, exiting with the right status, and hot-loading the right classes from the right plugin jars.
My current hack is to use Apache Commons Exec from within a JUnit test method. It appears to work, but is quite fiddly to set up.
public void testCommandlineApp() throws IOException {
    CommandLine cl = new CommandLine(resolveScriptNameForOS("./runme")); // adds .sh/.bat
    cl.addArgument("inputFile.xml");

    Executor exec = new DefaultExecutor(); // Apache Commons Exec executor
    exec.setWorkingDirectory(workingDir);
    exec.setExitValues(new int[] { 0, 1, 2 }); // exit codes that should not throw
    int exitCode = exec.execute(cl);
    assertEquals("Exit code should be zero", 0, exitCode);
}
Why not simply use a shell script, using the exec-maven-plugin to build your classpath?
STDOUT=$(mvn exec:java -Dexec.mainClass=yourMainClass -Dexec.args="--arg1 --arg2=value2")
RETURN_CODE=$?
# validate STDOUT
# validate RETURN_CODE
You can even use something like shunit2 if you prefer a more structured approach.
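For instance, a minimal shunit2 sketch along those lines (the main class, arguments, and expected-output string are placeholders):

#!/bin/sh

testAppExitsZeroAndPrintsResult() {
    stdout=$(mvn -q exec:java -Dexec.mainClass=yourMainClass -Dexec.args="inputFile.xml")
    rc=$?
    assertEquals "exit code should be zero" 0 "${rc}"
    echo "${stdout}" | grep -q "expected output"
    assertTrue "stdout should contain the expected output" $?
}

# load shunit2 (path depends on your installation)
. shunit2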