How can I test command-line applications using Maven?

I work on a complex, multi-module Maven project. One of the modules is an executable jar that implements a command-line application. I need to integration-test this application: run it several times, with several different command lines, and validate the exit status and stdout/stderr. However, I can't find a Maven plugin that claims to support this, and I also can't track down a JUnit library that supports testing command-line applications.
Before you say 'don't test the main method - instead do bla', in this case I really do mean to test the main method, not some subsidiary functionality. The whole point is to run the application as a user would, in its own VM and environment, and validate that it is behaving itself - parsing command-line options correctly, exiting with the right status and hot-loading the right classes from the right plugin jars.

My current hack is to use Apache Commons Exec from within a JUnit test method. It appears to work, but is quite fiddly to set up.
// Uses Apache Commons Exec (org.apache.commons.exec); resolveScriptNameForOS
// appends .sh or .bat depending on the platform.
@Test
public void testCommandlineApp()
throws IOException
{
    CommandLine cl = new CommandLine(resolveScriptNameForOS("./runme")); // add .sh/.bat
    cl.addArgument("inputFile.xml");

    DefaultExecutor exec = new DefaultExecutor();
    exec.setWorkingDirectory(workingDir);
    exec.setExitValues(new int[] { 0, 1, 2 }); // exit codes that should not throw
    int exitCode = exec.execute(cl);
    assertEquals("Exit code should be zero", 0, exitCode);
}
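To also validate stdout and stderr, as the question requires, the same library's PumpStreamHandler can capture both streams. A sketch under the same assumptions as the test above (exec and cl as declared there; add imports for java.io.ByteArrayOutputStream and org.apache.commons.exec.PumpStreamHandler):

// Collect everything the process writes, so the test can assert on it.
ByteArrayOutputStream stdout = new ByteArrayOutputStream();
ByteArrayOutputStream stderr = new ByteArrayOutputStream();
exec.setStreamHandler(new PumpStreamHandler(stdout, stderr));
int exitCode = exec.execute(cl);
assertEquals("Exit code should be zero", 0, exitCode);
assertEquals("Expected no stderr output", "", stderr.toString("UTF-8"));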

Why not simply use a shell script, using the exec-maven-plugin to build your classpath?
STDOUT=$(mvn -q exec:java -Dexec.mainClass=yourMainClass -Dexec.args="--arg1 --arg2=value2")
RETURN_CODE=$?
# validate STDOUT
# validate RETURN_CODE
You can even use something like shunit2 if you prefer a more structured approach.

Running only selected tests with dynamic input

I have tried a few approaches to solve my problem, but with no success (I do need to improve my Java :)), so I am hoping that I am missing something or that someone can point me in the right direction.
I have multiple microservices that I need to test. I should be able to test all of them at once, or only the ones I want. Each service has its own DB and different feature files. Note that these services may not all be up and running.
I can run the tests by manually setting the config for each service. Ideally, I would like to pass a variable with the service name on the command line and have the tests start.
In the current setup I use callSingle to run DBInit.feature, which runs SQL scripts to populate my DB. I have also set global variables that are used in the feature files, and this works fine.
Problems start when I add more feature files that test a service that is not running, and when I have to use callSingle for a specified service to populate its DB.
The first idea was to use different envs, but I could need 5 envs to be executed in a single run and with one report. Then I was thinking of implementing a runner for each service, but I am not sure whether these runners run in parallel, and I am not sure how I could populate the DB in that case.
Is it possible to use a custom variable that will be passed to the main test class?
// Karate 1.x with JUnit 4
import java.util.Arrays;
import java.util.List;

import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class DemoTestSelected {

    @BeforeClass
    public static void beforeClass() throws Exception {
        TestBase.beforeClass();
    }

    @Test
    public void testSelected() {
        List<String> tags = Arrays.asList("~@ignore");
        List<String> features = Arrays.asList("classpath:demo/cats");
        String karateOutputPath = "target/surefire-reports";
        Results results = Runner.path(features)
                .tags(tags)
                .outputCucumberJson(true)
                .reportDir(karateOutputPath)
                .parallel(5);
        DemoTestParallel.generateReport(karateOutputPath);
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}
For example, could tags and features be set in config?
I re-read your question a few times and gave up trying to understand it. But I'll lay down a couple of principles:
You should use tags to decide which features to run or not run. Try to fit everything you need into this model and don't complicate things.
For more control, you can set some "system property" on the command line, and before you use the Runner you can write some Java logic along the lines of: if karate.env (or some other system property) is foo, then select tags one, two and three, etc. (see the sketch below).
Yes, the Karate 1.0 series can technically run multiple Runner instances in parallel, but that is left to you and we don't have an example; it would require you to manage threads or a Java Executor manually.
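A minimal sketch of the second point, replacing the hard-coded lists in testSelected() above. The service property name and the one-folder-per-service layout are assumptions for illustration, not Karate conventions:

// Select the feature folder for one service, based on a -Dservice=... flag;
// defaults to running everything under classpath:demo.
String service = System.getProperty("service", "all");
List<String> features = "all".equals(service)
        ? Arrays.asList("classpath:demo")
        : Arrays.asList("classpath:demo/" + service);
Results results = Runner.path(features)
        .tags("~@ignore")
        .outputCucumberJson(true)
        .reportDir("target/surefire-reports")
        .parallel(5);

You would then run, for example, mvn test -Dtest=DemoTestSelected -Dservice=cats.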

What is the proper way to define custom attributes for conditional compilation?

I'd like to be able to pass a flag to cargo test to enable logging in my tests when I need to debug them.
I've come up with something like:
// An internal module where I define some helpers to configure logging.
// I use `tracing` internally.
#[cfg(logging)]
use crate::logging;

#[test]
fn mytest() {
    #[cfg(logging)]
    logging::enable();
    // ..
    assert!(true);
}
Then I can enable the logs with
RUSTFLAGS="--cfg logging" cargo test
It works, but it feels like I'm abusing the rustc flag system. It also has the side effect of recompiling all the crates with my logging flag, which (besides the fact that it takes ages) may be an issue if this flag is used by one of my dependencies some day.
Is there a better way to define and use custom attributes? I could add a feature to my Cargo manifest, but this is not really a feature, since it's just for the tests.
You wouldn't usually recompile your application; there's no need to, because you can use an environment variable. For example:
if std::env::var("MY_LOG").is_ok() {
    logging::enable();
}
You can then dynamically decide to log by calling your application with
MY_LOG=true cargo run
or, when already compiled,
MY_LOG=true myapp
You would usually configure more than just whether the log is on - for example, the log level or the log destination. Here's a real example: https://github.com/Canop/broot/blob/master/src/main.rs#L20

Repast - call simulation from java program without GUI

I am following the instructions to test calling my simulation model from another Java program.
package test;

//import repast.simphony.runtime.RepastMain;

public class UserMain {

    public UserMain() {}

    public void start() {
        String[] args = new String[] { "D:\\user\\Repast_java\\IntraCity_Simulator\\IntraCity_Simulator.rs" };
        repast.simphony.runtime.RepastMain.main(args);
        // repast.simphony.runtime.RepastBatchMain.main(args);
    }

    public static void main(String[] args) {
        UserMain um = new UserMain();
        um.start();
    }
}
The Java program launches the GUI when using the RepastMain configuration:
repast.simphony.runtime.RepastMain.main(args);
The Java program terminates almost immediately, without running the simulation or returning anything, if I apply the non-GUI configuration:
repast.simphony.runtime.RepastBatchMain.main(args);
How can I enable running the simulation in headless mode?
Secondly, I need to deploy my simulation model on a remote server (Linux). What is the best way for the server to call my simulation model? If HTTP, how should the configuration be done? Batch runs are the preferred way to run the model (either a single run or multiple runs, depending on the user's choice), and the batch run output needs to be transformed into JSON format to feed back to the server.
Parts of the batch run mechanism for Simphony can probably be used for this. For some context on headless command-line batch runs, see:
https://repast.github.io/docs/RepastBatchRunsGettingStarted.pdf
That doesn't align exactly with what you are trying to do, given that you are embedding the simulation run in other Java code, but it should help as background.
Ultimately, though, the batch run code calls an InstanceRunner:
https://github.com/Repast/repast.simphony/blob/master/repast.simphony.distributed.batch/src/repast/simphony/batch/InstanceRunner.java
The InstanceRunner either iterates over a list of parameter sets in a file, or over parameter set strings passed to it directly, and performs a simulation run for each of those parameter sets. If you passed it a single parameter set, it would run once, which I think is what you want to do. So I would suggest looking at the InstanceRunner code to get a sense of how it works, and mimicking InstanceRunner.main() in your code that calls the simulation.
As for the remote execution, Simphony can copy a simulation to a remote resource, run it and copy the results back. That's integrated with the Simphony GUI, and so is not callable from other code without some work on your part. All the relevant code is in:
https://github.com/Repast/repast.simphony/tree/master/repast.simphony.distributed.batch/src/repast/simphony/batch
The SSHSession class has code for executing commands on a remote resource over SSH, methods for copying files and so on. So, perhaps that might be useful to you.
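As a rough illustration of the headless server-side call, the batch run could be launched as a separate JVM and its output captured for conversion to JSON. Everything below - the classpath and the entry-point arguments - is a placeholder to be adapted from the InstanceRunner source above, not a documented invocation:

import java.io.IOException;
import java.nio.file.Path;

public class HeadlessLauncher {

    // Starts a headless batch run in its own JVM and returns its exit code.
    // The classpath and argument layout are assumptions; check how
    // InstanceRunner.main() actually parses its arguments before using this.
    public static int runBatch(Path scenarioDir, Path paramsFile)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "java", "-cp", "lib/*",                  // placeholder: all Simphony jars
                "repast.simphony.batch.InstanceRunner",  // headless entry point discussed above
                scenarioDir.toString(), paramsFile.toString());
        pb.redirectErrorStream(true);
        Process p = pb.start();
        // Capture stdout so the caller can post-process it (e.g. build the JSON response)
        String output = new String(p.getInputStream().readAllBytes());
        int exit = p.waitFor();
        System.out.println(output);
        return exit;
    }
}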

How to get resource in tests using Gradle?

I have:
src
  main
  test
    resources
      test.zip
In the test directory I have a Spock test. How can I refer to the test.zip file so as to pass its absolute path to the test and check some of my methods? I tried new File('.').absolutePath, but I want to find a better solution:
def "check checkTypeZipMethod"() {
expect:
assert testedMethod(new File('.').absolutePath+"test/resources/test.zip")==true
}
You can either try to locate the zip file based on ClassLoader#getResource, or configure the test task to set a system property pointing to the zip file. The latter is easier to implement, but harder to make work in environments other than Gradle (e.g. when running the test inside an IDE).
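A sketch of the first option, in Java (the same calls work from Groovy/Spock). It assumes Gradle has copied src/test/resources onto the test runtime classpath, which is its default behaviour; the helper name is illustrative:

import java.io.File;
import java.net.URL;

public class ResourceLocator {
    // Resolves a file that Gradle copied from src/test/resources onto the
    // test classpath; works regardless of the current working directory.
    public static File testResource(String name) throws Exception {
        URL url = ResourceLocator.class.getClassLoader().getResource(name);
        if (url == null) {
            throw new IllegalStateException("resource not found: " + name);
        }
        return new File(url.toURI());
    }
}

The test can then call testedMethod(ResourceLocator.testResource("test.zip").getAbsolutePath()).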
PS: In a Spock expect: or then: block, you can (and should) omit the assert keyword.

NAnt, MbUnit, CruiseControl, Selenium - passing settings to the test assembly

I am putting together some ideas for our automated testing platform and have been looking at Selenium for the test runner.
I am wrapping the recorded Selenium C# scripts in an MbUnit test, which is being triggered via the MbUnit NAnt task. The Selenium test client is created as follows:
selenium = new DefaultSelenium("host", 4444, "*iexplore", "http://[url]/");
How can I pass the host, port and url settings into the test so their values can be controlled via the NAnt task?
For example, I may have multiple Selenium RC servers listening and I want to use the same test code passing in each server address instead of embedding the settings within the tests themselves.
I have an approach mocked up using a custom NAnt task I have written, but it is not the most elegant solution at present, and I wondered if there was an easier way to accomplish what I want to do.
Many thanks if anyone can help.
Thanks for the responses so far.
Environment variables could work; however, we could be running parallel tests via a single test assembly, so I wouldn't want settings to be overwritten during execution, which could break another test. Interesting line of thought though, thanks - I reckon I could use that in other areas.
My current solution involves a custom NAnt task built on top of the MbUnit task, which allows me to specify the additional host, port and URL settings as attributes. These are then saved as a config file within the build directory and read in by the test assemblies. This feels a bit "clunky" to me, as my tests need to inherit from a specific class. Not too bad, but I'd like to have fewer dependencies and concentrate on the testing.
Maybe I am worrying too much!!
I have a base class for all test fixtures which has the following setup code:
[FixtureSetUp]
public virtual void TestFixtureSetup ()
{
BrowserType = (BrowserType) Enum.Parse (typeof (BrowserType),
System.Configuration.ConfigurationManager.AppSettings["BrowserType"],
true);
testMachine = System.Configuration.ConfigurationManager.AppSettings["TestMachine"];
seleniumPort = int.Parse (System.Configuration.ConfigurationManager.AppSettings["SeleniumPort"],
System.Globalization.CultureInfo.InvariantCulture);
seleniumSpeed = System.Configuration.ConfigurationManager.AppSettings["SeleniumSpeed"];
browserUrl = System.Configuration.ConfigurationManager.AppSettings["BrowserUrl"];
targetUrl = new Uri (System.Configuration.ConfigurationManager.AppSettings["TargetUrl"]);
string browserExe;
switch (BrowserType)
{
case BrowserType.InternetExplorer:
browserExe = "*iexplore";
break;
case BrowserType.Firefox:
browserExe = "*firefox";
break;
default:
throw new NotSupportedException ();
}
selenium = new DefaultSelenium (testMachine, seleniumPort, browserExe, browserUrl);
selenium.Start ();
System.Console.WriteLine ("Started Selenium session (browser type={0})",
browserType);
// sets the speed of execution of GUI commands
if (false == String.IsNullOrEmpty (seleniumSpeed))
selenium.SetSpeed (seleniumSpeed);
}
I then simply supply the test runner with a config file, for example:
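A minimal app.config sketch matching the keys the fixture reads (the values are placeholders taken from the question):

<configuration>
  <appSettings>
    <add key="BrowserType" value="InternetExplorer" />
    <add key="TestMachine" value="host" />
    <add key="SeleniumPort" value="4444" />
    <add key="SeleniumSpeed" value="500" />
    <add key="BrowserUrl" value="http://[url]/" />
    <add key="TargetUrl" value="http://[url]/" />
  </appSettings>
</configuration>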
For MSBuild I use environment variables: I create them in my CC.NET config, and they are then available in the script. I think this would work for you too.
Anytime I need to integrate with an external entity using NAnt, I either end up using the exec task or writing a custom task. Given the information you posted, it would seem that writing your own would indeed be a good solution. However, you state you're not happy with it. Can you elaborate a bit on why you don't think your current solution is an elegant one?
Update
Not knowing the internal details, it seems like you've solved it pretty well with a custom task. From what I've heard, that's how I would have done it.
Maybe a new solution will show itself in time, but for now be light on yourself!