Repast - call simulation from java program without GUI - repast-simphony

I am following the instructions to test calling my simulation model from another Java program:
package test;

//import repast.simphony.runtime.RepastMain;

public class UserMain {

    public UserMain() {}

    public void start() {
        String[] args = new String[]{"D:\\user\\Repast_java\\IntraCity_Simulator\\IntraCity_Simulator.rs"};
        repast.simphony.runtime.RepastMain.main(args);
        // repast.simphony.runtime.RepastBatchMain.main(args);
    }

    public static void main(String[] args) {
        UserMain um = new UserMain();
        um.start();
    }
}
With the RepastMain configuration, the Java program launches the GUI:
repast.simphony.runtime.RepastMain.main(args);
With the non-GUI configuration, the program terminates almost immediately, without running the simulation and without returning anything:
repast.simphony.runtime.RepastBatchMain.main(args);
How can I run the simulation in headless mode?
Secondly, I need to deploy my simulation model on a remote (Linux) server. What is the best way for the server to call my simulation model? If over HTTP, how should the configuration be done? The model should preferably be run as a batch run (either a single run or multiple runs, depending on the user's choice), and the batch run output needs to be transformed into JSON format to feed back to the server.

Parts of the batch run mechanism for Simphony can probably be used for this. For some context on headless command line batch runs, see:
https://repast.github.io/docs/RepastBatchRunsGettingStarted.pdf
That doesn't align exactly with what you are trying to do, given that you are embedding the simulation run in other Java code, but it should help as background.
Ultimately, though, the batch run code calls an InstanceRunner:
https://github.com/Repast/repast.simphony/blob/master/repast.simphony.distributed.batch/src/repast/simphony/batch/InstanceRunner.java
The InstanceRunner either iterates over a list of parameter sets in a file or over parameter set strings passed to it directly, and then performs a simulation run for each of those parameter sets. If you passed it a single parameter set, it would run once, which I think is what you want to do. So, I would suggest looking at the InstanceRunner code to get a sense of how it works, and mimicking InstanceRunner.main() in the code of yours that calls the simulation. A rough sketch of that idea follows.
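For illustration only, here is one way to read that suggestion: drive InstanceRunner.main() directly from your own class with a single parameter set. The argument layout and file names below are placeholders, not the documented InstanceRunner interface; check InstanceRunner.main() in the source linked above for the options it actually parses before adopting this.

package test;

// Rough sketch only: the arguments below are placeholders, not the documented
// InstanceRunner interface. Consult the InstanceRunner source linked above for
// the options its main() really expects.
public class HeadlessMain {

    public static void main(String[] args) throws Exception {
        String[] runnerArgs = new String[]{
            // placeholder: the scenario / model location
            "D:\\user\\Repast_java\\IntraCity_Simulator\\IntraCity_Simulator.rs",
            // placeholder: a file holding exactly one parameter set,
            // so the runner performs a single headless run
            "single_param_set.txt"
        };
        repast.simphony.batch.InstanceRunner.main(runnerArgs);
    }
}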
As for the remote execution, Simphony can copy a simulation to a remote resource, run it, and copy the results back. That's integrated with the Simphony GUI and so is not callable from other code without some work on your part. All the relevant code is in:
https://github.com/Repast/repast.simphony/tree/master/repast.simphony.distributed.batch/src/repast/simphony/batch
The SSHSession class has code for executing commands on a remote resource over SSH, methods for copying files and so on. So, perhaps that might be useful to you.
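If it helps, here is a small self-contained sketch of running a command on a remote Linux host over SSH using the JSch library; whether or not this matches what SSHSession does internally, it is the kind of call you would need. The host, user, key path and command are all placeholders.

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class RemoteRunSketch {

    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/home/me/.ssh/id_rsa");                 // placeholder key file
        Session session = jsch.getSession("user", "server.example.org", 22);
        session.setConfig("StrictHostKeyChecking", "no");         // fine for a sketch, not for production
        session.connect();

        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        channel.setCommand("cd /opt/model && ./run_batch.sh params.txt"); // placeholder command
        InputStream remoteOut = channel.getInputStream();
        channel.connect();

        // collect whatever the remote run prints to stdout
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = remoteOut.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        System.out.println(buffer.toString("UTF-8"));

        channel.disconnect();
        session.disconnect();
    }
}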

Running only selected tests with dynamic input [duplicate]

I have tried a few approaches to solve my problem but with no success (I do need to improve my Java :)), so I am hoping that I am missing something or that someone can point me in the right direction.
I have multiple microservices that I need to test. I should be able to test all of them at once or only the ones I want. Each service has its own DB and different feature files. Note that these services may not all be up and running.
I can run the tests by manually setting the config for each service. Ideally, I would like to pass a variable with the service name on the command line and have the tests start.
In the current setup I use callSingle to run DBInit.feature, which runs SQL scripts to populate my DB. I have also set global variables that are used in the feature files. And this works fine.
Problems start when I add more feature files that test a service that is not running, and when I have to use callSingle for a specific service to populate its DB.
The first idea was to use different envs, but I might need 5 envs executed in a single run with one report. Then I was thinking of implementing a runner for each service, but I am not sure if these runners run in parallel, and I am not sure how I could populate the DB in that case.
Is it possible to use a custom variable that is passed to the main test class?
public class DemoTestSelected {

    @BeforeClass
    public static void beforeClass() throws Exception {
        TestBase.beforeClass();
    }

    @Test
    public void testSelected() {
        List<String> tags = Arrays.asList("~@ignore");
        List<String> features = Arrays.asList("classpath:demo/cats");
        String karateOutputPath = "target/surefire-reports";
        Results results = Runner.path(features)
                .tags(tags)
                .outputCucumberJson(true)
                .reportDir(karateOutputPath).parallel(5);
        DemoTestParallel.generateReport(karateOutputPath);
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}
For example, could the tags and features be set in a config?
I re-read your question a few times and gave up trying to understand it. But I'll lay down a couple of principles:
you should use tags to decide which features to run or not run; try to fit everything you need into this model and don't complicate things
for more control, you can set a "system property" on the command line, and before you use the Runner you can write some Java logic along the lines of "if karate.env (or some other system property) is foo, then select tags one, two and three", etc. (see the sketch after this list)
yes, the Karate 1.0 series can technically run multiple Runner instances in parallel, but that is left to you and we don't have an example; it would require you to manage threads or a Java Executor manually
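For what it's worth, here is a minimal sketch of the second point above, assuming a Karate 1.0-style Runner; the karate.service property name, the tag names and the paths are made up for the example.

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class SelectedServicesTest {

    public static void main(String[] args) {
        // e.g. pass -Dkarate.service=cats on the command line ("cats" is a made-up default)
        String service = System.getProperty("karate.service", "cats");

        String tag;
        if ("cats".equals(service)) {
            tag = "@cats";
        } else if ("dogs".equals(service)) {
            tag = "@dogs";
        } else {
            tag = "@all";
        }

        Results results = Runner.path("classpath:demo")
                .tags(tag, "~@ignore")              // run the chosen tag, skip @ignore
                .outputCucumberJson(true)
                .reportDir("target/surefire-reports")
                .parallel(5);

        if (results.getFailCount() > 0) {
            throw new AssertionError(results.getErrorMessages());
        }
    }
}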

How to restrict test data method call for respective Test method by using TestCaseSource attribute in NUnit

I am using NUnit for a Selenium C# project in which I have many test methods. For getting data (from Excel) I use a public static method that returns IEnumerable<TestCaseData>, which I reference at the test method level with TestCaseSource. The problem I am facing is that as soon as I start executing one test method, NUnit invokes all the static source methods in the project.
Code looks like this:
public static IEnumerable<TestCaseData> BasicSearch()
{
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder + ConfigurationManager.AppSettings.Get("Environment").ToString() + PMTestConstants.PMTestDataBook;
    return ExcelTestDataHelper.ReadFromExcel(BaseEntity.TestDataPath, ExcelQueryCreator.GetCommand(PMTestConstants.QueryCommand, PMTestConstants.PMPolicySheet, "999580"));
}

[Test, TestCaseSource("BasicSearch"), Category("Smoke")]
public void SampleCase(Dictionary<string, string> data)
{
    // do something
}
Can someone help me restrict the data-source method so that it is only invoked for its respective test method?
Your TestCaseSource is not actually called by the test method when you run it, but as part of test discovery. While it's possible to select a single test to execute, it's not possible to discover tests selectively. NUnit must examine the assembly and find all the tests before it's possible to run any of them.
To make matters worse, if you are running under Visual Studio, the discovery process takes place multiple times, first before the tests are initially displayed and then again each time the tests are run. This is made necessary by the architecture of the VS Test Window, which runs separate processes for the initial discovery and the execution of the tests.
That makes it particularly important to minimize the amount of work done in test discovery, especially when running under Visual Studio. Ideally, you should structure the code so that the variable parameters are recorded during discovery. The actual data access should take place at execution time. This can be done in a OneTimeSetUp method, a SetUp method or at the start of the test itself.
Finally, I'd say that your instinct is correct: it should be possible to set up a TestCaseSource, which only runs if the test you select is about to be executed. Unfortunately, that's a feature that NUnit doesn't yet have.

TypeMock in an integration/regression test suite

I need to run an integration/regression test suite for our application, and the application is supposed to behave differently at different times of the day. I cannot change the system time since other apps depend on it, so I would like to mock DateTime.Now for this purpose. However, when I put the mocking in the main method, exceptions were thrown. When I use mocking in an NUnit test for the same application, it works fine. Can Typemock be used only in the context of a unit test? Is there any way I can run the solution with mocking enabled?
I ran the solution through TMockRunner.exe as well but had the same issue.
Thanks!
I see this error when I run it using the method that Travis mentioned.
@Travis Illig, the code for the wrapper is:
class Program
{
    static void Main(string[] args)
    {
        ExpectationsSetup();
        ConsoleApplication2.Program.Main(args);
    }

    public static void ExpectationsSetup()
    {
        Isolate.WhenCalled(() => DateTime.Now).WillReturn(new DateTime(2010, 1, 2));
    }
}
I see the following error:
Unhandled Exception: TypeMock.TypeMockException:
*** No method calls found in recording block. Please check:
* Are you trying to fake a field instead of a property?
* Are you are trying to fake an unsupported mscorlib type? See supported types here: http://www.typemock.com/mscorlib-types
   at gt.a(c0 A_0, Boolean A_1)
   at bg.a(Boolean A_0)
   at dt.b(Boolean A_0)
   at i2.b(Boolean A_0)
   at i2.a(Object A_0, Boolean A_1, Func`1 A_2, Action A_3, Action A_4, Action A_5, Boolean A_6)
   at i2.b(Object A_0)
   at TypeMock.ArrangeActAssert.ExpectationEngine`1.a(TResult A_0)
   at ConsoleApplication2Mock.Program.ExpectationsSetup() in C:\Users\shvenkat\Documents\Visual Studio 2010\Projects\ConsoleApplication2\ConsoleApplication2Mock\Program.cs:line 22
   at ConsoleApplication2Mock.Program.Main(String[] args) in C:\Users\shvenkat\Documents\Visual Studio 2010\Projects\ConsoleApplication2\ConsoleApplication2Mock\Program.cs:line 14
Any help will be appreciated
Thanks!
The typical use of Typemock Isolator is within the context of a unit test or a small testing environment. There is a non-zero level of overhead associated with running Isolator (or any other profiler-based product like NCover) in a process, so you generally don't want to do that.
However, there are some edge cases where you really do want to run Isolator on a regular process, and that is possible.
Here's an article I wrote a while back explaining how you can do this for a Windows Service, for example.
The basic algorithm holds:
Enable Typemock Isolator (either by setting up the profiling flags on the process or by running things through TMockRunner.exe).
Set up your expectations (this is where you mock DateTime.Now or whatever else you want mocked out).
Let the application finish starting up and run as normal.
The first step is easy enough - it's just as if you were running it in a unit test environment. It's the second step that can be difficult. It means you need some sort of "wrapper" that runs before the rest of your application is allowed to start and that sets up your expectations. This normally happens in a test setup method or in the "arrange" part of your "arrange-act-assert" unit test. You'll see an example of this in my article.
Again, I'll warn you about performance. It's probably fine to do something like this in a test environment like you mention you're doing, but I don't think I'd do it in production.
A specific note about your program and the error you're seeing:
I tried to set up a reproduction of it and while I was able to mock other things, I wasn't able to get DateTime.Now mocking to work either. I got the same exception you're seeing.
For example, try this in your wrapper:
class Program
{
    static void Main(string[] args)
    {
        ExpectationsSetup();
        ConsoleApplication2.Program.Main(args);
    }

    public static void ExpectationsSetup()
    {
        // Mock something OTHER than DateTime.Now - this mocks
        // the call to your wrapped application.
        Isolate
            .WhenCalled(() => ConsoleApplication2.Program.Main(null))
            .DoInstead(ctx => Console.WriteLine("faked!"));
    }
}
Running that wrapper through TMockRunner.exe, you'll actually get the mocking to work. However, switching it back to DateTime.Now, you'll get the exception again.
I did verify that mocking DateTime.Now in a unit test environment does work. So there must be something special about the unit test environment, though I don't know what.
Figuring out that difference is a little more in-depth than something that can be handled here. You should contact Typemock support with this one and explain the situation. They are pretty good about helping out. Be sure to send them a reproduction (e.g., a simple console app showing the issue) and you'll get a faster/better response.

How can I test command-line applications using maven?

I work on a complex, multi-module Maven project. One of the modules is an executable jar that implements a command-line application. I need to integration-test this application: I need to run it several times, with several different command lines, and validate the exit status and stdout/stderr. However, I can't find a Maven plugin that claims to support this, and I also can't track down a JUnit library that supports testing command-line applications.
Before you say 'don't test the main method - instead do bla', in this case I really do mean to test the main method, not some subsidiary functionality. The whole point is to run the application as a user would, in its own VM and environment, and validate that it is behaving itself - parsing command-line options correctly, exiting with the right status and hot-loading the right classes from the right plugin jars.
My current hack is to use apache-exec from within a junit test method. It appears to be working, but is quite fiddly to set up.
public void testCommandlineApp()
    throws IOException
{
    CommandLine cl = new CommandLine(resolveScriptNameForOS("./runme")); // add .sh/.bat
    cl.addArgument("inputFile.xml");
    DefaultExecutor exec = new DefaultExecutor(); // org.apache.commons.exec
    exec.setWorkingDirectory(workingDir);
    exec.setExitValues(new int[] { 0, 1, 2 });
    int exitCode = exec.execute(cl);
    assertEquals("Exit code should be zero", 0, exitCode);
}
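For validating stdout/stderr with the same Commons Exec approach, something like the following could work (PumpStreamHandler captures both streams; workingDir and resolveScriptNameForOS are the same helpers as above, and the expected output text is just a placeholder):

public void testCommandlineAppOutput()
    throws IOException
{
    CommandLine cl = new CommandLine(resolveScriptNameForOS("./runme")); // add .sh/.bat
    cl.addArgument("inputFile.xml");

    // capture stdout and stderr instead of letting them go to the console
    ByteArrayOutputStream stdout = new ByteArrayOutputStream();
    ByteArrayOutputStream stderr = new ByteArrayOutputStream();

    DefaultExecutor exec = new DefaultExecutor();
    exec.setWorkingDirectory(workingDir);
    exec.setStreamHandler(new PumpStreamHandler(stdout, stderr)); // org.apache.commons.exec.PumpStreamHandler
    int exitCode = exec.execute(cl);

    assertEquals("Exit code should be zero", 0, exitCode);
    assertTrue("Expected completion message on stdout",
            stdout.toString().contains("Done")); // placeholder expectation
}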
Why not simply use a shell script, using the exec-maven-plugin to build your classpath?
STDOUT=$(mvn exec:java -Dexec.mainClass=yourMainClass -Dexec.args="--arg1 --arg2=value2")
RETURN_CODE=$?
# validate STDOUT
# validate RETURN_CODE
You can even use something like shunit2 if you prefer a more structured approach.

NAnt, MbUnit, CruiseControl, Selenium - passing settings to the test assembly

I am putting together some ideas for our automated testing platform and have been looking at Selenium for the test runner.
I am wrapping the recorded Selenium C# scripts in an MbUnit test, which is being triggered via the MbUnit NAnt task. The Selenium test client is created as follows:
selenium = new DefaultSelenium("host", 4444, "*iexplore", "http://[url]/");
How can I pass the host, port and url settings into the test so their values can be controlled via the NAnt task?
For example, I may have multiple Selenium RC servers listening and I want to use the same test code passing in each server address instead of embedding the settings within the tests themselves.
I have an approach mocked up using a custom NAnt task I have written, but it is not the most elegant solution at present, and I wondered if there was an easier way to accomplish what I want to do.
Many thanks if anyone can help.
Thanks for the responses so far.
Environment variables could work; however, we could be running parallel tests via a single test assembly, so I wouldn't want settings to be overwritten during execution, which could break another test. Interesting line of thought though; thanks, I reckon I could use that in other areas.
My current solution involves a custom NAnt task built on top of the MbUnit task, which allows me to specify the additional host, port and url settings as attributes. These are then saved as a config file within the build directory and read in by the test assemblies. This feels a bit "clunky" to me, as my tests need to inherit from a specific class. Not too bad, but I'd like to have fewer dependencies and concentrate on the testing.
Maybe I am worrying too much!!
I have a base class for all test fixtures which has the following setup code:
[FixtureSetUp]
public virtual void TestFixtureSetup ()
{
    BrowserType = (BrowserType) Enum.Parse (typeof (BrowserType),
        System.Configuration.ConfigurationManager.AppSettings["BrowserType"],
        true);
    testMachine = System.Configuration.ConfigurationManager.AppSettings["TestMachine"];
    seleniumPort = int.Parse (System.Configuration.ConfigurationManager.AppSettings["SeleniumPort"],
        System.Globalization.CultureInfo.InvariantCulture);
    seleniumSpeed = System.Configuration.ConfigurationManager.AppSettings["SeleniumSpeed"];
    browserUrl = System.Configuration.ConfigurationManager.AppSettings["BrowserUrl"];
    targetUrl = new Uri (System.Configuration.ConfigurationManager.AppSettings["TargetUrl"]);

    string browserExe;
    switch (BrowserType)
    {
        case BrowserType.InternetExplorer:
            browserExe = "*iexplore";
            break;
        case BrowserType.Firefox:
            browserExe = "*firefox";
            break;
        default:
            throw new NotSupportedException ();
    }

    selenium = new DefaultSelenium (testMachine, seleniumPort, browserExe, browserUrl);
    selenium.Start ();
    System.Console.WriteLine ("Started Selenium session (browser type={0})",
        BrowserType);

    // sets the speed of execution of GUI commands
    if (false == String.IsNullOrEmpty (seleniumSpeed))
        selenium.SetSpeed (seleniumSpeed);
}
I then simply supply the test runner with a config file.
For MSBuild I use environment variables: I create those in my CC.NET config, and then they are available in the script. I think this would work for you too.
Any time I need to integrate with an external entity using NAnt, I either end up using the exec task or writing a custom task. Given the information you posted, it would seem that writing your own would indeed be a good solution. However, you state you're not happy with it. Can you elaborate a bit on why you don't think your current solution is an elegant one?
Update
Not knowing the internal details, it seems like you've solved it pretty well with a custom task. From what I've heard, that's how I would have done it.
Maybe a new solution will show itself in time, but for now be light on yourself!