Repast: query() method runs much slower than manual iteration - repast-simphony

I recently ran into a big problem with Repast's query() method: it is significantly slower than using a simple manual iteration to get the specific agent set. Take, for example, a package object querying for the hub with a matching "hub_code"; I tested both the query and the manual iteration approach:
public void arrival_QueryApproach() {
    try {
        if (this.getArr_time() == this.getGs().getTick()) {
            Query<Object> hub_query = new PropertyEquals<Object>(context, "hub_code", this.getSrc());
            for (Object o : hub_query.query()) {
                if (o instanceof Hub) {
                    ((Hub) o).getDepature_queue().add(this);
                    this.setStatus(3);
                    this.setCurrent_hub(this.getSrc());
                    break;
                }
            }
        }
    }
    catch (Exception e) {
        System.out.println("No hub identified: " + this.getSrc());
    }
}
public void arrival_ManualApproach() {
    try {
        if (this.getArr_time() == this.getGs().getTick()) {
            for (Hub o : gs.getHub_list()) {
                if (o.getHub_code().equals(this.getSrc())) {
                    o.getDepature_queue().add(this);
                    this.setStatus(3);
                    this.setCurrent_hub(this.getSrc());
                    break;
                }
            }
        }
    }
    catch (Exception e) {
        System.out.println("No hub identified: " + this.getSrc());
    }
}
The execution speed is dramatically different. There are 50,000 package objects and 350 hub objects in my model. Running 1,600 ticks takes on average 1 minute and 40 seconds with the built-in query, but only 5 seconds with the manual iteration approach. What causes this dramatic difference, and why does query() run so slowly? Logically it should run much faster.
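For reference, the manual lookup itself could even be made constant-time by indexing hubs by their code once at setup; the following is only a sketch (the hubIndex map and where it gets built are my assumptions, not code from the model above):
// Sketch only: build a one-time index of hubs keyed by hub_code (assumed to run once at setup).
java.util.Map<String, Hub> hubIndex = new java.util.HashMap<String, Hub>();
for (Hub h : gs.getHub_list()) {
    hubIndex.put(h.getHub_code(), h);
}
// Arrival then becomes a constant-time lookup instead of a scan over all hubs:
Hub hub = hubIndex.get(this.getSrc());
if (hub != null) {
    hub.getDepature_queue().add(this);
    this.setStatus(3);
    this.setCurrent_hub(this.getSrc());
}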
Another issue associated with the query methods is that PropertyGreaterThanEquals and PropertyLessThanEquals run much slower than PropertyEquals. Below is another simple example that queries for a suitable dock at which a truck can unload goods.
public void match_dock() {
    // Query<Object> pre_fit = new PropertyGreaterThanEquals(context, "unload_speed", 240);
    // Query<Object> pre_fit = new PropertyLessThanEquals(context, "unload_speed", 240);
    Query<Object> pre_fit = new PropertyEquals(context, "unload_speed", 240);
    for (Object o : pre_fit.query()) {
        if (o instanceof Dock) {
            System.out.println("this dock's id is: " + ((Dock) o).getId());
        }
    }
}
There are only 3 dock and 17 truck objects in the model. Running a total of 1,920 ticks takes less than one second with PropertyEquals, but more than one minute with PropertyGreaterThanEquals or PropertyLessThanEquals. Does this mean I have to loop through all the objects (docks) again and do the greater-than comparison manually? This appears to be another issue that strongly affects model execution speed.
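The manual workaround I have in mind would look something like this (just a sketch; getDock_list() and getUnload_speed() are assumed getters, named by analogy with the hub code above):
// Sketch of the manual ">=" filter; getDock_list() and getUnload_speed() are assumed getters.
for (Dock d : gs.getDock_list()) {
    if (d.getUnload_speed() >= 240) {
        System.out.println("this dock's id is: " + d.getId());
    }
}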
I am using java version "11.0.1" 2018-10-16 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.1+13-LTS)
My Eclipse compiler level is 10, and the installed (default) JRE is JDK 11.
Thanks for any helpful advice.

Related

JanusGraph using a remote connection to update and delete: reports DefaultGraphTraversal.none()

I am using the JanusGraph git code example: example-remotegraph.
It works well when I create elements and run some queries,
but it reports an exception on update and delete...
java.util.concurrent.CompletionException: org.apache.tinkerpop.gremlin.driver.exception.ResponseException: Could not locate method: DefaultGraphTraversal.none()
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at org.apache.tinkerpop.gremlin.driver.ResultSet.one(ResultSet.java:107)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.hasNext(ResultSet.java:159)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.next(ResultSet.java:166)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.next(ResultSet.java:153)
at org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteTraversal$TraverserIterator.next(DriverRemoteTraversal.java:142)
at org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteTraversal$TraverserIterator.next(DriverRemoteTraversal.java:127)
at org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteTraversal.nextTraverser(DriverRemoteTraversal.java:108)
at org.apache.tinkerpop.gremlin.process.remote.traversal.step.map.RemoteStep.processNextStart(RemoteStep.java:80)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:128)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:38)
at org.apache.tinkerpop.gremlin.process.traversal.Traversal.iterate(Traversal.java:203)
at org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal.iterate(GraphTraversal.java:2694)
at org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal$Admin.iterate(GraphTraversal.java:178)
at org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal.iterate(DefaultGraphTraversal.java:48)
at org.janusgraph.example.GraphApp.deleteElements(GraphApp.java:301)
at org.janusgraph.example.GraphApp.runApp(GraphApp.java:350)
at org.janusgraph.example.RemoteGraphApp.main(RemoteGraphApp.java:227)
Here is the code:
public void deleteElements() {
    try {
        if (g == null) {
            return;
        }
        LOGGER.info("deleting elements");
        // note that this will succeed whether or not pluto exists
        g.V().has("name", "pluto").drop().iterate();
        if (supportsTransactions) {
            g.tx().commit();
        }
    } catch (Exception e) {
        LOGGER.error(e.getMessage(), e);
        if (supportsTransactions) {
            g.tx().rollback();
        }
    }
}
Hmm... I think I have fixed this problem.
The only likely reason is that the client library version doesn't match the Gremlin Server's version;
I switched the gremlin-driver library to version 3.2.9, and it works well.
You need to use the same TinkerPop version JanusGraph is using, as this is an incompatible change that was introduced in TinkerPop.

I need the answer of one JADE agent to depend on information from others and don't know how to do it

I'm new to JADE and I have 5 agents in Eclipse that use a formula for finding an average; the question is how to send information from an agent to this formula for calculation.
I'll be glad if someone can help me with this.
For example, here is one of my agents. There's no formula in it, because I don't know how to represent it. Its mathematical expression is: n += alfa(y(1,2) - y(1,1))
public class FirstAgent extends Agent {

    private Logger myLogger = Logger.getMyLogger(getClass().getName());

    public class WaitInfoAndReplyBehaviour extends CyclicBehaviour {

        public WaitInfoAndReplyBehaviour(Agent a) {
            super(a);
        }

        public void action() {
            ACLMessage msg = myAgent.receive();
            if (msg != null) {
                ACLMessage reply = msg.createReply();
                if (msg.getPerformative() == ACLMessage.REQUEST) {
                    String content = msg.getContent();
                    if ((content != null) && (content.indexOf("What is your number?") != -1)) {
                        myLogger.log(Logger.INFO, "Agent " + getLocalName() + " - Received Info Request from " + msg.getSender().getLocalName());
                        reply.setPerformative(ACLMessage.INFORM);
                        try {
                            reply.setContentObject(7);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                    else {
                        myLogger.log(Logger.INFO, "Agent " + getLocalName() + " - Unexpected request [" + content + "] received from " + msg.getSender().getLocalName());
                        reply.setPerformative(ACLMessage.REFUSE);
                        reply.setContent("( UnexpectedContent (" + content + "))");
                    }
                }
                else {
                    myLogger.log(Logger.INFO, "Agent " + getLocalName() + " - Unexpected message [" + ACLMessage.getPerformative(msg.getPerformative()) + "] received from " + msg.getSender().getLocalName());
                    reply.setPerformative(ACLMessage.NOT_UNDERSTOOD);
                    reply.setContent("( (Unexpected-act " + ACLMessage.getPerformative(msg.getPerformative()) + ") )");
                }
                send(reply);
            }
            else {
                block();
            }
        }
    }
}
So from what I can make out, you want to (1) send a formula/task to multiple platforms, (2) have it performed locally, and (3) have the results communicated back.
I think there are at least two ways of doing this:
The first is sending an object in an ACLMessage using Java serialisation. This is a more OOP approach and not very "agenty".
The second is cloning, or creating, a local task agent.
Using Java serialisation (Solution 1)
Create an object for the calculation:
class CalculationTask implements java.io.Serializable {
    int n;
    void calculate() {
        n += alfa(y(1, 2) - y(1, 1)); // your formula; alfa and y(...) still need to be defined
    }
}
Send the calculation object via an ACLMessage from the sender agent:
request.setContentObject(new CalculationTask());
Receive the calculation object in the receiver agent and perform the calculation on it, then reply, setting the completed task in the response:
CalculationTask myTask = (CalculationTask) request.getContentObject(); // may throw UnreadableException
myTask.calculate();
ACLMessage response = request.createReply();
response.setContentObject(myTask);
response.setPerformative(ACLMessage.INFORM);
send(response);
The sender agent then receives the completed job:
ACLMessage inform = getMessage();
CalculationTask completeTask = (CalculationTask) inform.getContentObject();
completeTask.process(); // e.g. read back completeTask.n
Creating local task agents (Solution 2)
The agent-oriented way of doing it would be to launch a task agent on each platform, and have each task agent complete the task and respond appropriately.
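A minimal sketch of launching such a task agent from inside another agent, using JADE's ContainerController (the agent nickname and class name here are placeholders; the task agent itself would then do the calculation and reply with an INFORM as in Solution 1):
// Sketch: spawn a local task agent in this agent's container (names are placeholders).
try {
    jade.wrapper.AgentController taskAgent = getContainerController().createNewAgent(
            "calcTask1",              // local nickname (placeholder)
            "myapp.CalcTaskAgent",    // fully qualified agent class (placeholder)
            new Object[] { /* task parameters */ });
    taskAgent.start();
} catch (jade.wrapper.StaleProxyException e) {
    e.printStackTrace();
}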

Performance in Entity Framework Foreach Add elements

I have a program that adds points of interest and their guides from a web service to a database; it looks like this:
// For all the cities (like 1M)
foreach (City city in ListOfCities) {
    try {
        AddCity(city);
    } catch (Exception ex) {
        _logger.Error(ex.Message);
        continue;
    }
}
//save points of interest of the city to the database
public void AddCity(City city) {
    using (WEntities context = new WEntities()) {
        //for all the points of interest
        foreach (PointOfInterest point in city) {
            try {
                //search all the guides and add them to the point of interest
                List<Guide> listGuides = _webservice.GetAllGuidesForPoint(point);
                foreach (Guide guide in listGuides) {
                    point.Guides.Add(guide);
                }
                // add the point to the context and save it to the database
                context.PointOfInterest.AddObject(point);
                context.ObjectStateManager.ChangeObjectState(point, System.Data.EntityState.Added);
                context.SaveChanges();
            } catch (Exception ex) {
                _logger.Error(ex.Message);
                continue;
            }
        }
    }
}
The problem is that, for a large number of cities, the speed of each iteration drops significantly: at first each loop may take 1 second, and towards the end it can take more than 30 minutes.
What am I doing wrong? Can I do something to make all the iterations take the same time (the short one, if I can choose)?
P.S.: what's more, the CPU and RAM usage grows over time.

How to run a simulation case using CaseRunner function?

I'm currently working on a Petrel plug-in in which I need to run a simulation case (inside a for loop). I create my case runner, export it and then run it... but after the simulation finishes and the console closes, I check the CaseRunner.IsRunning property and it still shows true! This causes the results not to be loaded into the Petrel system.
I tried to load the results manually after my case finished running (using the case runner and also a batch file in my code), and I can't see any results in the programming environment.
Does anybody have a solution for this situation?
This is the related part of my code:
Case theCase = arguments.TheCase;
Case Test2 = simroots.CreateCase(theCase, "FinalCase");
CaseRunner cRunners = SimulationSystem.GetCaseRunner(Test2);
cRunners.Export();
cRunners.Run();
bool b = cRunners.IsRunning;
Actually, I did check when the process finishes; after cRunners.Run() the code waits for the process to exit using:
System.Diagnostics.Process[] parray = System.Diagnostics.Process.GetProcesses();
foreach (System.Diagnostics.Process pr in parray)
{
    if (pr.ProcessName == "cmd")
    {
        pr.WaitForExit(); //just wait
    }
}
and when the console closes itself, I check the cRunners.IsRunning property.
However, I'm not that experienced... can you show me an example of using CaseRunnerMonitor? Both the definition of the derived class and its implementation.
All I need is to run a simulation case n times via a for loop and, after each run, access the summary results it provides.
I tried several different scenarios to get my desired results; I describe some of them here.
First I create my CaseRunnerMonitor class:
public class MyMonitor : CaseRunnerMonitor
{
    //…
    public override void RunCompleted()
    {
        // define arguments
        foreach (Slb.Ocean.Petrel.DomainObject.Simulation.SummaryResult sr in simroot.SummaryResults)
        {
            IEnumerable ….
            List ….
            // some codes to change the input arguments according to the current step simulation summary results
        }
        PetrelLogger.InfoOutputWindow("MyMonitor is completed!");
    }
    //…
}
And then use it:
private void button1_Click(object sender, EventArgs e)
{
    // Some codes that define some arguments…
    for (int j = 0; j < 8; j++)
    {
        // some changes in the arguments
        Case MyTest;
        MyMonitor monit4 = new MyMonitor();
        SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
        using (ITransaction trans = DataManager.NewTransaction())
        {
            trans.Lock(simroot);
            MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
            trans.Commit();
        }
        CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
        cRun.Export();
        cRun.Run(monit4);
        //Wait(); //waits for current process to close
    }
}
But the thing is that the MyTest case results are empty after my run completes; in this scenario all the results are loaded into Petrel only when the 8th (last) simulation finishes. And if I don't activate the Wait() function, all 8 runs are started almost simultaneously…
So I changed my scenario: the callback after each run now reads the simulation results, changes something, and starts the next run.
I create my CaseRunnerMonitor class:
public class MyMonitor2 : CaseRunnerMonitor
{
    //…
    public override void RunCompleted()
    {
        // define arguments
        index++;
        if (index <= 8)
        {
            foreach (Slb.Ocean.Petrel.DomainObject.Simulation.SummaryResult sr in simroot.SummaryResults)
            {
                IEnumerable ….
                List ….
                // some codes to change the input arguments according to the current step simulation summary results
            }
            Case MyTest;
            MyMonitor monit4 = new MyMonitor();
            SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
            using (ITransaction trans = DataManager.NewTransaction())
            {
                trans.Lock(simroot);
                MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
                trans.Commit();
            }
            CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
            cRun.Export();
            cRun.Run(monit4);
        }
        PetrelLogger.InfoOutputWindow("MyMonitor2 is completed!");
    }
    //…
}
And then use it:
private void button1_Click(object sender, EventArgs e)
{
    Index = 0;
    // Some codes that define some arguments…
    // some changes in the arguments
    Case MyTest;
    MyMonitor monit5 = new MyMonitor();
    SimulationRoot simroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
    using (ITransaction trans = DataManager.NewTransaction())
    {
        trans.Lock(simroot);
        MyTest = simroot.CreateCase(OriginalCase, MycaseNameFunc());
        trans.Commit();
    }
    CaseRunner cRun = SimulationSystem.GetCaseRunner(MyTest);
    cRun.Export();
    cRun.Run(monit5);
}
In this situation no Wait() function is required. But the problem is that I only get access to the MyTest case results one level behind the current run, i.e. I can view the step 5 results via MyTest.Results once run 6 has completed, while the step 6 results are empty even though its run has completed.
I check the CaseRunner.IsRunning property and it shows true
This is because CaseRunner.Run() is non-blocking; that is, it starts another thread to launch the run. Control flow then passes immediately to your cRunners.IsRunning check, which is true because the simulation is still in progress.
cRunners.Run(); //non-blocking
bool b = cRunners.IsRunning;
You should look at CaseRunnerMonitor if you want a call-back when the simulation is complete.
Edit:
can you show me an example of using CaseRunnerMonitor? both definition of the derived class and its implementation.
Create your monitor class:
public class CustomCaseRunnerMonitor : CaseRunnerMonitor
{
    //...
    public override void RunCompleted()
    {
        //This is probably the callback you want
    }
}
Use it:
Case myCase = WellKnownSimulators.ECLIPSE100.CreateSimulationCase(...);
CaseRunner runner = SimulationSystem.GetCaseRunner(myCase);
var myMonitor = new CustomCaseRunnerMonitor(...);
runner.Run(myMonitor);
//Your callbacks defined in your CustomCaseRunnerMonitor will now be called
See also "Running and monitoring a Simulation" in SimulationSystem API documentation.
Ah, OK. I didn't realise you were trying to load results with the CaseRunnerMonitor.
I'm afraid the short answer is "No, you can't know when Petrel has loaded results".
The long answer is that Petrel will automatically load results if the option is set in the Case arguments (Define Simulation Case -> Advanced -> Automatically load results).
In API:
EclipseFormatSimulator.Arguments args = EclipseFormatSimulator.GetEclipseFormatSimulatorArguments(myCase);
EclipseFormatSimulator.Arguments.RuntimeArguments runtimeArgs = args.Runtime;
runtimeArgs.AutoLoadResults = true;
runtimeArgs.AutoLoadResultsInterval = 120; //How frequently in seconds Petrel polls sim dir.
You will have to poll SimulationRoot.SummaryResults (using the same API you are already using) after the case has finished.
You should use the CaseRunnerMonitor we discussed to determine when to start doing this, rather than the System.Diagnostics.Process.GetProcesses() code you currently have.

usbManager openDevice call fails after several hundred successful attempts

I'm using the UsbManager class to manage the USB host on my Android 4.1.1 machine.
Everything seems to work quite well for a few hundred transactions, until (after ~900 transactions) opening the device fails, returning null without an exception.
Using a profiler, it doesn't look like a memory leak.
This is how I initialize the communication from my main activity (doing this once):
public class MainTestActivity extends Activity {

    private BroadcastReceiver m_UsbReceiver = null;
    private PendingIntent mPermissionIntent = null;
    UsbManager m_manager = null;
    DeviceFactory m_factory = null;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        mPermissionIntent = PendingIntent.getBroadcast(this, 0, new Intent(ACTION_USB_PERMISSION), 0);
        IntentFilter filter = new IntentFilter(ACTION_USB_PERMISSION);
        filter.addAction(UsbManager.ACTION_USB_DEVICE_DETACHED);
        m_UsbReceiver = new BroadcastReceiver() {
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                if (UsbManager.ACTION_USB_DEVICE_DETACHED.equals(action)) {
                    UsbDevice device = (UsbDevice) intent.getParcelableExtra(UsbManager.EXTRA_DEVICE);
                    if (device != null) {
                        // call your method that cleans up and closes communication with the device
                        Log.v("BroadcastReceiver", "Device Detached");
                    }
                }
            }
        };
        registerReceiver(m_UsbReceiver, filter);
        m_manager = (UsbManager) getSystemService(Context.USB_SERVICE);
        m_factory = new DeviceFactory(this, mPermissionIntent);
    }
and this is the code of my test:
ArrayList<DeviceInterface> devList = m_factory.getDevicesList();
if (devList.size() > 0) {
    DeviceInterface devIf = devList.get(0);
    UsbDeviceConnection connection;
    try
    {
        connection = m_manager.openDevice(m_device);
    }
    catch (Exception e)
    {
        return null;
    }
The test works OK for 900 to 1000 calls; after that, the following call returns null (without an exception):
UsbDeviceConnection connection;
try
{
    connection = m_manager.openDevice(m_device);
}
You might just be running out of file handles; a typical limit is 1024 open files per process.
Try calling close() on the UsbDeviceConnection; see the documentation.
The UsbDeviceConnection object has allocated system resources - e.g. a file descriptor - which will be released only when it is garbage collected. But in this case you run out of resources before you run out of memory, which means the garbage collector has not been invoked yet.
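A sketch of what that could look like around your openDevice() call (illustrative only; the body of the try block stands in for whatever transfers you perform):
UsbDeviceConnection connection = m_manager.openDevice(m_device);
if (connection != null) {
    try {
        // ... perform your transfers here ...
    } finally {
        connection.close(); // releases the underlying file descriptor
    }
}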
I had openDevice fail on repeated runs on Android 4.0 even though I open the device only once in my code. I had some exit paths that did not close the resources, and I had assumed the OS would free them on process termination.
However, there seems to be some issue with the release of resources on process termination - I used to have issues even when I terminated and launched a fresh process.
I finally ensured the release of resources on exit, and that made the problem go away.