Is it possible for a MoveIteratorFactory to generate moves based on a current working solution?

I would like to create a kind of "smart" MoveIteratorFactory for my VRP (time-windowed) app, which is based on the example. This move factory should return an Iterator that generates a CompositeMove based on the current solution state each time.
Is it possible for the MoveIteratorFactory to create an Iterator that generates moves based on the current solution state?
AFAIK, MoveIteratorFactory's methods accept a ScoreDirector object, and it seems that the returned Iterator should generate moves using instances retrieved from the ScoreDirector's working solution. But are these instances updated while the solver is running? Do they have all planning variables set according to the current working solution state when the hasNext and next methods are called? Or should the iterator keep a field with the ScoreDirector instance and generate moves using instances retrieved from it each time?

Yes, just make sure that the cacheType isn't PHASE or higher (by default it's fine, because the default is JUST_IN_TIME). See docs chapter 7.
At the beginning of every step it will call createRandomMoveIterator(), which can take into account the state of the current workingSolution.
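For illustration, a minimal sketch of such a factory (the MoveIteratorFactory methods follow the OptaPlanner 6.x API; VehicleRoutingSolution, getCustomerList() and SmartMoveIterator stand in for your own domain classes and iterator):

public class SmartMoveIteratorFactory implements MoveIteratorFactory {

    public long getSize(ScoreDirector scoreDirector) {
        // Only used as a rough size hint for random selection.
        VehicleRoutingSolution solution = (VehicleRoutingSolution) scoreDirector.getWorkingSolution();
        return solution.getCustomerList().size();
    }

    public Iterator<Move> createOriginalMoveIterator(ScoreDirector scoreDirector) {
        throw new UnsupportedOperationException("This factory only supports random selection.");
    }

    public Iterator<Move> createRandomMoveIterator(ScoreDirector scoreDirector, Random workingRandom) {
        // With the default JUST_IN_TIME cacheType this is called at the start of
        // every step, so the working solution already reflects every move that
        // was accepted in earlier steps.
        VehicleRoutingSolution solution = (VehicleRoutingSolution) scoreDirector.getWorkingSolution();
        // The iterator can read the up-to-date planning variables from 'solution'
        // in each next() call and assemble a CompositeMove from them.
        return new SmartMoveIterator(solution, workingRandom);
    }
}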

Related

AnyLogic Nested Agent RandomNumberGenerator

I have an agent that is nested in another agent. This nested agent has a function that calls the AnyLogic probability distribution functions (pdfs), such as gamma(), lognormal(), etc. However, I keep getting a NullPointerException if I call these pdfs inside the nested agent. I realise this is because the nested agent cannot access the default random number generator. Is there a way I can access the defaultRandomNumberGenerator within the nested agent as well, or is the only solution to create a new generator for each nested agent?
The error is because your agent is outside the model hierarchy of agents.
This is not good practice; there should very rarely be a need to have 'floating' agents outside the model hierarchy; they can always be inside an agent population somewhere.
In the rare cases where there are strong design reasons to do so (or if you use plain Java classes, whose instances are by definition not Agents and are therefore outside the agent hierarchy), just give them a parameter (a field, in the case of a Java class) that points to some agent that is in the model hierarchy (typically their 'generator'). You can then call all 'required-to-be-in-model-hierarchy' functions via that parameter; that is, you delegate all such calls to an agent instance which can call them.
e.g., the nested agent type (let's say Thing) has a parameter agentRef of type Agent, set by whoever creates it; for example
Thing t = new Thing(this);
Then, within Thing, you use code such as agentRef.normal(1,10).
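The same delegation pattern for a plain Java class could look like this (a sketch; sampleSomething() is a placeholder method):

public class Thing {
    // Reference to an agent inside the model hierarchy; every
    // 'required-to-be-in-model-hierarchy' call is delegated through it.
    private final Agent agentRef;

    public Thing(Agent agentRef) {
        this.agentRef = agentRef;
    }

    public double sampleSomething() {
        return agentRef.normal(1, 10); // the delegated distribution call
    }
}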
Only agents that are connected to the engine in some way have access to the random number generator. If your experiment is set to run Main (the usual setup), then all agents that want to use the random number generator must be connected to Main in some way.
If an agent is not connected to the hierarchy, calling a distribution function won't work and you get an NPE (NullPointerException); if it is connected, it will work.
The best option is to just create your own random number generator:
lognormal(0.1, 0.1, 5, new Random(0));
(Just store the random number generator somewhere so that you can use it again and again; otherwise you will get the same number every time, since each call would use the same freshly seeded Random object.)
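For example (a sketch; the variable name myRandom is just a placeholder):

// Declare the generator once, e.g. as a variable of the agent:
Random myRandom = new Random(0); // fixed seed => reproducible runs

// Reuse the same instance for every draw:
double value = lognormal(0.1, 0.1, 5, myRandom);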
This design is much better; see the example in the related question "Why do two flowcharts set up exactly the same end with different results every time the simulation is run even when I use a fixed seed?"

Is it good practice to attach an event-related parameter to an object's model as a variable?

This is about an API handling validation when saving an object: the front-end client sends a request to a specific endpoint, and on the back-end the API creates a new object if the right conditions are met.
Right now our regular method is that each model has a ruleset for its fields, and the validation is triggered by the save function, so technically the validation runs right before the object is saved to the database.
Then, during today's code review, I came across a solution I wasn't sure was good practice or not. The front-end must send a specific parameter to the API every time, because other APIs use our API as well and we need to know whether the request was sent as an API request or a browser request. If this parameter is present, we want to execute an extra validation function on a specific field.
(1) If I had to implement it, I would check the incoming parameter at the service-handler or controller level, invoke the validation right away if the parameter is present, and throw an error if it fails.
(2) The implementation I saw, however, adds an extra variable to the model, sets that variable when the parameter arrives, and then validates only when the save function is invoked on the object (which first validates the ruleset defined on the object's fields, then saves the object to the database).
My problem with (2) is that the object has now grown by an extra variable that is only related to a specific event, so I would say it's better to implement (1). But (2) also has an advantage: when you create the object on a different endpoint by parsing the parameters, the validation will work there as well, even if the developer forgets to update the code there.
Now this may seem like a silly question (why would I care about just one extra variable?), but it sets a precedent. If I say this is OK, then from now on the models will keep growing with extra variables that are only related to specific events, which I think should be handled at the controller/service-handler level. On the other hand, the code is more reliable if it isn't the developer who has to remember all 6712537 functionalities and keep them in mind when changing something. Say all the devs get a heart attack tomorrow from the excitement of an amazing discovery, and a new developer who doesn't know about these small details has to change code related to this functionality; the new feature should still be supported by this old one.
So my question is: is there any good practice for this, and what do you think would be the best approach?
So I spent some time thinking about the solution, and I think the best approach is to have an array of acceptable trigger variables in the model class. When the parameters are passed to the model at the controller level, the loader function can be modified to take the trigger variables from the parameters and store them in the model's associative-array variable for trigger variables.
By default this array is empty, and no matter how many new trigger variables are needed, it only ever contains the ones actually in use.
Of course, the loader function then needs to be modified so it filters out the non-trigger variables, just as it does for the regular fields, and there can even be a validation ruleset on the trigger variables if necessary.
This solves both the problem of the object growing with unnecessary variables and the centralisation of the validation, because the validation can now always be done in the model instead of the controller.
And since the loader function stores the trigger variables in the model's trigger-variables array, the developer never has to remember that this functionality exists. That is good, because when he later creates a new related function or endpoint that handles object creation, he won't forget to validate it against the old functionality; the loader function he modified in the past will handle it for him.
It should be noted, though, that since the loader function doesn't differentiate between parameters other than by checking their names with the filter functions, these parameter names must be distinct from one another, otherwise buggy behaviour can be created accidentally: if you forget that a model attribute with the same name already exists, you can accidentally trigger an event that was programmed to fire when a trigger variable with that name is present. This can be solved by prefixing the trigger variables, for example.
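A hedged sketch of that idea in Java (the names Model, load, and trigger_apiValidation are assumptions, since the original code isn't shown):

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class Model {
    // Whitelist of trigger variables this model accepts.
    private static final Set<String> ACCEPTED_TRIGGERS = Set.of("trigger_apiValidation");
    // Empty by default; only ever holds the triggers that actually arrived.
    private final Map<String, String> triggers = new HashMap<>();

    // The loader splits incoming parameters into regular fields and trigger variables.
    public void load(Map<String, String> params) {
        for (Map.Entry<String, String> entry : params.entrySet()) {
            if (ACCEPTED_TRIGGERS.contains(entry.getKey())) {
                triggers.put(entry.getKey(), entry.getValue());
            } else {
                loadField(entry.getKey(), entry.getValue());
            }
        }
    }

    // save() first runs the regular field ruleset, then any event-specific rules.
    public void save() {
        validateFields();
        if (triggers.containsKey("trigger_apiValidation")) {
            validateSpecificField();
        }
        persist();
    }

    private void loadField(String name, String value) { /* existing field filtering */ }
    private void validateFields() { /* existing ruleset validation */ }
    private void validateSpecificField() { /* the extra, event-specific validation */ }
    private void persist() { /* write to the database */ }
}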

Project Job Scheduling: Where are resetPanel and createChart called?

I am working on a class that takes data from a CSV, works with a Scheduler object, and associates the data with the scheduler's attributes (Project, Job, Allocation, Resource, etc.). I was thinking that after I got everything down (ProjectList, JobList, AllocationList, ExecutionModeList, Resources), I could just pass the scheduler object into createChart.
However, I am still unsure where resetPanel and createChart are called (I understand that the ProjectJobPanel has these functions).
So my two questions are:
Where are these functions called? (I couldn't find this information in the documentation.)
If I want to display my data, do I need to do anything other than pass the scheduler object into the createChart function?
resetPanel() is called by the example Swing dialog when a new dataset is loaded. If updatePanel() isn't overridden, it is also called when a new best solution is found (so on every best-solution-changed event).
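In code, that wiring looks roughly like this (a sketch; the method names follow the answer above, but the exact signatures in optaplanner-examples may differ between versions):

public class ProjectJobSchedulingPanel extends SolutionPanel {

    public void resetPanel(Solution solution) {
        // Called by the examples' Swing frame when a dataset is loaded and,
        // because updatePanel() is not overridden, on every best-solution change.
        removeAll();
        add(createChart((Schedule) solution)); // rebuild the chart from the new solution
        validate();
    }
}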

GMF Model & Table View

I have been given this task and would really appreciate some help. I built a graphical model through GMF, in which one of the nodes has been selected.
The task here is to create an Eclipse view with a table that is automatically updated upon the selection of a so-called "City Node". As you may guess from the model, the table should contain the path costs to all of the cities. I will later extend my solution with a modified Dijkstra algorithm, but right now I am stuck on the creation of the table view.
I tried to build it using a TableViewer, but it seems fairly complex, since we need to set the input of the table on the ContentProvider. The twist is that we need a SelectionListener to obtain the city coordinates (path costs are to be calculated as the distance between two connected cities divided by the max speed indicated on the connecting streets in the graph) as well as the currently selected city, and the path costs need to be automatically calculated and displayed in the table upon each click event. This means we somehow need to update the input handed to the ContentProvider on every selection change.
For further information: I get the current selection through the selectionChanged method of the ISelectionListener interface, and inside this method I put the city information into an ArrayList. Although I declared this ArrayList outside the method as public, I cannot seem to access it from outside the method, and thus cannot pass it to the ContentProvider. Consequently, the input of the table cannot be updated. I tried to write this as simply as possible, and I hope you can help me; as I cannot foresee what should be done, I would really appreciate it.
You're on the right track!
In your selection listener's ISelectionListener.selectionChanged method you just have to set the new input for the viewer with TableViewer.setInput. Then, the IStructuredContentProvider.inputChanged method gets invoked on the content provider for the viewer. It's here where you can do your stuff with the new input and refresh the viewer with TableViewer.refresh.
You can also use the JFace databinding framework, but I think you should be fine with what I've mentioned above.
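Roughly, in code (a sketch; CityEditPart, PathCost and computePathCosts are placeholders for your own types):

// In createPartControl, register a selection listener that feeds the viewer:
ISelectionListener listener = new ISelectionListener() {
    public void selectionChanged(IWorkbenchPart part, ISelection selection) {
        if (selection instanceof IStructuredSelection) {
            Object selected = ((IStructuredSelection) selection).getFirstElement();
            if (selected instanceof CityEditPart) {
                // Calculate the path costs for the selected city...
                List<PathCost> costs = computePathCosts((CityEditPart) selected);
                // ...and hand them to the viewer; this invokes inputChanged()
                // on the content provider and refreshes the table.
                tableViewer.setInput(costs);
            }
        }
    }
};
getSite().getPage().addSelectionListener(listener);

// The content provider turns the current input into table rows:
tableViewer.setContentProvider(new IStructuredContentProvider() {
    public Object[] getElements(Object input) {
        return ((List<?>) input).toArray();
    }
    public void inputChanged(Viewer viewer, Object oldInput, Object newInput) {
        // React to the new input here if needed; setInput() already
        // triggers a refresh for a TableViewer.
    }
    public void dispose() {
    }
});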

Ncqrs recreate the complete ReadModel

Using Ncqrs, is there a way to replay every single event that ever happened (across all aggregate types) and feed these through my denormalizers in order to recreate the whole read model from scratch?
Edit:
I thought it'd be good to provide a more specific use case. I'm building this inside an ASP.NET MVC application and using Entity Framework (Code First) for working with the read models. In order to speed up development (and because I'm lazy), I want to use a database initializer that recreates the database schema whenever any read model changes, and then use the initializer's seed method to repopulate them.
There is unfortunately nothing built in to do this for you (though I haven't updated the version of Ncqrs I use in quite a while, so perhaps that has changed). It is also somewhat non-trivial to do, since it depends on exactly what you want to achieve.
The way I would do it (up to this point I have not had a need) would be to:
Call the event store to get all relevant events.
Depending on what you are doing, this could be all events, just the events for one aggregate root, or a subset of events for one or more aggregate roots.
Re-create the read model in memory from scratch (to avoid slow and unnecessary writes).
Store the re-created read model in place of the existing one.
Call the event store one more time to get any events that may have been missed.
Repeat until no new events are returned (see the sketch after these steps).
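Framework aside, that catch-up loop might look like this (a sketch in Java; eventStore, denormalizers, StoredEvent and BATCH_SIZE are assumptions, not Ncqrs API):

long lastSequence = 0;
while (true) {
    // Fetch the next batch of events after the last one already processed.
    List<StoredEvent> batch = eventStore.getEventsAfter(lastSequence, BATCH_SIZE);
    if (batch.isEmpty()) {
        break; // nothing new arrived while rebuilding: the read model is caught up
    }
    for (StoredEvent event : batch) {
        denormalizers.handle(event); // apply the event to the in-memory read model
        lastSequence = event.getSequenceNumber();
    }
}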
One thing to note: if you are recreating the entire read-model database from scratch, I would take the service offline temporarily or queue up new events until you finish.
Again there are different ways you could approach this problem, your architecture and scenarios will probably dictate how best to do it.
We use an MsSqlServerEventStore; to replay all the events, I implemented the following code:
// Resolve the event bus and the concrete MS SQL event store from the Ncqrs environment.
var myEventBus = NcqrsEnvironment.Get<IEventBus>();
if (myEventBus == null) throw new Exception("EventBus is not found in NcqrsEnvironment");
var myEventStore = NcqrsEnvironment.Get<IEventStore>() as MsSqlServerEventStore;
if (myEventStore == null) throw new Exception("MsSqlServerEventStore is not found in NcqrsEnvironment");
// Fetch every stored event, starting from the very first one, and republish them all.
var myEvents = myEventStore.GetEventsAfter(GetFirstEventIdFromEventStore(), int.MaxValue);
myEventBus.Publish(myEvents);
This pushes all the events onto the event bus, and the denormalizers process them all. The function GetFirstEventIdFromEventStore just queries the event store and returns the first id (the row where SequentialId = 1).
What I ended up doing is the following: at service startup, before any commands are processed, if the read model has changed, I throw it away and recreate it from scratch by replaying all past events through my denormalizers. This is done in the database initializer's seed method.
This was a trivial task using the MS SQL event storage, since there is a method for retrieving all events. However, I'm not sure about other event storages.