Asymmetric distance matrix for VRP in OptaPlanner doesn't work correctly

I implemented an asymmetric distance matrix for VRP in the OptaPlanner examples, as suggested in option B of this answer: https://stackoverflow.com/a/19420978/3743175
However, the soft constraint values do not match the total calculated distance of the routes in my tests.
Does anyone have any idea what causes this? I've checked several times: my matrix is correct, and the problem does not occur for symmetric instances.
Any help is welcome.
Thanks.
Solution Found:
I found the problem in the soft score calculation of the examples: the soft score is calculated with the arcs reversed.
I replaced these lines in the VehicleRoutingIncrementalScoreCalculator class:
...
softScore -= vehicle.getLocation().getDistance(customer.getLocation());
...
softScore += vehicle.getLocation().getDistance(customer.getLocation());
with:
...
softScore -= customer.getLocation().getDistance(vehicle.getLocation());
...
softScore += customer.getLocation().getDistance(vehicle.getLocation());
and I fixed the Customer class with the following method:
public int getDistanceToPreviousStandstill() {
    if (previousStandstill == null) {
        return 0;
    }
    return previousStandstill.getLocation().getDistance(location);
}
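For an asymmetric matrix the direction of the lookup matters, because a.getDistance(b) no longer equals b.getDistance(a). Here is a minimal sketch of the idea, assuming a hypothetical location class backed by a distance matrix (the class and field names are illustrative, not the actual example code):

// Hypothetical location backed by an asymmetric distance matrix.
public class MatrixLocation {

    private final int index;              // row/column of this location
    private final int[][] distanceMatrix; // distanceMatrix[from][to]

    public MatrixLocation(int index, int[][] distanceMatrix) {
        this.index = index;
        this.distanceMatrix = distanceMatrix;
    }

    // Distance FROM this location TO the target. With an asymmetric matrix,
    // a.getDistance(b) != b.getDistance(a) in general, so every arc must be
    // queried in its actual travel direction.
    public int getDistance(MatrixLocation target) {
        return distanceMatrix[index][target.index];
    }
}

That is why the closing arc, which runs from the last customer back to the vehicle's depot, has to be looked up starting from the customer's location.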

Related

Using a solution from a model as an input to another one and outputting each solution separately

I'm solving an optimization problem in which I need the result from one model to be used as an input to another model, for 180 iterations. I'm using CPLEX with the OPL language, without any add-on.
I tried saving the values from one model into an Excel file and reading them into the next model, but since I'm going to do this 180 times, I'm worried I will make an error and have to restart, or not even know I made an error.
Is it possible to have this run for 180 iterations and input each iteration's solution separately?
You can rely on warm start for that.
Two simple examples in easy OPL.
Warm start from a file:
include "zoo.mod";
main {
var filename = "c:/temp/mipstart.mst";
thisOplModel.generate();
cplex.readMIPStarts(filename);
cplex.solve();
writeln("Objective: " + cplex.getObjValue());
}
or with the API:
int nbKids = 300;

// a tuple is like a struct in C, a class in C++ or a record in Pascal
tuple bus
{
  key int nbSeats;
  float cost;
}

// This is a tuple set
{bus} pricebuses = ...;

// asserts help make sure data is fine
assert forall(b in pricebuses) b.nbSeats > 0;
assert forall(b in pricebuses) b.cost > 0;

// To compute the average cost per kid of each bus
// you may use OPL modeling language
float averageCost[b in pricebuses] = b.cost / b.nbSeats;

// Let us try first with a naïve computation: use the cheapest bus
float cheapestCostPerKid = min(b in pricebuses) averageCost[b];
int cheapestBusSize = first({b.nbSeats | b in pricebuses : averageCost[b] == cheapestCostPerKid});
int nbBusNeeded = ftoi(ceil(nbKids / cheapestBusSize));
float cost0 = item(pricebuses, <cheapestBusSize>).cost * nbBusNeeded;

execute DISPLAY_Before_SOLVE
{
  writeln("The naïve cost is ", cost0);
  writeln(nbBusNeeded, " buses ", cheapestBusSize, " seats");
  writeln();
}

int naiveSolution[b in pricebuses] =
  (b.nbSeats == cheapestBusSize) ? nbBusNeeded : 0;

// decision variable array
dvar int+ nbBus[pricebuses];

// objective
minimize
  sum(b in pricebuses) b.cost * nbBus[b];

// constraints
subject to
{
  sum(b in pricebuses) b.nbSeats * nbBus[b] >= nbKids;
}

float cost = sum(b in pricebuses) b.cost * nbBus[b];

execute DISPLAY_After_SOLVE
{
  writeln("The minimum cost is ", cost);
  for (var b in pricebuses) writeln(nbBus[b], " buses ", b.nbSeats, " seats");
}
main
{
  thisOplModel.generate();
  // Warm start the naïve solution
  cplex.addMIPStart(thisOplModel.nbBus, thisOplModel.naiveSolution);
  cplex.solve();
  thisOplModel.postProcess();
}

Custom Delaunay Refinement with CGAL Delaunay3D

I want to perform a custom refinement strategy on a tetrahedral mesh. My input is a point cloud, which I have tetrahedralized using the Delaunay 3D routine available in CGAL. The points have scalar values associated with them. Now I want to refine the tetrahedral mesh with the following strategy:
1. Get the maximum value among the vertices of each tetrahedron.
2. Get the value at the point that is going to be inserted (maybe the barycenter, weighted centroid, or circumcenter).
3. If the difference is large enough, add this point.
Any idea how to do this effectively? Note: I do not require 0-1 dimensional feature preservation.
I have already tried the above strategy. Let me show what I have done so far.
// Assume T is a CGAL Delaunay 3D triangulation of the point cloud, and
// oracle is a function that returns the scalar value at the candidate
// point that is going to be inserted if the conditions are met.
bool updated = true;
int it = 0;
while (updated)
{
    updated = false;
    std::vector<std::pair<Point, unsigned> > point_to_be_inserted;
    for (auto cit = T.finite_cells_begin(); cit != T.finite_cells_end(); cit++)
    {
        Cell_handle c = cit;
        Point v = /* vertex of c with the maximum scalar value (pseudocode) */;
        double max_val = /* scalar value at v (pseudocode) */;
        Point q = /* candidate insertion point, e.g. the barycenter of c (pseudocode) */;
        double val_at_new_pt = oracle(q, &pts, var);
        double ratio = std::abs(max_val - val_at_new_pt) / max_val;
        if (ratio > threshold) {
            // new_pt_ind: index assigned to the new vertex (pseudocode)
            point_to_be_inserted.emplace_back(std::make_pair(q, new_pt_ind));
            updated = true;
        }
    }
    if (updated)
    {
        std::cout << "Total pts inserted in it: " << it << " " << point_to_be_inserted.size() << std::endl;
        T.insert(point_to_be_inserted.begin(), point_to_be_inserted.end());
    }
    it++; // advance the iteration counter used in the progress output
}
The problem is that this is quite slow (each iteration goes through all the cells). I am not finding any effective strategy to do the refinement locally. I tried using a queue, but the cell handles get messed up after I perform one iteration of refinement. I cannot keep a map that tells me whether a tetrahedron has been refined or not, because new cell handles are created each time new points are inserted. Any help will be appreciated. Thanks in advance.

Optimized recalculation of all-pairs shortest paths when removing vertices dynamically from an undirected graph

I use the following Dijkstra implementation to calculate all-pairs shortest paths in an undirected graph. After calling calculateAllPaths(), dist[i][j] contains the shortest path length between i and j (or Integer.MAX_VALUE if no such path is available).
The problem is that some vertices of my graph are removed dynamically, and I have to recalculate all paths from scratch to update the dist matrix. I'm seeking a solution that optimizes update speed by avoiding unnecessary calculations when a vertex is removed from my graph. I have already searched for a solution and I know there are algorithms such as LPA* for this, but they seem very complicated, and I suspect a simpler solution may solve my problem.
public static void calculateAllPaths()
{
    // dist is assumed to be initialized elsewhere (direct edge weights where
    // an edge exists, Integer.MAX_VALUE otherwise). The loop only runs
    // Dijkstra from about half of the sources, relying on the symmetric
    // updates dist[s][v] = dist[v][s] below.
    for (int j = graph.length / 2 + graph.length % 2; j >= 0; j--)
    {
        calculateAllPathsFromSource(j);
    }
}

public static void calculateAllPathsFromSource(int s)
{
    final boolean visited[] = new boolean[graph.length];
    for (int i = 0; i < dist.length; i++)
    {
        if (i == s)
        {
            continue;
        }
        // visit the unvisited node closest to the source
        int next = -1;
        int minDist = Integer.MAX_VALUE;
        for (int j = 0; j < dist[s].length; j++)
        {
            if (!visited[j] && dist[s][j] < minDist)
            {
                next = j;
                minDist = dist[s][j];
            }
        }
        if (next == -1)
        {
            continue;
        }
        visited[next] = true;
        // relax all edges leaving 'next' (graph[next][v] == -1 means no edge)
        for (int v = 0; v < graph.length; v++)
        {
            if (v == next || graph[next][v] == -1)
            {
                continue;
            }
            int md = dist[s][next] + graph[next][v];
            if (md < dist[s][v])
            {
                dist[s][v] = dist[v][s] = md;
            }
        }
    }
}
If you know that vertices are only being removed dynamically, then instead of just storing the best-path matrix dist[i][j], you could also store the actual vertex sequence of each such path. Say, instead of dist[i][j] you make a custom class myBestPathInfo, and an array of instances of this class, say myBestPathInfo[i][j], contains the best distance as well as the vertex sequence of the best path. Preferably, the path is described as an ordered list of vertex objects, where the latter are of reference type and unique for each vertex (but shared across the myBestPathInfo instances that use them). Such objects could include a boolean property isActive (true/false).
Whenever a vertex is removed, you traverse the stored best paths for each vertex-vertex pair to check whether any vertex on them has been deactivated. Finally, only for the broken paths (those containing deactivated vertices) do you re-run Dijkstra's algorithm.
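A minimal sketch of this bookkeeping in Java (the class and field names are illustrative, not taken from the question's code):

import java.util.List;

// One object per vertex, shared by reference across all stored paths.
class Vertex {
    boolean isActive = true;
}

// Per-pair record: the best distance plus the vertices on that path.
class MyBestPathInfo {
    int distance;
    List<Vertex> path; // ordered vertices from i to j

    // A stored path is broken if any vertex on it has been deactivated.
    boolean isBroken() {
        for (Vertex v : path) {
            if (!v.isActive) {
                return true;
            }
        }
        return false;
    }
}

When a vertex is removed, set its isActive flag to false and re-run Dijkstra only for the pairs whose isBroken() returns true.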
Another solution would be to solve the shortest-path problem for all pairs using linear programming (LP) techniques. A removed vertex can easily be implemented as an additional constraint in your program (e.g., flow into the vertex <= 0 and flow out of the vertex <= 0), after which re-solving the shortest-path LPs can use the previous optimal solution as a basic feasible solution (BFS) in the dual LPs. This property holds because adding a constraint in the primal LP is equivalent to adding a variable in the dual; hence, the previously optimal primal BFS remains feasible in the dual after the additional constraints (an on-the-fly warm start of the simplex solver for the LPs).
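For reference, the standard arc-flow LP behind this idea, for a single source s and target t (a textbook formulation, not code from the question), is

\min \sum_{(i,j) \in E} c_{ij} x_{ij} \quad \text{s.t.} \quad \sum_{j:(i,j) \in E} x_{ij} - \sum_{j:(j,i) \in E} x_{ji} = \begin{cases} 1 & i = s \\ -1 & i = t \\ 0 & \text{otherwise} \end{cases}, \qquad x_{ij} \ge 0,

and removing a vertex v corresponds to adding the constraints \sum_j x_{vj} \le 0 and \sum_j x_{jv} \le 0.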

How does Codility calculate the complexity? Example: Tape-Equilibrium, Codility Training

I was training on Codility, solving the first lesson: Tape-Equilibrium.
It is said that it has to be of complexity O(N). Therefore I was trying to solve the problem with just one for loop. I knew how to do it with two for loops, but I understood that would imply a complexity of O(2N), so I skipped those solutions.
I looked for it on the Internet and, of course, there was an answer on SO.
To my astonishment, all the solutions first calculate the sum of the elements of the vector and afterwards make the calculations. I understand this is complexity O(2N), but it gets a score of 100%.
At this point, I think I am mistaken about my comprehension of the time-complexity limits. If they ask you to get a time complexity of O(N), is it acceptable to get O(X*N), with X a value that is not very high?
How does this work?
Let f and g be functions.
The Big-O notation f ∈ O(g) means that you can find a constant c such that f(n) ≤ c⋅g(n) for all sufficiently large n. So if your algorithm has complexity 2N (or XN for a constant X), it is in O(N), because c = 2 (or c = X) gives 2N ≤ c⋅N (or XN ≤ c⋅N).
This is how I managed to keep it O(N) along with a 100% score:
// you can also use imports, for example:
// import java.util.*;

// you can use System.out.println for debugging purposes, e.g.
// System.out.println("this is a debug message");

class Solution {
    public int solution(int[] A) {
        int result = Integer.MAX_VALUE;
        int[] s1 = new int[A.length - 1];
        int[] s2 = new int[A.length - 1];
        // s1[i] holds the sum of A[0..i] (prefix sums)
        for (int i = 0; i < A.length - 1; i++) {
            if (i > 0) {
                s1[i] = s1[i - 1] + A[i];
            } else {
                s1[i] = A[i];
            }
        }
        // s2[i] holds the sum of A[i+1..A.length-1] (suffix sums)
        for (int i = A.length - 1; i > 0; i--) {
            if (i < A.length - 1) {
                s2[i - 1] = s2[i] + A[i];
            } else {
                s2[i - 1] = A[A.length - 1];
            }
        }
        // minimize |left part - right part| over all split points
        for (int i = 0; i < A.length - 1; i++) {
            if (Math.abs(s1[i] - s2[i]) < result) {
                result = Math.abs(s1[i] - s2[i]);
            }
        }
        return result;
    }
}

Looping with iterator vs temp object gives different results graphically (LibGDX/Java)

I've got a particle "engine" to which I've added a pool system, and I've tested two different ways of rendering every Particle in a list. Please note that the pooling really doesn't have anything to do with the problem. I just followed a tutorial and tried to use the second method when I noticed that they behaved differently.
The first way:
for (int i = 0; i < particleList.size(); i++) {
    Iterator<Particle> it = particleList.iterator();
    while (it.hasNext()) {
        Particle p = it.next();
        if (p.isDead()) {
            it.remove();
        }
        p.render(batch, delta);
    }
}
Which works just fine. My particles are sharp and they move with the correct speed.
The second way:
Particle p;
for (int i = 0; i < particleList.size(); i++) {
    p = particleList.get(i);
    p.render(batch, delta);
    if (p.isDead()) {
        particleList.remove(i);
        bulletPool.free(p);
    }
}
This one makes all my particles blurry, and they move really slowly!
The render method for my particles looks like this:
public void render(SpriteBatch batch, float delta) {
    sprite.setX(sprite.getX() + (dx * speed) * delta * Assets.FPS);
    sprite.setY(sprite.getY() + (dy * speed) * delta * Assets.FPS);
    ttl--;
    sprite.setScale(sprite.getScaleX() - 0.002f);
    if (ttl <= 0 || sprite.getScaleX() <= 0)
        isDead = true;
    sprite.draw(batch);
}
Why do the different rendering methods provide different results?
Thanks in advance
You are mutating (removing elements from) a list while iterating over it. This is a classic way to make a mess.
The Iterator handles the delete case correctly, but your index-based for loop does not. Specifically, when you call particleList.remove(i), the index i is now "out of sync" with the contents of the list. Consider what happens when you remove the element at index 3: i will increment to 4, but the old element at index 4 has shifted down into index 3, so it gets skipped.
I assume you're avoiding the Iterator to avoid memory allocations. One way to side-step this issue is to reverse the loop (go from particleList.size() - 1 down to 0), as shown below. Alternatively, only increment i for particles that are not removed.
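A minimal sketch of the reversed loop, using the same names as the code in the question:

// Iterating backwards keeps the index valid after remove(i): removing
// index i only shifts the elements after i, which were already visited.
for (int i = particleList.size() - 1; i >= 0; i--) {
    Particle p = particleList.get(i);
    p.render(batch, delta);
    if (p.isDead()) {
        particleList.remove(i);
        bulletPool.free(p);
    }
}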