TSP time windows in GUSEK - ampl

I'm trying to write a program where I have 9 places to visit, and each of them has a time window [a,b] in which it may be visited. The service time in each city is p, and I have one depot where I start with a single vehicle; I need to finish at the depot as well. I don't need to visit all the cities on the same day.
This is what I have so far, but the program says that "p" is not defined. Maybe you can help me and give me some hints about how to do it.
param n, integer, >= 3;
/* number of nodes */
param MAX_TIME := 600;
param MAX_X :=20;
param maxspeed;
set CITIES := 1..n;
/* set of nodes */
set ARCS, within CITIES cross CITIES;
/* set of arcs */
param DIST{(i,j) in ARCS};
/* distance from node i to node j */
param NEXTC, integer, >= 0;
/* Next city after i in the solution */
var x{(i,j) in ARCS}, binary;
/* x[i,j] = 1 means that the salesman goes from node i to node j */
param bigM := 5000;
var tar{CITIES}; /*arrival */
var tlv{CITIES}; /* departure */
var tea{CITIES} >= 0; /* early arrival (arrival before the designated time window) */
var tla{CITIES} >= 0; /* late arrival (arrival after the designated time window) */
var ted{CITIES} >= 0; /* early departure (departure before the designated time window) */
var tld{CITIES} >= 0; /* late departure (departure after the designated time window) */
s.t. t1 {i in CITIES} : tlv[i] >= tar[i];
s.t. t2 {i in CITIES, j in CITIES} : tar[j] >= tlv[i] + p[i] + DIST[i,j]/maxspeed - bigM*(1-x[i,j]);
s.t. t3 {i in CITIES} : tea[i] >= a[i] - tar[i]; /* early arrival */
s.t. t4 {i in CITIES} : tla[i] >= tar[i] - b[i]; /* late arrival */
s.t. t5 {i in CITIES} : ted[i] >= a[i] - tlv[i]; /* early departure */
s.t. t6 {i in CITIES} : tld[i] >= tlv[i] - b[i]; /* late departure */
set days := 1..5;
param root := 1;
var y{(i,j) in ARCS}, >= 0;
/* y[i,j] is a flow through arc (i,j) */
minimize total: sum{(i,j) in ARCS} DIST[i,j] * x[i,j];
solve;
printf "Optimal tour has length %d\n",
sum{(i,j) in ARCS} DIST[i,j] * x[i,j];
printf("From node To node Distance\n");
printf{(i,j) in ARCS: x[i,j]} " %3d %3d %8g\n",
i, j, DIST[i,j];
data;
param n := 9;
param : ARCS : DIST :=
1 2 21
1 3 8
1 4 6
1 5 10
1 6 2
1 7 4
1 8 5
1 9 5
2 1 21
2 3 18
2 4 21
2 5 23
2 6 22
2 7 20
2 8 22
2 9 25
3 1 7
3 2 16
3 4 3
3 5 9
3 6 8
3 7 4
3 8 7
3 9 9
4 1 8
4 2 18
4 3 3
4 5 10
4 6 9
4 7 6
4 8 9
4 9 11
5 1 9
5 2 25
5 3 11
5 4 9
5 6 11
5 7 8
5 8 11
5 9 13
6 1 4
6 2 23
6 3 8
6 4 7
6 5 11
6 7 5
6 8 7
6 9 8
7 1 3
7 2 20
7 3 6
7 4 5
7 5 8
7 6 4
7 8 3
7 9 8
8 1 3
8 2 20
8 3 5
8 4 4
8 5 8
8 6 5
8 7 4
8 9 7
9 1 7
9 2 24
9 3 10
9 4 8
9 5 13
9 6 7
9 7 7
9 8 6;
param : a :=
1 705
2 420
3 420
4 420
5 420
6 420
7 420
8 420
9 420;
param : b :=
1 785
2 795
3 725
4 500
5 785
6 785
7 430
8 785
9 785;
param : p :=
1 65
2 55
3 125
4 65
5 65
6 65
7 65
8 65
9 65;
end;
Thank you.

In the constraint t2 you use p[i]:
tar[j] >= tlv[i] + p[i] + DIST[i,j]/maxspeed - bigM*(1-x[i,j]);
but p is never declared. You should declare it as a parameter or a variable, depending on whether it is input data or something to be determined by the optimization. Since your data section supplies values for it, you want a parameter:
param p{CITIES};
Note that a and b, used in constraints t3 through t6, are likewise undeclared and need analogous declarations (param a{CITIES}; param b{CITIES};).

Related

parameter is already defined in ampl

I want to define some parameters in an AMPL file, but the software reports that the parameter is already defined when I try to run it. How can I fix this problem?
set N := 1..6;
set N_row := 1..4;
var x{i in N} >= 0, <= 1 default 0;
param alpha{N_row};
param A{N_row,N};
param P{N_row,N};
param alpha := 1 1.0 2 1.2 3 3.0 4 3.2;
param A :=
[1,*] 1 10 2 3 3 17 4 3.5 5 1.7 6 8
[2,*] 1 0.05 2 10 3 17 4 0.1 5 8 6 14
[3,*] 1 3 2 3.5 3 1.7 4 10 5 17 6 8
[4,*] 1 17 2 8 3 0.05 4 10 5 0.1 6 14;
param P :=
[1,*] 1 0.1312 2 0.1696 3 0.5569 4 0.0124 5 0.8283 6 0.5886
[2,*] 1 0.2329 2 0.4135 3 0.8307 4 0.3736 5 0.1004 6 0.9991
[3,*] 1 0.2348 2 0.1451 3 0.3522 4 0.2883 5 0.3047 6 0.6650
[4,*] 1 0.4047 2 0.8828 3 0.8732 4 0.5743 5 0.1091 6 0.0381;
minimize Obj: sum {i in N_row} (alpha[i]*exp(-1*(sum{j in N} A[i,j]*(x[j]-P[i,j])**2)));
option solver scip;
solve;
display x;
display Obj;
display alpha;
display A;
display P;
It's generally good practice to keep model and data separate, but if you really want to do this, you can use the "data" statement to switch into data mode and then "model" to switch back. Like this:
set N := 1..6;
set N_row := 1..4;
var x{i in N} >= 0, <= 1 default 0;
param alpha{N_row};
param A{N_row,N};
param P{N_row,N};
data;
param alpha := 1 1.0 2 1.2 3 3.0 4 3.2;
param A :=
[1,*] 1 10 2 3 3 17 4 3.5 5 1.7 6 8
[2,*] 1 0.05 2 10 3 17 4 0.1 5 8 6 14
[3,*] 1 3 2 3.5 3 1.7 4 10 5 17 6 8
[4,*] 1 17 2 8 3 0.05 4 10 5 0.1 6 14;
param P :=
[1,*] 1 0.1312 2 0.1696 3 0.5569 4 0.0124 5 0.8283 6 0.5886
[2,*] 1 0.2329 2 0.4135 3 0.8307 4 0.3736 5 0.1004 6 0.9991
[3,*] 1 0.2348 2 0.1451 3 0.3522 4 0.2883 5 0.3047 6 0.6650
[4,*] 1 0.4047 2 0.8828 3 0.8732 4 0.5743 5 0.1091 6 0.0381;
model;
# etc.

RODBC: Columns and values don't match

I came across this behavior in RODBC (using SQL Server driver):
df1 = data.frame(matrix(c(1:20), nrow=10))
df1
which outputs
X1 X2
1 1 11
2 2 12
3 3 13
4 4 14
5 5 15
6 6 16
7 7 17
8 8 18
9 9 19
10 10 20
which makes sense. Then I save the table using RODBC
sqlSave(conout, df1, 'TEST')
Then I swap the two columns:
df2 = df1[,c(2,1)]
df2
which outputs
X2 X1
1 11 1
2 12 2
3 13 3
4 14 4
5 15 5
6 16 6
7 17 7
8 18 8
9 19 9
10 20 10
which also makes sense.
In both data frames, X1 contains only 1:10 and X2 contains only 11:20. Now, when I do
sqlSave(conout, df2, 'TEST', append=TRUE, fast=FALSE)
sqlQuery(conout, 'SELECT * FROM TEST')
rownames X1 X2
1 1 1 11
2 2 2 12
3 3 3 13
4 4 4 14
5 5 5 15
6 6 6 16
7 7 7 17
8 8 8 18
9 9 9 19
10 10 10 20
11 1 11 1
12 2 12 2
13 3 13 3
14 4 14 4
15 5 15 5
16 6 16 6
17 7 17 7
18 8 18 8
19 9 19 9
20 10 20 10
which is definitely not what I saved. Now three questions:
How is this possible?
Where is this behavior explained in the RODBC manual?
How can I prevent this behavior without reordering my columns? (The real case behind this example has > 300 columns.)

How to replace outlier values in multiple columns based on different conditions using pandas?

I want to find the outlier in multiple columns at a time and replace the outlier value with some other value based on two conditions.
sample dataset:
day phone_calls received
1 11 11
2 12 12
3 10 0
4 13 12
5 170 2
6 9 9
7 67 1
8 180 150
9 8 1
10 10 10
Find the outlier range; let's say the range is (8, 50). Then replace the values: if a column value is less than 8, replace it with 8, and if it is greater than 50, replace it with 50.
Please help I am new to pandas.
I think you need set_index with clip:
df = df.set_index('day').clip(8,50)
print (df)
phone_calls received
day
1 11 11
2 12 12
3 10 8
4 13 12
5 50 8
6 9 9
7 50 8
8 50 50
9 8 8
10 10 10
Or, similarly, use iloc to select all columns except the first:
df.iloc[:, 1:] = df.iloc[:, 1:].clip(8,50)
print (df)
day phone_calls received
0 1 11 11
1 2 12 12
2 3 10 8
3 4 13 12
4 5 50 8
5 6 9 9
6 7 50 8
7 8 50 50
8 9 8 8
9 10 10 10
EDIT: You can specify the columns in a list:
cols = ['phone_calls','received']
df[cols] = df[cols].clip(8,50)
print (df)
day phone_calls received
0 1 11 11
1 2 12 12
2 3 10 8
3 4 13 12
4 5 50 8
5 6 9 9
6 7 50 8
7 8 50 50
8 9 8 8
9 10 10 10
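The question fixes the range at (8, 50); if the bounds instead have to be derived from the data, one common approach (not part of the original answer) is Tukey's IQR fences. A sketch, with iqr_bounds being an assumed helper of my own:

```python
import pandas as pd

df = pd.DataFrame({
    "day": range(1, 11),
    "phone_calls": [11, 12, 10, 13, 170, 9, 67, 180, 8, 10],
    "received": [11, 12, 0, 12, 2, 9, 1, 150, 1, 10],
})

def iqr_bounds(s, k=1.5):
    # Tukey fences: anything outside [Q1 - k*IQR, Q3 + k*IQR] is an outlier.
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

cols = ["phone_calls", "received"]
for c in cols:
    lo, hi = iqr_bounds(df[c])
    df[c] = df[c].clip(lo, hi)   # winsorize to the per-column fences
```

Here each column gets its own fences, whereas clip(8, 50) applies one fixed range to every column.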

Visualize the tetrahedra with infinite vertices in MeshLab

I use the sample code to compute the tetrahedra for 6 vertices. The returned faces contain not only the input vertices but infinite ones as well, which creates a visualisation problem in MeshLab. I was wondering how I can get around this?
The sample code is as follows
int main(int argc, char **argv)
{
std::list<Point> L;
L.push_front(Point(0,0,0));
L.push_front(Point(1,0,0));
L.push_front(Point(0,1,0));
Triangulation T(L.begin(), L.end());
int n = T.number_of_vertices();
std::vector<Point> V(3);
V[0] = Point(0,0,1);
V[1] = Point(1,1,1);
V[2] = Point(2,2,2);
n = n + T.insert(V.begin(), V.end());
assert( n == 6 );
assert( T.is_valid() );
Locate_type lt;
int li, lj;
Point p(0,0,0);
Cell_handle c = T.locate(p, lt, li, lj);
assert( lt == Triangulation::VERTEX );
assert( c->vertex(li)->point() == p );
Vertex_handle v = c->vertex( (li+1)&3 );
Cell_handle nc = c->neighbor(li);
int nli;
assert( nc->has_vertex( v, nli ) );
std::ofstream oFileT("output",std::ios::out);
oFileT << T;
Triangulation T1;
std::ifstream iFileT("output",std::ios::in);
iFileT >> T1;
assert( T1.is_valid() );
assert( T1.number_of_vertices() == T.number_of_vertices() );
assert( T1.number_of_cells() == T.number_of_cells() );
std::cout << T.number_of_vertices() << " " << T.number_of_cells() << std::endl;
return 0;
}
the output file is
3
6
0 1 0
1 0 0
0 0 0
0 0 1
1 1 1
2 2 2
11
1 0 3 4
2 1 3 4
0 2 3 4
1 5 6 4
2 3 1 0
1 2 5 4
5 2 6 4
6 2 0 4
1 6 0 4
1 2 0 6
1 2 6 5
2 1 8 4
0 2 5 4
1 0 7 4
6 8 5 10
0 9 2 1
6 3 1 10
7 3 5 10
2 8 6 9
7 0 3 9
7 8 10 4
6 3 5 9
In the manual of class Triangulation_3
http://doc.cgal.org/latest/Triangulation_3/classCGAL_1_1Triangulation__3.html
look for the I/O section and read the description of the stream format.
In your example, there are 11 cells
1 0 3 4
2 1 3 4
0 2 3 4
1 5 6 4
2 3 1 0
1 2 5 4
5 2 6 4
6 2 0 4
1 6 0 4
1 2 0 6
1 2 6 5
The infinite vertex has index 0, and the 6 finite vertices are numbered 1 to 6. Their coordinates are listed at the top of your file (the lines following the 6).
So you can filter out the cells that have 0 as a vertex, which leaves you with 5 finite tetrahedra:
2 1 3 4
1 5 6 4
1 2 5 4
5 2 6 4
1 2 6 5
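The filtering described above can be scripted; here is a minimal Python sketch (the function name and parsing are my own, based on the file layout shown in the question) that extracts only the finite tetrahedra, ready for export to a format MeshLab accepts:

```python
# Parse the CGAL Triangulation_3 output format shown above: dimension,
# number of finite vertices, one coordinate line per vertex, number of
# cells, one line of 4 vertex indices per cell (neighbor lines follow,
# but are not needed here). Keep only the finite cells, i.e. those that
# do not reference the infinite vertex, index 0.
def finite_cells(lines):
    it = iter(lines)
    next(it)                                  # dimension (here: 3)
    n_vertices = int(next(it))                # finite vertices only
    for _ in range(n_vertices):
        next(it)                              # skip coordinate lines
    n_cells = int(next(it))
    cells = [tuple(map(int, next(it).split())) for _ in range(n_cells)]
    return [c for c in cells if 0 not in c]   # drop infinite cells
```

Applied to the output file above, this yields exactly the 5 finite cells listed.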

how to calculate "consecutive mean" in R without using loop, or in a more efficient way?

I have a set of data for which I need to calculate a "consecutive mean" (I don't know if that is the correct name, but I can't find anything better). Here is an example:
ID Var2 Var3
1 A 1
2 A 3
3 A 5
4 A 7
5 A 9
6 A 11
7 B 2
8 B 4
9 B 6
10 B 8
11 B 10
Here I need to calculate the mean of 3 consecutive Var3 values within the same subset (i.e. there will be 4 means calculated for A: mean(1,3,5), mean(3,5,7), mean(5,7,9), mean(7,9,11), and 3 means calculated for B: mean(2,4,6), mean(4,6,8), mean(6,8,10)). The result should be:
ID Var2 Var3 Mean
1 A 1 N/A
2 A 3 N/A
3 A 5 3
4 A 7 5
5 A 9 7
6 A 11 9
7 B 2 N/A
8 B 4 N/A
9 B 6 4
10 B 8 6
11 B 10 8
Currently I am using a loop-inside-a-loop approach: I subset the dataset by Var2, and then in an inner loop I calculate each mean, starting from the third row.
It does what I need, but it is very slow. Is there a faster way to solve this problem?
Thanks!
It's generally referred to as a "rolling mean" or "running mean". The plyr package lets you apply a function over subsets of your data, and the zoo package has methods for rolling calculations.
> lines <- "ID,Var2,Var3
+ 1,A,1
+ 2,A,3
+ 3,A,5
+ 4,A,7
+ 5,A,9
+ 6,A,11
+ 7,B,2
+ 8,B,4
+ 9,B,6
+ 10,B,8
+ 11,B,10"
>
> x <- read.csv(con <- textConnection(lines))
> close(con)
>
> ddply(x,"Var2",function(y) data.frame(y,
+ mean=rollmean(y$Var3,3,na.pad=TRUE,align="right")))
ID Var2 Var3 mean
1 1 A 1 NA
2 2 A 3 NA
3 3 A 5 3
4 4 A 7 5
5 5 A 9 7
6 6 A 11 9
7 7 B 2 NA
8 8 B 4 NA
9 9 B 6 4
10 10 B 8 6
11 11 B 10 8
Alternatively, using base R together with zoo's rollmean:
x$mean <- unlist(tapply(x$Var3, x$Var2, zoo::rollmean, k=3, na.pad=TRUE, align="right", simplify=FALSE))
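For comparison only (the question asks about R), the same grouped rolling mean is a one-liner in pandas; this Python sketch reproduces the table from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": range(1, 12),
    "Var2": ["A"] * 6 + ["B"] * 5,
    "Var3": [1, 3, 5, 7, 9, 11, 2, 4, 6, 8, 10],
})

# Width-3 rolling mean computed within each Var2 group; the first two
# rows of each group have no complete window and come out as NaN.
df["Mean"] = (
    df.groupby("Var2")["Var3"]
      .rolling(3)
      .mean()
      .reset_index(level=0, drop=True)   # drop the group level to realign
)
```

reset_index(level=0, drop=True) removes the group key that groupby adds to the index, so the result aligns back to the original rows on assignment.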