Visualize the tetrahedra with infinite vertices in MeshLab - cgal

I use the sample code to compute the tetrahedra for 6 vertices. The returned cells reference not only the input vertices but infinite ones as well, which makes the result impossible to visualise in MeshLab. How can I get around this?
The sample code is as follows:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Triangulation_3.h>
#include <cassert>
#include <fstream>
#include <iostream>
#include <list>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Triangulation_3<K>      Triangulation;
typedef Triangulation::Cell_handle    Cell_handle;
typedef Triangulation::Vertex_handle  Vertex_handle;
typedef Triangulation::Locate_type    Locate_type;
typedef Triangulation::Point          Point;

int main(int argc, char **argv)
{
    // construction from a list of points
    std::list<Point> L;
    L.push_front(Point(0,0,0));
    L.push_front(Point(1,0,0));
    L.push_front(Point(0,1,0));
    Triangulation T(L.begin(), L.end());
    int n = T.number_of_vertices();

    // insertion from a vector of points
    std::vector<Point> V(3);
    V[0] = Point(0,0,1);
    V[1] = Point(1,1,1);
    V[2] = Point(2,2,2);
    n = n + T.insert(V.begin(), V.end());
    assert( n == 6 );
    assert( T.is_valid() );

    // point location
    Locate_type lt;
    int li, lj;
    Point p(0,0,0);
    Cell_handle c = T.locate(p, lt, li, lj);
    assert( lt == Triangulation::VERTEX );
    assert( c->vertex(li)->point() == p );

    // adjacency queries
    Vertex_handle v = c->vertex( (li+1)&3 );
    Cell_handle nc = c->neighbor(li);
    int nli;
    assert( nc->has_vertex( v, nli ) );

    // write the triangulation to a file and read it back
    std::ofstream oFileT("output", std::ios::out);
    oFileT << T;
    Triangulation T1;
    std::ifstream iFileT("output", std::ios::in);
    iFileT >> T1;
    assert( T1.is_valid() );
    assert( T1.number_of_vertices() == T.number_of_vertices() );
    assert( T1.number_of_cells() == T.number_of_cells() );
    std::cout << T.number_of_vertices() << " " << T.number_of_cells() << std::endl;
    return 0;
}
The output file is:
3
6
0 1 0
1 0 0
0 0 0
0 0 1
1 1 1
2 2 2
11
1 0 3 4
2 1 3 4
0 2 3 4
1 5 6 4
2 3 1 0
1 2 5 4
5 2 6 4
6 2 0 4
1 6 0 4
1 2 0 6
1 2 6 5
2 1 8 4
0 2 5 4
1 0 7 4
6 8 5 10
0 9 2 1
6 3 1 10
7 3 5 10
2 8 6 9
7 0 3 9
7 8 10 4
6 3 5 9

In the manual of class Triangulation_3
http://doc.cgal.org/latest/Triangulation_3/classCGAL_1_1Triangulation__3.html
look for the section I/O and read the description of the stream format.
In your example, there are 11 cells
1 0 3 4
2 1 3 4
0 2 3 4
1 5 6 4
2 3 1 0
1 2 5 4
5 2 6 4
6 2 0 4
1 6 0 4
1 2 0 6
1 2 6 5
The infinite vertex has index 0. The 6 finite vertices are numbered 1 to 6; their coordinates are at the top of your file (the six lines after the vertex count).
So you can filter out the cells having 0 as a vertex, which leaves you the 5 finite tetrahedra:
2 1 3 4
1 5 6 4
1 2 5 4
5 2 6 4
1 2 6 5
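
If the goal is only to open the result in MeshLab, this filtering is easy to script. Below is a minimal sketch (not from the original answer) that reads the dump in the format shown above, drops every cell that uses the infinite vertex 0, shifts the remaining 1-based vertex indices down to 0-based, and writes the four triangular faces of each finite tetrahedron to an OFF file; the file names are placeholders.
# Sketch: convert a CGAL Triangulation_3 dump (layout as in the "output"
# file above: dimension, vertex count, coordinates, cell count, then 4
# vertex indices per cell; the trailing neighbour lists are ignored)
# into an OFF surface that MeshLab can open.
def cgal_to_off(infile="output", outfile="tetra.off"):
    tokens = open(infile).read().split()
    nv = int(tokens[1])          # tokens[0] is the dimension (3 here)
    pos = 2
    verts = []
    for _ in range(nv):
        verts.append(tokens[pos:pos + 3])
        pos += 3
    nc = int(tokens[pos])
    pos += 1
    cells = []
    for _ in range(nc):
        cells.append([int(t) for t in tokens[pos:pos + 4]])
        pos += 4
    # keep finite cells only (vertex 0 is the infinite vertex)
    finite = [[v - 1 for v in cell] for cell in cells if 0 not in cell]
    # each tetrahedron contributes its four triangular faces; shared
    # faces are written twice, which is fine for quick visualisation
    faces = [[cell[j] for j in range(4) if j != k] for cell in finite for k in range(4)]
    with open(outfile, "w") as f:
        f.write("OFF\n{} {} 0\n".format(len(verts), len(faces)))
        for v in verts:
            f.write(" ".join(v) + "\n")
        for face in faces:
            f.write("3 {} {} {}\n".format(*face))

cgal_to_off()
For the file above this keeps exactly the 5 finite tetrahedra listed and writes their 20 triangles, which MeshLab can render.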

Related

How to compute a column sum on the basis of another column's value in a pandas dataframe?

P  T1  T2  T3
0   1   2   3
1   1   2   0
2   3   1   2
3   1   0   2
In the above pandas dataframe df, I want to add columns based on the value of column 'P':
if df['P'] == 0: 0
if df['P'] == 1: T1 (=1)
if df['P'] == 2: T1+T2 (=3+1=4)
if df['P'] == 3: T1+T2+T3 (=1+0+2=3)
In other words, I want to sum T1 through TN when df['P'] == N.
How can I implement this with Python code?
EDIT:
To sum values by the P column, select each group of columns with DataFrame.filter, build a mask by broadcasting np.arange over the number of filtered columns and comparing it against the P values, pass that mask to DataFrame.where, and finally sum per row:
import numpy as np
import pandas as pd

np.random.seed(20)
cols = [f'{x}{i + 1}' for x in ['T','U','V'] for i in range(3)]
df = pd.DataFrame(np.random.randint(4, size=(10,10)), columns=['P'] + cols)

arrP = df['P'].to_numpy()[:, None]
for c in ['T','U','V']:
    df1 = df.filter(regex=rf'^{c}')
    df[f'{c}_SUM'] = df1.where(np.arange(len(df1.columns)) < arrP, 0).sum(axis=1)
print (df)
P T1 T2 T3 U1 U2 U3 V1 V2 V3 T_SUM U_SUM V_SUM
0 3 2 3 3 0 2 1 0 3 2 8 3 5
1 3 2 0 2 0 1 2 2 3 3 4 3 8
2 0 1 2 2 2 0 1 1 3 1 0 0 0
3 3 2 2 2 1 3 2 1 3 2 6 6 6
4 3 1 1 3 1 2 2 0 2 3 5 5 5
5 2 3 2 3 1 1 1 0 3 0 5 2 3
6 2 3 2 3 3 3 2 1 1 2 5 6 2
7 3 2 0 2 1 1 2 2 2 3 4 4 7
8 2 2 1 0 2 2 0 3 3 0 3 4 6
9 2 2 3 2 2 3 2 2 1 1 5 5 3
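Applied to the small frame from the question (a single T group), the same idea reduces to a few lines; a minimal sketch, with the frame rebuilt from the table above:
import numpy as np
import pandas as pd

# the frame from the question
df = pd.DataFrame({'P':  [0, 1, 2, 3],
                   'T1': [1, 1, 3, 1],
                   'T2': [2, 2, 1, 0],
                   'T3': [3, 0, 2, 2]})
t = df.filter(regex='^T')
# True where the column position (0 for T1, 1 for T2, ...) is below P
mask = np.arange(t.shape[1]) < df['P'].to_numpy()[:, None]
df['SUM'] = t.where(mask, 0).sum(axis=1)
print(df['SUM'].tolist())   # [0, 1, 4, 3] -- matches the worked examples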

Dataframe within a Dataframe - to create new column

For the following dataframe:
import pandas as pd
df = pd.DataFrame({'list_A': [3,3,3,3,3,
                              2,2,2,2,2,2,2,
                              4,4,4,4,4,4,4,4,4,4,4,4]})
How can 'list_A' be manipulated to give 'list_B'?
Desired output:
    list_A  list_B
0        3       1
1        3       1
2        3       1
3        3       0
4        2       1
5        2       1
6        2       0
7        2       0
8        4       1
9        4       1
10       4       1
11       4       1
12       4       0
13       4       0
14       4       0
15       4       0
16       4       0
As you can see, if list_A has the value 3, the first 3 values of list_B are 1, and then list_B changes to 0 until list_A changes value again.
Use GroupBy.cumcount to number the rows within each group and compare against list_A:
df['list_B'] = df['list_A'].gt(df.groupby('list_A').cumcount()).astype(int)
print(df)
Output
list_A list_B
0 3 1
1 3 1
2 3 1
3 3 0
4 3 0
5 2 1
6 2 1
7 2 0
8 2 0
9 2 0
10 2 0
11 2 0
12 4 1
13 4 1
14 4 1
15 4 1
16 4 0
17 4 0
18 4 0
19 4 0
20 4 0
21 4 0
22 4 0
23 4 0
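To see why this works, a minimal trace (illustrative data): groupby('list_A').cumcount() numbers each row within its value group 0, 1, 2, ..., and list_A is greater than that counter exactly for the first list_A rows of the group.
import pandas as pd

df = pd.DataFrame({'list_A': [3, 3, 3, 3, 3, 2, 2]})
counter = df.groupby('list_A').cumcount()
print(counter.tolist())                                # [0, 1, 2, 3, 4, 0, 1]
print(df['list_A'].gt(counter).astype(int).tolist())   # [1, 1, 1, 0, 0, 1, 1]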
EDIT: if the same value can appear in more than one consecutive run, group by the runs themselves instead:
blocks = df['list_A'].ne(df['list_A'].shift()).cumsum()
df['list_B'] = df['list_A'].gt(df.groupby(blocks).cumcount()).astype(int)
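A minimal sketch of what the block ids buy you (illustrative series in which the same value occurs in two separate runs): consecutive runs each get their own id, so cumcount restarts at every change of value rather than once per distinct value.
import pandas as pd

s = pd.Series([2, 2, 2, 3, 3, 2, 2, 2])
blocks = s.ne(s.shift()).cumsum()
print(blocks.tolist())                          # [1, 1, 1, 2, 2, 3, 3, 3]
print(s.groupby(blocks).cumcount().tolist())    # [0, 1, 2, 0, 1, 0, 1, 2]
print(s.gt(s.groupby(blocks).cumcount()).astype(int).tolist())
# [1, 1, 0, 1, 1, 1, 1, 0] -- the counter restarts in the second run of 2s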

low accuracies with ML models

I'm working with a breast cancer dataset. With 2 classes (0-1), training went well and the accuracy was great, but after changing to 8 classes (0-7) I get low accuracy with the classical ML algorithms, while the ANN still reaches 97%. Maybe I made a mistake, but I don't know where.
y_pred :
[5 0 3 0 3 6 1 0 2 1 7 6 7 3 0 3 6 3 7 0 7 1 5 2 5 0 3 6 5 5 7 2 0 6 6 6 3
6 5 0 0 6 6 5 3 0 5 1 6 4 0 7 6 0 5 5 5 0 0 5 7 1 6 6 7 6 0 1 7 5 6 0 6 0
3 3 6 7 7 1 0 7 0 5 5 0 6 0 0 6 1 6 5 0 0 7 0 1 6 1 0 6 0 7 0 6 0 5 0 6 3
6 7 0 6 6 0 0 0 5 7 4 6 6 2 3 5 6 0 7 7 0 5 6 0 0 0 6 1 5 0 7 4 6 0 7 3 6
5 6 6 0 2 0 1 0 7 0 1 7 0 7 7 6 6 6 7 6 6 0 6 5 1 1 7 6 6 7 0 7 0 1 6 0]
y_test:
[1 0 1 6 4 6 1 0 1 3 0 2 6 3 0 1 0 7 0 0 6 6 5 6 2 6 3 6 5 6 7 6 5 7 0 2 3
6 5 0 7 2 6 4 0 0 2 6 3 7 7 1 3 6 5 0 2 7 0 7 6 0 1 7 6 6 0 4 7 0 0 0 6 0
3 5 0 0 7 6 0 0 7 0 6 7 7 2 7 1 1 5 5 3 7 4 7 2 2 4 0 0 0 7 0 2 0 6 0 6 1
7 6 0 6 0 0 1 0 6 6 7 6 6 7 0 6 1 0 0 7 0 5 7 0 0 7 7 6 5 0 0 1 6 0 7 6 6
5 2 6 0 2 0 6 0 5 0 2 7 0 7 7 6 7 6 6 6 0 6 6 0 1 1 7 6 2 7 6 0 0 6 5 0]
I have replaced multilabel_confusion_matrix with confusion_matrix, but I'm still getting the same results: accuracy between 40% and 50%.
I'm reporting the results with cv_results.mean() * 100:
K-Nearest Neighbours: 39.62 %
Support Vector Machine: 48.09 %
Naive Bayes: 30.46 %
Decision Tree: 30.46 %
Random Forest: 52.32 %
Logistic Regression: 44.26 %
Here is the ML part:
import numpy as np
from sklearn import model_selection
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import multilabel_confusion_matrix

# Predicting the Test set results
y_pred = classifier.predict(X_test)
# argmax only makes sense when predict returns one score per class
# (e.g. the softmax output of the ANN); scikit-learn classifiers
# already return class labels from predict
y_pred = np.argmax(y_pred, axis=1)
cm = multilabel_confusion_matrix(y_test, y_pred)

models = []
models.append(('K-Nearest Neighbours', KNeighborsClassifier(n_neighbors = 5)))
models.append(('Support Vector Machine', SVC()))
models.append(('Naive Bayes', GaussianNB()))
models.append(('Decision Tree', DecisionTreeClassifier()))
models.append(('Random Forest', RandomForestClassifier(n_estimators=100)))
models.append(('Logistic Regression', LogisticRegression()))

results = []
names = []
for name, model in models:
    # note: newer scikit-learn requires shuffle=True when random_state is set
    kfold = model_selection.KFold(n_splits=10, random_state = 8)
    cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring='accuracy')

parameter is already defined in AMPL

I want to define some parameters in an AMPL file, but the software reports that the parameter is already defined when I try to run it. How can I fix this problem?
set N := 1..6;
set N_row := 1..4;
var x{i in N} >= 0, <= 1 default 0;
param alpha{N_row};
param A{N_row,N};
param P{N_row,N};
param alpha := 1 1.0 2 1.2 3 3.0 4 3.2;
param A :=
[1,*] 1 10 2 3 3 17 4 3.5 5 1.7 6 8
[2,*] 1 0.05 2 10 3 17 4 0.1 5 8 6 14
[3,*] 1 3 2 3.5 3 1.7 4 10 5 17 6 8
[4,*] 1 17 2 8 3 0.05 4 10 5 0.1 6 14;
param P :=
[1,*] 1 0.1312 2 0.1696 3 0.5569 4 0.0124 5 0.8283 6 0.5886
[2,*] 1 0.2329 2 0.4135 3 0.8307 4 0.3736 5 0.1004 6 0.9991
[3,*] 1 0.2348 2 0.1451 3 0.3522 4 0.2883 5 0.3047 6 0.6650
[4,*] 1 0.4047 2 0.8828 3 0.8732 4 0.5743 5 0.1091 6 0.0381;
minimize Obj: sum {i in N_row} (alpha[i]*exp(-1*(sum{j in N} A[i,j]*(x[j]-P[i,j])**2)));
option solver scip;
solve;
display x;
display Obj;
display alpha;
display A;
display P;
It's generally good practice to keep model and data separate, but if you really want to keep them in one file, you can use the "data" statement to switch into data mode and then "model" to switch back, like this:
set N := 1..6;
set N_row := 1..4;
var x{i in N} >= 0, <= 1 default 0;
param alpha{N_row};
param A{N_row,N};
param P{N_row,N};
data;
param alpha := 1 1.0 2 1.2 3 3.0 4 3.2;
param A :=
[1,*] 1 10 2 3 3 17 4 3.5 5 1.7 6 8
[2,*] 1 0.05 2 10 3 17 4 0.1 5 8 6 14
[3,*] 1 3 2 3.5 3 1.7 4 10 5 17 6 8
[4,*] 1 17 2 8 3 0.05 4 10 5 0.1 6 14;
param P :=
[1,*] 1 0.1312 2 0.1696 3 0.5569 4 0.0124 5 0.8283 6 0.5886
[2,*] 1 0.2329 2 0.4135 3 0.8307 4 0.3736 5 0.1004 6 0.9991
[3,*] 1 0.2348 2 0.1451 3 0.3522 4 0.2883 5 0.3047 6 0.6650
[4,*] 1 0.4047 2 0.8828 3 0.8732 4 0.5743 5 0.1091 6 0.0381;
model;
# etc.
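The reason for the error: in model mode a second "param alpha := ..." statement reads as a redefinition of the already-declared parameter, while in data mode it simply supplies values for it. If you do follow the good-practice route, put everything before the "data;" line in, say, problem.mod and the values in problem.dat (the file names are just placeholders), then load both with "model problem.mod; data problem.dat;" before solving.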

pandas: bin data into specific number of bins of specific size

I would like to bin a dataframe by the values in a single column into bins of a specific size and number.
Here is an example df:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0,10000,size=(10000, 4)), columns=list('ABCD'))
Say I want to bin by column D; I will first sort the data:
df = df.sort_values('D')
I now wish to bin so that, if the bin size is 50 and the number of bins is 100, the first 50 values go into bin 1, the next 50 into bin 2, and so on. Any values remaining after the last full bin should all go into the final bin. Is there any way of doing this?
EDIT:
Here is a sample input:
x = pd.DataFrame(np.random.randint(0,10,size=(10, 4)), columns=list('ABCD'))
And here is the expected output:
A B C D bin
0 6 8 6 5 3
1 5 4 9 1 1
2 5 1 7 4 3
3 6 3 3 3 2
4 2 5 9 3 2
5 2 5 1 3 2
6 0 1 1 0 1
7 3 9 5 8 3
8 2 4 0 1 1
9 6 4 5 6 3
As an extra aside, is it also possible to put any equal values in the same bin? So for example, say bin 1 contains the values 0, 1, 1 and bin 2 contains 1, 1, 2. Is there any way of putting the two 1 values from bin 2 into bin 1? This will create very uneven bin sizes, but that is not an issue.
It seems you need to floor-divide np.arange and assign the result to a new column:
idx = df['D'].sort_values().index
df['b'] = pd.Series(np.arange(len(df)) // 3 + 1, index = idx)
print (df)
A B C D bin b
0 6 8 6 5 3 3
1 5 4 9 1 1 1
2 5 1 7 4 3 3
3 6 3 3 3 2 2
4 2 5 9 3 2 2
5 2 5 1 3 2 2
6 0 1 1 0 1 1
7 3 9 5 8 3 4
8 2 4 0 1 1 1
9 6 4 5 6 3 3
Detail:
print (np.arange(len(df)) // 3 + 1)
[1 1 1 2 2 2 3 3 3 4]
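At the scale described in the question (bins of 50 over the 10000-row frame), the same pattern applies unchanged; a minimal sketch:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0,10000,size=(10000, 4)), columns=list('ABCD'))
idx = df['D'].sort_values().index
df['bin'] = pd.Series(np.arange(len(df)) // 50 + 1, index=idx)
print(df['bin'].value_counts().sort_index().head())   # every bin holds exactly 50 rows
Here 10000 rows divide evenly into 200 bins of 50; the EDIT below handles the case where the last bin would come up short.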
EDIT:
I created another question about the problem with the last values here:
N = 3
idx = df['D'].sort_values().index

# one possible solution, thanks Divakar: if the length is not a multiple
# of N, relabel the leftover tail with the last full group's id
def replace_irregular_groupings(a, N):
    n = len(a)
    m = N * (n // N)
    if m != n:
        a[m:] = a[m - 1]
    return a

arr = replace_irregular_groupings(np.arange(len(df)) // N + 1, N)
df['b'] = pd.Series(arr, index = idx)
print (df)
print (df)
A B C D bin b
0 6 8 6 5 3 3
1 5 4 9 1 1 1
2 5 1 7 4 3 3
3 6 3 3 3 2 2
4 2 5 9 3 2 2
5 2 5 1 3 2 2
6 0 1 1 0 1 1
7 3 9 5 8 3 3
8 2 4 0 1 1 1
9 6 4 5 6 3 3