Limitations of the Levenberg-Marquardt algorithm - optimization

I am using the Levenberg-Marquardt algorithm to minimize a non-linear function of 6 parameters. I have about 50 data points for each minimization, but I do not get sufficiently accurate results. Could the fact that my parameters differ from each other by a few orders of magnitude be that significant? If so, where should I look for a solution? If not, what limitations of the LMA have you met in your own work? (It may help me find other problems in my application.)
Many thanks for your help.
Edit: The problem I am trying to solve is to determine the best transformation T:
typedef struct
{
    double x_translation, y_translation, z_translation;
    double x_rotation, y_rotation, z_rotation;
} transform_3D;
to fit a set of 3D points to a set of 3D lines. In detail, I have the coordinates of 3D points and the equations of the corresponding 3D lines, which should pass through those points (in the ideal situation). The LMA is minimizing the sum of distances of the transformed 3D points to the corresponding 3D lines.
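To write the cost out explicitly: every line passes through the origin (see the data below), so with û_i the normalized direction of line i and p_i the corresponding point, the LMA is minimizing

E(T) = Σ_i || T(p_i) - (T(p_i) · û_i) û_i ||

where each term is the orthogonal distance from the transformed point T(p_i) to its line.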
The transform function is as follows:
cv::Point3d Geometry::transformation_3D(cv::Point3d point, transform_3D transformation)
{
    cv::Point3d p_odd, p_even;
    //rotation x
    p_odd.x = point.x;
    p_odd.y = point.y*cos(transformation.x_rotation) - point.z*sin(transformation.x_rotation);
    p_odd.z = point.y*sin(transformation.x_rotation) + point.z*cos(transformation.x_rotation);
    //rotation y
    p_even.x = p_odd.z*sin(transformation.y_rotation) + p_odd.x*cos(transformation.y_rotation);
    p_even.y = p_odd.y;
    p_even.z = p_odd.z*cos(transformation.y_rotation) - p_odd.x*sin(transformation.y_rotation);
    //rotation z
    p_odd.x = p_even.x*cos(transformation.z_rotation) - p_even.y*sin(transformation.z_rotation);
    p_odd.y = p_even.x*sin(transformation.z_rotation) + p_even.y*cos(transformation.z_rotation);
    p_odd.z = p_even.z;
    //translation
    p_even.x = p_odd.x + transformation.x_translation;
    p_even.y = p_odd.y + transformation.y_translation;
    p_even.z = p_odd.z + transformation.z_translation;
    return p_even;
}
I hope this explanation helps a bit...
Edit2:
Some example data is pasted below. The 3D lines are described by a center point and a direction vector. The center point for all lines is (0,0,0), and the 'uz' coordinate of each direction vector is equal to 1.
Set of 'ux' coordinates of the direction vectors:
-1.0986, -1.0986, -1.0986,
-1.0986, -1.0990, -1.0986,
-1.0986, -1.0986, -0.9995,
-0.9996, -0.9996, -0.9995,
-0.9995, -0.9995, -0.9996,
-0.9003, -0.9003, -0.9004,
-0.9003, -0.9003, -0.9003,
-0.9003, -0.9003, -0.8011,
-0.7020, -0.7019, -0.6028,
-0.5035, -0.5037, -0.4045,
-0.3052, -0.3053, -0.2062,
-0.1069, -0.1069, -0.1075,
-0.1070, -0.1070, -0.1069,
-0.1069, -0.1070, -0.0079,
-0.0079, -0.0079, -0.0078,
-0.0078, -0.0079, -0.0079,
0.0914, 0.0914, 0.0913,
0.0913, 0.0914, 0.0915,
0.0914, 0.0914
Set of 'uy' coordinates of the direction vectors:
-0.2032, -0.0047, 0.1936,
0.3919, 0.5901, 0.7885,
0.9869, 1.1852, -0.1040,
0.0944, 0.2927, 0.4911,
0.6894, 0.8877, 1.0860,
-0.2032, -0.0047, 0.1936,
0.3919, 0.5902, 0.7885,
0.9869, 1.1852, 1.0860,
0.9869, 1.1852, 1.0861,
0.9865, 1.1853, 1.0860,
0.9870, 1.1852, 1.0861,
-0.2032, -0.0047, 0.1937,
0.3919, 0.5902, 0.7885,
0.9869, 1.1852, -0.1039,
0.0944, 0.2927, 0.4911,
0.6894, 0.8877, 1.0860,
-0.2032, -0.0047, 0.1935,
0.3919, 0.5902, 0.7885,
0.9869, 1.1852
and the set of 3D points in (x, y, z) form:
{{0, 0, 0}, {0, 16, 0}, {0, 32, 0},
{0, 48, 0}, {0, 64, 0}, {0, 80, 0},
{0, 96, 0}, {0, 112,0}, {8, 8, 0},
{8, 24, 0}, {8, 40, 0}, {8, 56, 0},
{8, 72, 0}, {8, 88, 0}, {8, 104, 0},
{16, 0, 0}, {16, 16,0}, {16, 32, 0},
{16, 48, 0}, {16, 64, 0}, {16, 80, 0},
{16, 96, 0}, {16, 112, 0}, {24, 104, 0},
{32, 96, 0}, {32, 112, 0}, {40, 104, 0},
{48, 96, 0}, {48, 112, 0}, {56, 104, 0},
{64, 96, 0}, {64, 112, 0}, {72, 104, 0},
{80, 0, 0}, {80, 16, 0}, {80, 32, 0},
{80,48, 0}, {80, 64, 0}, {80, 80, 0},
{80, 96, 0}, {80, 112, 0}, {88, 8, 0},
{88, 24, 0}, {88, 40, 0}, {88, 56, 0},
{88, 72, 0}, {88, 88, 0}, {88, 104, 0},
{96, 0, 0}, {96, 16, 0}, {96, 32, 0},
{96, 48,0}, {96, 64, 0}, {96, 80, 0},
{96, 96, 0}, {96, 112, 0}}
This is kind of an "easy" modelled data set with very small rotations.

Well, the proper way of using Levenberg-Marquardt is that you need a good initial estimate (a "seed") for your parameters. Recall that LM is a variant of Newton-Raphson; as with such iterative algorithms, the quality of your starting point will make or break the iteration: it will either converge to what you want, converge to something completely different (not that unlikely, especially if you have a lot of parameters), or shoot off into the wild blue yonder (i.e., diverge).
In any event, it would be more helpful if you could mention the model function you're fitting, and possibly show a scatter plot of your data; it might go a long way towards finding a workable solution.
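To make the seeding point concrete, and to address the scale question directly: many LM implementations let you supply a characteristic scale per parameter, so that translations of tens of units and rotations of fractions of a milliradian are stepped sensibly. Below is a minimal, self-contained sketch using SciPy's least_squares (not your C++ code; the data is a synthetic stand-in, and the residual is the point-to-line distance you describe):

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def transform(params, pts):
    # same convention as the question: rotate about x, then y, then z, then translate
    tx, ty, tz, rx, ry, rz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

# synthetic stand-in data: 50 grid-like points and the unit directions of the
# lines through the origin on which the correctly transformed points lie
true_params = np.array([48.5, 64.0, -1.0, 1e-4, -2e-4, 3e-4])
points = np.column_stack([rng.uniform(0, 96, 50),
                          rng.uniform(0, 112, 50),
                          np.zeros(50)])
dirs = transform(true_params, points)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

def residuals(params):
    # component-wise offset of each transformed point from its line; the sum of
    # squares of these residuals is the sum of squared point-to-line distances
    p = transform(params, points)
    proj = np.sum(p * dirs, axis=1, keepdims=True) * dirs
    return (p - proj).ravel()

x0 = np.zeros(6)   # the "seed"; the closer it is to the truth, the better
fit = least_squares(residuals, x0, method="lm",
                    x_scale=[10.0, 10.0, 1.0, 1e-4, 1e-4, 1e-4])
print(fit.x, fit.cost)   # cost should drop to ~0; compare fit.x with true_params

If your LM implementation has no scaling option, you can get the same effect by reparameterizing, e.g. expressing the rotations in units (milliradians, say) that bring all six parameters to comparable magnitudes.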

I would suggest you try using a different approach to indirectly find your rotation parameters, namely to use a 4x4 affine transformation matrix to incorporate the translation and rotation parameters.
This gets rid of the nonlinearity of the sine and cosine functions (which you can figure out after the fact).
The tough part would be constraining the transformation matrix so that it does not shear or scale, which you don't want.
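For the "figure it out after the fact" part: once you have an unconstrained linear (affine) estimate, a common trick is to project its 3x3 part onto the nearest pure rotation with an SVD (orthogonal Procrustes / polar decomposition). A small numpy sketch, with a hypothetical slightly-sheared matrix standing in for the fitted block:

import numpy as np

def nearest_rotation(A):
    # closest rotation matrix (orthogonal, det = +1) to A in the Frobenius norm
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:   # guard against an improper reflection
        U[:, -1] *= -1
        R = U @ Vt
    return R

# hypothetical fitted 3x3 block: nearly a rotation, with a little shear/scale
A = np.array([[1.02, -0.01, 0.00],
              [0.02,  0.99, 0.03],
              [0.00, -0.03, 1.01]])
R = nearest_rotation(A)
print(R, np.linalg.det(R))   # det(R) == 1 up to rounding

The Euler angles (and hence the sines and cosines) can then be read back out of R if that parameterization is still needed.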

Here you have your problem modeled and running with Mathematica.
I used the "Levenberg-Marquardt" method.
This is why I asked for your data. With MY data, YOUR problems are always going to be easier:)
xnew[x_, y_, z_] :=
  RotationMatrix[rx, {1, 0, 0}].RotationMatrix[ry, {0, 1, 0}].
    RotationMatrix[rz, {0, 0, 1}].{x, y, z} + {tx, ty, tz};

(* Generate sample data *)
(* Angles 1/2, 1/3, 1/5 *)
(* Translation -> {1, 2, 3} *)
(* Uniform noise in [-0.05, 0.05] *)
data = Table[{{x, y, z},
    RotationMatrix[1/2, {1, 0, 0}].
      RotationMatrix[1/3, {0, 1, 0}].
      RotationMatrix[1/5, {0, 0, 1}].{x, y, z} + {1, 2, 3} + RandomReal[{-.05, .05}, 3]},
   {x, 0, 1, .1}, {y, 0, 1, .1}, {z, 0, 1, .1}];
data = Flatten[data, 2];

(* Now find the parameters *)
FindMinimum[
  Sum[SquaredEuclideanDistance[xnew[i[[1]] /. List -> Sequence], i[[2]]], {i, data}],
  {rx, ry, rz, tx, ty, tz}, Method -> "LevenbergMarquardt"]
Out:
{3.2423, {rx -> 0.500566, ry -> 0.334012, rz -> 0.199902,
tx -> 0.99985, ty -> 1.99939, tz -> 3.00021}}
(Within 1/1000 of the real values)
Edit
I worked a little with your data.
The problem is that your system is very badly conditioned. You need much more data to accurately calculate such small rotations.
These are the results I got:
Rotations in degrees:
rx = 179.99999999999999999999984968493536659553226696793
ry = 180.00000000000000000000006934755799995159952661222
rz = 180.0006286861217378980724139120849587855611645627
Translations
tx = 48.503663696727576867196234527227830090575281353092
ty = 63.974139455057300403798198525151849767949596684232
tz = -0.99999999999999999999997957276716543927459921348549
I should calculate the errors, but I've no time right now.
BTW, rz = Pi + 0.000011 (in radians)
HTH!

Well, I used ceres-solver to solve this, but I did make a modification to your data. Instead of "uz=1.0", I used "uz=0.0", which makes this entirely a 2D data-fitting problem.
I got the following results.
trans: -88.6384, -16.3879, 0
rot: 0, 0, -6.97813e-05
After getting these results, I manually calculated the sum of orthogonal distances of the transformed points to the corresponding lines and got 0.0280452.
#include <cmath>

// Normalize a 3-vector in place (helper assumed by the functor's constructor).
inline void normalize(double v[3]) {
    const double len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}

// Forward declaration so the functor can call it.
template <typename T>
void xform(const T *x, const T *inPoint, T *outPoint3);

struct CostFunctor {
    CostFunctor(const double p[3], double ux, double uy) {
        p_[0] = p[0]; p_[1] = p[1]; p_[2] = p[2];
        n_[0] = ux; n_[1] = uy;
        n_[2] = 0.0;
        normalize(n_);
    }

    template <typename T>
    bool operator()(const T* const x, T* residual) const {
        T pDash[3];
        T pIn[3];
        T temp[3];
        pIn[0] = T(p_[0]);
        pIn[1] = T(p_[1]);
        pIn[2] = T(p_[2]);
        // transform the input point p_ to pDash
        xform(x, &pIn[0], &pDash[0]);
        // find dot(pDash, n), where n is the direction of the line
        T pDashDotN = T(pDash[0]) * T(n_[0]) + T(pDash[1]) * T(n_[1]) + T(pDash[2]) * T(n_[2]);
        // projection of pDash along the line
        temp[0] = pDashDotN * n_[0]; temp[1] = pDashDotN * n_[1]; temp[2] = pDashDotN * n_[2];
        // orthogonal vector from the projection to the point
        temp[0] = pDash[0] - temp[0]; temp[1] = pDash[1] - temp[1]; temp[2] = pDash[2] - temp[2];
        // squared error
        residual[0] = temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2];
        return true;
    }

    // untransformed point
    double p_[3];
    double ux_;
    double uy_;
    // direction of the line
    double n_[3];
};

template <typename T>
void xform(const T *x, const T *inPoint, T *outPoint3) {
    T xTheta = x[3];
    T pOdd[3], pEven[3];
    pOdd[0] = inPoint[0];
    pOdd[1] = inPoint[1] * cos(xTheta) + inPoint[2] * sin(xTheta);
    pOdd[2] = -inPoint[1] * sin(xTheta) + inPoint[2] * cos(xTheta);
    T yTheta = x[4];
    pEven[0] = pOdd[0] * cos(yTheta) + pOdd[2] * sin(yTheta);
    pEven[1] = pOdd[1];
    pEven[2] = -pOdd[0] * sin(yTheta) + pOdd[2] * cos(yTheta);
    T zTheta = x[5];
    pOdd[0] = pEven[0] * cos(zTheta) - pEven[1] * sin(zTheta);
    pOdd[1] = pEven[0] * sin(zTheta) + pEven[1] * cos(zTheta);
    pOdd[2] = pEven[2];
    T xTrans = x[0], yTrans = x[1], zTrans = x[2];
    pOdd[0] += xTrans;
    pOdd[1] += yTrans;
    pOdd[2] += zTrans;
    outPoint3[0] = pOdd[0];
    outPoint3[1] = pOdd[1];
    outPoint3[2] = pOdd[2];
}
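For reference, the residual the functor computes is just the squared orthogonal distance from the transformed point to its line. Spelled out in plain numpy for one hypothetical point and direction (handy for sanity-checking results outside the solver):

import numpy as np

# hypothetical transformed point and (unnormalized) line direction through the origin
p_dash = np.array([12.0, 7.5, 0.3])
n = np.array([-1.0986, -0.2032, 0.0])
n = n / np.linalg.norm(n)        # normalize, as the functor's constructor does

proj = np.dot(p_dash, n) * n     # projection of the point onto the line
offset = p_dash - proj           # orthogonal vector from the line to the point
print(np.dot(offset, offset))    # squared distance, i.e. the residual above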

Related

How to dot product (1, 10^13) and (10^13, 1) scipy sparse matrices

Basically what the title says. The two matrices are mostly zeros: the first is 1 x 9999999999999 and the second is 9999999999999 x 1.
When I try to do a dot product I get this:
MemoryError: Unable to allocate 72.8 TiB for an array with shape (10000000000000,) and data type int64
Full traceback:
In [31]: imputed.dot(s)
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-31-670cfc69d4cf> in <module>
----> 1 imputed.dot(s)
~/.local/lib/python3.8/site-packages/scipy/sparse/base.py in dot(self, other)
357
358 """
--> 359 return self * other
360
361 def power(self, n, dtype=None):
~/.local/lib/python3.8/site-packages/scipy/sparse/base.py in __mul__(self, other)
478 if self.shape[1] != other.shape[0]:
479 raise ValueError('dimension mismatch')
--> 480 return self._mul_sparse_matrix(other)
481
482 # If it's a list or whatever, treat it like a matrix
~/.local/lib/python3.8/site-packages/scipy/sparse/compressed.py in _mul_sparse_matrix(self, other)
499
500 major_axis = self._swap((M, N))[0]
--> 501 other = self.__class__(other) # convert to this format
502
503 idx_dtype = get_index_dtype((self.indptr, self.indices,
~/.local/lib/python3.8/site-packages/scipy/sparse/compressed.py in __init__(self, arg1, shape, dtype, copy)
32 arg1 = arg1.copy()
33 else:
---> 34 arg1 = arg1.asformat(self.format)
35 self._set_self(arg1)
36
~/.local/lib/python3.8/site-packages/scipy/sparse/base.py in asformat(self, format, copy)
320 # Forward the copy kwarg, if it's accepted.
321 try:
--> 322 return convert_method(copy=copy)
323 except TypeError:
324 return convert_method()
~/.local/lib/python3.8/site-packages/scipy/sparse/csc.py in tocsr(self, copy)
135 idx_dtype = get_index_dtype((self.indptr, self.indices),
136 maxval=max(self.nnz, N))
--> 137 indptr = np.empty(M + 1, dtype=idx_dtype)
138 indices = np.empty(self.nnz, dtype=idx_dtype)
139 data = np.empty(self.nnz, dtype=upcast(self.dtype))
MemoryError: Unable to allocate 72.8 TiB for an array with shape (10000000000000,) and data type int64
It seems that scipy is trying to create a temporary array.
I am using the .dot method that scipy provides.
I am also open to non-scipy solutions.
Thanks!
In [105]: from scipy import sparse
If I make a (100,1) csr matrix:
In [106]: A = sparse.random(100,1,format='csr')
In [107]: A
Out[107]:
<100x1 sparse matrix of type '<class 'numpy.float64'>'
with 1 stored elements in Compressed Sparse Row format>
The data and indices are:
In [109]: A.data
Out[109]: array([0.19060481])
In [110]: A.indices
Out[110]: array([0], dtype=int32)
In [112]: A.indptr
Out[112]:
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)
So even with only 1 nonzero term, one array (indptr) is large: 101 elements.
On the other hand, the csc format for the same array has much smaller storage. But a csc with (1,100) shape will look like the csr.
In [113]: Ac = A.tocsc()
In [114]: Ac.indptr
Out[114]: array([0, 1], dtype=int32)
In [115]: Ac.indices
Out[115]: array([88], dtype=int32)
Math, especially matrix products, is done with csr/csc formats, so it may be hard to avoid this ~80 TB memory use.
Looking at the traceback I see that it's trying to convert other to the format that matches self.
So with A.dot(B), where A is (1,N) csr (the small shape) and B is (N,1) csc (also the small shape), B.tocsr() requires the large (N+1,)-shaped indptr.
Let's try an alternative to dot
First 2 matrices:
In [122]: A = sparse.random(1,100, .2,format='csr')
In [123]: B = sparse.random(100,1, .2,format='csc')
In [124]: A
Out[124]:
<1x100 sparse matrix of type '<class 'numpy.float64'>'
with 20 stored elements in Compressed Sparse Row format>
In [125]: B
Out[125]:
<100x1 sparse matrix of type '<class 'numpy.float64'>'
with 20 stored elements in Compressed Sparse Column format>
In [126]: A@B
Out[126]:
<1x1 sparse matrix of type '<class 'numpy.float64'>'
with 1 stored elements in Compressed Sparse Row format>
In [127]: _.A
Out[127]: array([[1.33661021]])
Their nonzero element indices. Only the ones that match matter.
In [128]: A.indices, B.indices
Out[128]:
(array([16, 20, 23, 28, 30, 37, 39, 40, 43, 49, 54, 59, 61, 63, 67, 70, 74,
91, 94, 99], dtype=int32),
array([ 5, 8, 15, 25, 34, 35, 40, 46, 47, 51, 53, 60, 68, 70, 75, 81, 87,
90, 91, 94], dtype=int32))
equality matrix:
In [129]: mask = A.indices[:,None]==B.indices
In [132]: np.nonzero(mask.any(axis=0))
Out[132]: (array([ 6, 13, 18, 19]),)
In [133]: np.nonzero(mask.any(axis=1))
Out[133]: (array([ 7, 15, 17, 18]),)
The matching indices:
In [139]: A.indices[Out[133]]
Out[139]: array([40, 70, 91, 94], dtype=int32)
In [140]: B.indices[Out[132]]
Out[140]: array([40, 70, 91, 94], dtype=int32)
The sum of the products of the corresponding data values matches Out[127]:
In [141]: (A.data[Out[133]]*B.data[Out[132]]).sum()
Out[141]: 1.3366102138511582
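Packaged up, the same index-matching idea gives a tiny function that computes the (1,N)·(N,1) product straight from the data and indices arrays, with no format conversion at all (a sketch assuming A is a (1,N) csr and B is an (N,1) csc):

import numpy as np
from scipy import sparse

def dot_1xN_Nx1(A, B):
    # columns of A's nonzeros vs. rows of B's nonzeros; only matches contribute
    common, ia, ib = np.intersect1d(A.indices, B.indices, return_indices=True)
    return np.dot(A.data[ia], B.data[ib])

A = sparse.random(1, 100, .2, format='csr')
B = sparse.random(100, 1, .2, format='csc')
print(dot_1xN_Nx1(A, B), (A @ B).toarray()[0, 0])   # the two values should agree

For the 1 x 10^13 case this only ever touches the stored nonzeros, so the huge (N+1,) indptr is never built.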

Fuzzy logic controller - RuntimeError: Unable to resolve rule execution order

I am new to this concept and I have been trying to implement a fuzzy logic controller for a shower. The input is the position of the knob, from extreme left to extreme right, and the output is the temperature, from very cold to very hot. I am encountering this runtime error in the rules. Below is my code:
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl
pos = ctrl.Consequent(np.arange(0, 180, 1), 'pos')
temp = ctrl.Consequent(np.arange(0, 100, 1), 'temp')
pos['EL'] = fuzz.trimf(pos.universe, [0, 0, 45])
pos['L'] = fuzz.trimf(pos.universe, [0, 45, 90])
pos['C'] = fuzz.trimf(pos.universe, [45, 90, 135])
pos['R'] = fuzz.trimf(pos.universe, [90, 135, 180])
pos['ER'] = fuzz.trimf(pos.universe, [135, 180, 180])
temp['VC'] = fuzz.trimf(temp.universe, [0, 0, 10])
temp['C'] = fuzz.trimf(temp.universe, [0, 10, 40])
temp['W'] = fuzz.trimf(temp.universe, [10, 40, 80])
temp['H'] = fuzz.trimf(temp.universe, [40, 80, 100])
temp['VH'] = fuzz.trimf(temp.universe, [80, 100, 100])
rule1 = ctrl.Rule(pos['EL'], temp['VC'])
rule2 = ctrl.Rule(pos['L'], temp['C'])
rule3 = ctrl.Rule(pos['C'], temp['W'])
rule4 = ctrl.Rule(pos['R'], temp['H'])
rule5 = ctrl.Rule(pos['ER'], temp['VH'])
temp_ctrl = ctrl.ControlSystem([rule1, rule2, rule3, rule4, rule5])
temprature = ctrl.ControlSystemSimulation(temp_ctrl)
RuntimeError: Unable to resolve rule execution order. The most likely reason is two or more rules that depend on each other. Please check the rule graph for loops.
I think you might want this
pos = ctrl.Consequent(np.arange(0, 180, 1), 'pos')
to be this
pos = ctrl.Antecedent(np.arange(0, 180, 1), 'pos')
so your rules will read something like
if antecedent then consequent
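Putting that together (your membership functions and rules unchanged), the corrected script runs; the knob position of 30 below is just an arbitrary test input:

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

pos = ctrl.Antecedent(np.arange(0, 180, 1), 'pos')    # input: knob position
temp = ctrl.Consequent(np.arange(0, 100, 1), 'temp')  # output: temperature

pos['EL'] = fuzz.trimf(pos.universe, [0, 0, 45])
pos['L'] = fuzz.trimf(pos.universe, [0, 45, 90])
pos['C'] = fuzz.trimf(pos.universe, [45, 90, 135])
pos['R'] = fuzz.trimf(pos.universe, [90, 135, 180])
pos['ER'] = fuzz.trimf(pos.universe, [135, 180, 180])

temp['VC'] = fuzz.trimf(temp.universe, [0, 0, 10])
temp['C'] = fuzz.trimf(temp.universe, [0, 10, 40])
temp['W'] = fuzz.trimf(temp.universe, [10, 40, 80])
temp['H'] = fuzz.trimf(temp.universe, [40, 80, 100])
temp['VH'] = fuzz.trimf(temp.universe, [80, 100, 100])

rules = [ctrl.Rule(pos['EL'], temp['VC']),
         ctrl.Rule(pos['L'], temp['C']),
         ctrl.Rule(pos['C'], temp['W']),
         ctrl.Rule(pos['R'], temp['H']),
         ctrl.Rule(pos['ER'], temp['VH'])]

temp_ctrl = ctrl.ControlSystem(rules)
temperature = ctrl.ControlSystemSimulation(temp_ctrl)
temperature.input['pos'] = 30      # hypothetical knob position, in degrees
temperature.compute()
print(temperature.output['temp'])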

Extract PhysicalFace as 2D mesh in fipy

I created a mesh with Gmsh (a surface with a hole, which I then extruded). Now I would like to plot the model in individual slices after a simulation, with e.g. the MatplotlibViewer (Mayavi does not work on either of my computers). I had hoped that it would be possible to define a new mesh using mesh.physicalFaces, but if that is possible, I have not figured it out yet. My second attempt was to build the mesh again with Gmsh, only up to the Extrude command, but that mesh does not correspond to the one of the 3D version. Can somebody give me a clue on this? I would also welcome any other approach to the visualization.
I am working on Win10, Fipy 3.1.3, Python 3.6
import numpy as np
from fipy import *
#%%
def func_mesh():
    mesh = Gmsh3D('''
Geometry.OCCAutoFix = 0;
SetFactory("OpenCASCADE");
x = 1.;
bseg = 0.08;
bs= bseg*x;
ls = 2.1;
cl = 0.01;
radius = 0.006;
// Exterior (bounding box) of mesh
Point(1) = {0, 0, 0, cl};
Point(2) = {0, bs, 0, cl};
Point(4) = { bs, 0, 0, cl};
Point(3) = {bs, bs, 0, cl};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop (21) = {1,2,3,4};
//Circle
Point(5) = {bseg/2 - radius, bseg/2, 0, cl};
Point(6) = {bseg/2, bseg/2 + radius, 0, cl};
Point(7) = { bseg/2 + radius, bseg/2, 0, cl};
Point(8) = {bseg/2, bseg/2 - radius, 0, cl};
Point(9) = {bseg/2, bseg/2, 0, cl};
Circle(10) = {5,9,6};
Circle(11) = {6,9,7};
Circle(12) = {7,9,8};
Circle(13) = {8,9,5};
Line Loop(22) = {10,11,12,13};
Plane Surface(40) = {22}; //cycl
Plane Surface(15) = {21, 22}; //Surface with a hole
id[] = Extrude {0, 0, ls} {Surface{15}; Layers{210}; Recombine;};
Surface Loop(2) = {46, 45, 48, 47, 49, 41, 44, 43, 42, 15};
Physical Volume("Vol") = {id[]};
Physical Surface("surf_ges") = {41, 42, 43, 44, 49, 47, 45, 48, 46, 15};
Physical Surface("HX") = {45, 46, 48, 47};
Physical Surface("Extr") = {15};
    ''')
    return mesh
mesh = func_mesh()
x,y,z = mesh.cellCenters
X,Y,Z = mesh.faceCenters
tS = CellVariable(name="storage",
                  mesh=mesh,
                  value=367.,
                  hasOld=True)
submesh = mesh.physicalFaces['Extr']
xsub, ysub = submesh.cellCenters
tSslice = CellVariable(name = 'tSsclice',
                       mesh = submesh,
                       value = tS[z== z[0]])
viewer = MatplotlibViewer(vars = tSslice)
The error message for this attempt is: AttributeError: 'binOp' object has no attribute 'cellCenters'.
If I redefine the mesh only up to the Extrude command, I get "ValueError: too many values to unpack (expected 2)" because of the shape of tSslice.
I am grateful for any help.
submesh is not a Mesh; it's a mask identifying which faces of mesh are included in the surface Extr.
FiPy has no facility for extracting a mesh out of another mesh. It should be feasible to create a Mesh2D using the submesh mask, mesh.vertexCoords, and mesh.faceVertexIDs, but it's not trivial.
In theory, you could invoke Gmsh2D with everything up to Plane Surface(15) = {21, 22}; //Surface with a hole, but I find that doesn't generate the same number of elements as your 3D slice at z == z[0].
Ahah, I see the issue. I thought the Extrude operation resulted in prismatic cells, but it does not. The cells are tetrahedral. Since the cells of mesh do not all have the same tetrahedral geometry, the cells that have their bases on Extr are not all guaranteed to have their centers at z == z[0]. A better way is to use FiPy's CellVariable interpolation to extract the values of tS at the coordinates of tSslice:
from fipy import *
geo = '''
Geometry.OCCAutoFix = 0;
SetFactory("OpenCASCADE");
x = 1.;
bseg = 0.08;
bs= bseg*x;
ls = 2.1;
cl = 0.01;
radius = 0.006;
// Exterior (bounding box) of mesh
Point(1) = {0, 0, 0, cl};
Point(2) = {0, bs, 0, cl};
Point(4) = { bs, 0, 0, cl};
Point(3) = {bs, bs, 0, cl};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop (21) = {1,2,3,4};
//Circle
Point(5) = {bseg/2 - radius, bseg/2, 0, cl};
Point(6) = {bseg/2, bseg/2 + radius, 0, cl};
Point(7) = { bseg/2 + radius, bseg/2, 0, cl};
Point(8) = {bseg/2, bseg/2 - radius, 0, cl};
Point(9) = {bseg/2, bseg/2, 0, cl};
Circle(10) = {5,9,6};
Circle(11) = {6,9,7};
Circle(12) = {7,9,8};
Circle(13) = {8,9,5};
Line Loop(22) = {10,11,12,13};
Plane Surface(40) = {22}; //cycl
Plane Surface(15) = {21, 22}; //Surface with a hole
id[] = Extrude {0, 0, ls} {Surface{15}; Layers{210}; Recombine;};
Surface Loop(2) = {46, 45, 48, 47, 49, 41, 44, 43, 42, 15};
Physical Volume("Vol") = {id[]};
Physical Surface("Extr") = {15};
'''
mesh = Gmsh3D(geo + '''
Physical Surface("surf_ges") = {41, 42, 43, 44, 49, 47, 45, 48, 46, 15};
Physical Surface("HX") = {45, 46, 48, 47};
'''
)
submesh = Gmsh2D(geo)
x,y,z = mesh.cellCenters
X,Y = submesh.cellCenters[..., submesh.physicalCells['Extr']]
Z = numerix.ones(X.shape) * z[0]
tS = CellVariable(name="storage",
                  mesh=mesh,
                  value=mesh.x * mesh.y * mesh.z,
                  hasOld=True)
tSslice = CellVariable(name = 'tSsclice',
                       mesh = submesh)
# interpolate values of tS at positions of tSslice
tSslice[..., submesh.physicalCells['Extr']] = tS(numerix.vstack([X, Y, Z]))
viewer = MatplotlibViewer(vars = tSslice)
Here, I use the same .geo script to define both mesh and submesh. I add the surf_ges and HX physical surfaces only to the definition of mesh, because otherwise all of these faces will be imported into submesh as well, although with an effective z value of 0, so they obscure the faces you're interested in.
Frankly, I think a better way to achieve a slice through 3D data like this is to use either a customized MayaviClient (see the Cahn-Hilliard sphere and sphereDaemon.py for an example) or export with a VTKViewer and then render your data slices with a tool like ParaView, VisIt, or Mayavi.

OpenGLES adding a projection

I've started to learn OpenGL ES and currently I'm reading this TUTORIAL.
I have reached the paragraph "Adding a Projection" and I'm stuck there:
// Add to render, right before the call to glViewport
CC3GLMatrix *projection = [CC3GLMatrix matrix];
float h = 4.0f * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-2 andRight:2 andBottom:-h/2 andTop:h/2 andNear:4 andFar:10];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);
// Modify vertices so they are within projection near/far planes
const Vertex Vertices[] = {
{{1, -1, -7}, {1, 0, 0, 1}},
{{1, 1, -7}, {0, 1, 0, 1}},
{{-1, 1, -7}, {0, 0, 1, 1}},
{{-1, -1, -7}, {0, 0, 0, 1}}
};
The author uses some values in populateFromFrustumLeft... and doesn't explain them. I want to understand the logic behind choosing these values so that I can use this function in the future.
Please help me understand the logic!

How is a negative image figured out?

The program I'm working on does grayscale conversion, builds an alpha mask, and splits the color channels.
How do you invert a picture?
The above are done by looking at the image pixel by pixel.
I'm using VB 2005 .NET; for the sake of speed, are there other ways of doing those things using Drawing.Graphics?
See this. You'll basically have to use either a ColorMatrix:
new float[][]
{
new float[] {-1, 0, 0, 0, 0},
new float[] {0, -1, 0, 0, 0},
new float[] {0, 0, -1, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {1, 1, 1, 0, 1}
}
or unsafe pixel processing (not sure if that particular approach can be done in VB.NET).
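The arithmetic behind a negative is just 255 minus each channel value; the ColorMatrix above encodes exactly that on the 0..1 scale (scale by -1, then add 1). A quick numpy illustration of the per-pixel math, independent of GDI+:

import numpy as np

# hypothetical 8-bit RGB image as a (height, width, 3) array
img = np.array([[[10, 200, 30],
                 [0, 0, 255]]], dtype=np.uint8)

negative = 255 - img   # per-channel inversion: every value v becomes 255 - v
print(negative)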