OpenGL ES adding a projection - Objective-C

I've started to learn OpenGL ES and I'm currently reading this TUTORIAL.
I have reached the paragraph "Adding a Projection" and I'm stuck there:
// Add to render, right before the call to glViewport
CC3GLMatrix *projection = [CC3GLMatrix matrix];
float h = 4.0f * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-2 andRight:2 andBottom:-h/2 andTop:h/2 andNear:4 andFar:10];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);
// Modify vertices so they are within projection near/far planes
const Vertex Vertices[] = {
{{1, -1, -7}, {1, 0, 0, 1}},
{{1, 1, -7}, {0, 1, 0, 1}},
{{-1, 1, -7}, {0, 0, 1, 1}},
{{-1, -1, -7}, {0, 0, 0, 1}}
};
The author passes several values to populateFromFrustumLeft... without explaining them. I want to understand the logic behind choosing these values so that I can use this function in the future.
Please help me understand the logic!
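For reference (this is not from the tutorial, just the standard OpenGL convention, which CC3GLMatrix presumably follows): left/right and bottom/top are the x/y extents of the near clipping plane in eye space, and near/far are positive distances from the eye along -z, so the geometry has to end up between z = -near and z = -far (the vertices above sit at z = -7, between -4 and -10). The line h = 4.0 * height / width just keeps the frustum's aspect ratio equal to the view's: with right - left = 4 and top - bottom = h, (right - left) / (top - bottom) = width / height. A sketch of the matrix such a call would build (same layout as glFrustum, column-major; buildFrustum is my own helper name, not part of CC3GLMatrix):
void buildFrustum(float m[16], float l, float r, float b, float t, float n, float f)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f * n / (r - l);        // x scale
    m[5]  =  2.0f * n / (t - b);        // y scale
    m[8]  =  (r + l) / (r - l);         // horizontal off-center shift
    m[9]  =  (t + b) / (t - b);         // vertical off-center shift
    m[10] = -(f + n) / (f - n);         // depth remapping
    m[11] = -1.0f;                      // perspective divide by -z
    m[14] = -2.0f * f * n / (f - n);
}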

Extract PhysicalFace as 2D mesh in fipy

I created a mesh with Gmsh (a surface with a hole, then extruded it). Now I would like to plot the model in individual slices after a simulation, e.g. with the MatplotlibViewer (Mayavi does not work on either of my computers). I had hoped it would be possible to define a new mesh using mesh.physicalFaces, but if that is possible, I have not figured out how yet. My second attempt was to build the mesh again with Gmsh, up to the Extrude command, but that mesh does not correspond to the one of the 3D version. Can somebody give me a clue on this? I would also be happy with a different approach to the visualization.
I am working on Win10, FiPy 3.1.3, Python 3.6
import numpy as np
from fipy import *
#%%
def func_mesh():
    mesh = Gmsh3D('''
Geometry.OCCAutoFix = 0;
SetFactory("OpenCASCADE");
x = 1.;
bseg = 0.08;
bs= bseg*x;
ls = 2.1;
cl = 0.01;
radius = 0.006;
// Exterior (bounding box) of mesh
Point(1) = {0, 0, 0, cl};
Point(2) = {0, bs, 0, cl};
Point(4) = { bs, 0, 0, cl};
Point(3) = {bs, bs, 0, cl};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop (21) = {1,2,3,4};
//Circle
Point(5) = {bseg/2 - radius, bseg/2, 0, cl};
Point(6) = {bseg/2, bseg/2 + radius, 0, cl};
Point(7) = { bseg/2 + radius, bseg/2, 0, cl};
Point(8) = {bseg/2, bseg/2 - radius, 0, cl};
Point(9) = {bseg/2, bseg/2, 0, cl};
Circle(10) = {5,9,6};
Circle(11) = {6,9,7};
Circle(12) = {7,9,8};
Circle(13) = {8,9,5};
Line Loop(22) = {10,11,12,13};
Plane Surface(40) = {22}; //cycl
Plane Surface(15) = {21, 22}; //Surface with a hole
id[] = Extrude {0, 0, ls} {Surface{15}; Layers{210}; Recombine;};
Surface Loop(2) = {46, 45, 48, 47, 49, 41, 44, 43, 42, 15};
Physical Volume("Vol") = {id[]};
Physical Surface("surf_ges") = {41, 42, 43, 44, 49, 47, 45, 48, 46, 15};
Physical Surface("HX") = {45, 46, 48, 47};
Physical Surface("Extr") = {15};
    ''')
    return mesh
mesh = func_mesh()
x,y,z = mesh.cellCenters
X,Y,Z = mesh.faceCenters
tS = CellVariable(name="storage",
mesh=mesh,
value=367.,
hasOld=True)
submesh = mesh.physicalFaces['Extr']
xsub, ysub = submesh.cellCenters
tSslice = CellVariable(name = 'tSsclice',
mesh = submesh,
value = tS[z== z[0]])
viewer = MatplotlibViewer(vars = tSslice)
The error message for this attempt is: AttributeError: 'binOp' object has no attribute 'cellCenters'.
If I redefine the mesh only up to the Extrude command I get:
"ValueError: too many values to unpack (expected 2)" because of the shape of tSslice.
I am grateful for any help
submesh is not a Mesh; it's a mask identifying which faces of mesh are included in the surface Extr.
FiPy has no facility for extracting a mesh out of another mesh. It should be feasible to create a Mesh2D using the submesh mask, mesh.vertexCoords, and mesh.faceVertexIDs, but it's not trivial.
In theory, you could invoke Gmsh2D with everything up to Plane Surface(15) = {21, 22}; //Surface with a hole, but I find that doesn't generate the same number of elements as your 3D slice at z == z[0].
Ahah, I see the issue. I thought the Extrude operation resulted in prismatic cells, but it does not. The cells are tetrahedral. Since the cells of mesh do not all have the same tetrahedral geometry, the cells that have their bases on Extr are not all guaranteed to have their centers at z == z[0]. A better way is to use FiPy's CellVariable interpolation to extract the values of tS at the coordinates of tSslice:
from fipy import *
geo = '''
Geometry.OCCAutoFix = 0;
SetFactory("OpenCASCADE");
x = 1.;
bseg = 0.08;
bs= bseg*x;
ls = 2.1;
cl = 0.01;
radius = 0.006;
// Exterior (bounding box) of mesh
Point(1) = {0, 0, 0, cl};
Point(2) = {0, bs, 0, cl};
Point(4) = { bs, 0, 0, cl};
Point(3) = {bs, bs, 0, cl};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop (21) = {1,2,3,4};
//Circle
Point(5) = {bseg/2 - radius, bseg/2, 0, cl};
Point(6) = {bseg/2, bseg/2 + radius, 0, cl};
Point(7) = { bseg/2 + radius, bseg/2, 0, cl};
Point(8) = {bseg/2, bseg/2 - radius, 0, cl};
Point(9) = {bseg/2, bseg/2, 0, cl};
Circle(10) = {5,9,6};
Circle(11) = {6,9,7};
Circle(12) = {7,9,8};
Circle(13) = {8,9,5};
Line Loop(22) = {10,11,12,13};
Plane Surface(40) = {22}; //cycl
Plane Surface(15) = {21, 22}; //Surface with a hole
id[] = Extrude {0, 0, ls} {Surface{15}; Layers{210}; Recombine;};
Surface Loop(2) = {46, 45, 48, 47, 49, 41, 44, 43, 42, 15};
Physical Volume("Vol") = {id[]};
Physical Surface("Extr") = {15};
'''
mesh = Gmsh3D(geo + '''
Physical Surface("surf_ges") = {41, 42, 43, 44, 49, 47, 45, 48, 46, 15};
Physical Surface("HX") = {45, 46, 48, 47};
'''
)
submesh = Gmsh2D(geo)
x,y,z = mesh.cellCenters
X,Y = submesh.cellCenters[..., submesh.physicalCells['Extr']]
Z = numerix.ones(X.shape) * z[0]
tS = CellVariable(name="storage",
mesh=mesh,
value=mesh.x * mesh.y * mesh.z,
hasOld=True)
tSslice = CellVariable(name = 'tSsclice',
mesh = submesh)
# interpolate values of tS at positions of tSslice
tSslice[..., submesh.physicalCells['Extr']] = tS(numerix.vstack([X, Y, Z]))
viewer = MatplotlibViewer(vars = tSslice)
Here, I use the same .geo script to define both mesh and submesh. I add the surf_ges and HX physical surfaces only to the definition of mesh, because otherwise all of these faces will be imported into submesh as well, although with an effective z value of 0, so they obscure the faces you're interested in.
Frankly, I think a better way to achieve a slice through 3D data like this is to use either a customized MayaviClient (see the Cahn-Hilliard sphere and sphereDaemon.py for an example) or export with a VTKViewer and then render your data slices with a tool like ParaView, VisIt, or Mayavi.

Enabled the depth test, but it doesn't work?

m_Program = new GLProgram;
m_Program->initWithFilenames("shader/gldemo.vsh", "shader/gldemo.fsh");
m_Program->link();
//set uniform locations
m_Program->updateUniforms();
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
//create vbo
glGenBuffers(1, &vertexVBO);
glBindBuffer(GL_ARRAY_BUFFER, vertexVBO);
auto size = Director::getInstance()->getVisibleSize();
float vertercies[] = {
0,0,0.5, //first point
-1, 1,0.5,
-1, -1,0.5,
-0.3,0,0.8,
1, 1,0.8,
1, -1,0.8};
float color[] = { 1, 0,0, 1, 1,0,0, 1, 1, 0, 0, 1,
0, 1,0, 1, 0,1,0, 1, 0, 1, 0, 1};
glBufferData(GL_ARRAY_BUFFER, sizeof(vertercies), vertercies, GL_STATIC_DRAW);
//vertex attribute "a_position"
GLuint positionLocation = glGetAttribLocation(m_Program->getProgram(), "a_position");
glEnableVertexAttribArray(positionLocation);
glVertexAttribPointer(positionLocation, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);
//set for color
glGenBuffers(1, &colorVBO);
glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(color), color, GL_STATIC_DRAW);
GLuint colorLocation = glGetAttribLocation(m_Program->getProgram(), "a_color");
glEnableVertexAttribArray(colorLocation);
glVertexAttribPointer(colorLocation, 4, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);
//for safety
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//show
glEnable(GL_DEPTH_TEST);
I enable the depth test, but it doesn't work?
Everything is displayed normally, but once I enable the depth test nothing changes: the red triangle's Z is set to 0.5 and the blue one's to 0.8, yet their rendering order stays the same.
Multiple things:
Enabling depth testing does not mean that depth testing is configured correctly. You have to set it up.
You have to make sure there is a depth buffer, and that it gets cleared every frame (see the sketch below the GL calls).
You have to enable depth writing and depth testing when rendering your primitives.
With OpenGL, you typically need to call these functions to get depth testing working:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
and to enable depth writing:
glDepthMask(GL_TRUE);
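In practice the depth-buffer point usually looks like this. A minimal sketch in plain GL, not specific to cocos2d-x (how you request a depth buffer for the context itself depends on how your GL view/context is created):
// The context must actually have a depth buffer attached, and the buffer
// must be cleared every frame before the geometry is drawn.
glClearDepthf(1.0f);                                  // GLES; use glClearDepth on desktop GL
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // clear color AND depth each frame
glEnable(GL_DEPTH_TEST);                              // turn the test on
glDepthFunc(GL_LEQUAL);                               // keep fragments that are nearer or equal
glDepthMask(GL_TRUE);                                 // allow writes to the depth buffer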
And just searching for "opengl enable depth testing" on Google gives plenty of useful results; you should look at the many resources it turns up as well.

Image center in the calibration matrix

I have a camera and a 3D object, and I have calculated the camera matrix as given in this. The image captured by the camera has a size of 1600x1200, but in the camera matrix I am not getting the image-center values I expect: instead of 800, 600 I get other values. What could be the possible reasons for this?
#include <iostream>
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <math.h>
using namespace std;
using namespace cv;
int main()
{
int numberofpoints=30;
double p2d[30][2] =   // must be double: the Mat below is constructed as CV_64FC1
{{72, 169}, {72, 184}, {71, 264}, {96, 168}, {94, 261}, {94, 276}, {257, 133},
{254, 322}, {254, 337}, {278, 132}, {278, 146}, {275, 321}, {439, 228},
{439, 243}, {437, 328}, {435, 431}, {461, 226}, {459, 326}, {459, 342},
{457, 427}, {457, 444}, {612, 196}, {609, 291}, {605, 392}, {605, 406},
{634, 191}, {633, 206}, {630, 288}, {630, 303}, {627, 390}};
double p3d[30][3] =   // must be double, to match CV_64FC1
{{0, 0, 98}, {0, 0, 94}, {0, 0, 75}, {4, 4, 98}, {4, 4, 75}, {4, 4, 71},
{0, 96, 75}, {0, 96, 25}, {0, 96, 21}, {4, 100, 75},
{4, 100, 71}, {4, 100, 25}, {96, 0, 98}, {96, 0, 94},
{96, 0, 75}, {96, 0, 50}, {100, 4, 98}, {100, 4, 75},
{100, 4, 71}, {100, 4, 50}, {100, 4, 46}, {96, 96, 75},
{96, 96, 50}, {96, 96, 25}, {96, 96, 21}, {100, 100, 75},
{100, 100, 71}, {100, 100, 50}, {100, 100, 46}, {100, 100, 25}};
Mat Matrix2D= Mat(30, 2, CV_64FC1, &p2d);
Mat Matrix3D= Mat(30, 3, CV_64FC1, &p3d);
Mat X_3D=Matrix3D.col(0);
Mat Y_3D=Matrix3D.col(1);
Mat Z_3D=Matrix3D.col(2);
Mat X_2D=Matrix2D.col(0);
Mat Y_2D=Matrix2D.col(1);
Mat z = Mat::zeros(numberofpoints,4, CV_64FC1);
Mat o = Mat::ones(numberofpoints,1, CV_64FC1);
Mat temp,temp1,C,D,AoddRows;
hconcat( X_3D, Y_3D,temp);
hconcat(Z_3D ,o,temp1);
hconcat(z, -X_2D.mul(X_3D),C);
hconcat(-X_2D.mul(Y_3D),-X_2D.mul(Z_3D),D);
hconcat(temp,temp1,AoddRows);
hconcat(AoddRows,C,AoddRows);
hconcat(AoddRows,D,AoddRows);
hconcat(AoddRows,-X_2D,AoddRows);
Mat A1,B1,C1,AevenRows;
hconcat(z,Matrix3D,A1);
hconcat(o,-Y_2D.mul(X_3D) ,B1);
hconcat(-Y_2D.mul(Y_3D),-Y_2D.mul(Z_3D),C1);
hconcat(A1,B1,AevenRows);
hconcat(AevenRows,C1,AevenRows);
hconcat(AevenRows,-Y_2D,AevenRows);
Mat A;
vconcat(AoddRows,AevenRows,A);
Mat w, u, EigenVectors;
SVD::compute(A, w, u, EigenVectors);
Mat EigenVector_SmallestEigenvalue=EigenVectors.row(EigenVectors.rows-1);
Mat P=Mat::zeros(3,4, CV_64FC1);
int k=0;
for(int i=0;i<P.rows;i++)
{
for(int j=0;j<P.cols;j++)
{
P.at<double>(i, j) = EigenVector_SmallestEigenvalue.at<double>(0,k);
k++;
}
}
double abs_lambda = sqrt(P.at<double>(2,0) * P.at<double>(2,0) + P.at<double>(2,1) * P.at<double>(2,1) + P.at<double>(2,2) * P.at<double>(2,2));
P = P / abs_lambda;
Mat ProjectionMatrix = P;
Mat T= Mat::zeros(3,1, CV_64FC1);
Mat R = Mat::zeros(3,3, CV_64FC1);
Mat B = Mat::zeros(3,3, CV_64FC1);
Mat b = Mat::zeros(3,1,CV_64FC1);
P.row(0).colRange(0,3).copyTo(B.row(0));
P.row(1).colRange(0,3).copyTo(B.row(1));
P.row(2).colRange(0,3).copyTo(B.row(2));
P.col(3).copyTo(b.col(0));
Mat K=B*B.t();
double Center_x = K.at<double>(0,2);
double Center_y = K.at<double>(1,2);
double beta = sqrt(K.at<double>(1,1)-Center_y*Center_y);
double gemma = (K.at<double>(0,1)-Center_x*Center_y)/beta;
double alpha = sqrt(K.at<double>(0,0)-Center_x*Center_x-gemma*gemma);
Mat CameraMatrix=Mat::zeros(3,3, CV_64FC1);
CameraMatrix.at<double>(0,0)=alpha;
CameraMatrix.at<double>(0,1)=gemma;
CameraMatrix.at<double>(0,2)=Center_x;
CameraMatrix.at<double>(1,1)=beta;
CameraMatrix.at<double>(1,2)=Center_y;
CameraMatrix.at<double>(2,2)=1;
R = CameraMatrix.inv()*B;
T = CameraMatrix.inv()*b;
Mat newMatrix2D=Mat::zeros(numberofpoints,2, CV_64FC1);
for(int i=0;i<numberofpoints;i++)
{
double num_u = P.at<double>(0,0) * Matrix3D.at<double>(i,0)
+ P.at<double>(0,1) * Matrix3D.at<double>(i,1)
+ P.at<double>(0,2) * Matrix3D.at<double>(i,2)
+ P.at<double>(0,3);
double num_v = P.at<double>(1,0) * Matrix3D.at<double>(i,0)
+ P.at<double>(1,1) * Matrix3D.at<double>(i,1)
+ P.at<double>(1,2) * Matrix3D.at<double>(i,2)
+ P.at<double>(1,3);
double den = P.at<double>(2,0) * Matrix3D.at<double>(i,0)
+ P.at<double>(2,1) * Matrix3D.at<double>(i,1)
+ P.at<double>(2,2) * Matrix3D.at<double>(i,2)
+ P.at<double>(2,3);
newMatrix2D.at<double>(i,0)=num_u/den;
newMatrix2D.at<double>(i,1)=num_v/den;
}
Mat reprojMatrix=newMatrix2D;
Mat errorDiff=reprojMatrix-Matrix2D;
int size=errorDiff.rows;
double sum=0;
for(int i=0;i<errorDiff.rows;i++)
{
sum=sum + sqrt(errorDiff.at<double>(i,0) * errorDiff.at<double>(i,0)
+ errorDiff.at<double>(i,1) * errorDiff.at<double>(i,1));
}
sum=sum/size;
cout<<"Average Error="<<sum<<endl;
cout<<"Translation Matrix="<<T<<endl<<endl;
cout<<"Rotational Matrix="<<R<<endl<<endl;
cout<<"Camera Matrix="<<CameraMatrix<<endl;
return 0;
}
SAMPLE 1:
const int numberofpoints = 30;
float p2d[numberofpoints][2] =
{{72, 169}, {72, 184}, {71, 264}, {96, 168}, {94, 261}, {94, 276}, {257, 133},
{254, 322}, {254, 337}, {278, 132}, {278, 146}, {275, 321}, {439, 228},
{439, 243}, {437, 328}, {435, 431}, {461, 226}, {459, 326}, {459, 342},
{457, 427}, {457, 444}, {612, 196}, {609, 291}, {605, 392}, {605, 406},
{634, 191}, {633, 206}, {630, 288}, {630, 303}, {627, 390}};
float p3d[numberofpoints][3] =
{{0, 0, 98}, {0, 0, 94}, {0, 0, 75}, {4, 4, 98}, {4, 4, 75}, {4, 4, 71},
{0, 96, 75}, {0, 96, 25}, {0, 96, 21}, {4, 100, 75},
{4, 100, 71}, {4, 100, 25}, {96, 0, 98}, {96, 0, 94},
{96, 0, 75}, {96, 0, 50}, {100, 4, 98}, {100, 4, 75},
{100, 4, 71}, {100, 4, 50}, {100, 4, 46}, {96, 96, 75},
{96, 96, 50}, {96, 96, 25}, {96, 96, 21}, {100, 100, 75},
{100, 100, 71}, {100, 100, 50}, {100, 100, 46}, {100, 100, 25}};
SAMPLE 2:
const int numberofpoints = 24;
float p2d[24][2] =
{{41, 291}, {41, 494}, {41, 509}, {64, 285}, {64, 303},
{64, 491}, {195, 146}, {216, 144}, {216, 160}, {431, 337},
{431, 441}, {431, 543}, {431, 558}, {452, 336}, {452, 349},
{452, 438}, {452, 457}, {452, 539}, {568, 195}, {566, 291},
{588, 190}, {587, 209}, {586, 289}, {586, 302}};
float p3d[24][3] =
{{0, 0, 75}, {0, 0, 25}, {0, 0, 21}, {4, 4, 75}, {4, 4, 71}, {4, 4, 25},
{0, 96, 75 }, {4, 100, 75}, {4, 100, 71}, {96, 0, 75}, {96, 0, 50}, {96, 0, 25},
{96, 0, 21}, {100, 4, 75}, {100, 4, 71}, {100, 4, 50}, {100, 4, 46}, {100, 4, 25}, {96, 96, 75 }, {96, 96, 50 }, {100, 100, 75}, {100, 100, 71}, {100, 100, 50}, {100, 100, 46}};
SAMPLE 3:
const int numberofpoints = 33;
float p2d[numberofpoints][2] =
{{45, 310}, {43, 412}, {41, 513}, {41, 530}, {68, 305},
{68, 321}, {66, 405}, {66, 423}, {64, 509}, {201, 70}, {201, 84},
{200, 155}, {198, 257}, {218, 259}, {218, 271}, {441, 364},
{439, 464}, {437, 569}, {437, 586}, {462, 361}, {462, 376},
{460, 462}, {460, 479}, {459, 565}, {583, 117}, {583, 131},
{580, 215}, {576, 313}, {603, 113}, {600, 214}, {599, 229},
{596, 311}, {596, 323}};
float p3d[numberofpoints][3] =
{{0, 0, 75}, {0, 0, 50}, {0, 0, 25}, {0, 0, 21}, {4, 4, 75}, {4, 4, 71},
{4, 4, 50}, {4, 4, 46}, {4, 4, 25}, {0, 96, 98}, {0, 96, 94}, {0, 96, 75},
{0, 96, 50}, {4, 100,75}, {4, 100,71}, {96, 0, 75}, {96, 0, 50}, {96, 0, 25},
{96, 0, 21}, {100, 4,75}, {100, 4,71}, {100, 4,50}, {100, 4,46}, {100, 4,25},{96, 96,98}, {96, 96,94}, {96, 96,75}, {96, 96,50}, {100, 100, 98}, {100, 100, 75},{100, 100, 71}, {100, 100, 50}, {100, 100, 46}};
EDIT:
I've taken one of the 3D coordinate point sets you've provided and tried to project them with a virtual camera (to prevent any errors in the 3D-2D point correspondences). The alpha, beta and gamma are estimated correctly in this case, but u and v (the ones you are looking for) were for some reason negative in all of my experiments. Perhaps there's a bug in my code, but maybe this is due to inaccuracy in OpenCV's solveZ implementation (I've had problems with OpenCV's eigenvalue/vector computation before and try to use OpenBLAS when high accuracy is needed).
Another possible reason: as the authors state, this approach performs an algebraic minimization, which may produce physically meaningless results. Perhaps you should check the remaining part of the paper or try a different approach.
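For reference, here is the decomposition both code listings rely on (a standard result; it assumes $R$ is orthonormal and that $P$ has been scaled so the last row of $B$ has unit norm, which is what the division by abs_lambda, respectively by K_(2,2), accomplishes):
\[
B = KR \;\Rightarrow\; BB^{\top} = KRR^{\top}K^{\top} = KK^{\top},
\qquad
K = \begin{pmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad
KK^{\top} = \begin{pmatrix}
\alpha^2 + \gamma^2 + u_0^2 & \gamma\beta + u_0 v_0 & u_0 \\
\gamma\beta + u_0 v_0 & \beta^2 + v_0^2 & v_0 \\
u_0 & v_0 & 1
\end{pmatrix}
\]
so $u_0 = (BB^{\top})_{13}$, $v_0 = (BB^{\top})_{23}$, $\beta = \sqrt{(BB^{\top})_{22} - v_0^2}$, $\gamma = \big((BB^{\top})_{12} - u_0 v_0\big)/\beta$ and $\alpha = \sqrt{(BB^{\top})_{11} - u_0^2 - \gamma^2}$. Any error that creeps into $P$ therefore propagates directly into the recovered $u_0$, $v_0$.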
Here's the updated code:
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
// generate random 3d points
const int numberofpoints = 33;
float p3d_[numberofpoints][3] =
{{0, 0, 75}, {0, 0, 50}, {0, 0, 25}, {0, 0, 21}, {4, 4, 75}, {4, 4, 71},
{4, 4, 50}, {4, 4, 46}, {4, 4, 25}, {0, 96, 98}, {0, 96, 94}, {0, 96, 75},
{0, 96, 50}, {4, 100,75}, {4, 100,71}, {96, 0, 75}, {96, 0, 50}, {96, 0, 25},
{96, 0, 21}, {100, 4,75}, {100, 4,71}, {100, 4,50}, {100, 4,46}, {100, 4,25},{96, 96,98}, {96, 96,94}, {96, 96,75}, {96, 96,50}, {100, 100, 98}, {100, 100, 75},{100, 100, 71}, {100, 100, 50}, {100, 100, 46}};
std::vector<cv::Point3d> p3d;
for (int i = 0; i < numberofpoints; i++) {
cv::Point3d X(p3d_[i][0], p3d_[i][1], p3d_[i][2]);
p3d.push_back(X);
}
// set up virtual camera
const cv::Size imgSize(1600, 1200);
const double fx = 100;
const double fy = 120;
cv::Mat1d K = (cv::Mat1d(3, 3) << fx, 0, imgSize.width/2,
0, fy, imgSize.height/2,
0, 0, 1);
std::cout << "K = " << std::endl;
std::cout << K << std::endl << std::endl;
cv::Mat1d t = (cv::Mat1d(3, 1) << 10, 20, 20);
cv::Mat1d rvecDeg = (cv::Mat1d(3, 1) << 10, 20, 30);
// project point on camera
std::vector<cv::Point2d> p2d;
cv::projectPoints(p3d,
rvecDeg*CV_PI/180,
t,
K,
cv::Mat(),
p2d);
// estimate projection
cv::Mat1d G = cv::Mat1d::zeros(numberofpoints*2, 12);
for (int i = 0; i < numberofpoints; i++) {
const double X = p3d[i].x;
const double Y = p3d[i].y;
const double Z = p3d[i].z;
G(i*2 + 0, 0) = X;
G(i*2 + 0, 1) = Y;
G(i*2 + 0, 2) = Z;
G(i*2 + 0, 3) = 1;
G(i*2 + 1, 4) = X;
G(i*2 + 1, 5) = Y;
G(i*2 + 1, 6) = Z;
G(i*2 + 1, 7) = 1;
const double u = p2d[i].x;
const double v = p2d[i].y;
G(i*2 + 0, 8) = u*X;
G(i*2 + 0, 9) = u*Y;
G(i*2 + 0, 10) = u*Z;
G(i*2 + 0, 11) = u;
G(i*2 + 1, 8) = v*X;
G(i*2 + 1, 9) = v*Y;
G(i*2 + 1, 10) = v*Z;
G(i*2 + 1, 11) = v;
}
std::cout << "G = " << std::endl;
std::cout << G << std::endl << std::endl;
cv::Mat1d p;
cv::SVD::solveZ(G, p);
cv::Mat1d P = p.reshape(0, 3).clone();
P /= P(2, 3);
std::cout << "p = " << std::endl;
std::cout << p << std::endl << std::endl;
std::cout << "P = " << std::endl;
std::cout << P << std::endl << std::endl;
cv::Mat1d B(3, 3);
cv::Mat1d b(3, 1);
P.colRange(0, 3).copyTo(B);
P.col(3).copyTo(b);
std::cout << "B = " << std::endl;
std::cout << B << std::endl << std::endl;
std::cout << "b = " << std::endl;
std::cout << b << std::endl << std::endl;
cv::Mat1d K_ = B*B.t();
K_ /= K_(2, 2);
std::cout << "K_ = " << std::endl;
std::cout << K_ << std::endl << std::endl;
const double u = K_(0, 2);
const double v = K_(1, 2);
const double ku = K_(0, 0);
const double kv = K_(1, 1);
const double kc = K_(0, 1);
const double beta = sqrt(kv - v*v);
const double gamma = (kc - u*v)/beta;
const double alpha = sqrt(ku - u*u - gamma*gamma);
cv::Mat1d A = (cv::Mat1d(3, 3) << alpha, gamma, u,
0, beta, v,
0, 0, 1);
std::cout << "A = " << std::endl;
std::cout << A << std::endl << std::endl;
return 0;
}
And output:
K =
[100, 0, 800;
0, 120, 600;
0, 0, 1]
G =
[0, 0, 75, 1, 0, 0, 0, 0, 0, 0, 63156.71331987197, 842.0895109316264;
0, 0, 0, 0, 0, 0, 75, 1, 0, 0, 46451.70410142483, 619.3560546856644;
0, 0, 50, 1, 0, 0, 0, 0, 0, 0, 42144.23132613153, 842.8846265226306;
0, 0, 0, 0, 0, 0, 50, 1, 0, 0, 31473.60945095979, 629.4721890191959;
0, 0, 25, 1, 0, 0, 0, 0, 0, 0, 21113.32803626328, 844.5331214505311;
0, 0, 0, 0, 0, 0, 25, 1, 0, 0, 16261.14346086196, 650.4457384344785;
0, 0, 21, 1, 0, 0, 0, 0, 0, 0, 17744.50634900177, 844.9764928096083;
0, 0, 0, 0, 0, 0, 21, 1, 0, 0, 13777.82037617329, 656.0866845796805;
4, 4, 75, 1, 0, 0, 0, 0, 3374.871998456306, 3374.871998456306, 63278.84997105574, 843.7179996140766;
0, 0, 0, 0, 4, 4, 75, 1, 2506.953106676701, 2506.953106676701, 47005.37075018814, 626.7382766691752;
4, 4, 71, 1, 0, 0, 0, 0, 3375.547822963469, 3375.547822963469, 59915.97385760157, 843.8869557408673;
0, 0, 0, 0, 4, 4, 71, 1, 2513.243523511581, 2513.243523511581, 44610.07254233056, 628.3108808778952;
4, 4, 50, 1, 0, 0, 0, 0, 3380.337247599766, 3380.337247599766, 42254.21559499707, 845.0843118999414;
0, 0, 0, 0, 4, 4, 50, 1, 2557.822370597248, 2557.822370597248, 31972.7796324656, 639.4555926493119;
4, 4, 46, 1, 0, 0, 0, 0, 3381.587616222724, 3381.587616222724, 38888.25758656133, 845.396904055681;
0, 0, 0, 0, 4, 4, 46, 1, 2569.460510014068, 2569.460510014068, 29548.79586516178, 642.365127503517;
4, 4, 25, 1, 0, 0, 0, 0, 3391.684639092943, 3391.684639092943, 21198.02899433089, 847.9211597732357;
0, 0, 0, 0, 4, 4, 25, 1, 2663.441243136111, 2663.441243136111, 16646.50776960069, 665.8603107840277;
0, 96, 98, 1, 0, 0, 0, 0, 0, 76956.82972653268, 78560.09701250211, 801.6336429847155;
0, 0, 0, 0, 0, 96, 98, 1, 0, 65682.8899983677, 67051.28354000035, 684.1967708163302;
0, 96, 94, 1, 0, 0, 0, 0, 0, 76853.25603860538, 75252.14653780111, 800.5547504021395;
0, 0, 0, 0, 0, 96, 94, 1, 0, 65937.37528926283, 64563.67997073651, 686.8476592631544;
0, 96, 75, 1, 0, 0, 0, 0, 0, 76268.94727818707, 59585.11506108365, 794.4682008144487;
0, 0, 0, 0, 0, 96, 75, 1, 0, 67373.04865228917, 52635.19425960092, 701.8025901280123;
0, 96, 50, 1, 0, 0, 0, 0, 0, 75153.33695072016, 39142.36299516675, 782.8472599033349;
0, 0, 0, 0, 0, 96, 50, 1, 0, 70114.15427855898, 36517.78868674947, 730.3557737349894;
4, 100, 75, 1, 0, 0, 0, 0, 3182.802966382433, 79570.07415956081, 59677.55561967062, 795.7007415956082;
0, 0, 0, 0, 4, 100, 75, 1, 2830.826944396773, 70770.67360991932, 53078.00520743949, 707.7067360991932;
4, 100, 71, 1, 0, 0, 0, 0, 3176.842851499092, 79421.07128747732, 56388.96061410889, 794.2107128747731;
0, 0, 0, 0, 4, 100, 71, 1, 2846.678125949429, 71166.95314873573, 50528.53673560237, 711.6695314873573;
96, 0, 75, 1, 0, 0, 0, 0, 94501.57967461165, 0, 73829.35912079035, 984.3914549438714;
0, 0, 0, 0, 96, 0, 75, 1, 69392.97525695754, 0, 54213.26191949808, 722.8434922599744;
96, 0, 50, 1, 0, 0, 0, 0, 102665.427189569, 0, 53471.57666123386, 1069.431533224677;
0, 0, 0, 0, 96, 0, 50, 1, 76872.21110081286, 0, 40037.60994834002, 800.7521989668005;
96, 0, 25, 1, 0, 0, 0, 0, 134150.4060603281, 0, 34935.00157821043, 1397.400063128417;
0, 0, 0, 0, 96, 0, 25, 1, 105716.8927391909, 0, 27530.44081749762, 1101.217632699905;
96, 0, 21, 1, 0, 0, 0, 0, 150007.0124893856, 0, 32814.03398205309, 1562.573046764433;
0, 0, 0, 0, 96, 0, 21, 1, 120243.7808209231, 0, 26303.32705457694, 1252.539383551283;
100, 4, 75, 1, 0, 0, 0, 0, 98699.75231231008, 3947.990092492403, 74024.81423423256, 986.9975231231008;
0, 0, 0, 0, 100, 4, 75, 1, 73361.21263523313, 2934.448505409325, 55020.90947642484, 733.6121263523312;
100, 4, 71, 1, 0, 0, 0, 0, 99628.75712124966, 3985.150284849986, 70736.41755608725, 996.2875712124966;
0, 0, 0, 0, 100, 4, 71, 1, 74265.21217550985, 2970.608487020394, 52728.30064461199, 742.6521217550985;
100, 4, 50, 1, 0, 0, 0, 0, 107383.6098967354, 4295.344395869415, 53691.80494836769, 1073.836098967354;
0, 0, 0, 0, 100, 4, 50, 1, 81811.33387099921, 3272.453354839968, 40905.6669354996, 818.113338709992;
100, 4, 46, 1, 0, 0, 0, 0, 109823.0621321851, 4392.922485287403, 50518.60858080514, 1098.230621321851;
0, 0, 0, 0, 100, 4, 46, 1, 84185.12534995917, 3367.405013998367, 38725.15766098122, 841.8512534995917;
100, 4, 25, 1, 0, 0, 0, 0, 141059.7182762867, 5642.388731051467, 35264.92956907167, 1410.597182762867;
0, 0, 0, 0, 100, 4, 25, 1, 114581.0097655446, 4583.240390621784, 28645.25244138615, 1145.810097655446;
96, 96, 98, 1, 0, 0, 0, 0, 83904.83905262669, 83904.83905262669, 85652.85653288974, 874.0087401315279;
0, 0, 0, 0, 96, 96, 98, 1, 72995.44277461393, 72995.44277461393, 74516.18116575171, 760.3691955688951;
96, 96, 94, 1, 0, 0, 0, 0, 84021.59669072446, 84021.59669072446, 82271.1467596677, 875.2249655283798;
0, 0, 0, 0, 96, 96, 94, 1, 73575.81721996517, 73575.81721996517, 72042.98769454923, 766.4147627079706;
96, 96, 75, 1, 0, 0, 0, 0, 84712.67045349024, 84712.67045349024, 66181.77379178925, 882.4236505571899;
0, 0, 0, 0, 96, 96, 75, 1, 77010.98050603706, 77010.98050603706, 60164.82852034145, 802.1977136045526;
96, 96, 50, 1, 0, 0, 0, 0, 86206.34878960505, 86206.34878960505, 44899.13999458597, 897.9827998917193;
0, 0, 0, 0, 96, 96, 50, 1, 84435.70020728504, 84435.70020728504, 43976.92719129429, 879.5385438258859;
100, 100, 98, 1, 0, 0, 0, 0, 87539.46210370047, 87539.46210370047, 85788.67286162646, 875.3946210370046;
0, 0, 0, 0, 100, 100, 98, 1, 76664.75192364832, 76664.75192364832, 75131.45688517536, 766.6475192364833;
100, 100, 75, 1, 0, 0, 0, 0, 88416.27638616599, 88416.27638616599, 66312.20728962449, 884.16276386166;
0, 0, 0, 0, 100, 100, 75, 1, 81008.14162095707, 81008.14162095707, 60756.10621571781, 810.0814162095708;
100, 100, 71, 1, 0, 0, 0, 0, 88614.85267838663, 88614.85267838663, 62916.5454016545, 886.1485267838663;
0, 0, 0, 0, 100, 100, 71, 1, 81991.80970097007, 81991.80970097007, 58214.18488768875, 819.9180970097007;
100, 100, 50, 1, 0, 0, 0, 0, 90038.77512167525, 90038.77512167525, 45019.38756083763, 900.3877512167526;
0, 0, 0, 0, 100, 100, 50, 1, 89045.35592350175, 89045.35592350175, 44522.67796175087, 890.4535592350175;
100, 100, 46, 1, 0, 0, 0, 0, 90415.3917433265, 90415.3917433265, 41591.08020193018, 904.153917433265;
0, 0, 0, 0, 100, 100, 46, 1, 90910.96508045508, 90910.96508045508, 41819.04393700934, 909.1096508045508]
p =
[0.006441365659365744;
-0.006935698812720837;
-0.03488896901596035;
-0.7622591852949104;
0.004771957238156334;
-0.01133107207303337;
-0.02452697177582621;
-0.6456783687203944;
-1.258567018901194e-05;
1.123537587555102e-05;
4.154374841821949e-05;
0.0008967755121116598]
P =
[7.182807260423624, -7.734041261217285, -38.90490824599617, -849.9999999999995;
5.321239455925471, -12.63534956072988, -27.35018011148848, -719.9999999999993;
-0.0140343597913107, 0.01252863813051139, 0.04632569451009583, 1]
B =
[7.182807260423624, -7.734041261217285, -38.90490824599617;
5.321239455925471, -12.63534956072988, -27.35018011148848;
-0.0140343597913107, 0.01252863813051139, 0.04632569451009583]
b =
[-849.9999999999995;
-719.9999999999993;
1]
K_ =
[649999.9999999985, 479999.9999999995, -799.9999999999992;
479999.9999999995, 374400.0000000002, -600.0000000000002;
-799.9999999999992, -600.0000000000002, 1]
A =
[99.99999999999883, -1.940255363782255e-12, -799.9999999999992;
0, 119.9999999999995, -600.0000000000002;
0, 0, 1]
What other values are you getting? Generally, the camera center is never exactly at the center of the image, but it should be reasonably close.

Limitations of the Levenberg-Marquardt algorithm

I am using the Levenberg-Marquardt algorithm to minimize a non-linear function of 6 parameters. I have about 50 data points for each minimization, but I do not get sufficiently accurate results. Can the fact that my parameters differ from each other by a few orders of magnitude be that significant? If yes, where should I look for the solution? If no, what kinds of limitations of LMA have you met in your work (it may help me find other problems with my application)?
Many thanks for your help.
Edit: The problem I am trying to solve is to determine the best transformation T:
typedef struct
{
double x_translation, y_translation, z_translation;
double x_rotation, y_rotation, z_rotation;
} transform_3D;
to fit a set of 3D points to a set of 3D lines. In detail, I have a set of coordinates of 3D points and the equations of the corresponding 3D lines, which should go through those points (in the ideal situation). The LMA minimizes the sum of distances of the transformed 3D points to the corresponding 3D lines.
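(For a line $\ell$ through a point $c$ with unit direction $\hat n$, the distance being summed is the orthogonal distance of the transformed point $q = R\,p + t$ to the line:
\[
d(q,\ell) \;=\; \big\| (q - c) - \big((q - c)\cdot \hat n\big)\,\hat n \big\|,
\]
which is essentially what the ceres cost functor further down computes, with $c = 0$ and the squared distance used as the residual.)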
The transform function is as follows:
cv::Point3d Geometry::transformation_3D(cv::Point3d point, transform_3D transformation)
{
cv::Point3d p_odd,p_even;
//rotation x
p_odd.x=point.x;
p_odd.y=point.y*cos(transformation.x_rotation)-point.z*sin(transformation.x_rotation);
p_odd.z=point.y*sin(transformation.x_rotation)+point.z*cos(transformation.x_rotation);
//rotation y
p_even.x=p_odd.z*sin(transformation.y_rotation)+p_odd.x*cos(transformation.y_rotation);
p_even.y=p_odd.y;
p_even.z=p_odd.z*cos(transformation.y_rotation)-p_odd.x*sin(transformation.y_rotation);
//rotation z
p_odd.x=p_even.x*cos(transformation.z_rotation)-p_even.y*sin(transformation.z_rotation);
p_odd.y=p_even.x*sin(transformation.z_rotation)+p_even.y*cos(transformation.z_rotation);
p_odd.z=p_even.z;
//translation
p_even.x=p_odd.x+transformation.x_translation;
p_even.y=p_odd.y+transformation.y_translation;
p_even.z=p_odd.z+transformation.z_translation;
return p_even;
}
Hope this explanation will help a bit...
Edit2:
Some example data is pasted below. The 3D lines are described by a center point and a directional vector. The center point of all lines is (0,0,0) and the 'uz' coordinate of each vector is equal to 1.
Set of 'ux' coordinates of directional vectors:
-1.0986, -1.0986, -1.0986,
-1.0986, -1.0990, -1.0986,
-1.0986, -1.0986, -0.9995,
-0.9996, -0.9996, -0.9995,
-0.9995, -0.9995, -0.9996,
-0.9003, -0.9003, -0.9004,
-0.9003, -0.9003, -0.9003,
-0.9003, -0.9003, -0.8011,
-0.7020, -0.7019, -0.6028,
-0.5035, -0.5037, -0.4045,
-0.3052, -0.3053, -0.2062,
-0.1069, -0.1069, -0.1075,
-0.1070, -0.1070, -0.1069,
-0.1069, -0.1070, -0.0079,
-0.0079, -0.0079, -0.0078,
-0.0078, -0.0079, -0.0079,
0.0914, 0.0914, 0.0913,
0.0913, 0.0914, 0.0915,
0.0914, 0.0914
Set of 'uy' coordinates of directional vectors:
-0.2032, -0.0047, 0.1936,
0.3919, 0.5901, 0.7885,
0.9869, 1.1852, -0.1040,
0.0944, 0.2927, 0.4911,
0.6894, 0.8877, 1.0860,
-0.2032, -0.0047, 0.1936,
0.3919, 0.5902, 0.7885,
0.9869, 1.1852, 1.0860,
0.9869, 1.1852, 1.0861,
0.9865, 1.1853, 1.0860,
0.9870, 1.1852, 1.0861,
-0.2032, -0.0047, 0.1937,
0.3919, 0.5902, 0.7885,
0.9869, 1.1852, -0.1039,
0.0944, 0.2927, 0.4911,
0.6894, 0.8877, 1.0860,
-0.2032, -0.0047, 0.1935,
0.3919, 0.5902, 0.7885,
0.9869, 1.1852
and the set of 3D points, as (x, y, z) triples:
{{0, 0, 0}, {0, 16, 0}, {0, 32, 0},
{0, 48, 0}, {0, 64, 0}, {0, 80, 0},
{0, 96, 0}, {0, 112,0}, {8, 8, 0},
{8, 24, 0}, {8, 40, 0}, {8, 56, 0},
{8, 72, 0}, {8, 88, 0}, {8, 104, 0},
{16, 0, 0}, {16, 16,0}, {16, 32, 0},
{16, 48, 0}, {16, 64, 0}, {16, 80, 0},
{16, 96, 0}, {16, 112, 0}, {24, 104, 0},
{32, 96, 0}, {32, 112, 0}, {40, 104, 0},
{48, 96, 0}, {48, 112, 0}, {56, 104, 0},
{64, 96, 0}, {64, 112, 0}, {72, 104, 0},
{80, 0, 0}, {80, 16, 0}, {80, 32, 0},
{80,48, 0}, {80, 64, 0}, {80, 80, 0},
{80, 96, 0}, {80, 112, 0}, {88, 8, 0},
{88, 24, 0}, {88, 40, 0}, {88, 56, 0},
{88, 72, 0}, {88, 88, 0}, {88, 104, 0},
{96, 0, 0}, {96, 16, 0}, {96, 32, 0},
{96, 48,0}, {96, 64, 0}, {96, 80, 0},
{96, 96, 0}, {96, 112, 0}}
This is kind of an "easy" modelled data with very small rotations.
Well, the proper way of using Levenberg-Marquardt is that you need a good initial estimate (a "seed") for your parameters. Recall that LM is a variant of Newton-Raphson; as with such iterative algorithms, the quality of your starting point makes or breaks the iteration: it either converges to what you want, converges to something completely different (not that unlikely to happen, especially if you have a lot of parameters), or shoots off into the wild blue yonder (diverges).
In any event, it would be more helpful if you could mention the model function you're fitting, and possibly a scatter plot of your data; it might go a long way towards finding a workable solution for this.
I would suggest you try using a different approach to indirectly find your rotation parameters, namely to use a 4x4 affine transformation matrix to incorporate the translation and rotation parameters.
This gets rid of the nonlinearity of the sine and cosine functions (which you can figure out after the fact).
The tough part would be to constrain the transformation matrix from shearing or scaling, which you don't want.
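One common way to handle that last constraint (not from the answer above, just the usual trick) is to fit the unconstrained linear map first and then snap its 3x3 block to the nearest rotation with an SVD (the orthogonal Procrustes step). A sketch using OpenCV, since the question already uses it; nearestRotation is my own helper name:
#include <opencv2/opencv.hpp>

// Project an estimated (possibly sheared/scaled) 3x3 linear part M onto the
// nearest rotation matrix: R = U * Vt from the SVD of M, with a sign fix so
// the result is a proper rotation (det = +1) rather than a reflection.
cv::Mat nearestRotation(const cv::Mat& M)   // M is 3x3, CV_64F
{
    cv::Mat w, U, Vt;
    cv::SVD::compute(M, w, U, Vt);
    cv::Mat R = U * Vt;
    if (cv::determinant(R) < 0) {
        cv::Mat D = cv::Mat::eye(3, 3, CV_64F);
        D.at<double>(2, 2) = -1.0;
        R = U * D * Vt;
    }
    return R;
}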
Here you have your problem modeled and running with Mathematica.
I used the "Levenberg-Marquardt" method.
This is why I asked for your data. With MY data, YOUR problems are always going to be easier:)
xnew[x_, y_, z_] :=
RotationMatrix[rx, {1, 0, 0}].RotationMatrix[
ry, {0, 1, 0}].RotationMatrix[rz, {0, 0, 1}].{x, y, z} + {tx, ty, tz};
(* Generate Sample Data*)
(* Angles 1/2,1/3,1/5 *)
(* traslation -> {1,2,3} *)
(* Minimum mean Noise 5% *)
data = Table[{{x, y, z},
RotationMatrix[1/2, {1, 0, 0}].
RotationMatrix[1/3, {0, 1, 0}].
RotationMatrix[1/5, {0, 0, 1}].{x, y, z} +{1, 2, 3} +RandomReal[{-.05, .05}, 3]},
{x, 0, 1, .1}, {y, 0, 1, .1}, {z, 0, 1, .1}];
data = Flatten[data, 2];
(* Now find the parameters*)
FindMinimum[
Sum[SquaredEuclideanDistance[xnew[i[[1]] /. List -> Sequence],
i[[2]]], {i, data}]
, {rx, ry, rz, tx, ty, tz}, Method -> "LevenbergMarquardt"]
Out:
{3.2423, {rx -> 0.500566, ry -> 0.334012, rz -> 0.199902,
tx -> 0.99985, ty -> 1.99939, tz -> 3.00021}}
(Within 1/1000 of the real values)
Edit
I worked a little with your data.
The problem is that your system is very badly conditioned. You need much more data to effectively calculate such small rotations.
These are the results I got:
Rotations in degrees:
rx = 179.99999999999999999999984968493536659553226696793
ry = 180.00000000000000000000006934755799995159952661222
rz = 180.0006286861217378980724139120849587855611645627
Translations
tx = 48.503663696727576867196234527227830090575281353092
ty = 63.974139455057300403798198525151849767949596684232
tz = -0.99999999999999999999997957276716543927459921348549
I should calculate the errors, but I've no time right now.
BTW, rz = Pi + 0.000011 (in radians)
HTH!
Well, I used ceres-solver to solve this, but I did make a modification to your data. Instead of "uz=1.0", I used "uz=0.0", which makes this entirely a 2D data-fitting problem.
I got the following results.
trans: -88.6384, -16.3879, 0
rot: 0, 0, -6.97813e-05
After getting these results, I manually calculated the sum of orthogonal distances of the transformed points to the corresponding lines and got 0.0280452.
struct CostFunctor {
CostFunctor(const double p[3], double ux, double uy){
p_[0] = p[0];p_[1] = p[1];p_[2] = p[2];
n_[0] = ux; n_[1] = uy;
n_[2] = 0.0;
normalize(n_);
}
template <typename T>
bool operator()(const T* const x, T* residual) const {
T pDash[3];
T pIn[3];
T temp[3];
pIn[0] = T(p_[0]);
pIn[1] = T(p_[1]);
pIn[2] = T(p_[2]);
//transform the input point p_ to pDash
xform(x, &pIn[0], &pDash[0]);
//find dot(pDash, n), where n is the direction of line
T pDashDotN = T(pDash[0]) * T(n_[0]) + T(pDash[1]) * T(n_[1]) + T(pDash[2]) * T(n_[2]);
//projection of pDash along line
temp[0] = pDashDotN * n_[0];temp[1] = pDashDotN * n_[1];temp[2] = pDashDotN * n_[2];
//orthogonal vector from projection to point
temp[0] = pDash[0] - temp[0];temp[1] = pDash[1] - temp[1];temp[2] = pDash[2] - temp[2];
//squared error
residual[0] = temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2];
return true;
}
//untransformed point
double p_[3];
double ux_;
double uy_;
//direction of line
double n_[3];
};
template<typename T>
void xform(const T *x, const T * inPoint, T *outPoint3) {
T xTheta = x[3];
T pOdd[3], pEven[3];
pOdd[0] = inPoint[0];
pOdd[1] = inPoint[1] * cos(xTheta) + inPoint[2] * sin(xTheta);
pOdd[2] = -inPoint[1] * sin(xTheta) + inPoint[2] * cos(xTheta);
T yTheta = x[4];
pEven[0] = pOdd[0] * cos(yTheta) + pOdd[2] * sin(yTheta);
pEven[1] = pOdd[1];
pEven[2] = -pOdd[0] * sin(yTheta) + pOdd[2] * cos(yTheta);
T zTheta = x[5];
pOdd[0] = pEven[0] * cos(zTheta) - pEven[1] * sin(zTheta);
pOdd[1] = pEven[0] * sin(zTheta) + pEven[1] * cos(zTheta);
pOdd[2] = pEven[2];
T xTrans = x[0], yTrans = x[1], zTrans = x[2];
pOdd[0] += xTrans;
pOdd[1] += yTrans;
pOdd[2] += zTrans;
outPoint3[0] = pOdd[0];
outPoint3[1] = pOdd[1];
outPoint3[2] = pOdd[2];
}
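For completeness, this is roughly how the functor above would be wired into a ceres problem (assumed boilerplate, not part of the original answer; pts, ux and uy stand for the question's 3D points and line directions):
#include <ceres/ceres.h>
#include <array>
#include <iostream>
#include <vector>

void solveTransform(const std::vector<std::array<double, 3>>& pts,
                    const std::vector<double>& ux,
                    const std::vector<double>& uy)
{
    // Parameter block: tx, ty, tz, rx, ry, rz -- all-zero initial guess.
    double x[6] = {0, 0, 0, 0, 0, 0};
    ceres::Problem problem;
    for (size_t i = 0; i < pts.size(); ++i) {
        problem.AddResidualBlock(
            new ceres::AutoDiffCostFunction<CostFunctor, 1, 6>(
                new CostFunctor(pts[i].data(), ux[i], uy[i])),
            nullptr,    // no robust loss
            x);
    }
    ceres::Solver::Options options;
    options.linear_solver_type = ceres::DENSE_QR;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
    std::cout << summary.BriefReport() << std::endl;
}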

How is a negative image figured out?

The program I'm working on does grayscale conversion, builds an alpha mask, and splits the color channels.
How do you invert a picture?
The things above are done by looking at the image pixel by pixel.
I'm using VB 2005 .NET; for the sake of speed, are there other ways of doing these things using Drawing.Graphics?
See this. You'll basically have to use either a ColorMatrix (applied via an ImageAttributes passed to Graphics.DrawImage, which avoids per-pixel GetPixel/SetPixel calls):
new float[][]
{
new float[] {-1, 0, 0, 0, 0},
new float[] {0, -1, 0, 0, 0},
new float[] {0, 0, -1, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {1, 1, 1, 0, 1}
}
or unsafe pointer-based processing (not sure whether that particular approach can be done in VB.NET).
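In case it helps to see why that matrix inverts the image: with channels normalized to 0..1, the -1 entries on the diagonal flip the sign of R, G and B, the 1s in the last row add 1 (i.e. 255) back, and alpha passes through unchanged. The per-pixel equivalent, sketched in plain C++ rather than GDI+/.NET:
// Invert a 32-bit BGRA buffer in place: each color channel becomes 255 - value,
// alpha is left alone -- the same mapping the ColorMatrix above performs.
void invertBgra(unsigned char* pixels, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i) {
        unsigned char* p = pixels + 4 * i;
        p[0] = 255 - p[0];   // blue
        p[1] = 255 - p[1];   // green
        p[2] = 255 - p[2];   // red
                             // p[3] (alpha) unchanged
    }
}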