Translate a coordinate from one triangle to a triangle with a different perspective - Objective-C

How do I calculate point D for triangle 2?
I have the following coordinates for triangle 1:
a(0,0) b(0,78) c(18,39)
Point D is located at (0,39) in triangle 1.
Now I change the perspective on my triangle, for example by moving coordinates b and c.
The new triangle formed is called triangle 2, with coordinates:
a(0,0) b(11,72) c(37,42)
As you can see, point D is no longer in the middle of line a<-->b because of the change in perspective/skew.
How do I calculate point D? I have the coordinates abc of triangles 1 & 2.
Preferably answer in program code rather than using math signs, since I am not a hero at reading math :)

You need to convert point D to barycentric coordinates using the original triangle coordinates, then convert it back to cartesian coordinates using the modified triangle coordinates.
This looks like a good introduction to triangular barycentric coordinates: http://blogs.msdn.com/b/rezanour/archive/2011/08/07/barycentric-coordinates-and-point-in-triangle-tests.aspx
Also, explicit formulae for converting a point in a triangle to barycentric coordinates are given at the end of the Converting to Barycentric Coordinates section of the Wikipedia article “Barycentric coordinate system”.
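For example, here is a minimal C-style sketch of that idea (plain C, so it can be used directly from Objective-C); the Point2D type and the function name are just for illustration:

#include <stdio.h>

typedef struct { double x, y; } Point2D;

// Convert point p to barycentric weights (w1, w2, w3) with respect to
// triangle (a, b, c), then apply those weights to the corners of the
// second triangle (a2, b2, c2) to get the transformed point.
Point2D transformPoint(Point2D p, Point2D a, Point2D b, Point2D c,
                       Point2D a2, Point2D b2, Point2D c2) {
    double det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    double w1 = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / det;
    double w2 = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / det;
    double w3 = 1.0 - w1 - w2;
    Point2D result;
    result.x = w1 * a2.x + w2 * b2.x + w3 * c2.x;
    result.y = w1 * a2.y + w2 * b2.y + w3 * c2.y;
    return result;
}

int main(void) {
    Point2D a = {0, 0}, b = {0, 78}, c = {18, 39};
    Point2D a2 = {0, 0}, b2 = {11, 72}, c2 = {37, 42};
    Point2D d = {0, 39};
    Point2D d2 = transformPoint(d, a, b, c, a2, b2, c2);
    printf("D in triangle 2: (%f, %f)\n", d2.x, d2.y);
    return 0;
}

Note that barycentric mapping reproduces an affine distortion of the triangle exactly; a true perspective change needs the homography approach mentioned in the follow-up below.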

I guess there are more ways of calculating a coordinate from one perspective to another.
More on the triangle way is written by culebrón here: Transforming captured co-ordinates into screen co-ordinates
At the same link there is another way: using SVD to calculate an H-matrix (homography), which can be used to translate any coordinate from one perspective to another. I am going to use this way because I could solve it this way in MATLAB. Next step: Objective-C! I had some trouble calculating the same in Objective-C; more on that here: calculate the V from A = USVt in objective-C with SVD from LAPACK in xcode
I would like to know how to solve the triangle way too! I could not figure out what a1 and a2 were in culebrón's post: https://stackoverflow.com/a/1690300/1568532 Neither did the width and height make much sense to me.
Also I would like to know how to calculate the eye's (camera's) point of view on a triangle or quadrangle based on 3 or 4 coordinates, if you know the original size of the object.
Any ideas on this?
When I search for "eye" or "camera point of view", there are loads of results about photography.
What do I need to use in order to calculate this? Maybe an example, anyone?

Related

How to find a point on a b-spline that is on the normal plane of a point on another b-spline using geomdl / nurbs

So the first problem will be explaining what I am after clearly.
I have two non-rational 3D b-splines. The first b-spline is the guiding spline. The second b-spline is a reference and it is essentially 'inside' of the first spline. ( the splines were generated in Solidworks )
Imagine a circular playground slide. The first spline is the center line of the slide. The second spline is the inside edge of the slide.
The inside spline will tend to be shorter than the center spline. The inside will also tend to have more curvature at any given point than the center.
The path of the slide is not perfectly circular. But the inside spline is always 'parallel' to the outside. ( very liberal use of the word parallel here )
What I am after:
Given a point along the center curve, I would like to find the point on the inside curve that is on the plane that is defined by the normal to the tangent of the center spline at that point.
Where I am at:
I am using the geomdl library in python to manipulate the splines.
I can choose a distance along the center spline, and from geomdl I get the 3D point and the tangent vector (A,B,C) at that point, and therefore the plane through that point that is normal to the spline.
What I am doing:
From the tangent vector and the point I compute the equation of the plane in the form of:
Ax + By + Cz = D.
From there I take a guess at a point at the same distance along the inside spline and plug it into the plane equation I already have. I use the error in D to decide which way to bump my next guess for where the point on the inside curve really is.
[ I understand that over the entire length of the two splines there may be more than one solution, e.g. if the curve wraps more than 180° there would be more than one point on the inside curve that lands on the plane defined by the center curve. In the local area that I am interested in this will not be a problem. Any second point would also be a long way away from the center line, i.e. the correct point will be no more than 25 mm from the center point. A non-local point will be at least 3000 mm away. ]
This mostly works, but from time to time it fails, e.g. when D is very near 0 my guesses diverge from the answer.
Currently I make 10 guesses, each with a smaller delta than the last.
I have a great number of these points to evaluate. My solution requires 10× the number of calculations, so it is not terribly efficient.
From my Google searches I believe that using the error in D in the equation of a plane may not be correct. I 'think' that D is the distance of the plane to the origin. (Yes/no?) Therefore I am really comparing the distances of the two planes from the origin and not really from each other. If my guess happens to be on the "other side" of the origin then the distances may be the same but opposite.
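A note on that last worry: D equals the plane's distance to the origin only when the normal (A,B,C) has unit length. A more robust error measure is the signed distance of the guessed point to the plane itself, which is symmetric around zero and safe to bisect on. Here is a minimal sketch of that idea; evaluateInsideCurve(t) is a hypothetical stand-in for the spline evaluation (with geomdl this would be something like curve.evaluate_single(t)):

#include <cmath>

struct Vec3 { double x, y, z; };

// Signed distance from point p to the plane through point q with
// (not necessarily unit) normal n. Positive on the side n points to.
double signedDistance(Vec3 p, Vec3 q, Vec3 n) {
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return ((p.x - q.x) * n.x + (p.y - q.y) * n.y + (p.z - q.z) * n.z) / len;
}

// Hypothetical stand-in for evaluating the inside spline at parameter t.
Vec3 evaluateInsideCurve(double t);

// Bisection on the curve parameter: find t where the inside curve
// pierces the plane, assuming the signed distance changes sign
// between tLo and tHi (true in the local 25 mm neighborhood).
double findPiercingParameter(Vec3 q, Vec3 n, double tLo, double tHi) {
    for (int i = 0; i < 50; ++i) {
        double tMid = 0.5 * (tLo + tHi);
        double dLo = signedDistance(evaluateInsideCurve(tLo), q, n);
        double dMid = signedDistance(evaluateInsideCurve(tMid), q, n);
        if (dLo * dMid <= 0.0) tHi = tMid; else tLo = tMid;
    }
    return 0.5 * (tLo + tHi);
}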
My Question:
What is the correct way to go about this?
Is my assumption correct that D is the distance from the plane to the origin?
Is driving the error in D between the two points valid?
What is the correct way to do this?
Restate my question in different terms
Given a plane (Ax + By + Cz = D) how do I find the point on a given b-spline that pierces ( or is coincident to ) that plane (using geomdl.bcurve)?
( I am very much in over my head here so please forgive if this does not make sense )

Calculate angle on a plane in 3D space from a 2D image

I have 2 input images of a plane where the (static) camera is at an unknown angle. I managed to extract edges and points of interest using OpenCV. But I'm stuck calculating real angles from the images.
From image #1 I need to calculate the camera angle relative to the plane. I know 3 points on the plane that form an equilateral triangle (angles of 60 degrees). The center point of the triangle is also the center point of the plane. However, the plane's center point in the image is covered by another object.
From image #2 I need to calculate the real angle of an object (Point C) on the plane to one of the 3 points and the plane center point (= line A to B).
How can I calculate the real angle β as if the camera had no angle towards the plane?
Update:
I was looking for a solution for my problem at https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html
There is a number of functions but I couldn't figure out how to apply them to my specific problem.
There is a function to calculate Homography using two images with keypoints but I do not have images of the scene from different camera angles.
Then there is cv::findHomography, which finds a perspective transformation between two planes. I know 4 source points, but what are my 4 destination points?
Another one I was looking at is cv::solvePnP and cv::solvePnPRansac, but again I only know 4 source points on the plane. I don't know their corresponding 3D points.
What am I missing?
@Micka: Thanks for your input. I have 4 points for processing the image (the 3 static base points + the object at point C). I can assume these points are all located on the plane at z=0. However, I do not have coordinates for a second plane, nor the (x,y) of the corresponding 3D points.
Your description does not explicitly say it, but if you can assume that segment AB bisects the base of the triangle, then you have 4 point correspondences between the plane and its image, so you can use cv::findHomography.
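A minimal sketch of that idea in OpenCV; the pixel values and the top-down model coordinates are made-up assumptions for illustration (any consistent rectified layout of the triangle plus the bisection point works):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Image points: the 3 triangle corners plus the point where segment
    // AB meets the base (assumed to bisect it), measured in the image.
    std::vector<cv::Point2f> imagePoints = {
        {412.f, 120.f}, {105.f, 630.f}, {720.f, 655.f}, {410.f, 642.f}};
    // Corresponding top-down plane coordinates for an equilateral
    // triangle with side 1: apex, two base corners, base midpoint.
    std::vector<cv::Point2f> planePoints = {
        {0.5f, 0.f}, {0.f, 0.866f}, {1.f, 0.866f}, {0.5f, 0.866f}};
    cv::Mat H = cv::findHomography(imagePoints, planePoints);
    // Map the object at point C into the rectified plane, where real
    // angles can be measured with plain 2D vectors.
    std::vector<cv::Point2f> c = {{390.f, 300.f}}, cRectified;
    cv::perspectiveTransform(c, cRectified, H);
    return 0;
}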

pose estimation: determine whether rotation and translation matrix are right

Recently I've been struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. Then I use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the vectors are the right results?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain the coordinates of that point in Camera system. Then all I have to do is to determine whether the coordinates are reasonable. I think I know the directions of x, y and z axes of Camera system.
Is Camera center the origin of the Camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (the sign then being determined by which side of the camera the point is placed on)?
The figure below is in world space, while the axes depicted are in Camera system.
======== How the camera and the point are placed in world space ========

Camera --------------------------> Z axis
  |                  |
  |                  |  } Xw?
  |                  P(Xw, Yw, Zw)
  |
  v
X axis
My rvec and tvec results seem partly right and partly wrong. For a specified point the z value seems reasonable; I mean, if this point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, based on the location of the point I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image is like this:
But using the point coordinates calculated in the camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern, but it moved from the bottom right to the top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which seems to be right according to this post.
In camera pose estimation, "the values seem reasonable" can be made concrete as the backprojection error: a measure of how well your resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);
// Use the computed solution to project the 3D pattern back to the image
cv::Mat projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);
// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints.at(i).x - projectedPattern.at<float>(i, 0);
    float dy = pixelPoints.at(i).y - projectedPattern.at<float>(i, 1);
    // Euclidean distance between the projected and the measured pixel
    float err = std::sqrt(dx*dx + dy*dy);
    errors.push_back(err);
}
// Here, compute the max or average of your "errors"
An average backprojection error for a calibrated camera might be in the range of 0-2 pixels. According to your two pictures, this would be way more. To me, it looks like a scaling problem. If I am right, you compute the projection yourself. Maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I learned not to follow my imagination :) The first thing I do with the returned rVec and tVec is usually to create a 4x4 rigid transformation matrix out of them (I once posted code here). This makes things even less intuitive, but it is compact and handy.
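A minimal sketch of building that 4x4 matrix (not the code from the linked post, just the standard OpenCV recipe, assuming rVec and tVec are the CV_64F results from solvePnP):

// Convert the rotation vector to a 3x3 rotation matrix.
cv::Mat R;
cv::Rodrigues(rVec, R);

// Assemble the 4x4 rigid transformation [R | t; 0 0 0 1].
cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
R.copyTo(T(cv::Rect(0, 0, 3, 3)));     // top-left 3x3 block
tVec.copyTo(T(cv::Rect(3, 0, 1, 3)));  // last column, top 3 rows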
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates of the point in the camera system are calculated as (xc, yc, zc). Then xc should be the distance between the camera and the point in the real world in the x direction.
Next, how to determine whether the output matrices are right?
1. As @eidelen points out, backprojection error is one indicative measure.
2. Calculate the coordinates of the points in the camera system from their world coordinates and the computed matrices, and check whether they are reasonable.
So why did I get a wrong result (the pattern remained but moved to a different region of the image)?
Parameter cameraMatrix in solvePnP() is the matrix supplying the camera's intrinsic parameters. In the camera matrix you should use width/2 and height/2 of the image for cx and cy, whereas I had used the full width and height. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
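For reference, a minimal sketch of such an intrinsic camera matrix; fx and fy are assumed focal lengths in pixels, and imageWidth/imageHeight stand for the image size:

// Hypothetical intrinsic matrix: principal point (cx, cy) approximated
// by the image center, as described above.
double fx = 800.0, fy = 800.0;                    // assumed focal lengths
double cx = imageWidth / 2.0, cy = imageHeight / 2.0;
cv::Mat K = (cv::Mat_<double>(3, 3) <<
     fx, 0.0,  cx,
    0.0,  fy,  cy,
    0.0, 0.0, 1.0);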

How to choose control point distance for 3D cubic Bézier curves to optimize 'roundness'?

Say I want to construct a 3D cubic Bézier curve, and I already have both end-points, and the direction (normal vector) for both control points. How can I choose the distance of both control points to their respective end-points in order to make the curve as 'nicely rounded' as possible?
To formalize 'nicely rounded': I think that means maximizing the smallest angle between any two segments in the curve. For example, having end-points (10, 0, 0) and (0, 10, 0) with respective normal vectors (0, 1, 0) and (1, 0, 0) should result in a 90° circular arc. For the specific case of 2D circular arcs, I've found articles like this one. But I haven't been able to find anything for my more general case.
(Note that these images are just to illustrate the 'roundness' concept. My curves are not guaranteed to be plane-aligned. I may replace the images later to better illustrate that point.)
This is a question of aesthetics, and if the real solution is unknown or too complicated, I would be happy with a reasonable approximation. My current approximation is too simplistic: choosing half the distance between the two end-points for both control point distances. Someone more familiar with the math will probably be able to come up with something better.
(PS: This is for open-source software, and I would be happy to give credit on GitHub.)
Edit: Here are some other images to illustrate a 3D case (jsfiddle):
Edit 2: Here's a screenshot of an unstable version of ApiNATOMY to give you an idea of what I'm trying to do. I'm creating 3D tubes to represent blood-vessels, connecting different parts of an anatomical schematic:
(They won't let me put in a jsfiddle link if I don't include code...)
What you are basically asking for is curvature that is as constant as possible over the spline.
A curve with constant curvature is just a circular arc, so it makes sense to try to fit such an arc to your input parameters. In 2D, this is easy: construct the line which goes through your starting point and is orthogonal to the desired direction vector. Do the same for the ending point. Now intersect these two lines: the result is the center of the circle which passes through the two points with the desired direction vectors.
In your example, this intersection point would just be (0,0), and the desired circular arc lies on the unit circle.
So this gives you a circular arc, which you can either use directly or use the approximation algorithm which you have already cited.
This breaks down when the two direction vectors are collinear, so you'd have to fudge it a bit if this ever comes up. If they point at each other, you can simply use a straight line.
In 3D, the same construction gives you two planes passing through the end points. Intersect these, and you get a line; on this line, choose the point which minimizes the sum of squared distances to the two points. This gives you the center of a sphere which touches both end points, and now you can simply work in the plane spanned by these three points and proceed as in 2D.
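A minimal sketch of the 2D construction, with a hand-rolled vector type; p0 and p1 are the end points, d0 and d1 their direction (tangent) vectors:

#include <cmath>

struct Vec2 { double x, y; };

// Center of the circle through p0 and p1 whose tangents there are d0
// and d1: intersect the two lines through the points orthogonal to
// the tangents. Returns false if the tangents are collinear (no
// unique circle; fall back to a straight line as described above).
bool circleCenter(Vec2 p0, Vec2 d0, Vec2 p1, Vec2 d1, Vec2* center) {
    Vec2 n0 = {-d0.y, d0.x};   // normals: tangents rotated 90 degrees
    Vec2 n1 = {-d1.y, d1.x};
    // Solve p0 + s*n0 == p1 + t*n1 for s (a 2x2 linear system).
    double det = n0.x * (-n1.y) - (-n1.x) * n0.y;
    if (std::fabs(det) < 1e-12) return false;
    double rx = p1.x - p0.x, ry = p1.y - p0.y;
    double s = (rx * (-n1.y) - (-n1.x) * ry) / det;
    center->x = p0.x + s * n0.x;
    center->y = p0.y + s * n0.y;
    return true;
}

With the example from the question, p0=(10,0), d0=(0,1), p1=(0,10), d1=(1,0) gives the center (0,0), as stated.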
For the special case where your two end points and the two known normal vectors for the control points happen to make the Bezier curve a planar one, you are basically looking for a cubic Bezier curve that can well approximate a circular arc. For this special case, you can set the distance (denoted L) between each control point and its respective end point to L = (4/3)*tan(A/4) times the arc radius, where A is the angle of the circular arc.
For the general 3D case, perhaps you can apply the same formula as:
compute the angle between the two normal vectors.
use L=(4/3)*tan(A/4) to decide the location of your control points, as in the sketch below.
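A minimal sketch under those assumptions; t0 and t1 are the unit tangent directions at the end points and chord is the distance between them (names are illustrative):

#include <algorithm>
#include <cmath>

// Control-point distance for a cubic Bezier approximating a circular
// arc: A is the angle between the unit tangents, the arc radius R
// follows from the chord length, and L = (4/3)*tan(A/4)*R.
double controlDistance(const double t0[3], const double t1[3], double chord) {
    double dot = t0[0]*t1[0] + t0[1]*t1[1] + t0[2]*t1[2];
    double A = std::acos(std::max(-1.0, std::min(1.0, dot)));
    if (A < 1e-9) return chord / 3.0;  // collinear tangents: limit of the formula
    double R = chord / (2.0 * std::sin(A / 2.0));
    return (4.0 / 3.0) * std::tan(A / 4.0) * R;
}

For the 90° example in the question (radius 10), this yields L ≈ 5.52, i.e. the classic 0.5523·R factor for a quarter circle.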
if your normals are aligned in a plane
What you're basically doing here is creating an elliptical arc, in 3D, where the "it's in 3D" part is completely irrelevant, since it's just a 2D curve, rotated/translated to sit in your 3D space. So let's just solve the 2D case, and then the RT is entirely up to you.
Creating the "perfect" cubic Bezier between two points on an arc comes with limitations. You basically can't create good looking arcs that span more than a quarter circle. So, with that said: your start and end point normals give you a 2D angle between your normal vectors, which is the same angle as between your start and end tangents (since normals are perpendicular to tangents). So, let's:
align our curve so that the tangent at the start is 0
plug the angle between the tangents into the formula given in the section on Circle approximation in the Primer on Bezier curves. This is basically just implementing the formula for c1x/c1y/c2x/c2y as a function that takes an angle as its argument and spits out four values, the c1(x,y) and c2(x,y) coordinates.
There is no step 3, we're done.
After step 2, you have your control points in 2D to create the most circular arc between a start and end point. Now you just need to scale/rotate/translate it in 3D so that it lines up with where you needed your start and end point to begin with.
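A minimal sketch of step 2, assuming a unit-radius arc that starts at (1,0) with its start tangent pointing along +y (scale, rotate, and translate afterwards as described):

#include <cmath>

// Control points c1 and c2 for a cubic Bezier approximating a unit
// circular arc from angle 0 to angle a (radians); the start point is
// (1,0) and the end point is (cos a, sin a).
void arcControlPoints(double a, double& c1x, double& c1y,
                      double& c2x, double& c2y) {
    double k = (4.0 / 3.0) * std::tan(a / 4.0);
    c1x = 1.0;                           // start + k * start tangent (0,1)
    c1y = k;
    c2x = std::cos(a) + k * std::sin(a); // end - k * end tangent
    c2y = std::sin(a) - k * std::cos(a);
}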
if your normals are not aligned in a plane
Now we have a problem, although one that we can deal with by treating the dimensions as separate things entirely. Instead of creating a single 2D curve, we're going to create three: one that's the X/Y projection, one that's the X/Z projection, and one that's the Y/Z projection. For all three of these, we derive the control points in exactly the same way as before, then take the projective control points (three values for each control point) and go "okay, we now have X, Y, and Z projective coordinates; that means we have (X,Y,Z) coordinates", and we're done again.

Barycentric coordinates texture mapping

I want to map textures with correct perspective for 3D rendering. I am using barycentric coordinates to locate points on the faces of triangles. A simple affine transformation gave me that standard, weird-looking result. This is what I did to correct the perspective, but it seems to have only made the distortion greater:
three triangle vertices: v1, v2, v3
vertex coordinates: v_.x, v_.y, v_.z
texture coordinates: v_.u, v_.v
barycentric coordinates corresponding to the vertices: b1, b2, b3
I am trying to get the correct texture coordinates U and V:
z = b1/v1.z + b2/v2.z + b3/v3.z
U = (b1*v1.u/v1.z + b2*v2.u/v2.z + b3*v3.u/v3.z) / z
V = (b1*v1.v/v1.z + b2*v2.v/v2.z + b3*v3.v/v3.z) / z
This SHOULD work, shouldn't it? Why isn't it working?
EDIT: The response on this page looks useful, but I am unsure what the w coordinate is. Maybe somebody could just explain that, which would also likely solve my problem. http://www.gamedev.net/topic/593669-perspective-correct-barycentric-coordinates/
note: My tags were all wrong at first. That is now fixed.
Okay, this one I DID manage to solve on my own. I was dividing by the z coordinate in screen space. The solution is to divide by the homogeneous w coordinate instead.
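A minimal sketch of the corrected interpolation, assuming each vertex carries the homogeneous w from the projection stage (the struct and names are illustrative):

// Perspective-correct interpolation: interpolate u/w, v/w and 1/w
// linearly with the barycentric weights, then divide by the
// interpolated 1/w.
struct Vertex { float x, y, w, u, v; };

void interpolateUV(const Vertex& v1, const Vertex& v2, const Vertex& v3,
                   float b1, float b2, float b3, float& U, float& V) {
    float invW = b1 / v1.w + b2 / v2.w + b3 / v3.w;
    U = (b1 * v1.u / v1.w + b2 * v2.u / v2.w + b3 * v3.u / v3.w) / invW;
    V = (b1 * v1.v / v1.w + b2 * v2.v / v2.w + b3 * v3.v / v3.w) / invW;
}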
Well, that took a while to figure out.