Simplest way to transform a CGAL::Surface_mesh

I have a CGAL::Surface_mesh myMesh, I want to:
Translate myMesh so that its centroid becomes the new origin
Rotate myMesh so that its principal axis is aligned with the X-axis
Then scale myMesh
I know that I can use Surface_mesh_deformation, but that seems inefficient when all I want is a rigid transformation.

We have recently merged this PR, which does exactly what you want. It will be part of CGAL 4.13. In the meantime you can apply a CGAL::Aff_transformation_3 to each point in the mesh, like:
CGAL::Aff_transformation_3<K> aff(XXXXX);
for(Surface_mesh::Vertex_index v : myMesh.vertices())
{
  // The transformation returns the transformed point; write it back into the mesh.
  myMesh.point(v) = aff(myMesh.point(v));
}
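For the concrete steps in the question, here is a minimal sketch that translates the centroid to the origin and then scales the mesh. It assumes an exact-predicates/inexact-constructions kernel K and an example scale factor s (both assumptions, not from the original answer); the rotation onto the principal axis would additionally need a PCA step (e.g. CGAL's linear least-squares fitting), which is omitted here:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Aff_transformation_3.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Surface_mesh;

// Vertex centroid of the mesh (plain average of the vertex positions).
K::Vector_3 sum(0, 0, 0);
for(Surface_mesh::Vertex_index v : myMesh.vertices())
  sum = sum + (myMesh.point(v) - CGAL::ORIGIN);
K::Point_3 centroid = CGAL::ORIGIN + sum / double(myMesh.number_of_vertices());

double s = 2.0;  // example scale factor (placeholder)
CGAL::Aff_transformation_3<K> translate(CGAL::TRANSLATION, -(centroid - CGAL::ORIGIN));
CGAL::Aff_transformation_3<K> scale(CGAL::SCALING, s);
CGAL::Aff_transformation_3<K> aff = scale * translate;  // translate to the origin, then scale

for(Surface_mesh::Vertex_index v : myMesh.vertices())
  myMesh.point(v) = aff(myMesh.point(v));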

Related

How to do polynomial transformations programmatically?

Suppose I have a bunch of polynomials/data like the n=36 below:
They are all quite similar but with slightly different roof and amplitude. What is the best approach to code a sequence of coefficients/changes so that I can use this sequence to transform one polynomial into another, say: the blue one + a change sequence -> the green one?
P.S.:
I had tried to fit the data with a Gaussian curve, but unfortunately the results were very poor, so I have to use polynomials;
Currently the data are fitted by numpy.polyfit(x, y, 35)
Edit:
The intention is to find a way to generically describe the transformation between two polynomials, so I can use it to transform future ones. Say in the future I get a totally new polynomial like the ones above; I want to use this transformation code to transform it in a specific manner: increase/decrease the roof/amplitude. By "specific manner" I mean: note in the graph that the change in y is always biggest around the roof x, and along +x / -x the changes descend, much like a Gaussian curve; but unfortunately a Gaussian curve cannot express the data.

Moving player on Y axis in Godot 2D

I'm new to Godot.
I'm trying to make my player move vertically just like when it's moving horizontally.
I've tried a couple of ideas, but unfortunately I couldn't move him the way I want him to move.
I want to code my vertical movement in a similar way to my following horizontal movement code, if possible:
var direction := Vector2(
    Input.get_action_strength("move_right") - Input.get_action_strength("move_left"), 0.0
)
velocity = speed * direction
velocity = move_and_slide(velocity)
And if it's not possible, how can I code it?
Once upon a time there were vectors. I'm not in the mood to make yet another Introduction to Vector Algebra or to explain How to Work With Arbitrarily Oriented Vectors. Perhaps you might be interested in Math for Game Devs.
In this case, what you need to know is that 2D vectors have a horizontal and a vertical component (usually called x and y, respectively), and you are leaving your vertical component at zero, here:
var direction := Vector2(
    Input.get_action_strength("move_right") - Input.get_action_strength("move_left"), 0.0
)
So… Er… Don't do that. You say you want it to be like the horizontal, so something like this:
var direction := Vector2(
    Input.get_action_strength("move_right") - Input.get_action_strength("move_left"),
    Input.get_action_strength("move_down") - Input.get_action_strength("move_up")
)
In computer graphics, the vertical component in 2D often points downwards, for historical reasons. There are different conventions for 3D, but that is not the issue at hand, pun intended.
The other lines you have already work with arbitrary vectors. You don't need to change them, nor repeat them.

Convert a 3D cartesian map to Healpix projection

I want to convert a map I have into a healpy map. I am fairly new to working with healpy so any suggestions would be appreciated.
The current map looks like this, in the format GLONxGLATxR(Kpc):
You need to use the healpy.ang2pix function: given your coordinates, it tells you which pixel they fall into.
See https://healpy.readthedocs.io/en/latest/generated/healpy.pixelfunc.ang2pix.html
As an example, see this tutorial:
https://gist.github.com/zonca/680c68c3d60697eb0cb669cf1b41c324

Using libigl's uniformly_sample_two_manifold

I'm trying to use libigl's function uniformly_sample_two_manifold, but it does not work as described, and there is no documentation whatsoever to help me understand why.
I have a 3D mesh represented as Eigen::MatrixXd V with vertices and Eigen::MatrixXi F with faces. I'm attempting to use the function as follows:
igl::uniformly_sample_two_manifold(V, F, 20, 1.0, Out);
... giving the function my vertices, faces, and asking for 20 uniform samples in the Out structure. I set the "push factor" to 1 since I don't think I have any use for it now.
I noticed that the function specifically asks for "positions of mesh in weight space", which I presumed means the vertex positions. If I use it like this, however, the function returns the expected number of vertices, but they are clustered very close to each other and are by no means uniformly distributed across the mesh.
Does anyone happen to know how to correctly use this function? Or would anyone know what this "weight space" means?
Thanks!

Pose estimation: determine whether the rotation and translation matrices are right

Recently I've been struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. Then I use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the vectors are the right results?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain the coordinates of that point in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equal to the distance between the camera and the point in world space along the camera's x-axis direction (the sign can then be determined by which side of the camera the point is on)?
The figure below is drawn in world space, while the axes depicted are those of the camera system.
========= How the camera and the point are placed in world space =========
|
|
Camera --------------------------> Z axis
|           |} Xw?
|           P(Xw, Yw, Zw)
|
v x-axis
My rvec and tvec results seem both right and wrong. For a specific point the z value seems reasonable: if the point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, given the location of the point I think they should be positive, yet they are negative. What's more, the pattern detected in the original image is like this:
But using the point coordinates calculated in the camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern, but it has moved from the bottom right to the top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which also seems to agree with this post.
In the case of camera pose estimation, "the values seem reasonable" can be quantified as the backprojection error. That is a measure of how well your resulting rotation and translation map the 3D points onto the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
#include <opencv2/opencv.hpp>  // cv::solvePnP, cv::projectPoints
#include <cmath>               // std::sqrt

cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);

// Use the computed solution to project the 3D pattern onto the image.
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);

// Compute the error of each 2D-3D correspondence.
std::vector<float> errors;
for(size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints.at(i).x - projectedPattern.at(i).x;
    float dy = pixelPoints.at(i).y - projectedPattern.at(i).y;
    // Euclidean distance between the projected and the measured pixel.
    float err = std::sqrt(dx*dx + dy*dy);
    errors.push_back(err);
}
// Here, compute the max or average of your "errors".
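For instance, a quick way to aggregate the per-point errors (a small sketch, assuming the errors vector from above is non-empty):
// Average and maximum backprojection error over all correspondences.
// Needs <numeric> for std::accumulate and <algorithm> for std::max_element.
float meanErr = std::accumulate(errors.begin(), errors.end(), 0.0f) / errors.size();
float maxErr  = *std::max_element(errors.begin(), errors.end());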
An average backprojection error for a calibrated camera might be in the range of 0-2 pixels. According to your two pictures, yours would be far more than that. To me, it looks like a scaling problem. If I am right, you compute the projection yourself. Maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I have learned not to follow my imagination :) The first thing I do with the returned rVec and tVec is usually to create a 4x4 rigid transformation matrix out of them (I once posted code for that here). This makes things even less intuitive, but it is compact and handy.
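For reference, a minimal sketch of how such a 4x4 matrix can be assembled from rVec and tVec with cv::Rodrigues; the variable names are illustrative, and rVec/tVec are assumed to be CV_64F as returned by solvePnP with cv::Mat outputs:
// Convert the rotation vector into a 3x3 rotation matrix.
cv::Mat R;
cv::Rodrigues(rVec, R);

// Assemble the 4x4 rigid transformation [R | t; 0 0 0 1].
cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
R.copyTo(T(cv::Rect(0, 0, 3, 3)));    // top-left 3x3 rotation block
tVec.copyTo(T(cv::Rect(3, 0, 1, 3))); // top-right 3x1 translation column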
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates of the point in the camera system are (xc, yc, zc). Then xc should be the distance between the camera and the point in the real world along the x direction.
Next, how to determine whether the output matrices are right?
1. As #eidelen points out, the backprojection error is one indicative measure.
2. Calculate the coordinates of the points in the camera system from their world coordinates and the matrices, and check whether they are reasonable; a sketch follows below.
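A minimal sketch of that check, assuming rVec and tVec from solvePnP (CV_64F) and an example world point; the names and values are illustrative only:
// Transform one 3D world point into camera coordinates: Xc = R * Xw + t.
cv::Mat R;
cv::Rodrigues(rVec, R);                                  // 3x3 rotation matrix
cv::Mat Xw = (cv::Mat_<double>(3, 1) << 0.1, 0.2, 1.0);  // example world point
cv::Mat Xc = R * Xw + tVec;                              // the point in camera coordinates
// Xc.at<double>(2) is the depth along the camera's z-axis; it should be positive
// and roughly match the known distance of the point from the camera.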
So why did I get a wrong result (the pattern remained, but moved to a different region of the image)?
The parameter cameraMatrix in solvePnP() is the matrix supplying the camera's internal (intrinsic) parameters. In the camera matrix you should use roughly width/2 and height/2 for cx and cy, while I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
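For completeness, a minimal sketch of a camera matrix with the principal point at the image center; the focal lengths and image size below are placeholders, not values from the original post:
// Intrinsic camera matrix with the principal point (cx, cy) at the image center.
double fx = 800.0, fy = 800.0;   // focal lengths in pixels (placeholders)
int width = 640, height = 480;   // image size (placeholder)
cv::Mat camIntrinsics = (cv::Mat_<double>(3, 3) <<
    fx,  0.0, width  / 2.0,
    0.0, fy,  height / 2.0,
    0.0, 0.0, 1.0);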