Conditional Vertex Selection with radius not working in MeshLab

I'm using MeshLab to process a dense point cloud. I'm trying to remove the points whose distance from the center is larger than a certain radius, but I could not get MeshLab to select those points. I'm using Conditional Vertex Selection, but the (rad > 0) condition does not select any points at all. I have also centered the bounding box at the origin.
Does anyone know what the problem is? Thanks!

I'm not really sure what the "rad" variable refers to, but I don't think it is the spherical distance from the center. If you want to select vertices farther than a spherical radius from the center, you can use something like (sqrt(x^2+y^2+z^2) >= 100), replacing 100 with whatever radius you want.
Shameless plug: MLX provides both cylindrical and spherical selection shortcuts as the functions mlx.select.cylindrical_vert and mlx.select.spherical_vert.

Related

Is there a way to use CGAL to perform adaptive remeshing

Suppose I have a highly refined mesh, which can be achieved by using the remeshing code in CGAL:
PMP::isotropic_remeshing(
    faces(mesh),
    target_edge_length,
    mesh,
    PMP::parameters::number_of_iterations(nb_iter)
                    .protect_constraints(true) // i.e. protect the border here
);
Now suppose I want to use the edge collapse function in CGAL to selectively collapse only the areas I want, using this function:
int r = edge_collapse(surface_mesh,
                      stop_predicate,
                      vertex_index_map(vimap)
                          .edge_index_map(eimap)
                          .edge_is_border_map(ebmap)
                          .get_cost(cf)
                          .get_placement(pf)
                          .visitor(vis));
I understand that there is a get_cost(cf) parameter through which I could increase the cost in one region of the mesh, so as to reduce the number of edge collapses there (a sketch of such a cost functor is given below).
Could anyone show me how to do that?
Specifically, suppose I have a sphere of size 1 with an isotropic mesh of edge length 0.001. I want a gradual grading of edge length from 0.01 on one end of the sphere to 0.1 on the opposite side. How do I achieve that with the two functions above?
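One possible shape for such a cost functor is sketched below. It is untested and assumes the classic Surface_mesh_simplification GetCost concept: a call operator taking an edge Profile (with p0()/p1() point accessors) and a candidate placement, returning boost::optional of the kernel's FT (newer CGAL releases use std::optional). The names `focus` and `grading` are made up for this example. Edges far from the focus get a small cost and are collapsed first, so the mesh grades coarser away from the focus while staying fine near it:

#include <boost/optional.hpp>
#include <CGAL/number_utils.h>
#include <CGAL/squared_distance_3.h>

// Graded cost (untested sketch): a plain edge-length cost scaled down with
// distance from a chosen focus point.
template <typename Point>
struct Graded_edge_length_cost
{
  Point  focus;    // the end of the sphere that should keep edge length ~0.01
  double grading;  // how quickly the cost decays away from the focus

  template <typename Profile, typename Placement>
  boost::optional<double>
  operator()(const Profile& profile, const Placement& /*placement*/) const
  {
    // Plain squared edge length, as in SMS::Edge_length_cost.
    const double len2 = CGAL::to_double(
        CGAL::squared_distance(profile.p0(), profile.p1()));
    // Squared distance of one endpoint from the focus region.
    const double d2 = CGAL::to_double(
        CGAL::squared_distance(profile.p0(), focus));
    // Cheap far away from the focus, expensive close to it.
    return boost::optional<double>(len2 * grading / (grading + d2));
  }
};

An instance of this functor would then be passed via .get_cost(...) in place of cf above; the stop predicate still decides how far the simplification runs overall.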

pose estimation: determine whether rotation and translation matrices are right

Recently I have been struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points in the image. I then use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether these vectors are the right results?
Right now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain the coordinates of that point in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equal to the distance between the camera and the point in world space along the camera's x-axis direction (with the sign then determined by which side of the camera the point lies on)?
The figure below is drawn in world space, while the axes depicted belong to the camera coordinate system.

====== How the camera and the point are placed in world space ======

Camera --------------------------> z-axis
  |                |
  |                | Xw?
  |                P(Xw, Yw, Zw)
  |
  v
x-axis
My rvec and tvec results seem partly right and partly wrong. For a given point the z value seems reasonable; I mean, if this point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, according to the location of the point I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image looks like this:
[image of the detected pattern]
But using the point coordinates calculated in the camera system and the camera intrinsic parameters, I get an image like this:
[image of the reprojected pattern]
The target keeps its pattern, but it moved from the bottom right to the top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which agrees with this post.
In camera pose estimation, "the value seems reasonable" can be quantified as the backprojection (reprojection) error: a measure of how well your resulting rotation and translation map the 3D points onto the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);

// Use the computed solution to project the 3D pattern back onto the image.
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);

// Compute the error of each 2D-3D correspondence.
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints[i].x - projectedPattern[i].x;
    float dy = pixelPoints[i].y - projectedPattern[i].y;
    // Euclidean distance between the projected and the measured pixel.
    float err = std::sqrt(dx * dx + dy * dy);
    errors.push_back(err);
}
// Finally, take the max or the average of "errors".
The average backprojection error of a calibrated camera is typically in the range of 0-2 pixels. Judging from your two pictures, yours would be much more than that. To me it looks like a scaling problem. If I am right, you compute the projection yourself; maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I have learned not to follow my imagination :) The first thing I do with the returned rVec and tVec is usually to create a 4x4 rigid transformation matrix out of them (I once posted code for this here). This makes things even less intuitive, but it is compact and handy.
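For reference, a minimal sketch of that conversion (variable names as in the snippet above; solvePnP returns rVec and tVec as CV_64F by default):

// Turn the 3x1 rotation vector into a 3x3 rotation matrix.
cv::Mat R;
cv::Rodrigues(rVec, R);

// Assemble a 4x4 rigid transformation (world -> camera coordinates).
cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
R.copyTo(T(cv::Rect(0, 0, 3, 3)));     // top-left 3x3 block = rotation
tVec.copyTo(T(cv::Rect(3, 0, 1, 3)));  // fourth column = translation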
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates in the camera system are calculated as (xc, yc, zc). Then xc is the distance between the camera and the point in the real world along the camera's x direction.
Next, how do you determine whether the output matrices are right?
1. As @eidelen points out, the backprojection error is one indicative measure.
2. Calculate the camera-system coordinates of the points from their world coordinates and the matrices, and check that they are plausible.
So why did I get a wrong result (the pattern remained intact but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic (internal) parameters, not the external ones. In the camera matrix you should use roughly width/2 and height/2 for cx and cy, while I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything works fine.
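To illustrate the fix: cameraMatrix is the standard pinhole intrinsic matrix, where fx and fy are the focal lengths in pixels from your calibration (hypothetical names here) and cx, cy sit near the image centre:

cv::Mat camIntrinsics = (cv::Mat_<double>(3, 3) <<
    fx,  0.0, width  / 2.0,   // cx: roughly half the image width
    0.0, fy,  height / 2.0,   // cy: roughly half the image height
    0.0, 0.0, 1.0);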

Detect if a quad is actually visible in 2D OpenGL

I currently have 16 tiles, each with an individual image, that make up one big map. I pan by applying this transform right at the beginning, before any actual drawing:
GL.Translate(G_.Pan(0), G_.Pan(1), 0)
Then I zoom by doing this:
GL.Ortho(-G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, -G_.Size * 1.5 ^ G_.ZoomFactor, -1, 1)
G_.Size is a constant that is only set once at startup depending on parameters; the zoom factor ranges from -1 to -13.
What I want is to check whether each of the 16 tiles is within the visible area, so that I can skip drawing the ones that are off screen. I found some quite complex methods for doing this, but they were for 3D and seemed like a lot of work for something that should be simple. I would have thought it would be something like checking whether a point is within the bounds of the visible area, but I have no idea how to get the visible area.
Andon M. Coleman already suggested that you implement projection volume culling (a generalized form of frustum culling). This is, however, outside the scope of OpenGL. You must understand that OpenGL is not a "magical" scene graph that does scene management and the like. It is a mere drawing API: what it does is put shaded, textured points, lines and triangles on the screen, and that's it. The rest is up to you, or to the libraries you choose to implement it with.
In the case of projection volume culling you test whether a given piece of geometry intersects the volume defined by the planes that form the borders of the view volume. Your projection matrix defines those planes; specifically, it transforms view-space vertex positions into the [-1;1]×[-1;1]×[-1;1] cube of perspective-divided clip space. So by inverting the projection matrix and unprojecting the corners of that cube, you determine the limiting planes of the projection volume in view space.
You then use that information to intersect your quads with the volume to see if they cross it, i.e. are in any way visible.
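Since your projection is a plain 2D ortho, the test collapses to a rectangle overlap. A minimal sketch (untested; size, zoomFactor, panX and panY stand in for G_.Size, G_.ZoomFactor and G_.Pan(0)/G_.Pan(1) from the question):

#include <cmath>

struct AABB { float minX, minY, maxX, maxY; };

bool tileVisible(const AABB& tile, float size, float zoomFactor,
                 float panX, float panY)
{
    // Half-extent of the ortho volume set up by the GL.Ortho call above.
    const float half = size * std::pow(1.5f, zoomFactor);
    // GL.Translate shifts the world by (panX, panY), so a world-space point p
    // is on screen when p + pan lies inside [-half, half] on both axes.
    const AABB view = { -half - panX, -half - panY, half - panX, half - panY };
    return tile.minX < view.maxX && tile.maxX > view.minX &&
           tile.minY < view.maxY && tile.maxY > view.minY;
}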

Calculating total coverage area of a union of polygons

I have a number of 2D (possibly intersecting) polygons which I render using OpenGL ES on the screen. All the polygons are completely contained within the screen. What is the fastest way to find the percentage of the total screen area covered by the union of these polygons? Speed matters because the coverage area must update immediately whenever a polygon is moved.
Currently, I am representing each polygon as a 2D array of booleans. Using a point-in-polygon function (from a geometry package), I sample each point (x,y) on the screen to check if it belongs to the polygon, and set polygon[x][y] = true if so, false otherwise.
After doing that to all the polygons in the screen, I loop through all the screen pixels again, and check through each polygon array, counting that pixel as "covered" if any polygon has its polygon[x][y] value set to true.
This works, but the performance is not ideal as the number of polygons increases. Are there any better ways to do this, using open-source libraries if possible? I thought of:
(1) Unioning the polygons to get one or more non-overlapping polygons, then computing the area of each with the standard area-of-polygon formula and summing them up. I am not sure how to get this to work, though.
(2) Using OpenGL somehow. Imagine that I am rendering all these polygons with a single color. Is it possible to count the number of pixels in the screen buffer that have that color? That sounds like a really nice solution.
Any efficient means for doing this?
If you know the background color and all polygons have other colors, you can read back all pixels with glReadPixels() and simply count those whose color differs from the background.
If the first condition is not met, you can create a custom framebuffer and render all polygons in the same color (for example (0.0, 0.0, 0.0) for the background and (1.0, 0.0, 0.0) for the polygons). Then read the resulting framebuffer back and calculate the mean of the red channel across the whole screen.
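A rough sketch of the counting step (assumes an RGBA8 framebuffer; `width` and `height` are the viewport size you rendered at, and the polygons were drawn in red on a black background):

std::vector<unsigned char> pixels(static_cast<std::size_t>(width) * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

std::size_t covered = 0;
for (std::size_t i = 0; i < pixels.size(); i += 4)
    if (pixels[i] > 0)   // red channel non-zero => a polygon covers this pixel
        ++covered;

// Fraction of the screen covered by the union of the polygons.
double coverage = static_cast<double>(covered) / (static_cast<double>(width) * height);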
If you want to get non-overlapping polygons, you can run a line intersection algorithm. A simple variant is the Bentley–Ottmann algorithm, but even faster algorithms of O(n log n + k) (with n vertices and k crossings) are possible.
Given a line intersection, you can unify two polygons by constructing a vertex connecting both polygons on the intersection point. Then you follow the vertices of one of the polygons inside of the other polygon (you can determine the direction you have to go in using your point-in-polygon function), and remove all vertices and edges until you reach the outside of the polygon. There you repair the polygon by creating a new vertex on the second intersection of the two polygons.
Unless I'm mistaken, this can run in O(n log n + k * p) time where p is the maximum overlap of the polygons.
After unifying the polygons you can use an ordinary area function, such as the shoelace formula sketched below, to calculate the exact area of the polygons.
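For completeness, a small sketch of such an area function (`Point2` is a hypothetical vertex type; the polygon must be simple, i.e. non-self-intersecting):

#include <cmath>
#include <vector>

struct Point2 { double x, y; };

// Shoelace formula: exact area of a simple polygon.
double polygonArea(const std::vector<Point2>& poly)
{
    double twiceArea = 0.0;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++)
        twiceArea += poly[j].x * poly[i].y - poly[i].x * poly[j].y;
    return std::fabs(twiceArea) / 2.0;  // abs() makes it orientation-independent
}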
I think trying to calculate the area of polygons by counting pixels is overly complicated and sometimes inaccurate. You can find something similar in the Stack Overflow answer about calculating the area covered by a polygon, and if your polygons are regular, see the question about the area of a regular polygon.

translate coordinate from one triangle to a triangle with a different perspective

How do I calculate point D for triangle 2?
I have the following coordinates for triangle 1:
a(0,0) b(0,78) c(18,39)
Point D is located at (0,39) in triangle 1.
Now I change the perspective on my triangle by, for example, moving coordinates b and c.
The new triangle formed is called triangle 2, with coordinates:
a(0,0) b(11,72) c(37,42)
As you can see, point D is no longer in the middle of line a-b because of the change in perspective/skew.
How do I calculate point D? I have the coordinates a, b, c of triangles 1 and 2.
Preferably answer in program code rather than math notation, since I am not a hero at reading math :)
You need to convert point D to barycentric coordinates using the original triangle's coordinates, then convert it back to Cartesian coordinates using the modified triangle's coordinates.
This looks like a good introduction to triangular barycentric coordinates: http://blogs.msdn.com/b/rezanour/archive/2011/08/07/barycentric-coordinates-and-point-in-triangle-tests.aspx
Also, explicit formulae for converting a point in a triangle to barycentric coordinates are given at the end of the Converting to Barycentric Coordinates section of the Wikipedia article “Barycentric coordinate system”.
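A minimal sketch of that recipe in code, using the triangles from the question (`Pt` and `toBarycentric` are names made up for this example):

#include <cstdio>

struct Pt { double x, y; };

// Barycentric weights (u, v, w) of p with respect to triangle (a, b, c).
static void toBarycentric(Pt p, Pt a, Pt b, Pt c,
                          double& u, double& v, double& w)
{
    const double d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    w = 1.0 - u - v;
}

int main()
{
    // Triangle 1 and the known point D from the question.
    Pt a1{0, 0}, b1{0, 78}, c1{18, 39}, d1{0, 39};
    // Triangle 2: the same triangle after moving b and c.
    Pt a2{0, 0}, b2{11, 72}, c2{37, 42};

    double u, v, w;
    toBarycentric(d1, a1, b1, c1, u, v, w);  // here u = v = 0.5, w = 0

    // The same weights applied to triangle 2 give the transported point D.
    Pt d2{ u * a2.x + v * b2.x + w * c2.x,
           u * a2.y + v * b2.y + w * c2.y };
    std::printf("D in triangle 2: (%.2f, %.2f)\n", d2.x, d2.y);  // (5.50, 36.00)
    return 0;
}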
I guess there are more ways of calculating a coordinate from one perspective to another.
More on the triangle approach was written by culebrón here: Transforming captured co-ordinates into screen co-ordinates
At the same link there is another way, using SVD to calculate an H-matrix which can be used to translate any coordinate from one perspective to another. I am going to use this approach because I could get it to work in MATLAB. The next step is Objective-C! I had some trouble calculating the same thing in Objective-C; more on that here: calculate the V from A = USVt in objective-C with SVD from LAPACK in xcode
I would like to know how to solve it the triangle way too! I could not figure out what a1 and a2 were in culebrón's post: https://stackoverflow.com/a/1690300/1568532 Nor did the width and height make much sense to me.
I would also like to know how to calculate the eye's (camera's) point of view on a triangle or quadrangle based on 3 or 4 coordinates, if you know the original size of the object.
Any ideas on this?
When I search for "eye" or "camera point of view", there are loads of results about photography.
What do I need to use in order to calculate this? Maybe an example, anyone?