MeshLab Face Count

Not sure if I'm supposed to ask this question here, but I'm going to give it a try since the MeshLab team doesn't seem to respond to issues on GitHub quickly.
When I imported a mesh consisting of 100 vertices and 75 quad faces, MeshLab somehow reports it as having 146 faces. What is the problem here?
Please find the OBJ file here and the screenshot below:
Any help/advice would be greatly appreciated,
Thank you!
Tim

Yes, per the MeshLab homepage, Stack Overflow is now the recommended place to ask questions. GitHub should be reserved for reporting actual bugs.
It is important to understand that MeshLab is designed to work with large unstructured triangular meshes; while it can do some things with quad and polygonal meshes, there are limitations and idiosyncrasies.
MeshLab essentially treats all meshes as triangular for most operations; when a polygonal mesh is opened, MeshLab creates "faux edges" that subdivide the mesh into triangles. You can visualize the faux edges by toggling "Polygonal Modality" in the edge display pane. If you run "Compute Geometric Measures", it will report edge lengths both with and without the faux edges.
This is why MeshLab is reporting a higher number of faces for your model: it is reporting the number of faces after triangulation, i.e. including the faux-edge subdivision. Each quad is split into two triangles, so your 75 quads give roughly 2 × 75 = 150 triangles, which is close to the 146 MeshLab reports. Unfortunately I don't know of a way to have MeshLab report the number of faces without these faux edges.
Most filters only work on triangular meshes, and if run on a polygonal mesh the faux edges will be used. A few specific filters (e.g. those in the "Polygonal and Quad Mesh" category) work with quads, and for these the faux edges should be ignored. When exporting, if you check "polygonal" the faux edges should be discarded and the mesh will be saved with the proper polygons, otherwise the mesh will be permanently triangulated per the faux edges.
Hope this helps!

Related

Given a 3D model that is similar to the shape of a cube, how do I map regular gridlines onto the 3D model?

Suppose I have a 3D model that roughly resembles a cube or a cuboid, and I want to estimate a regular set of gridlines that lies on top of the 3D model.
Is there an efficient way of doing this?
A brute-force way might be to first estimate the 8 corner points, then use the geodesic function in CGAL to link up the corner points to form the initial path lines. Then, recursively, take the halfway point of each path and find the geodesic path to the halfway point of the opposing path, as sketched below.
But that would take up too much computing time.
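In code, the brute-force scheme looks roughly like this; geodesic() is a hypothetical stand-in for a real geodesic solver (e.g. CGAL's Surface_mesh_shortest_path), and iterating over path fractions plays the role of the recursion on halves:

```cpp
#include <vector>

struct Point3 { double x, y, z; };
using Path = std::vector<Point3>;   // ordered points along a surface path

// Hypothetical: shortest surface path between two points on the mesh,
// e.g. wrapping CGAL's Surface_mesh_shortest_path.
Path geodesic(const Point3& from, const Point3& to);

// Gridlines in one direction: connect matching fractions of two opposing
// boundary paths. Running this for both pairs of opposing boundary paths
// yields the full grid.
std::vector<Path> gridlines(const Path& a, const Path& b, int n) {
    std::vector<Path> lines;
    for (int i = 1; i < n; ++i) {
        const Point3& pa = a[i * (a.size() - 1) / n];  // point at fraction i/n along a
        const Point3& pb = b[i * (b.size() - 1) / n];  // point at fraction i/n along b
        lines.push_back(geodesic(pa, pb));
    }
    return lines;
}
```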
Is there a faster way to do this via texture mapping? I need to code this out, so I would be thankful if a specific algorithm could be referred to, rather than using software such as Blender, Maya, etc.
Thanks for any help!

Inverted faces in surface mesh produced by Polygonal Surface Reconstruction

I'm trying to use Polygonal Surface Reconstruction with building point cloud to create simplified building models.
I did some first tests with this CGAL code example and got promising results.
As an example, I used this point cloud with vertex normals correctly oriented and got the following result from PSR. Some faces are clearly inverted (dark faces with normals pointing inside the watertight mesh and therefore not visible).
I was wondering if there is a way to fix this face orientation error. I've noticed the orientation methods in Polygon mesh processing, but I don't really know how to apply them to the resulting PSR surface mesh. As far as the logic is concerned, making the normals point outwards should not be too complicated, I guess.
Thanks in advance for any help
You can use the function reverse_face_orientations in the Polygon mesh processing package.
Note that this package has several functions that can help you to correct/modify your mesh.
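For instance, a minimal sketch, assuming the PSR output is a closed triangle Surface_mesh (if only some connected components are flipped, PMP::orient() from the same package is worth a look as well):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/orientation.h>
#include <CGAL/boost/graph/helpers.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;

// Flip all faces if the mesh is closed but oriented inward, so that
// normals end up pointing outwards.
void make_outward_oriented(Mesh& mesh) {
    if (CGAL::is_closed(mesh) && !PMP::is_outward_oriented(mesh))
        PMP::reverse_face_orientations(mesh);
}
```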

Compute road plane normal with an embedded camera

I am developing some computer vision algorithms for vehicle applications.
I am in front of a problem and some help would be appreciated.
Let's say we have a calibrated camera attached to a vehicle which captures a frame of the road ahead of the vehicle:
[Image: initial frame]
We apply a first filter to keep only the road markers and return a binary image:
[Image: filtered image]
Once the road lanes are separated, we can approximate the lanes with linear expressions and detect the vanishing point:
[Image: objective]
What I am looking to recover is the equation of the road plane's normal n from the image, without any prior knowledge of the rotation matrix and the translation vector. I do assume, however, that L1, L2 and L3 lie on the same plane.
In 3D space the problem is quite simple. In the 2D image plane it is more complex, since the camera's projective transformation does not preserve angles, and I am not able to find a way to figure out the equation of the normal.
Do you have any idea about how I could compute the normal?
Thanks,
Pm
No can do, you need a minimum of two independent vanishing points (i.e. vanishing points representing the images of the points at infinity of two different pencils of parallel lines).
If you have them, the answer is trivial: express the positions of said vanishing points in normalized homogeneous coordinates (i.e. multiply the pixel coordinates by the inverse of the calibration matrix). Then their cross product is equal (up to scale) to the normal vector of the 3D plane said pencils define, decomposed in camera coordinates.
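A minimal sketch of that construction with OpenCV types; v1 and v2 are the two vanishing points in homogeneous pixel coordinates and K is the calibration matrix:

```cpp
#include <opencv2/core.hpp>

// Plane normal (in camera coordinates, up to sign) from two independent
// vanishing points and the calibration matrix K.
cv::Vec3d planeNormal(const cv::Matx33d& K, const cv::Vec3d& v1, const cv::Vec3d& v2) {
    cv::Vec3d d1 = K.inv() * v1;   // 3D direction of the first pencil
    cv::Vec3d d2 = K.inv() * v2;   // 3D direction of the second pencil
    // Both directions lie in the plane, so their cross product is the normal.
    return cv::normalize(d1.cross(d2));
}
```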
Your information is insufficient, as the others have stated. If your data is coming from a video, a common way to get a road ground plane is to take two or more images, compute the associated homography, then decompose the homography matrix into the surface normal and relative camera motion. You can do the decomposition with OpenCV's decomposeHomographyMat method. You can compute the homography by associating four or more point correspondences using OpenCV's findHomography method. If it is hard to determine these correspondences, it is also possible to do it with a combination of point and line correspondences (paper); however, this is not implemented in OpenCV.
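A sketch of that pipeline, assuming pts1 and pts2 are tracked road points in two frames; note that decomposeHomographyMat returns up to four candidate solutions, which still have to be disambiguated (e.g. with cheirality checks or a known motion direction):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Candidate ground-plane normals (in camera coordinates) from point
// correspondences between two frames. K is the calibration matrix.
std::vector<cv::Mat> roadNormalCandidates(const std::vector<cv::Point2f>& pts1,
                                          const std::vector<cv::Point2f>& pts2,
                                          const cv::Mat& K) {
    cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC);
    std::vector<cv::Mat> Rs, ts, normals;
    cv::decomposeHomographyMat(H, K, Rs, ts, normals);
    return normals;
}
```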
You do not have sufficient information in the example you provide.
If you are wondering "which way is up", one thing you might be able to do is to detect the line on the horizon. If K is the calibration matrix, then K^T l will give you the plane normal in 3D relative to your camera. (The general equation for back-projection of a line l in the image to a plane E through the center of projection is E = P^T l, with a 3x4 projection matrix P.)
A better alternative might be to establish a homography to rectify the ground plane. To do so, however, you need at least four non-collinear points with known coordinates - or four lines, no three of which may be parallel.
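A sketch of the rectification, with entirely hypothetical point values; the four image points and their known ground-plane coordinates (e.g. corners of a lane-marking dash of known size) would come from your detection step:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Warp the frame to a top-down view of the ground plane.
void rectifyGround(const cv::Mat& frame, cv::Mat& topDown) {
    // Hypothetical pixel positions of a lane-marking dash...
    std::vector<cv::Point2f> img   = { {412, 530}, {468, 530}, {455, 470}, {420, 470} };
    // ...and its known coordinates on the road plane (in cm, made up here).
    std::vector<cv::Point2f> world = { {0, 0}, {40, 0}, {40, 300}, {0, 300} };
    cv::Mat H = cv::findHomography(img, world);
    cv::warpPerspective(frame, topDown, H, cv::Size(400, 600));
}
```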

Tetrahedralization from surface mesh of thin-walled object

I need to generate a tetrahedral (volume) mesh of a thin-walled object. Think of objects like a bottle or a plastic bowl, etc., which are mostly hollow. The volumetric mesh is needed for an FEM simulation. A surface mesh of the outside of the object is available from measurement, using e.g. octomap or KinectFusion, so the vertex spacing is relatively regular. The inner surface of the object can be calculated from the outside surface by moving all points inward, since the wall thickness is known.
So far, I have considered the following approaches:
1. Create a 3D Delaunay triangulation (which would destroy the existing surface meshes) and then remove all tetrahedra that are not between the two original surfaces. For this check, it might be necessary to create an implicit surface representation of the two surfaces.
2. Create a 3D Delaunay triangulation and remove tetrahedra that are "inside" (in the hollow space) or "outside" (of the outer surface) using alpha shapes.
3. Close the outside and inside meshes and load them into tetgen, as the outer hull and as a hole respectively.
These approaches seem a bit inelegant to me, and they still have some pitfalls. I would probably need several libraries/tools for them. For 1 and 2, tetgen or another FEM meshing tool would probably still be required to create well-conditioned tetrahedra. Does anyone have a more straightforward solution? I guess this should also be a common problem in 3D printing.
Concerning tools/libraries, I have looked into PCL, meshlab and tetgen so far. They all seem to do only part of the job. Ideally, I would like to use only open source libraries and avoid tools which require manual intervention.
One way is to:
create a triangular mesh of the surface points,
extrude (move) that surface inward by the given wall thickness, which produces a volume (triangular prism) mesh of the wall,
split each prism into three tetrahedra (see the sketch below).
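A minimal index-only sketch of that split; the fiddly part, noted in the comment, is keeping the diagonals on shared quad faces consistent between neighboring prisms:

```cpp
#include <array>
#include <vector>

using Tet = std::array<int, 4>;  // four vertex indices

// Split the prism over one surface triangle into three tetrahedra.
// a, b, c are the vertex indices of the triangle on the outer surface;
// a2, b2, c2 are the corresponding extruded vertices on the inner surface.
// Caveat: for a conforming mesh, neighboring prisms must choose matching
// diagonals on their shared quad faces (a common trick is to pick each
// quad's diagonal based on global vertex-index order).
void splitPrism(int a, int b, int c, int a2, int b2, int c2,
                std::vector<Tet>& tets) {
    tets.push_back({a, b, c, a2});
    tets.push_back({b, c, a2, b2});
    tets.push_back({c, a2, b2, c2});
}
```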
The problem I see is aspect ratio.
A single layer of tetrahedra will not reproduce shell or bending behavior very well. A single element through the thickness will already require a large mesh. Putting more than one will likely break the bank in order to keep aspect ratios and angles acceptable.
I'd prefer brick or thick shell elements to tetrahedra in this case. I think the modeling will be easier and the behavior will be more faithful to the physics.

Detect Collision point between a mesh and a sphere?

I am writing a physics simulation using Ogre and MOC.
I have a sphere that I shoot from the camera's position and it travels in the direction the camera is facing by using the camera's forward vector.
I would like to know how I can detect the point of collision between my sphere and another mesh.
How would I be able to check for a collision point between the two meshes using MOC or OGRE?
Update: I should have mentioned this earlier. I am unable to use a 3rd-party physics library, as I need to develop this myself (uni project).
The accepted solution here flat out doesn't work. It will only sort of work if the mesh is generally dense enough that no two points on the mesh are farther apart than the diameter of your collision sphere. Imagine a tiny sphere launched at short range on a random vector at a huge cube mesh. The cube mesh has only 8 vertices. What are the odds that the sphere is actually going to hit one of those 8 vertices?
This really needs to be done with per-polygon collision. You need to be able to check the intersection of a polygon and a sphere (and additionally a cylinder if you want to avoid tunneling, as reinier mentioned). There are quite a few resources for this online and in book form, but http://www.realtimerendering.com/intersections.html might be a useful starting point.
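For triangles, the standard test is closest-point-on-triangle followed by a distance check, along the lines of Ericson's Real-Time Collision Detection; a self-contained sketch with a minimal vector type:

```cpp
struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Closest point on triangle (a,b,c) to point p, by classifying p against
// the triangle's Voronoi regions (Ericson, "Real-Time Collision Detection", ch. 5).
Vec3 closestPointOnTriangle(Vec3 p, Vec3 a, Vec3 b, Vec3 c) {
    Vec3 ab = b - a, ac = c - a, ap = p - a;
    float d1 = dot(ab, ap), d2 = dot(ac, ap);
    if (d1 <= 0 && d2 <= 0) return a;                      // vertex region a
    Vec3 bp = p - b;
    float d3 = dot(ab, bp), d4 = dot(ac, bp);
    if (d3 >= 0 && d4 <= d3) return b;                     // vertex region b
    float vc = d1 * d4 - d3 * d2;
    if (vc <= 0 && d1 >= 0 && d3 <= 0)                     // edge region ab
        return a + (d1 / (d1 - d3)) * ab;
    Vec3 cp = p - c;
    float d5 = dot(ab, cp), d6 = dot(ac, cp);
    if (d6 >= 0 && d5 <= d6) return c;                     // vertex region c
    float vb = d5 * d2 - d1 * d6;
    if (vb <= 0 && d2 >= 0 && d6 <= 0)                     // edge region ac
        return a + (d2 / (d2 - d6)) * ac;
    float va = d3 * d6 - d5 * d4;
    if (va <= 0 && (d4 - d3) >= 0 && (d5 - d6) >= 0)       // edge region bc
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b);
    float denom = 1.0f / (va + vb + vc);                   // interior
    return a + (vb * denom) * ab + (vc * denom) * ac;
}

// Sphere/triangle intersection: the sphere hits the triangle iff the
// closest point is within the radius; that point doubles as the contact.
bool sphereTriangle(Vec3 center, float radius, Vec3 a, Vec3 b, Vec3 c, Vec3& contact) {
    contact = closestPointOnTriangle(center, a, b, c);
    Vec3 d = center - contact;
    return dot(d, d) <= radius * radius;
}
```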
The comments about optimization are good. Early out opportunities (perhaps a quick check against a bounding sphere or an axis aligned bounding volume for the mesh) are essential. Even once you've determined that you're inside a bounding volume, it would probably be a good idea to be able to weed out unlikely polygons (too far away, facing the wrong direction, etc.) from the list of potential candidates.
I think the best would be to use a specialized physics library.
That said, if I think about this problem, I suspect it's not that hard:
The sphere has a midpoint and a radius. For every point in the mesh do the following:
check if the point lies inside the sphere;
if it does, check whether it is closer to the center than the previously found point (if any);
if it is, store this point as the collision point.
Of course, this routine will be fairly slow.
A few things to speed it up:
as a first trivial reject, check whether the bounding sphere of the mesh collides;
don't calculate square roots when checking distances; use the squared lengths instead (much faster);
instead of comparing against every point of the mesh, use a spatial subdivision structure (quadtree / BSP) for the mesh to quickly rule out groups of points. The sketch below combines the basic loop with the squared-distance trick.
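Putting the basic loop and the squared-distance trick together (Vec3 is a minimal stand-in type; the bounding-sphere reject would wrap the call):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Naive per-vertex check: find the mesh vertex inside the sphere that is
// closest to the sphere's center. Works on squared distances throughout,
// so no square roots are taken. Returns false if no vertex is inside.
bool nearestVertexInSphere(const std::vector<Vec3>& verts,
                           Vec3 center, float radius, Vec3& hit) {
    float best = radius * radius;   // only accept vertices inside the sphere
    bool found = false;
    for (const Vec3& p : verts) {
        float dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
        float d2 = dx * dx + dy * dy + dz * dz;
        if (d2 <= best) { best = d2; hit = p; found = true; }
    }
    return found;
}
```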
Ah... and this routine only works if the sphere doesn't travel too fast (relative to the mesh). If it travels very fast, and you sample it X times per second, chances are the sphere will have flown right through the mesh without ever colliding. To overcome this, you must use 'swept volumes', which basically turns your sphere into a tube, making the math considerably more complicated.