Compute road plane normal with an embedded camera

I am developing some computer vision algorithms for vehicle applications.
I am in front of a problem and some help would be appreciated.
Let's say we have a calibrated camera attached to a vehicle, which captures a frame of the road ahead of the vehicle:
Initial frame
We apply a first filter to keep only the road markers and return a binary image:
Filtered image
Once the road lanes are separated, we can approximate them with linear expressions and detect the vanishing point:
Objective
What I am looking to recover is the equation of the plane normal n from the image, without any prior knowledge of the rotation matrix or the translation vector. I do assume, however, that L1, L2 and L3 lie on the same plane.
In 3D space the problem is quite simple, but in the 2D image plane it is more complex, since the camera's projective transformation does not preserve angles. I am not able to find a way to figure out the equation of the normal.
Do you have any idea about how I could compute the normal?
Thanks,
Pm

No can do, you need a minimum of two independent vanishing points (i.e. vanishing points representing the images of the points at infinity of two different pencils of parallel lines).
If you have them, the answer is trivial: express the image positions of said vanishing points in homogeneous coordinates. Then their cross product is equal (up to scale) to the normal vector of the 3D plane said pencils define, decomposed in camera coordinates.
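To make this concrete, here is a minimal sketch using OpenCV types, with made-up intrinsics and vanishing-point positions. If the vanishing points are given in pixel coordinates, they should first be back-projected through K^-1; equivalently, n is proportional to K^T (vp1 x vp2).

    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
        // Hypothetical intrinsics and vanishing points, for illustration only.
        cv::Matx33d K(800, 0, 640,
                        0, 800, 360,
                        0,   0,   1);
        cv::Vec3d vp1(700, 300, 1);   // homogeneous image coordinates
        cv::Vec3d vp2(100, 310, 1);

        // Back-project to ray directions in camera coordinates.
        cv::Vec3d d1 = K.inv() * vp1;
        cv::Vec3d d2 = K.inv() * vp2;

        // Both directions lie in the 3D plane, so their cross product is
        // the plane normal (up to scale), expressed in camera coordinates.
        cv::Vec3d n = d1.cross(d2);
        n = n / cv::norm(n);
        std::cout << "plane normal: " << n << std::endl;
    }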

Your information is insufficient, as the others have stated. If your data comes from a video, a common way to get the road's ground plane is to take two or more images, compute the associated homography, then decompose the homography matrix into the surface normal and the relative camera motion. You can do the decomposition with OpenCV's decomposeHomographyMat method. You can compute the homography by associating four or more point correspondences using OpenCV's findHomography method. If it is hard to determine these correspondences, it is also possible to use a combination of point and line correspondences (see this paper); however, that is not implemented in OpenCV.
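As a hedged sketch of that pipeline (the intrinsics and point matches below are made up for illustration; real correspondences would come from feature tracking on the road surface):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // Hypothetical intrinsics and ground-plane point matches.
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 640,
                                                 0, 800, 360,
                                                 0,   0,   1);
        std::vector<cv::Point2f> pts1 = {{100, 400}, {500, 420}, {300, 500},
                                         {620, 480}, {200, 460}};
        std::vector<cv::Point2f> pts2 = {{110, 395}, {515, 418}, {310, 492},
                                         {640, 475}, {215, 455}};

        // Homography between the two views of the ground plane.
        cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC);

        // Decompose into up to four (R, t, n) candidates; the physically
        // valid one still has to be selected, e.g. with
        // cv::filterHomographyDecompByVisibleRefpoints.
        std::vector<cv::Mat> Rs, ts, normals;
        int count = cv::decomposeHomographyMat(H, K, Rs, ts, normals);
        for (int i = 0; i < count; ++i)
            std::cout << "candidate " << i << " normal:\n" << normals[i] << "\n";
    }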

You do not have sufficient information in the example you provide.
If you are wondering "which way is up", one thing you might be able to do is to detect the horizon line in the image. If K is the calibration matrix, then n = K^T l gives you the plane normal in 3D relative to your camera. (The general equation for the backprojection of an image line l to the plane E through the center of projection is E = P^T l, with P the 3x4 projection matrix.)
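In code this is a one-liner once the horizon line is known; the intrinsics and line coefficients below are hypothetical:

    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
        // Hypothetical intrinsics; l = (a, b, c) encodes the image line
        // a*x + b*y + c = 0, here a roughly horizontal horizon at y = 350.
        cv::Matx33d K(800, 0, 640,
                        0, 800, 360,
                        0,   0,   1);
        cv::Vec3d l(0.0, 1.0, -350.0);

        cv::Vec3d n = K.t() * l;   // plane normal ("up"), up to scale
        n = n / cv::norm(n);
        std::cout << "up direction: " << n << std::endl;
    }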
A better alternative might be to establish a homography to rectify the ground-plane. To do so, however, you need at least four non-collinear points with known coordinates - or four lines, no three of which may be parallel.

Related

2D shape detection in 3D pointcloud

I'm working on a project to detect the position and orientation of a paper plane.
To collect the data, I'm using an Intel Realsense D435, which gives me accurate, clean depth data to work with.
Now I arrived at the problem of detecting the 2D paper plane silhouette from the 3D point cloud data.
Here is an example of the data (I put the plane on a stick for testing, this will not be in the final implementation):
https://i.stack.imgur.com/EHaEr.gif
Basically, I have:
A 3D point cloud with points on the plane
A 2D shape of the plane
I would like to calculate what rotations/translations are needed to align the 2D shape to the 3D point cloud as accurately as possible.
I've searched online but couldn't find a good way to do it. One way would be to use Iterative Closest Point (ICP): first take a calibration point cloud of the plane in a known orientation, then align it with the current one. But from what I've heard, ICP doesn't perform well if the point clouds aren't already roughly aligned at the start.
Any help is appreciated! Coding language doesn't matter.
Does your 3d point cloud have outliers? How many in what way?
How did you use ICP exactly?
One way would be to use ICP, with a hand-crafted initial guess applied first using
pcl::transformPointCloud (*cloud_in, *cloud_icp, transformation_matrix);
(to mitigate the problem that ICP needs to start close to the solution; a fuller sketch follows below.)
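For instance, a minimal sketch with PCL, where cloud_in stands for the live scan, cloud_ref for a reference scan of the paper plane, and both the toy points and the initial guess are placeholders for real data:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/registration/icp.h>
    #include <iostream>

    int main() {
        // Toy stand-ins for the real scans (replace with your clouds).
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_ref(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_in(new pcl::PointCloud<pcl::PointXYZ>);
        for (float x : {0.0f, 1.0f, 2.0f}) {
            cloud_ref->push_back(pcl::PointXYZ(x, 0.0f, 0.0f));
            cloud_ref->push_back(pcl::PointXYZ(x, 1.0f, 0.5f));
            cloud_in->push_back(pcl::PointXYZ(x + 0.3f, 0.0f, 0.0f));
            cloud_in->push_back(pcl::PointXYZ(x + 0.3f, 1.0f, 0.5f));
        }

        // Hand-crafted initial guess so ICP starts close to the solution.
        Eigen::Matrix4f guess = Eigen::Matrix4f::Identity();
        guess(0, 3) = -0.3f;   // hypothetical: undo the known rough offset

        pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
        icp.setInputSource(cloud_in);
        icp.setInputTarget(cloud_ref);
        icp.setMaximumIterations(50);

        pcl::PointCloud<pcl::PointXYZ> aligned;
        icp.align(aligned, guess);   // align() accepts the initial guess directly
        if (icp.hasConverged())
            std::cout << "fit:\n" << icp.getFinalTransformation() << std::endl;
    }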
What you actually want is the plane model that describes the position and orientation of your point cloud, right?
A good estimator of the underlying plane can be found with PCL's RANSAC implementation, i.e. pcl::RandomSampleConsensus with a pcl::SampleConsensusModelPlane (sample consensus module).
You can then get the computed model coefficients.
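A minimal sketch of that plane fit with PCL, using a toy cloud as a stand-in for the real data:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/sample_consensus/ransac.h>
    #include <pcl/sample_consensus/sac_model_plane.h>
    #include <iostream>

    int main() {
        // Toy cloud: points near the z = 0 plane plus one outlier.
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        cloud->push_back(pcl::PointXYZ(0.0f, 0.0f, 0.00f));
        cloud->push_back(pcl::PointXYZ(1.0f, 0.0f, 0.01f));
        cloud->push_back(pcl::PointXYZ(0.0f, 1.0f, 0.00f));
        cloud->push_back(pcl::PointXYZ(1.0f, 1.0f, 0.01f));
        cloud->push_back(pcl::PointXYZ(0.5f, 0.5f, 2.00f));   // outlier

        pcl::SampleConsensusModelPlane<pcl::PointXYZ>::Ptr
            model(new pcl::SampleConsensusModelPlane<pcl::PointXYZ>(cloud));
        pcl::RandomSampleConsensus<pcl::PointXYZ> ransac(model);
        ransac.setDistanceThreshold(0.05);
        ransac.computeModel();

        Eigen::VectorXf coeffs;   // (a, b, c, d) with a*x + b*y + c*z + d = 0
        ransac.getModelCoefficients(coeffs);
        std::cout << "plane: " << coeffs.transpose() << std::endl;
    }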
Now finding the correct transformation is just: How to calculate transformation matrix from one plane to another?

MeshLab Face Count

Not sure if I'm supposed to ask this question here, but going to give it a try since MeshLab doesn't seem to respond to issues on GitHub quickly.
When I imported a mesh consisting of 100 vertices and 75 quad faces, MeshLab somehow recognizes it as having 146 faces. What is the problem here?
Please find here the OBJ file and below the screenshot:
Any help/advice would be greatly appreciated,
Thank you!
Tim
Yes, per the MeshLab homepage, Stack Overflow is now the recommended place to ask questions. GitHub should be reserved for reporting actual bugs.
It is important to understand that MeshLab is designed to work with large unstructured triangular meshes; while it can do some things with quad and polygonal meshes, there are limitations and idiosyncrasies.
MeshLab essentially treats all meshes as triangular for most operations; when a polygonal mesh is opened, MeshLab creates "faux edges" that subdivide the mesh into triangles. You can visualize the faux edges by turning "Polygonal Modality" on or off in the edge display pane. If you run "Compute Geometric Measures", it will report different edge lengths with and without the faux edges. This is why MeshLab reports a higher number of faces for your model: it is counting the faces after triangulation, i.e. including the faux-edge subdivision. Doubling the number of quad faces (75) gives 150, which is close to the 146 triangular faces reported, so the numbers make sense. Unfortunately, I don't know of a way to have MeshLab report the number of faces without these faux edges.
Most filters only work on triangular meshes, and if run on a polygonal mesh, the faux edges will be used. A few specific filters (e.g. those in the "Polygonal and Quad Mesh" category) work directly with quads, and for these the faux edges should be ignored. When exporting, if you check "polygonal", the faux edges should be discarded and the mesh saved with the proper polygons; otherwise the mesh will be permanently triangulated along the faux edges.
Hope this helps!

Mesh to mesh. Mesh fitting (averaging). Mesh comparison.

I have 3 sets of point clouds that represent one surface. I want to use these point clouds to construct a triangular mesh, then use the mesh to represent the surface. Each set of points is collected in a different way, so their representations of the surface differ; for example, some sets can represent the surface with smaller "error". My questions are:
(1) What's the best way to evaluate such mesh-to-surface "error"?
(2) Is there a mature/reliable way to convert a point cloud to a triangular mesh? I found some software that does this, but most of it requires extensive manual adjustment.
(3) After the conversion I get three meshes. I want to use a fourth mesh, namely Mesh4, to "fit" the three meshes and obtain an "average" mesh of the three. Then I can use Mesh4 as a representation of the underlying surface. How can I do this "mesh to mesh" fitting, and what is it called? Is it a mature technique?
Thank you very much for your time!
Please find below my answers to points 1 and 2:
As a metric for the mesh-to-surface error you can use the Hausdorff distance. For example, you could use Libigl to compare two meshes; a sketch follows below.
To obtain a mesh from a point cloud, have a look at PCL.
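Here is a minimal sketch of the Hausdorff comparison with Libigl; the mesh file names are hypothetical:

    #include <igl/hausdorff.h>
    #include <igl/read_triangle_mesh.h>
    #include <Eigen/Core>
    #include <iostream>

    int main() {
        // Hypothetical files; any mesh format libigl can read works.
        Eigen::MatrixXd VA, VB;
        Eigen::MatrixXi FA, FB;
        igl::read_triangle_mesh("mesh_a.obj", VA, FA);
        igl::read_triangle_mesh("mesh_b.obj", VB, FB);

        // Hausdorff distance between the two meshes (libigl approximates
        // it from the mesh vertices).
        double d = 0.0;
        igl::hausdorff(VA, FA, VB, FB, d);
        std::cout << "Hausdorff distance: " << d << std::endl;
    }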

Registration of 3D surface mesh to 3D image volume

I have an accurate surface mesh model of an implant that I'd like to optimally rigidly align to a computed tomography scan (scalar volume) containing the exact same object. I've tried detecting edges in the image volume with a Canny filter and doing an iterative-closest-point alignment between the edges and the vertices of the mesh, but it's not working. I also tried voxelizing the mesh and using image volume alignment methods (Mattes mutual information), which yields very inconsistent results.
Any other suggestions?
Thank you.
Generally, meshes and volumes are two different data structures. You have to either convert the mesh to a volume or the volume to a mesh.
I would recommend segmenting the volume data first, to extract the tissues you want to register. A Canny filter alone might not be enough to segment the border clearly. I would recommend the level-set method and active contour models; both are frequently used in medical image processing, and for both I would point you to Professor Chunming Li's work.
After you have segmented the volume data, you should be able to reconstruct a mesh model of that volume with marching cubes (a sketch follows below). The vertices of the two meshes can then be registered with a simple ICP algorithm.
However, this is just a workaround rather than true registration, and the segmentation always takes a lot of time.
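The answer doesn't name a library for the marching-cubes step; as one possible (hedged) sketch, VTK could be used, assuming the segmentation has been saved as a binary image volume (the file name is hypothetical):

    #include <vtkImageData.h>
    #include <vtkMarchingCubes.h>
    #include <vtkNew.h>
    #include <vtkPolyData.h>
    #include <vtkXMLImageDataReader.h>

    int main() {
        // Hypothetical input: a segmented (binary) CT volume stored as .vti.
        vtkNew<vtkXMLImageDataReader> reader;
        reader->SetFileName("segmented_volume.vti");

        // Extract the iso-surface at the segmentation boundary.
        vtkNew<vtkMarchingCubes> mc;
        mc->SetInputConnection(reader->GetOutputPort());
        mc->SetValue(0, 0.5);   // iso-value between background (0) and object (1)
        mc->Update();

        // Feed this surface into ICP against the implant mesh vertices.
        vtkPolyData* mesh = mc->GetOutput();
        (void)mesh;
    }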

Detect Collision point between a mesh and a sphere?

I am writing a physics simulation using Ogre and MOC.
I have a sphere that I shoot from the camera's position and it travels in the direction the camera is facing by using the camera's forward vector.
I would like to know how I can detect the point of collision between my sphere and another mesh.
How would I be able to check for a collision point between the two meshes using MOC or OGRE?
Update: I should have mentioned this earlier. I am unable to use a 3rd-party physics library, as I need to develop this myself (uni project).
The accepted solution here flat out doesn't work. It will only even sort of work if the mesh density is high enough that no two points on the mesh are farther apart than the diameter of your collision sphere. Imagine a tiny sphere launched at short range on a random vector at a huuuge cube mesh. The cube mesh only has 8 vertices. What are the odds that the sphere is actually going to hit one of those 8 vertices?
This really needs to be done with per-polygon collision. You need to be able to check the intersection of a polygon and a sphere (and additionally a cylinder, if you want to avoid the tunneling reinier mentioned). There are quite a few resources for this online and in book form, but http://www.realtimerendering.com/intersections.html might be a useful starting point.
The comments about optimization are good. Early-out opportunities (perhaps a quick check against a bounding sphere or an axis-aligned bounding volume for the mesh) are essential. Even once you've determined that you're inside a bounding volume, it would probably be a good idea to weed out unlikely polygons (too far away, facing the wrong direction, etc.) from the list of potential candidates.
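As a sketch of the per-polygon test: the standard approach is to find the closest point on each triangle to the sphere's center and compare that distance to the radius (this follows Ericson's "Real-Time Collision Detection", ch. 5; the Vec3 type here is a hypothetical stand-in for your engine's vector class):

    #include <cstdio>

    struct Vec3 {
        float x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    };
    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Closest point on triangle (a, b, c) to point p, by Voronoi regions.
    static Vec3 closestPointOnTriangle(Vec3 p, Vec3 a, Vec3 b, Vec3 c) {
        Vec3 ab = b - a, ac = c - a, ap = p - a;
        float d1 = dot(ab, ap), d2 = dot(ac, ap);
        if (d1 <= 0 && d2 <= 0) return a;                    // vertex region a
        Vec3 bp = p - b;
        float d3 = dot(ab, bp), d4 = dot(ac, bp);
        if (d3 >= 0 && d4 <= d3) return b;                   // vertex region b
        float vc = d1 * d4 - d3 * d2;
        if (vc <= 0 && d1 >= 0 && d3 <= 0)                   // edge region ab
            return a + ab * (d1 / (d1 - d3));
        Vec3 cp = p - c;
        float d5 = dot(ab, cp), d6 = dot(ac, cp);
        if (d6 >= 0 && d5 <= d6) return c;                   // vertex region c
        float vb = d5 * d2 - d1 * d6;
        if (vb <= 0 && d2 >= 0 && d6 <= 0)                   // edge region ac
            return a + ac * (d2 / (d2 - d6));
        float va = d3 * d6 - d5 * d4;
        if (va <= 0 && (d4 - d3) >= 0 && (d5 - d6) >= 0)     // edge region bc
            return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)));
        float denom = 1.0f / (va + vb + vc);                 // interior
        return a + ab * (vb * denom) + ac * (vc * denom);
    }

    // The sphere hits the triangle iff the closest point lies within one radius.
    static bool sphereIntersectsTriangle(Vec3 center, float radius,
                                         Vec3 a, Vec3 b, Vec3 c, Vec3* contact) {
        Vec3 q = closestPointOnTriangle(center, a, b, c);
        Vec3 d = q - center;
        if (dot(d, d) > radius * radius) return false;
        *contact = q;   // contact point on the triangle's surface
        return true;
    }

    int main() {
        Vec3 contact{0, 0, 0};
        bool hit = sphereIntersectsTriangle({0, 0.4f, 0}, 0.5f, {-1, 0, -1},
                                            {1, 0, -1}, {0, 0, 1}, &contact);
        std::printf("hit=%d at (%g, %g, %g)\n", hit, contact.x, contact.y, contact.z);
    }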
I think the best would be to use a specialized physics library.
That said, if I think about this problem, I suspect it's not that hard:
The sphere has a midpoint and a radius. For every point in the mesh, do the following:
check if the point lies inside the sphere.
if it does, check if it is closer to the center than the previously found point (if any).
if it is, store this point as the collision point.
Of course, this routine will be fairly slow.
A few things to speed it up:
for a first trivial reject, see if the bounding sphere of the mesh collides
don't calculate the square roots when checking distances; use the squared lengths instead (much faster)
instead of comparing against every point of the mesh, use a spatial subdivision structure (quadtree / BSP) for the mesh to quickly rule out groups of points (a sketch of the basic loop follows this list)
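A sketch of the basic per-vertex loop with the squared-length trick (the Vec3 type and function name are hypothetical; real code would add the bounding-volume check and spatial subdivision described above):

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Returns the index of the mesh vertex inside the sphere that is closest
    // to its center, or -1 if no vertex is inside. Compares squared lengths
    // so no square roots are needed.
    int closestVertexInSphere(const std::vector<Vec3>& verts,
                              Vec3 center, float radius) {
        int best = -1;
        float bestSq = radius * radius;   // only accept points inside the sphere
        for (std::size_t i = 0; i < verts.size(); ++i) {
            float dx = verts[i].x - center.x;
            float dy = verts[i].y - center.y;
            float dz = verts[i].z - center.z;
            float sq = dx * dx + dy * dy + dz * dz;
            if (sq <= bestSq) {           // inside and closer than previous best
                bestSq = sq;
                best = static_cast<int>(i);
            }
        }
        return best;
    }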
Ah... and this routine only works if the sphere doesn't travel too fast relative to the mesh. If it travels very fast and you sample it X times per second, chances are the sphere will have flown right through the mesh without ever colliding. To overcome this, you must use 'swept volumes', which basically turns your sphere into a tube; that makes the math quite a bit more complicated.