Make 3D figure of 2D images "projecting information" onto each other - matplotlib

Is there a way to make a z-stack of 2D images, shown in an isometric 3D view, with points in each 2D image projecting downward onto the next slice? I am certain there is a technical term for this, but I just don't have the vocabulary to find the most pertinent answer. Would someone be able to point me in the right direction?
Below, I've drawn an "idea" of what this looks like. I'd love to know if this is possible without reinventing the wheel in matplotlib or other Python plotting libraries.

The original question asked for a Python solution. After many months of searching, I found a way to do this in TikZ. I cannot consider it my original work; it is largely based on Pascal Seppecher's interaction diagram found here.
To reconstruct my figure above, one can use that template to define:
- Agents of different shapes, with specified fills
- The frame (plane) in which they reside
- Flows (directed edges) that communicate how agents interact with each other in each plane
- Inter-plane interaction flows
https://texample.net/tikz/examples/interaction-diagram/
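For readers who still want to build the figure directly in matplotlib, here is a minimal sketch of the stacked-slice idea, with random illustrative data and a roughly isometric viewpoint. It is an approximation of the drawn figure, not a polished solution:

    # Minimal sketch: stack 2D slices at fixed z-heights and draw dashed
    # lines "projecting" each point down to the slice below. The data is
    # random and purely illustrative.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")

    z_levels = [0, 1, 2]                    # one z per 2D slice
    points = rng.random((3, 4, 2))          # 4 (x, y) points per slice

    # Draw each slice as a translucent unit-square plane at its z-level.
    xx, yy = np.meshgrid([0.0, 1.0], [0.0, 1.0])
    for z in z_levels:
        ax.plot_surface(xx, yy, np.full_like(xx, z), alpha=0.15)

    # Scatter the points and connect each one to the slice below it.
    for i, z in enumerate(z_levels):
        x, y = points[i, :, 0], points[i, :, 1]
        ax.scatter(x, y, z)
        if i > 0:
            for xp, yp in zip(x, y):
                ax.plot([xp, xp], [yp, yp], [z, z_levels[i - 1]], "k--", lw=0.8)

    ax.view_init(elev=20, azim=-60)         # roughly isometric viewpoint
    plt.show()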

Related

2D shape detection in 3D pointcloud

I'm working on a project to detect the position and orientation of a paper plane.
To collect the data, I'm using an Intel Realsense D435, which gives me accurate, clean depth data to work with.
Now I have arrived at the problem of detecting the 2D paper-plane silhouette in the 3D point cloud data.
Here is an example of the data (I put the plane on a stick for testing, this will not be in the final implementation):
https://i.stack.imgur.com/EHaEr.gif
Basically, I have:
A 3D point cloud with points on the plane
A 2D shape of the plane
I would like to calculate what rotations/translations are needed to align the 2D shape to the 3D point cloud as accurately as possible.
I've searched online but couldn't find a good way to do it. One option would be Iterative Closest Point (ICP): first take a calibration point cloud of the plane in a known orientation, then align it with the current orientation. But from what I've heard, ICP doesn't perform well unless the point clouds are already roughly aligned at the start.
Any help is appreciated! Coding language doesn't matter.
Does your 3D point cloud have outliers? How many, and of what kind?
How did you use ICP exactly?
One way would be to use ICP with a hand-crafted initial guess applied via
pcl::transformPointCloud (*cloud_in, *cloud_icp, transformation_matrix);
(to mitigate the problem that ICP needs a close starting alignment to work).
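In Python, a rough equivalent of that step might look like the following Open3D sketch (the file names and the initial guess are placeholders):

    # Sketch of ICP with a hand-crafted initial guess, using Open3D as a
    # Python stand-in for the PCL call above. File names and the initial
    # transform are placeholders.
    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("source.ply")   # placeholder inputs
    target = o3d.io.read_point_cloud("target.ply")

    init = np.eye(4)                  # hand-crafted initial transform
    init[:3, 3] = [0.0, 0.0, 0.5]     # e.g. a rough translation guess

    result = o3d.pipelines.registration.registration_icp(
        source, target, max_correspondence_distance=0.05, init=init)
    print(result.transformation)      # refined rigid transform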
What you actually want is the plane model that describes the position and orientation of your point cloud, right?
A good estimate of that underlying plane can be found with RANSAC model consensus in PCL (pcl::RandomSampleConsensus with pcl::SampleConsensusModelPlane).
You can then read off the computed model coefficients.
Now finding the correct transformation is just: How to calculate transformation matrix from one plane to another?
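As a sketch of that plane-fitting step in Python (using Open3D's segment_plane in place of the PCL classes above, since the question says the language doesn't matter; "cloud.ply" is a placeholder file):

    # RANSAC plane fit: estimate ax + by + cz + d = 0 for the dominant
    # plane in the cloud.
    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("cloud.ply")

    plane_model, inliers = pcd.segment_plane(distance_threshold=0.005,
                                             ransac_n=3,
                                             num_iterations=1000)
    a, b, c, d = plane_model
    normal = np.array([a, b, c])
    print("plane normal:", normal / np.linalg.norm(normal))

    # The inliers are the points on the paper plane; aligning the known 2D
    # outline to them then becomes a 2D registration problem in that plane.
    plane_points = pcd.select_by_index(inliers)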

Given a 3D model that is similar to the shape of a cube, how do I map regular gridlines onto the 3D model?

Suppose I have a 3D model that roughly resembles a cube or a cuboid, and I want to estimate a regular set of gridlines that lies on top of the 3D model.
Is there an efficient way of doing this?
A brute-force way might be to first estimate the 8 corner points, then use the geodesic function in CGAL to link up the corner points and form the initial path lines. Then, recursively, take the halfway point of each path and find the geodesic path to the halfway point of the opposing path.
But that would take up too much computing time.
Is there a faster way to do this via texture mapping? I need to code this myself, so I would be thankful if a specific algorithm could be referred to, rather than software such as Blender, Maya, etc.
thanks for any help!
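One cheaper alternative worth sketching (an assumption on my part, not a definitive answer): sample a regular grid on the model's bounding box and snap each grid point to the nearest surface point, avoiding geodesics entirely. Here is a rough version with trimesh; "model.obj" is a placeholder path and a single watertight mesh is assumed:

    import numpy as np
    import trimesh

    mesh = trimesh.load("model.obj")
    lo, hi = mesh.bounds                 # axis-aligned bounding box corners

    n = 10                               # grid resolution per axis
    u = np.linspace(lo[0], hi[0], n)
    v = np.linspace(lo[1], hi[1], n)

    # Grid points on the top face of the box, projected onto the mesh.
    uu, vv = np.meshgrid(u, v)
    grid = np.column_stack([uu.ravel(), vv.ravel(),
                            np.full(uu.size, hi[2])])
    closest, dist, tri_id = trimesh.proximity.closest_point(mesh, grid)
    # 'closest' approximates a regular grid on the top of the cuboid;
    # repeating per face yields gridlines over the whole model.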

Insert skeleton in 3D model programmatically

Background
I'm working on a project where a user gets scanned by a Kinect (v2). The result will be a generated 3D model which is suitable for use in games.
The scanning aspect is going quite well, and I've generated some good user models.
Example:
Note: This is just an early test model. It still needs to be cleaned up, and the stance needs to change to properly read skeletal data.
Problem
The problem I'm currently facing is that I'm unsure how to place skeletal data inside the generated 3D model. I can't seem to find a program that will let me insert the skeleton in the 3D model programmatically. I'd like to do this either via a program that I can control programmatically, or adjust the 3D model file in such a way that skeletal data gets included within the file.
What have I tried
I've been looking around for similar questions on Google and StackOverflow, but they usually refer to either motion capture or skeletal animation. I know Maya has the option to insert skeletons in 3D models, but as far as I could find that is always done by hand. Maybe there is a more technical term for the problem I'm trying to solve, but I don't know it.
I do have a train of thought on how to achieve the skeleton insertion. I imagine it to go like this:
1. Scan the user and generate a 3D model with the Kinect.
1.2. Clean up the user model, removing any deformations or unnecessary geometry, and close the holes left by the clean-up.
2. Scan the user's skeletal data using the Kinect.
2.2. Extract the skeleton data.
2.3. Get the joint locations and store them as xyz-coordinates in 3D space; store bone lengths and directions (a minimal storage sketch follows this list).
3. Read the 3D skeleton data into a program that can create skeletons.
4. Save the new model with the inserted skeleton.
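As a minimal sketch of the storage step (2.3), using a simple JSON layout of my own devising; the joint names and coordinates are placeholders:

    import json

    skeleton = {
        "joints": {                     # name -> xyz position (meters)
            "SpineBase": [0.0, 1.0, 0.0],
            "SpineMid":  [0.0, 1.3, 0.0],
            "Head":      [0.0, 1.7, 0.0],
        },
        "bones": [                      # (parent, child) pairs give
            ["SpineBase", "SpineMid"],  # bone lengths and directions
            ["SpineMid", "Head"],
        ],
    }

    with open("skeleton.json", "w") as f:
        json.dump(skeleton, f, indent=2)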
Question
Can anyone recommend (I know, this is perhaps "opinion-based") a program to read the skeletal data and insert it into a 3D model? Is it possible to use Maya for this purpose?
Thanks in advance.
Note: I opted to post the question here and not on Graphics Design Stack Exchange (or other Stack Exchange sites) because I feel it's more coding related, and perhaps more useful for people who will search here in the future. Apologies if it's posted on the wrong site.
A tricky part of your question is what you mean by "inserting the skeleton". Typically bone data is very separate from your geometry, and stored in different places in your scene graph (with the bone data being hierarchical in nature).
There are file formats you can export to where you might establish some association between your geometry and skeleton, but that's very format-specific as to how you associate the two together (ex: FBX vs. Collada).
Probably the closest thing to "inserting" or, more appropriately, "attaching" a skeleton to a mesh is skinning. There you compute weight assignments, basically determining how much each bone influences a given vertex in your mesh.
This is a tough part to get right (both programmatically and artistically), and depending on your quality needs, is often a semi-automatic solution at best for the highest quality needs (commercial games, films, etc.) with artists laboring over tweaking the resulting weight assignments and/or skeleton.
There are algorithms that get pretty sophisticated in determining these weight assignments, ranging from simple heuristics, like assigning weights based on nearest line distance (very crude, and will often fall apart near tricky areas like the pelvis or shoulder), to ones that actually treat the mesh as a solid volume (using voxel or tetrahedral representations) to assign weights. Example: http://blog.wolfire.com/2009/11/volumetric-heat-diffusion-skinning/
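To make the crude nearest-distance heuristic concrete, here is a toy sketch (illustrative only; real rigs need the volume-aware methods described above):

    # Weight each vertex by inverse distance to each bone segment, then
    # normalize so the weights per vertex sum to 1.
    import numpy as np

    def point_segment_distance(p, a, b):
        """Distance from point p to the segment from a to b."""
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def skin_weights(vertices, bones, power=2.0):
        """vertices: (n, 3) array; bones: list of (start, end) joint pairs."""
        w = np.zeros((len(vertices), len(bones)))
        for i, p in enumerate(vertices):
            for j, (a, b) in enumerate(bones):
                d = point_segment_distance(p, np.asarray(a), np.asarray(b))
                w[i, j] = 1.0 / (d + 1e-8) ** power  # closer bone, larger weight
        return w / w.sum(axis=1, keepdims=True)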
However, you might be able to get decent results using an algorithm like delta mush which allows you to get a bit sloppy with weight assignments but still get reasonably smooth deformations.
Now if you want to do this externally, pretty much any 3D animation software will do, including free ones like Blender. However, skinning and character animation in general is something that tends to take quite a bit of artistic skill and a lot of patience, so it's worth noting that it's not quite as easy as it might seem to make characters leap and dance and crouch and run and still look good even when you have a skeleton in advance. That weight association from skeleton to geometry is the toughest part. It's often the result of many hours of artists laboring over the deformations to get them to look right in a wide range of poses.

determine camera rotation and translation matrix from essential matrix

I am trying to extract the rotation and translation matrices from an essential matrix.
I took these answers as reference:
Correct way to extract Translation from Essential Matrix through SVD
Extract Translation and Rotation from Fundamental Matrix
Now I've applied SVD to the essential matrix following the steps above, but here comes the problem. According to my understanding of this subject, both R and T have two possible values, which leads to 4 possible solutions for [R|T]. However, only one of those solutions fits the physical situation.
My question is how can I determine which one of the 4 solutions is the correct one?
I am just a beginner on studying camera position. So if possible, please make the answer be as clear (but simple) as possible. Any suggestion would be appreciated, thanks.
The simplest approach is to test a point's 3D position under each candidate solution: a reconstructed point will be in front of both cameras in only one of the 4 possible solutions.
So assuming one camera matrix is P = [I|0], you have 4 options for the other camera, but only one of the pairs will place such a point in front of both of them.
More details are in Hartley and Zisserman's Multiple View Geometry in Computer Vision (page 259).
If you can use OpenCV (version 3.0+), there is a function called recoverPose that will do that job for you.
Ref: OpenCV documentation, http://docs.opencv.org/trunk/modules/calib3d/doc/calib3d.html
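A minimal sketch of that cheirality test via OpenCV's recoverPose, which decomposes E and keeps the [R|t] placing points in front of both cameras. The intrinsics K and the matches below are placeholders; use real matched features in practice:

    import cv2
    import numpy as np

    K = np.array([[700.0,   0.0, 320.0],    # placeholder camera intrinsics
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    pts1 = np.random.rand(20, 2) * 640      # stand-in "matched" points
    pts2 = pts1 + np.random.rand(20, 2)

    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    n_inliers, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    print("R =", R)
    print("t =", t.ravel())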

tetrahedralizing a mesh

I am looking for an algorithm that receives a 3D surface mesh (i.e. composed of 3D triangles that are a discretization of some manifold) and generates tetrahedra inside the mesh's volume.
That is, I want the 3D equivalent of this 2D problem: given a closed curve, triangulate its interior.
I am sorry if this is unclear; it's the best way I could think of explaining it.
For the 2D case there's Triangle. For the 3D case I could find none.
pygalmesh (a project of mine based on CGAL) can do just that.
pygalmesh-volume-from-surface elephant.vtu out.vtk --cell-size 1.0 --odt
https://github.com/nschloe/pygalmesh/#volume-meshes-from-surface-meshes
I found GRUMMP, which seems to answer all the needs mentioned in the question, and more...
I haven't had any experience using GRUMMP, but as for a 3D version of Triangle, there is TetGen. If you know Triangle's switches, TetGen is built to resemble them. It also has fairly decent documentation, and MeshPy provides a Python wrapper for both it and Triangle.
http://wias-berlin.de/software/tetgen/
http://mathema.tician.de/software/meshpy/
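As a small sketch of the MeshPy/TetGen route, tetrahedralizing the interior of a closed surface (a unit cube stands in here for a real triangulated surface mesh):

    from meshpy.tet import MeshInfo, build

    # Eight corners of a unit cube and its six quadrilateral faces.
    points = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
              (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    facets = [[0, 1, 2, 3], [4, 5, 6, 7], [0, 1, 5, 4],
              [1, 2, 6, 5], [2, 3, 7, 6], [3, 0, 4, 7]]

    mesh_info = MeshInfo()
    mesh_info.set_points(points)
    mesh_info.set_facets(facets)

    mesh = build(mesh_info, max_volume=0.05)   # TetGen fills the volume
    print(len(mesh.points), "vertices,", len(mesh.elements), "tetrahedra")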