I'm trying to write BVH files to animate models using the Kinect. So far I've managed to build the HIERARCHY part of the skeleton, but I have trouble understanding how to produce the MOTION part.
Since it's a BVH file, I only need to retrieve the rotation (the Rz, Rx, Ry channel values) for each bone, but I have to take the relative angles of the parent bones into account. I'm a bit lost in this part; any help would be appreciated.
You can find several implementations for writing BVH files from the Kinect, such as http://kinect-bvh-mocap.software.informer.com/.
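The core of the MOTION section is converting each joint's absolute (world) orientation from the Kinect into a rotation relative to its parent, then decomposing that local rotation into Euler angles in the channel order declared in your HIERARCHY (commonly Zrotation Xrotation Yrotation). Here is a minimal sketch, assuming you already have per-joint world rotation matrices (the Mat3 type and function names are mine, not from any SDK):

    #include <cmath>

    // Hypothetical 3x3 rotation matrix, m[row][col].
    struct Mat3 { double m[3][3]; };

    // R_world(joint) = R_world(parent) * R_local(joint)
    // => R_local = transpose(R_world(parent)) * R_world(joint),
    // since the inverse of a rotation matrix is its transpose.
    Mat3 localRotation(const Mat3& parentWorld, const Mat3& jointWorld)
    {
        Mat3 r = {};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                for (int k = 0; k < 3; ++k)
                    r.m[i][j] += parentWorld.m[k][i] * jointWorld.m[k][j];
        return r;
    }

    // Decompose R = Rz * Rx * Ry into the BVH channel values
    // "Zrotation Xrotation Yrotation", returned in degrees.
    void toEulerZXY(const Mat3& r, double& zDeg, double& xDeg, double& yDeg)
    {
        const double rad2deg = 180.0 / 3.14159265358979323846;
        // With R = Rz*Rx*Ry: r[2][1] = sin(x), r[2][0] = -cos(x)sin(y),
        // r[2][2] = cos(x)cos(y), r[0][1] = -sin(z)cos(x), r[1][1] = cos(z)cos(x).
        xDeg = std::asin(r.m[2][1]) * rad2deg;
        yDeg = std::atan2(-r.m[2][0], r.m[2][2]) * rad2deg;
        zDeg = std::atan2(-r.m[0][1], r.m[1][1]) * rad2deg;
    }

Walk the skeleton from the root down so each parent's world rotation is available before its children; the root joint takes its world rotation (plus the translation channels) directly.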
I'm trying to use Polygonal Surface Reconstruction with a building point cloud to create simplified building models.
I did some first tests with this CGAL code example and got promising first results.
As an example, I used this point cloud with vertex normals correctly oriented and got the following result from PSR. Some faces are clearly inverted (dark faces with normals pointing inside the watertight mesh and therefore not visible).
I was wondering if there is a way to fix this face orientation error. I've noticed the orientation methods in the Polygon Mesh Processing package, but I don't really know how to apply them to the resulting PSR surface mesh. Logically, making the normals point outwards should not be too complicated, I guess.
Thanks in advance for any help.
You can use the function reverse_face_orientations from the Polygon Mesh Processing package.
Note that this package has several functions that can help you to correct/modify your mesh.
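A minimal sketch, assuming the PSR output is a CGAL::Surface_mesh (both functions below live in the Polygon Mesh Processing package; orient_to_bound_a_volume is available in recent CGAL releases):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/Polygon_mesh_processing/orientation.h>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
    typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;

    namespace PMP = CGAL::Polygon_mesh_processing;

    void fix_orientation(Mesh& mesh)
    {
        // If the mesh is consistently oriented but inside-out,
        // flipping every face is enough:
        PMP::reverse_face_orientations(mesh);

        // Alternatively, for a closed (watertight) mesh such as the
        // PSR output, this reorients faces so normals point outward:
        // PMP::orient_to_bound_a_volume(mesh);
    }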
I would like to generate a visually appealing surface reconstruction from point clouds.
I am using the Point Cloud Library (PCL). I tried creating a mesh using the Poisson reconstruction method, but later found that it gives a watertight reconstruction.
For example, in my case I have a point cloud of a room.
Using the code at http://justpaste.it/code1, I was able to get a reconstruction like this:
[screenshot of the reconstructed mesh (source: pcl-users.org)]
The picture above has a surface covering the top view. This was visualized using MeshLab.
Later, when I switch to the points view in the MeshLab GUI, it looks like this:
[screenshot of the points view (source: pcl-users.org)]
But in the second picture there are also points on the surface (not clearly visible in the attached picture).
Can you help me create a model that has no points on top and just has the inside structure?
Any other suggestions to improve the reconstruction quality?
The point cloud of the room and generated ply file can be downloaded from https://dl.dropboxusercontent.com/u/95042389/temp_pcd_ply_files.tar.bz2
One solution that works for me is obtaining a convex/concave hull of your point cloud. Then you can use this hull to filter/crop your mesh after Poisson reconstruction. If you use PCL, you can try ConvexHull or ConcaveHull together with CropHull and test the results; see the sketch below. Hope this solves your issue, it did for me.
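A minimal sketch of that pipeline with PCL (file names are placeholders, and I'm assuming you have already extracted the Poisson mesh vertices into their own cloud, e.g. via pcl::fromPCLPointCloud2):

    #include <vector>
    #include <pcl/io/pcd_io.h>
    #include <pcl/point_types.h>
    #include <pcl/surface/convex_hull.h>
    #include <pcl/filters/crop_hull.h>

    int main()
    {
        using Cloud = pcl::PointCloud<pcl::PointXYZ>;

        // Original room scan (placeholder file name).
        Cloud::Ptr scan(new Cloud);
        pcl::io::loadPCDFile("room.pcd", *scan);

        // 1. Compute the convex hull of the original scan.
        Cloud::Ptr hull_points(new Cloud);
        std::vector<pcl::Vertices> hull_polygons;
        pcl::ConvexHull<pcl::PointXYZ> hull;
        hull.setInputCloud(scan);
        hull.setDimension(3);
        hull.reconstruct(*hull_points, hull_polygons);

        // 2. Crop the Poisson mesh vertices to the inside of that hull.
        Cloud::Ptr poisson_vertices(new Cloud);
        pcl::io::loadPCDFile("poisson_vertices.pcd", *poisson_vertices);

        pcl::CropHull<pcl::PointXYZ> crop;
        crop.setHullCloud(hull_points);
        crop.setHullIndices(hull_polygons);
        crop.setDim(3);
        crop.setInputCloud(poisson_vertices);

        Cloud cropped;
        crop.filter(cropped);
        pcl::io::savePCDFile("cropped.pcd", cropped);
        return 0;
    }

Swapping ConvexHull for ConcaveHull (which additionally needs setAlpha) gives a tighter crop if the room is not convex.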
In my experience (meshing caves), Poisson meshing will result in a watertight model/mesh, which is why your model was covered entirely. I only deal with meshes using MeshLab, but I am guessing it is the same thing. What I did try is the Ball-Pivoting meshing algorithm in MeshLab, which results in a non-watertight model. Maybe that is what you are looking for.
I need help getting skeleton data from my modified depth image using the Kinect SDK.
I have two Kinects, and I can get the depth image from both of them. I transform the two depth images into 3D coordinates and combine them using OpenGL. Finally, I reproject the 3D scene and get a new depth image. (The resolution is also 640*480, and the frame rate of my new depth image is about 20 FPS.)
Now I want to get the skeleton from this new depth image using the Kinect SDK.
Can anyone help me with this?
Thanks
This picture shows my processing flow:
Does the Microsoft Kinect SDK provide any API where I can input a depth image and get back a skeleton?
No.
I tried to find some libraries (not made by Microsoft) to accomplish this task, but it is not simple at all.
Try following this discussion on ResearchGate; there are some useful links (such as databases and early-stage projects) from which you can start if you want to develop your own library and share it with us.
I was hoping to do something similar: feed a post-processed depth image back to the Kinect for skeleton segmentation. Unfortunately, there doesn't seem to be a way of doing this with the existing API.
Why would you reconstruct a skeleton from your 3D depth data?
The Kinect SDK can record a skeleton directly, without any such reconstruction, and one camera is enough to do it.
If you use the Kinect v2, it can track up to six skeletons at once.
The Kinect basically provides three streams: RGB video, depth, and skeleton.
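For reference, a minimal sketch of reading the skeleton stream with the Kinect for Windows SDK v1 C++ API (error handling omitted; the loop bound is arbitrary):

    #include <Windows.h>
    #include <NuiApi.h>

    int main()
    {
        // Initialize the sensor for skeleton tracking only.
        NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON);
        NuiSkeletonTrackingEnable(nullptr, 0);

        NUI_SKELETON_FRAME frame = {0};
        for (int f = 0; f < 300; ++f)  // ~10 s at 30 FPS
        {
            // Wait up to 100 ms for the next skeleton frame.
            if (FAILED(NuiSkeletonGetNextFrame(100, &frame)))
                continue;

            for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
            {
                const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
                if (s.eTrackingState != NUI_SKELETON_TRACKED)
                    continue;
                // Joint positions in skeleton space (metres), e.g.:
                const Vector4& head = s.SkeletonPositions[NUI_SKELETON_POSITION_HEAD];
                (void)head; // ... use the joints here
            }
        }
        NuiShutdown();
        return 0;
    }

The point stands, though: the skeleton tracker runs inside the SDK/sensor pipeline, so there is no entry point for feeding it your merged depth image.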
I have started working with Kinect Fusion recently for my 3D reconstruction project. I have two questions in this field:
What is inside an .STL file? Is it the vertices of the different objects in the scene?
How can I segment a specific object (e.g. my hand) in the reconstructed file? Is there a way to do so using Kinect Fusion?
Thank you in advance!
Yes. An STL file is essentially just a list of triangles: each facet stores a unit normal and three vertices, with no per-object structure or segmentation. There is a Wikipedia article on the STL format:
http://en.wikipedia.org/wiki/STL_format
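For illustration, the binary variant has a fixed layout that maps directly onto a packed struct (a sketch; the field names are mine):

    #include <cstdint>

    // Binary STL layout (little-endian): an 80-byte header, a uint32
    // triangle count, then one 50-byte record per facet.
    #pragma pack(push, 1)
    struct StlFacet
    {
        float    normal[3];    // facet normal (often recomputed by readers)
        float    vertex[3][3]; // three vertices, x/y/z each
        uint16_t attribute;    // attribute byte count, usually 0
    };
    #pragma pack(pop)

    static_assert(sizeof(StlFacet) == 50, "each binary STL facet is 50 bytes");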
Firstly, you would want your hand to be fixed, because Kinect Fusion only reconstructs static scenes. Secondly, you could use the min and max depth threshold values to filter out the rest of the scene. Or, if that does not work too well, you could reconstruct the whole scene, get the mesh, and then filter out the vertices that are beyond a certain depth, for example; see the sketch below.
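That last filtering step is straightforward once the mesh is in memory. A sketch with a hypothetical triangle list (taking the Kinect's depth axis as z, in metres):

    #include <vector>

    struct Vec3     { float x, y, z; };
    struct Triangle { Vec3 v[3]; };

    // Keep only triangles whose vertices all lie closer than maxDepth.
    std::vector<Triangle> filterByDepth(const std::vector<Triangle>& mesh,
                                        float maxDepth)
    {
        std::vector<Triangle> kept;
        for (const Triangle& t : mesh)
            if (t.v[0].z < maxDepth && t.v[1].z < maxDepth && t.v[2].z < maxDepth)
                kept.push_back(t);
        return kept;
    }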
I'm relatively new to the OGRE graphics engine, so my question may seem obvious, but searching for relevant information was not successful.
Given:
I have an OGRE application with a scene made of some meshes, lights, cameras and textures. It is rather simple, I think. All of that is represented by a tree of scene nodes (an internal object).
The goal:
To save the full tree of scene nodes or, preferably, an indicated branch of the tree to a ".mesh" file, so that it can be loaded later like any other mesh. The ".mesh.xml" format is also fine. How could it be done?
If not:
If that is not possible, what is the normal way to create those ".mesh" files? And where could I find some guides for it?
I think you're a bit confused: an OGRE .mesh file stores only the geometric data of a given 3D model, like positions, normals, texture coordinates, tangents, binormals, bone indices, bone weights and so on. It can also store a subdivision of a single mesh into submeshes (generally based on the material), each of which can hold a reference to the proper material. In essence, a mesh file only contains data on the models you would like to load in your game, nothing about the scene structure.
If you want to save (serialize) your scene, you have two choices:
Write your own scene serializer (see the sketch after this list).
Use a library already provided by the OGRE community: for example, the DotScene format.
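For the first option, a minimal sketch (assuming the Ogre 1.x API) that walks a scene-node branch and dumps it as DotScene-like XML; only node names, positions and entity mesh references are written here:

    #include <Ogre.h>
    #include <ostream>

    // Recursively serialize a scene-node branch. Extend with
    // orientation, scale, lights, cameras, etc. as needed.
    void dumpNode(Ogre::SceneNode* node, std::ostream& out)
    {
        const Ogre::Vector3& p = node->getPosition();
        out << "<node name=\"" << node->getName() << "\">\n"
            << "  <position x=\"" << p.x << "\" y=\"" << p.y
            << "\" z=\"" << p.z << "\"/>\n";

        // Attached movable objects; entities reference their .mesh file.
        auto objIt = node->getAttachedObjectIterator();
        while (objIt.hasMoreElements())
        {
            Ogre::MovableObject* obj = objIt.getNext();
            if (auto* ent = dynamic_cast<Ogre::Entity*>(obj))
                out << "  <entity meshFile=\""
                    << ent->getMesh()->getName() << "\"/>\n";
        }

        // Recurse into child nodes.
        auto childIt = node->getChildIterator();
        while (childIt.hasMoreElements())
            dumpNode(static_cast<Ogre::SceneNode*>(childIt.getNext()), out);

        out << "</node>\n";
    }

Loading it back means parsing the XML and recreating the nodes and entities, which is exactly what the community DotScene loaders already do, so option 2 is usually less work.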
There are Ogre .mesh exporters for programs like Blender. A quick Google search for Ogre .mesh exporters should help you.