Does anyone know which data structure Blender uses? Half-Edge, Winged Edge, Shared Vertex, Directed Edge, etc.?
I'm preparing a presentation for my university about different mesh data structures, and since we're using Blender, I think I should at least know this about it. I did google it, but it seems like nobody knows anything about that.
A bit late, but the data structure Blender uses (BMesh) draws from the "Radial Edge Structure".
Documented here: https://wiki.blender.org/wiki/Source/Modeling/BMesh/Design
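In case it helps for the presentation, here is a rough C++ sketch of the radial-edge idea BMesh is built on. The names are illustrative, not Blender's real definitions (its actual structs are BMVert, BMEdge, BMLoop, and BMFace, described in the design doc above):

```cpp
// Illustrative sketch only -- not Blender's real definitions. The key
// feature of the radial edge structure: each edge keeps a circular
// "radial" list of the face corners (loops) that use it, so an edge can
// border any number of faces (0, 1, 2, or more), unlike half-edge.
struct Vert; struct Edge; struct Loop; struct Face;

struct Vert {
    float co[3];        // position
    Edge* edge;         // entry point into the "disk cycle" of edges at this vertex
};

struct Edge {
    Vert* v1;
    Vert* v2;
    Loop* loop;         // entry point into the radial cycle of loops around this edge
    Edge* v1_next;      // disk cycle links: next/prev edge around v1 ...
    Edge* v1_prev;
    Edge* v2_next;      // ... and around v2
    Edge* v2_prev;
};

struct Loop {           // one face's use of one edge (a "face corner")
    Vert* vert;         // vertex this loop starts at
    Edge* edge;         // edge this loop runs along
    Face* face;         // face this loop belongs to
    Loop* next;         // next loop around the face (boundary cycle)
    Loop* prev;
    Loop* radial_next;  // next loop around the edge (radial cycle)
    Loop* radial_prev;
};

struct Face {
    Loop* loop;         // entry point into the face's boundary cycle
    int len;            // number of corners/edges in the boundary
};
```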
I tried executing the example in http://doc.cgal.org/latest/Surface_mesh_skeletonization/index.html to get the skeleton of a surface mesh.
I tried using a mesh model of blood vessels with thin structures. However, no matter how refined my meshes are, parts of the skeleton always seem to end up outside the mesh model.
In the sample code there seem to be no parameters I can play around with, so I am asking whether there is anything I can do to make sure the skeleton stays within the mesh model.
I have tried refining the meshes until the program crashes. Thanks for any help!
I guess you have used the free function, which sets all parameters to their defaults. If you want to tune the parameters, you need to use the class Mean_curvature_flow_skeletonization directly.
It has three parameters that need to be fine-tuned so that your skeleton lies within the mesh (a sketch follows the list):
quality_speed_tradeoff
medially_centered_speed_tradeoff
is_medially_centered
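For reference, a minimal sketch of what the tuned version might look like, based on the class documentation (the input filename and parameter values are illustrative, not recommendations):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/Mean_curvature_flow_skeletonization.h>
#include <fstream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel   Kernel;
typedef CGAL::Polyhedron_3<Kernel>                            Polyhedron;
typedef CGAL::Mean_curvature_flow_skeletonization<Polyhedron> Skeletonization;

int main()
{
    Polyhedron mesh;
    std::ifstream in("vessels.off");  // hypothetical input file
    in >> mesh;

    Skeletonization mcs(mesh);

    // Illustrative values -- tune these until the skeleton stays inside.
    mcs.set_quality_speed_tradeoff(0.2);
    mcs.set_is_medially_centered(true);
    mcs.set_medially_centered_speed_tradeoff(0.3);

    mcs.contract_until_convergence();

    Skeletonization::Skeleton skeleton;
    mcs.convert_to_skeleton(skeleton);
    return 0;
}
```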
Note that the Polyhedron demo includes a plugin that lets you try the effect of the different parameters.
If you can share the mesh with me, I can also have a look.
Given that a user is static in a VR environment, which of the two camera types below would be better for creating a more 'real'-looking representation of a live-streamed presenter in the VR world?
1) Kinect (can measure depth)
2) Normal 2D camera, such as a high-end webcam (maybe something like the Point Grey Flea3), giving a software-assisted 3D illusion from a static angle
I'd be grateful if anyone with experience in the relevant technologies or fields could help out!
Your question lacks the necessary information to provide a single correct answer. Is it your intent to provide a full 3D VR experience, or are you content with just 2D content? Is the presenter static, or are they moving around the viewer? Towards them? Away from them? Will you be using full spherical projection or something less complete, like cylindrical projection? And what sort of lighting do you think you'll need? These are all nontrivial questions, because the answers determine the best camera package to get your content.
You also fail to consider capturing with a 360º camera, which would be advantageous if the presenter is indeed moving around in the 360º space. My personal bias is towards capturing with these, but there's no single production solution unless you constrain the problem more thoroughly.
Background
I'm working on a project where a user gets scanned by a Kinect (v2). The result will be a generated 3D model which is suitable for use in games.
The scanning aspect is going quite well, and I've generated some good user models.
Example:
Note: This is just an early test model. It still needs to be cleaned up, and the stance needs to change to properly read skeletal data.
Problem
The problem I'm currently facing is that I'm unsure how to place skeletal data inside the generated 3D model. I can't seem to find a program that lets me insert the skeleton into the 3D model programmatically. I'd like to do this either via a program that I can control programmatically, or by adjusting the 3D model file in such a way that the skeletal data gets included within the file.
What I have tried
I've been looking around for similar questions on Google and StackOverflow, but they usually refer to either motion capture or skeletal animation. I know Maya has the option to insert skeletons in 3D models, but as far as I could find that is always done by hand. Maybe there is a more technical term for the problem I'm trying to solve, but I don't know it.
I do have a train of thought on how to achieve the skeleton insertion. I imagine it going like this (a sketch of step 2.3 follows the list):
1. Scan the user and generate a 3D model with the Kinect.
1.2. Clean the user model, getting rid of any deformations or unnecessary information. Close holes that are left by the clean-up process.
2. Scan user skeletal data using the Kinect.
2.2. Extract the skeleton data.
2.3. Get joint locations and store them as xyz-coordinates in 3D space. Store bone lengths and directions.
3. Read the 3D skeleton data in a program that can create skeletons.
4. Save the new model with the inserted skeleton.
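For step 2.3, something as simple as the following could work as an interchange format between the Kinect side and the modeling tool. This is purely a hypothetical sketch; the struct and file layout are my own invention, not part of any SDK:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Hypothetical storage for one joint of the scanned skeleton.
struct Joint3D {
    std::string name;   // e.g. "SpineBase", "ElbowLeft" (Kinect v2 joint names)
    float x, y, z;      // joint position in camera space, meters
    int   parent;       // index of the parent joint, -1 for the root
};

// Write joints as simple lines: name parent x y z.
// Bone lengths and directions can be recomputed from parent-child positions.
void save_skeleton(const std::vector<Joint3D>& joints, const std::string& path)
{
    std::ofstream out(path);
    for (const Joint3D& j : joints)
        out << j.name << ' ' << j.parent << ' '
            << j.x << ' ' << j.y << ' ' << j.z << '\n';
}
```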
Question
Can anyone recommend (I know, this is perhaps "opinion based") a program to read the skeletal data and insert it into a 3D model? Is it possible to use Maya for this purpose?
Thanks in advance.
Note: I opted to post the question here and not on Graphics Design Stack Exchange (or other Stack Exchange sites) because I feel it's more coding related, and perhaps more useful for people who will search here in the future. Apologies if it's posted on the wrong site.
A tricky part of your question is what you mean by "inserting the skeleton". Typically bone data is very separate from your geometry, and stored in different places in your scene graph (with the bone data being hierarchical in nature).
There are file formats you can export to where you might establish some association between your geometry and skeleton, but that's very format-specific as to how you associate the two together (ex: FBX vs. Collada).
Probably the closest thing to "inserting" or, more appropriately, "attaching" a skeleton to a mesh is skinning. There you compute weight assignments, basically determining how much each bone influences a given vertex in your mesh.
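To make the vertex/bone relationship concrete, here is a minimal sketch of linear blend skinning, the standard way those weights are consumed at deformation time (types and names are illustrative):

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mat4 {
    float m[16];  // row-major 4x4 transform
    Vec3 transformPoint(const Vec3& p) const {
        return { m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
                 m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
                 m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11] };
    }
};

struct SkinnedVertex {
    Vec3 position;                 // bind-pose position
    std::array<int, 4>   bones;    // indices of up to 4 influencing bones
    std::array<float, 4> weights;  // influence weights, summing to 1
};

// boneMatrices[i] = currentPose[i] * inverseBindPose[i]
// The deformed vertex is the weighted sum of the bone-transformed positions.
Vec3 skin(const SkinnedVertex& v, const std::vector<Mat4>& boneMatrices)
{
    Vec3 result{0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        Vec3 p = boneMatrices[v.bones[i]].transformPoint(v.position);
        result.x += v.weights[i] * p.x;
        result.y += v.weights[i] * p.y;
        result.z += v.weights[i] * p.z;
    }
    return result;
}
```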
This is a tough part to get right (both programmatically and artistically). For the highest quality needs (commercial games, films, etc.), it is often a semi-automatic solution at best, with artists laboring over tweaking the resulting weight assignments and/or skeleton.
There are algorithms that get pretty sophisticated in determining these weight assignments, ranging from simple heuristics, like assigning weights based on the nearest line distance (very crude, and it will often fall apart near tricky areas like the pelvis or shoulder), to ones that actually treat the mesh as a solid volume (using voxel or tetrahedral representations). Example: http://blog.wolfire.com/2009/11/volumetric-heat-diffusion-skinning/
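For illustration, a crude sketch of that nearest-distance heuristic: weight each vertex by inverse distance to every bone segment and normalize. This is the fragile baseline, not the volumetric method from the linked article:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct BoneSegment { Vec3 head, tail; };

// Distance from point p to the segment ab.
float pointSegmentDistance(const Vec3& p, const Vec3& a, const Vec3& b)
{
    Vec3 ab{b.x - a.x, b.y - a.y, b.z - a.z};
    Vec3 ap{p.x - a.x, p.y - a.y, p.z - a.z};
    float len2 = ab.x*ab.x + ab.y*ab.y + ab.z*ab.z;
    float t = len2 > 0 ? (ap.x*ab.x + ap.y*ab.y + ap.z*ab.z) / len2 : 0.0f;
    t = std::clamp(t, 0.0f, 1.0f);  // clamp to the segment's endpoints
    Vec3 c{a.x + t*ab.x - p.x, a.y + t*ab.y - p.y, a.z + t*ab.z - p.z};
    return std::sqrt(c.x*c.x + c.y*c.y + c.z*c.z);
}

// Inverse-distance weights for one vertex against all bones, normalized to 1.
std::vector<float> distanceWeights(const Vec3& v, const std::vector<BoneSegment>& bones)
{
    std::vector<float> w(bones.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < bones.size(); ++i) {
        float d = pointSegmentDistance(v, bones[i].head, bones[i].tail);
        w[i] = 1.0f / (d + 1e-5f);  // small epsilon avoids division by zero
        sum += w[i];
    }
    for (float& wi : w) wi /= sum;
    return w;
}
```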
However, you might be able to get decent results using an algorithm like delta mush, which allows you to get a bit sloppy with weight assignments but still get reasonably smooth deformations.
Now if you want to do this externally, pretty much any 3D animation software will do, including free ones like Blender. However, skinning and character animation in general tend to take quite a bit of artistic skill and a lot of patience, so it's worth noting that it's not as easy as it might seem to make characters leap, dance, crouch, and run and still look good, even when you have a skeleton in advance. That weight association from skeleton to geometry is the toughest part; it's often the result of many hours of artists laboring over the deformations to get them to look right in a wide range of poses.
I would like to generate a visually appealing surface reconstruction from point clouds.
I am using the Point Cloud Library (PCL). I tried creating a mesh using the Poisson reconstruction method, but later found that it produces a watertight reconstruction.
For example: in my case I have a point cloud of a room.
Using the code at http://justpaste.it/code1, I was able to get a reconstruction like this:
(source: pcl-users.org)
The above picture shows a surface covering the top view; it was visualized in MeshLab.
Then, when I switch to the points rendering mode in the MeshLab GUI, it looks like this:
(source: pcl-users.org)
But in the second picture there are points on the surface too (not clearly visible in the attached picture).
Can you help me create a model that has no points on the top and just shows the inside structure?
Any other suggestions to improve the reconstruction quality?
The point cloud of the room and generated ply file can be downloaded from https://dl.dropboxusercontent.com/u/95042389/temp_pcd_ply_files.tar.bz2
One solution that works for me is obtaining a convex/concave hull of your point cloud. You can then use this hull to filter/crop your mesh after the Poisson reconstruction. If you use PCL, you can try ConvexHull or ConcaveHull together with CropHull and test the results (a sketch follows). Hope this solves your issue; it did for me.
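A minimal sketch of that pipeline with PCL (loading and the Poisson step are elided; the variable names are mine):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/surface/convex_hull.h>
#include <pcl/filters/crop_hull.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    // ... load the room scan into `cloud` (e.g. with pcl::io::loadPCDFile) ...

    // 1. Compute the convex hull of the original scan.
    pcl::PointCloud<pcl::PointXYZ>::Ptr hull_points(new pcl::PointCloud<pcl::PointXYZ>);
    std::vector<pcl::Vertices> hull_polygons;
    pcl::ConvexHull<pcl::PointXYZ> hull;
    hull.setInputCloud(cloud);
    hull.setDimension(3);
    hull.reconstruct(*hull_points, hull_polygons);

    // 2. Crop the Poisson output against the hull: vertices outside
    //    the hull are discarded.
    pcl::PointCloud<pcl::PointXYZ>::Ptr poisson_vertices(new pcl::PointCloud<pcl::PointXYZ>);
    // ... fill `poisson_vertices` from the reconstructed mesh ...

    pcl::CropHull<pcl::PointXYZ> crop;
    crop.setHullCloud(hull_points);
    crop.setHullIndices(hull_polygons);
    crop.setDim(3);
    crop.setInputCloud(poisson_vertices);

    pcl::PointCloud<pcl::PointXYZ> inside;
    crop.filter(inside);
    return 0;
}
```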
As far as my experience is concerned (meshing caves), meshing with Poisson results in a watertight model/mesh, which is why your model was covered entirely. I only deal with meshes using MeshLab, but I am guessing it is the same thing. What I did try is the Ball-Pivoting meshing algorithm in MeshLab, which results in a non-watertight model. Maybe that is what you are looking for.
I'm relatively new to OGRE graphics engine, so my question may seem too obvious, but searching for relevant information was not successful.
Given:
I have an OGRE application with a scene made of some meshes, lights, cameras, and textures. It is rather simple, I think. All of that is represented by a tree of scene nodes (internal objects).
The goal:
To save the full tree of scene nodes, or preferably an indicated branch of the tree, to a ".mesh" file, so that it can be loaded later like any other mesh. The ".mesh.xml" format is also fine. How could this be done?
If not:
If the desired thing is not possible, what is the normal way to create those ".mesh" files? And where could I find some guides for it?
I think you're a bit confused: an OGRE .mesh file stores only the geometric data of a given 3D model, like positions, normals, texture coordinates, tangents, binormals, bone indices, bone weights, and so on. It can also store a subdivision of a single mesh into submeshes (generally based on the material), each of which can have a reference to the proper material. In essence, a mesh file only contains data on the models you would like to load in your game, nothing about the scene structure.
If you want to save (serialize) your scene, you have two choices (a sketch of the first option follows the list):
Write your own scene serializer.
Use a library already provided by the OGRE community: for example, the DotScene format.
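If you go the hand-rolled route, the core is just a recursive walk over the node tree. A rough sketch (the XML layout here is made up for illustration; DotScene defines a real, documented one):

```cpp
#include <Ogre.h>
#include <fstream>

// Recursively write one scene node, its attached entities, and its children.
void serializeNode(Ogre::SceneNode* node, std::ofstream& out)
{
    const Ogre::Vector3&    p = node->getPosition();
    const Ogre::Quaternion& q = node->getOrientation();
    const Ogre::Vector3&    s = node->getScale();

    out << "<node name=\"" << node->getName() << "\" "
        << "pos=\"" << p.x << ' ' << p.y << ' ' << p.z << "\" "
        << "rot=\"" << q.w << ' ' << q.x << ' ' << q.y << ' ' << q.z << "\" "
        << "scale=\"" << s.x << ' ' << s.y << ' ' << s.z << "\">\n";

    // Record attached entities by mesh name so they can be re-created on load.
    for (unsigned short i = 0; i < node->numAttachedObjects(); ++i) {
        Ogre::MovableObject* obj = node->getAttachedObject(i);
        if (Ogre::Entity* e = dynamic_cast<Ogre::Entity*>(obj))
            out << "  <entity mesh=\"" << e->getMesh()->getName() << "\"/>\n";
    }

    // Recurse into child nodes to capture the whole branch.
    for (unsigned short i = 0; i < node->numChildren(); ++i)
        serializeNode(static_cast<Ogre::SceneNode*>(node->getChild(i)), out);

    out << "</node>\n";
}
```

Calling serializeNode on any node of the tree gives you exactly the "indicated branch" behavior you asked about; pass the scene manager's root node to save everything.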
There are Ogre .mesh exporters for programs like Blender. A quick google for Ogre .mesh exporters should help you.