I would like to mesh multiple (about 25) objects all at once using CGAL.
However, only one or two objects are output.
Is this a bug in CGAL?
I use a Gray_level_image for the meshing. The gray values range from 1.0 to 3.0, and I use float as the number type. Values from 1.0 to 2.0 are set to be the surrounding (empty) space, values from 2.0 to 3.0 are the object domain, and I set 2.0 as the isovalue.
In the upcoming CGAL version there is a new example, documented in the manual under 3.4.1 "Domains From Segmented 3D Images, with a Custom Initialization". You can find the code on our GitHub page. That new example shows how to detect all connected components of the domain encoded by the 3D image, and how to use a custom initialization so that the initial mesh has vertices on all connected components before the mesh refinement algorithm is launched.
That example is about a 3D segmented image (the value at each voxel is an integer) whereas you have a 3D gray-level image, but the method would be the same.
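For reference, here is a minimal sketch (not the CGAL example itself) of the standard gray-image meshing setup; the file name and criteria values are placeholders, and the exact parameter spelling of create_gray_image_mesh_domain() differs slightly between CGAL versions. The custom-initialization example essentially replaces the single make_mesh_3() call by first seeding the triangulation with points on every connected component and then calling CGAL::refine_mesh_3() on the pre-initialized complex.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Labeled_mesh_domain_3.h>
#include <CGAL/Mesh_triangulation_3.h>
#include <CGAL/Mesh_complex_3_in_triangulation_3.h>
#include <CGAL/Mesh_criteria_3.h>
#include <CGAL/make_mesh_3.h>
#include <CGAL/Image_3.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Labeled_mesh_domain_3<K>                      Mesh_domain;
typedef CGAL::Mesh_triangulation_3<Mesh_domain>::type       Tr;
typedef CGAL::Mesh_complex_3_in_triangulation_3<Tr>         C3t3;
typedef CGAL::Mesh_criteria_3<Tr>                           Mesh_criteria;

int main()
{
  CGAL::Image_3 image;
  if (!image.read("image.inr"))   // placeholder file name
    return 1;

  // Gray values above the isovalue 2.0 are inside; 1.0 is a value known to be outside
  Mesh_domain domain =
    Mesh_domain::create_gray_image_mesh_domain(image, 2.f, 1.f);

  // Placeholder sizing criteria -- tune for your data
  Mesh_criteria criteria(CGAL::parameters::facet_angle(30).facet_size(2).facet_distance(0.5)
                                          .cell_radius_edge_ratio(3).cell_size(2));

  // The default initialization only samples a few points on the domain surface,
  // which is why small disconnected components can be missed entirely. The
  // custom-initialization example inserts seed points on every component at this
  // stage, before refinement starts.
  C3t3 c3t3 = CGAL::make_mesh_3<C3t3>(domain, criteria);

  return 0;
}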
I am currently using PCL to obtain a mesh from a point cloud, but I would like to use the CGAL tetrahedral_isotropic_remeshing() method to improve the quality of my mesh in certain areas. I am having some difficulties transforming my pcl::PolygonMesh into the CGAL::Tetrahedral_remeshing::Remeshing_triangulation_3 object required by the isotropic remeshing method.
Thanks,
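One possible starting point, sketched below under some assumptions (triangular faces only, pcl::PointXYZ vertices, and a hypothetical helper name to_cgal_surface_mesh), is to copy the pcl::PolygonMesh into a CGAL::Surface_mesh first. A Remeshing_triangulation_3 is a tetrahedral (volume) triangulation, so from the surface mesh you would still have to mesh the interior, e.g. with make_mesh_3 on a polyhedral domain, before remeshing.

#include <pcl/PolygonMesh.h>
#include <pcl/point_types.h>
#include <pcl/conversions.h>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Surface_mesh;

Surface_mesh to_cgal_surface_mesh(const pcl::PolygonMesh& pcl_mesh)
{
  // Unpack the point blob stored inside the PolygonMesh
  pcl::PointCloud<pcl::PointXYZ> cloud;
  pcl::fromPCLPointCloud2(pcl_mesh.cloud, cloud);

  Surface_mesh sm;
  std::vector<Surface_mesh::Vertex_index> vmap;
  vmap.reserve(cloud.size());
  for (const pcl::PointXYZ& p : cloud)
    vmap.push_back(sm.add_vertex(K::Point_3(p.x, p.y, p.z)));

  // Copy the connectivity; assumes the PCL mesh is already triangulated
  for (const pcl::Vertices& f : pcl_mesh.polygons)
    if (f.vertices.size() == 3)
      sm.add_face(vmap[f.vertices[0]], vmap[f.vertices[1]], vmap[f.vertices[2]]);

  return sm;
}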
I am using Repast Simphony for a project that involves airspace and would like to have agents move in 3D continuous space above a GIS projection that has static ground-based agents. Currently, I have separate Geography and ContinuousSpace projections in the same context and move agents simultaneously in both projections, but the GIS display is only 2D in terms of agent motion.
I noticed that the Geometry objects used to set position in a Geography have a Coordinate.z field, but setting the z value to anything other than NaN does nothing. I haven't found anything in the docs about this.
I plan on implementing the Projection interface and making my own projection, since I cannot implement Geography and ContinuousSpace in the same class due to conflicting method signatures ('getAdder'). This seems a rather daunting task, so I figured it would be worth checking whether there are any better ways of going about this?
You can elevate point markers in the 3D GIS display by overriding the repast.simphony.visualization.gis3D.style.MarkStyle method
public double getElevation(T obj)
which will place the point marker at the specified elevation (in meters) in the 3D GIS display. The JTS Coordinate object can store a z-value, as you indicated, but none of the GeoTools or JTS spatial math uses this value, as the CRS transforms are all based on 2D topography. I believe getElevation() in the style specifies elevation relative to the ground, not sea level. You can provide a method in your agents that exposes the current elevation to the style, and then just have the style return agent.getElevation().
I am trying to compute some mesh features for 3D models that I created using numpy-stl. I would like to compute all of the features provided by pyradiomics, but I am not sure how to use them on just the meshes, without all of the extra binary image and matrix information. Or is there a better program to use for shape feature extraction? Also, the documentation says there are some features for which you need to enable C extensions. How can you do that in your Python script?
C extensions are enabled by default. As of PyRadiomics 2.0, the Python equivalents of those functions have been removed (they had horrible performance).
As for your meshes: PyRadiomics is built to extract features from images and binary labelmaps, so to use meshes you would first have to convert them.
What features do you want to extract? PyRadiomics does use a sort of on-the-fly built mesh to calculate surface area and volume, which are also used in the calculation of several other shape features.
If you want to take a look at how volume and SA are calculated, the source code for that is in C (radiomics/src/cshape.c). The calculation of the derived features (e.g. sphericity) is in shape.py
I have a project where I have to recognize an entire room so I can calculate the distances between objects (big ones, e.g. bed, table, etc.) and a person in that room. Is something like that possible using Microsoft Kinect?
Thank you!
Kinect provides you with the following:
Depth Stream
Color Stream
Skeleton information
It's up to you how you use this data.
To answer your question - the official Microsoft Kinect SDK doesn't provide shape detection out of the box. But it does provide skeleton data and face tracking, with which you can detect the distance of a user from the Kinect.
Also, by mapping the color stream to the depth stream you can detect how far a particular pixel is from the Kinect. If the different objects in your scene have unique characteristics such as color, shape, and size, you can probably detect them and also detect their distance.
OpenCV is one of the libraries I use for computer vision and related tasks.
Again, it's up to you how you use this data.
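To make the depth-lookup idea concrete, here is a small illustrative sketch (not Kinect SDK code): it assumes the depth frame has already been copied into a buffer of 16-bit per-pixel depth values in millimetres, as the depth stream delivers them. Depending on the SDK version and stream format, the low bits of each value may encode a player index and would need to be masked or shifted out first.

#include <cstdint>
#include <vector>

// Distance of pixel (x, y) from the sensor, in metres.
// depth_mm is a row-major frame of per-pixel depth values in millimetres.
double distance_at_pixel(const std::vector<uint16_t>& depth_mm,
                         int width, int x, int y)
{
  const uint16_t d = depth_mm[static_cast<std::size_t>(y) * width + x];
  return d / 1000.0;   // a value of 0 means "no depth reading" for that pixel
}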
The Kinect camera provides depth and consequently 3D information (a point cloud) about matte objects in the range of 0.5-10 meters. With this information it is possible to segment out the floor of the room (by fitting a plane), and possibly the walls and the ceiling. This step is important since these surfaces often connect separate objects, turning them into one big object.
The remaining parts of the point cloud can be segmented by depth if they don't physically touch each other. Using color, one can separate the objects even further. Note that we implicitly define an object as a 3D-dense and color-consistent entity, although other definitions are also possible.
As soon as you have your objects segmented, you can measure the distances between your segments, analyse their shape, recognize artifacts or humans, etc. To the best of my knowledge, however, the skeleton library can only recognize humans after they have moved for a few seconds. Below is a simple depth map that was broken into a few segments using depth but not color information.
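One way to implement the floor-removal and clustering steps described above is with PCL (sketched below under that assumption; the thresholds are placeholders to tune for your scene): fit the dominant plane with RANSAC, remove its inliers, then split what remains into Euclidean clusters that do not touch each other.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <vector>

std::vector<pcl::PointIndices>
segment_objects(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
  // 1. Fit the dominant plane (typically the floor) with RANSAC
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.02);              // 2 cm tolerance
  seg.setInputCloud(cloud);

  pcl::PointIndices::Ptr floor(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  seg.segment(*floor, *coeffs);

  // 2. Remove the plane so it no longer glues separate objects together
  pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(floor);
  extract.setNegative(true);
  extract.filter(*objects);

  // 3. Cluster the remaining points by Euclidean distance
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(objects);

  std::vector<pcl::PointIndices> clusters;     // indices refer to the 'objects' cloud
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.05);                // points closer than 5 cm join a cluster
  ec.setMinClusterSize(100);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusters);
  return clusters;
}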
I am trying to create a 3D mesh using VTK. VTK appears to provide a number of ways to create a mesh representing the surface of a 3D object. Filling in the object appears to be more difficult. The reason I want to do this is to pass the output to an FEM tool as a solid, not a balloon.
At the moment I am playing with spheres, and there seem to be a number of ways to create a mesh for the surface of a 3D object. What I can't seem to do is create a sphere with points inside the volume. The vtkUnstructuredGrid class allows me to represent such an object, but I can't seem to mesh it in the same way I can a vtkPolyData.
Is this a fundamental limitation of VTK or am I just not seeing the right tool?
As you said:
The reason I want to do this is to pass the output to an FEM tool as a solid, not a balloon
I assume you have your FEM mesh in your own format and you want to visualize it. To do so, you can turn your FEM mesh into a vtkUnstructuredGrid by modifying the code described here:
How to convert a mesh to VTK format?
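If your FEM tool gives you node coordinates and tetrahedral connectivity, a minimal sketch of building such a solid vtkUnstructuredGrid by hand could look like the following (the container types and the helper name build_grid are placeholders); if you only have a bare point set, vtkDelaunay3D can likewise produce a tetrahedral vtkUnstructuredGrid from it.

#include <vtkSmartPointer.h>
#include <vtkPoints.h>
#include <vtkUnstructuredGrid.h>
#include <vtkCellType.h>
#include <array>
#include <vector>

vtkSmartPointer<vtkUnstructuredGrid>
build_grid(const std::vector<std::array<double, 3>>& nodes,
           const std::vector<std::array<vtkIdType, 4>>& tets)
{
  // Copy the FEM node coordinates
  auto points = vtkSmartPointer<vtkPoints>::New();
  for (const auto& n : nodes)
    points->InsertNextPoint(n[0], n[1], n[2]);

  auto grid = vtkSmartPointer<vtkUnstructuredGrid>::New();
  grid->SetPoints(points);

  // One solid VTK_TETRA cell per element, so the interior is filled, not just the skin
  for (const auto& t : tets)
  {
    vtkIdType ids[4] = { t[0], t[1], t[2], t[3] };
    grid->InsertNextCell(VTK_TETRA, 4, ids);
  }
  return grid;
}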