I am using Repast Simphony for a project that involves airspace and would like to have agents move in 3D continuous space above a GIS projection that has static ground-based agents. Currently, I have separate Geography and ContinuousSpace projections in the same context and move agents simultaneously in both projections, but the GIS display is only 2D in terms of agent motion.
I noticed that the Geometry objects used to set position in a Geography have a Coordinate.z field, but setting the z value to anything other than NaN does nothing. I haven't found anything about this in the docs.
I plan on implementing the Projection interface and making my own projection, since I cannot implement both Geography and ContinuousSpace in the same class due to conflicting method signatures ('getAdder'). This seems like a rather daunting task, so I figured it would be worth checking whether there are any better ways of going about this.
You can elevate point markers in the 3D GIS display by overriding the repast.simphony.visualization.gis3D.style.MarkStyle method
public double getElevation(T obj)
which will place the point marker at the specified elevation, in meters, in the 3D GIS display. The JTS Coordinate object can store a z-value as you indicated, but none of the GeoTools or JTS spatial math uses this value, as the CRS transforms are all based on 2D topography. I believe getElevation() in the style specifies elevation relative to the ground rather than sea level. You can provide a method in your agents that reports the current elevation, and then simply have the style return agent.getElevation().
I'm working on a project to detect the position and orientation of a paper plane.
To collect the data, I'm using an Intel Realsense D435, which gives me accurate, clean depth data to work with.
Now I arrived at the problem of detecting the 2D paper plane silhouette from the 3D point cloud data.
Here is an example of the data (I put the plane on a stick for testing, this will not be in the final implementation):
https://i.stack.imgur.com/EHaEr.gif
Basically, I have:
A 3D point cloud with points on the plane
A 2D shape of the plane
I would like to calculate what rotations/translations are needed to align the 2D shape to the 3D point cloud as accurately as possible.
I've searched online but couldn't find a good way to do it. One approach would be to use Iterative Closest Point (ICP): first take a calibration point cloud of the plane in a known orientation, then align it with the current one. But from what I've heard, ICP doesn't perform well unless the point clouds are already roughly aligned at the start.
Any help is appreciated! Coding language doesn't matter.
Does your 3D point cloud have outliers? How many, and of what kind?
How did you use ICP exactly?
One way would be using ICP, with a hand-crafted initial guess using
pcl::transformPointCloud (*cloud_in, *cloud_icp, transformation_matrix);
(to mitigate the problem that ICP needs the clouds to be roughly aligned in order to work.)
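Outside of PCL, the same idea can be sketched with plain NumPy: the hand-crafted initial guess is just a 4x4 homogeneous matrix applied to the cloud before running ICP (the yaw angle and offset below are made-up placeholder values):

```python
import numpy as np

def make_initial_guess(yaw_rad, translation):
    """Build a 4x4 homogeneous transform from a rotation about Z and a translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

def transform_cloud(points, T):
    """Apply a 4x4 transform to an (N, 3) cloud -- the same idea as pcl::transformPointCloud."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Rough guess of ~30 degrees of yaw and a small offset (placeholder values).
cloud_in = np.random.rand(100, 3)
T_guess = make_initial_guess(np.deg2rad(30.0), [0.1, 0.0, 0.05])
cloud_icp = transform_cloud(cloud_in, T_guess)
```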
What you actually want is the plane model that describes the position and orientation of your point cloud, right?
A good estimator of your underlying plane can be found with PCL's RANSAC facilities, i.e. pcl::RandomSampleConsensus together with a plane model such as pcl::SampleConsensusModelPlane.
You can then read off the computed model coefficients.
Now finding the correct transformation is just: How to calculate transformation matrix from one plane to another?
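As a rough illustration of those two steps (not the poster's exact code), here is a NumPy sketch: it fits a plane to the cloud with a plain least-squares fit standing in for the RANSAC model (so it assumes outliers are already removed), then builds the rotation that maps the fitted normal onto a target plane normal:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).
    This stands in for the RANSAC model, so it assumes outliers are already removed."""
    centroid = points.mean(axis=0)
    # The plane normal is the direction of least variance (smallest singular value).
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def rotation_between(n_from, n_to):
    """Rotation matrix mapping unit vector n_from onto unit vector n_to (Rodrigues' formula)."""
    v = np.cross(n_from, n_to)
    c = float(np.dot(n_from, n_to))
    if np.isclose(c, -1.0):  # vectors are opposite: rotate 180 degrees about any orthogonal axis
        axis = np.eye(3)[np.argmin(np.abs(n_from))]
        v = np.cross(n_from, axis)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# Synthetic example (placeholder data): noisy points near a tilted plane.
pts = np.random.rand(500, 3)
pts[:, 2] = 0.3 * pts[:, 0] + 0.1 * pts[:, 1] + 0.01 * np.random.randn(500)
centroid, normal = fit_plane(pts)
if normal[2] < 0:                    # fix the sign ambiguity of the fitted normal
    normal = -normal
R = rotation_between(normal, np.array([0.0, 0.0, 1.0]))  # align the plane with the XY plane
aligned = (pts - centroid) @ R.T
```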
I am working on a project where I save the latitude and longitude of a vehicle at regular intervals. I also have a route saved as an array of GPS coordinates. I would like to know if there is some library that helps me determine whether a point is inside the route, as well as other basic calculations with the coordinates, such as distance calculations.
Any tool in any language helps!
Based on your comment, since you're not building a typical internet map, I might recommend you use a combination of Python and the Shapely library. You can see some nice examples on this post over at GIS.SE.
GIS Analyses: Geometry Types, Buffering, Intersection, etc.
In order to treat several individual Lat/Long positions as a "route", you'll need to format them as points in a LineString geometry type. Also beware: In most GIS software, points are arranged as X,Y. That means you'll be adding your points as Long,Lat. Inverting this is a common mistake that can be frustrating if you're not aware of it.
Next, in order to test whether any given point is within your route, you'll need to Buffer your route (LineString). I would use the accuracy of the GPS unit, plus a few extra meters, as my buffering radius. This will give you a proper geometry (Polygon) for a Point-in-Polygon test (i.e. Intersection), which will determine whether a given point lies within the bounds of the route.
The GIS.SE post I linked to provides examples for both buffering and intersection using Python and Shapely.
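As a minimal sketch of the buffering and intersection steps, assuming the coordinates have already been projected into a metric system such as UTM (see the notes on geodetic vs. Cartesian coordinates below), the Shapely code looks roughly like this (placeholder coordinates):

```python
from shapely.geometry import LineString, Point

# Route vertices in projected X/Y meters (placeholder values) -- note the X,Y order.
route = LineString([(500000.0, 3760000.0), (500120.0, 3760080.0), (500300.0, 3760150.0)])

# Buffer the route by the GPS accuracy plus a safety margin, e.g. ~10 m accuracy + 5 m extra.
route_area = route.buffer(15.0)

# Point-in-polygon (intersection) test for a vehicle position, also in projected meters.
vehicle = Point(500110.0, 3760078.0)
print(route_area.contains(vehicle))   # True if the point lies within the buffered route
print(vehicle.distance(route))        # distance from the point to the route, in meters
```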
Some notes about coordinates: Geodetic vs. Cartesian
I'm not confident that Shapely will perform reliable calculations on geodetic data, which is what we call the familiar coordinates you get from GPS. Before doing operations in Shapely, you may need to translate your long/lat points into projected X/Y coordinates for an appropriate coordinate system, such as UTM. (Hopefully someone will comment on whether this is necessary.)
Assuming this is necessary, you could add the PyProj library to give you a bridge between the GPS coordinates you have and the Cartesian coordinates you need. PyProj is the one-size-fits-all solution to this problem. However, if UTM coordinates will work, you might find the library cited here easier to implement.
If you decide to go with PyProj, it will help to know that your GPS data is described by the EPSG:4326 coordinate system. And if you are comfortable with UTM for your projected coordinates, you'll need to determine an appropriate UTM zone for your area and get its Proj4 coordinate definition from SpatialReference.org.
For example, I live in South Carolina, USA, which is UTM 17 North. So if I go to SpatialReference.org, search for "EPSG UTM zone 17N", select the option which references "WGS 1984" (I happen to know this means units in meters), and then click on the Proj4 link, the site provides the coordinate system definition I'm after in Proj4 notation:
+proj=utm +zone=17 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
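As a hedged example of that translation with a recent PyProj release (2.x or later), converting GPS long/lat (EPSG:4326) into WGS 84 / UTM zone 17N, the zone described by the Proj4 string above and also known as EPSG:32617:

```python
from pyproj import Transformer

# EPSG:4326 = GPS long/lat; EPSG:32617 = WGS 84 / UTM zone 17N (meters).
# always_xy=True keeps the Long,Lat (X,Y) axis order mentioned earlier.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32617", always_xy=True)

lon, lat = -81.03, 34.00            # placeholder point in South Carolina
x, y = to_utm.transform(lon, lat)   # projected easting/northing in meters
print(x, y)
```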
If you're not comfortable diving into the world of coordinate systems, EPSG codes, Proj4 strings and such, you might want to favor the alternate coordinate translation library I mentioned earlier rather than PyProj. On the other hand, if you will benefit from a more localized coordinate system (most countries have their own localized systems), or if you need to keep your code portable for use in many areas, I'd recommend using PyProj; just make sure to keep your Proj4 definition string in a config file, and NOT hard-coded throughout your app!
I need to generate a tetrahedral (volume) mesh of a thin-walled object. Think of objects like a bottle or a plastic bowl, etc., which are mostly hollow. The volumetric mesh is needed for an FEM simulation. A surface mesh of the outside surface of the object is available from measurement, using e.g. octomap or KinectFusion, so the vertex spacing is relatively regular. The inner surface of the object can be calculated from the outside surface by moving all points inward, since the wall thickness is known.
So far, I have considered the following approaches:
1. Create a 3D Delaunay triangulation (which would destroy the existing surface meshes) and then remove all tetrahedra which are not between the two original surfaces. For this check, it might be necessary to create an implicit surface representation of the two surfaces.
2. Create a 3D Delaunay triangulation and remove tetrahedra which are "inside" (in the hollow space) or "outside" (of the outer surface) using alpha shapes.
3. Close the outside and inside meshes and load them into tetgen as the outside hull and as a hole, respectively.
These approaches seem a bit inelegant to me, and they still have some pitfalls. I would probably need several libraries/tools for them. For 1 and 2, probably tetgen or another FEM meshing tool would still be required to create well-conditioned tetrahedra. Does anyone have a more straightforward solution? I guess this should also be a common problem in 3D printing.
Concerning tools/libraries, I have looked into PCL, meshlab and tetgen so far. They all seem to do only part of the job. Ideally, I would like to use only open source libraries and avoid tools which require manual intervention.
One way is to:
create a triangular mesh of the surface points,
extrude (move) that surface inward by the given wall thickness, which produces a volume (triangular prism) mesh of the wall,
split each prism into three tetrahedra (see the sketch below).
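A rough NumPy sketch of that extrude-and-split idea, assuming the outer surface is given as vertices, triangles and outward vertex normals (a real implementation must keep the prism-splitting pattern consistent across shared faces, which this sketch glosses over):

```python
import numpy as np

def extrude_to_tets(vertices, triangles, normals, thickness):
    """vertices: (N, 3), triangles: (M, 3) index array, normals: (N, 3) outward unit normals.
    Returns (2N, 3) vertices and a (3M, 4) array of tetrahedron vertex indices."""
    inner = vertices - thickness * normals       # offset copy of the surface, moved inward
    all_verts = np.vstack([vertices, inner])     # inner vertex i gets index i + N
    n = len(vertices)
    tets = []
    for a, b, c in triangles:
        a2, b2, c2 = a + n, b + n, c + n
        # Split the triangular prism (a, b, c, a2, b2, c2) into three tetrahedra.
        # Caveat: for a conforming mesh the diagonal chosen on each quad face must
        # agree between neighbouring prisms (e.g. by splitting based on sorted indices).
        tets += [(a, b, c, a2), (b, c, a2, b2), (c, a2, b2, c2)]
    return all_verts, np.array(tets)
```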
The problem I see is aspect ratio.
A single layer of tetrahedra will not reproduce shell or bending behavior very well. A single element through the thickness will already require a large mesh. Putting more than one will likely break the bank in order to keep aspect ratios and angles acceptable.
I'd prefer brick or thick shell elements to tetrahedra in this case. I think the modeling will be easier and the behavior will be more faithful to the physics.
I have a project where I have to recognize an entire room so I can calculate the distances between objects (like big ones, e.g. bed, table, etc.) and a person in that room. Is something like that possible using the Microsoft Kinect?
Thank you!
Kinect provides you with the following:
Depth Stream
Color Stream
Skeleton information
It's up to you how you use this data.
To answer your question: the official Microsoft Kinect SDK doesn't provide shape detection out of the box. But it does provide skeleton data and face tracking, with which you can detect the distance of a user from the Kinect.
Also, by mapping the color stream to the depth stream, you can detect how far a particular pixel is from the Kinect. In your implementation, if the different objects have unique characteristics like color, shape and size, you can probably detect them and also determine their distance.
OpenCV is one of the libraries that I use for computer vision, etc.
Again, it's up to you how you use this data.
The Kinect camera provides depth and consequently 3D information (a point cloud) about matte objects in the range 0.5-10 meters. With this information it is possible to segment out the floor of the room (by fitting a plane), and possibly the walls and ceiling as well. This step is important since these surfaces often connect separate objects, making them one big object.
The remaining parts of the point cloud can be segmented by depth if they don't touch each other physically. Using color, one can separate the objects even further. Note that we implicitly define an object as a 3D-dense and color-consistent entity, while other definitions are also possible.
As soon as you have your objects segmented, you can measure the distances between your segments, analyse their shape, recognize artifacts or humans, etc. To the best of my knowledge, however, the Skeleton library can recognize humans only after they have moved for a few seconds. Below is a simple depth map that was broken into a few segments using depth but not color information.
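A minimal sketch of that pipeline in Python, assuming the cloud is an (N, 3) array in meters, the floor plane has already been fitted (e.g. by RANSAC), and scikit-learn is available for the clustering step:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def remove_floor(points, plane_normal, plane_d, tol=0.03):
    """Drop points within `tol` meters of the fitted floor plane n.p + d = 0."""
    dist = np.abs(points @ plane_normal + plane_d)
    return points[dist > tol]

def segment_objects(points, eps=0.05, min_points=50):
    """Group the remaining points into dense clusters -- roughly one cluster per object."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    return [points[labels == k] for k in range(labels.max() + 1)]

def centroid_distance(seg_a, seg_b):
    """Euclidean distance between two segment centroids, in meters."""
    return np.linalg.norm(seg_a.mean(axis=0) - seg_b.mean(axis=0))

# Usage sketch: `cloud` is an (N, 3) array from the Kinect depth stream and
# (normal, d) describe the floor plane obtained from a plane fit.
# objects = segment_objects(remove_floor(cloud, normal, d))
# print(centroid_distance(objects[0], objects[1]))
```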
I hope to find some hints on where to start with a problem I am dealing with.
I am using a Kinect sensor to capture 3d point clouds. I created a 3d object detector which is already working.
Here is my task:
Let's say I have point cloud 1. I detected an object in cloud 1 and I know the centroid position of my object (x1, y1, z1). Now I move my sensor along a path and create new clouds (e.g. cloud 2). In cloud 2 I see the same object, but e.g. from the side, where the object detection does not work well.
I would like to transform the detected object from cloud 1 to cloud 2, to get the centroid in cloud 2 as well. To me it sounds like I need a matrix (translation, rotation) to transform points from cloud 1 to cloud 2.
Any ideas how I could solve my problem?
Maybe ICP? Are there better solutions?
THX!
In general, this task is called registration. It relies on having a good estimate of which points in cloud 1 correspond to which points in cloud 2 (more specifically: given a point in cloud 1, which point in cloud 2 represents the same location on the detected object). There's a good overview in the PCL library documentation.
If you have such a correspondence, you're in luck and you can directly compute a rotation and translation as demonstrated here.
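In case that link goes stale, the usual closed-form solution (the Kabsch/SVD method) looks roughly like this in NumPy, given two (N, 3) arrays whose rows are corresponding points:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that R @ src[i] + t matches dst[i] (Kabsch/SVD method)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known motion from corresponding points (synthetic placeholder data).
src = np.random.rand(20, 3)
dst = src + np.array([0.5, 0.0, 0.1])                 # pure translation for simplicity
R, t = rigid_transform(src, dst)
centroid_cloud2 = R @ np.array([1.0, 2.0, 0.5]) + t   # map the detected centroid into cloud 2
```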
If not, you'll need to estimate that correspondence. ICP does that for approximately aligned point clouds, but if your point clouds are not already fairly well aligned, you may want to start by estimating "key points" (such as book corners, distinct colors, etc.) in your point clouds, computing a rotation and translation as above, and then performing ICP. As D.J.Duff mentioned, ICP works better in practice on point clouds that are already approximately aligned because it estimates correspondences using one of two metrics: minimal point-to-point distance or minimal point-to-plane distance. According to Wikipedia, the latter works better in practice, but it does involve estimating normals, which can be tricky. If the correspondences are far off, the transforms likely will be as well.
I think what you were asking about pertains in particular to the Kinect sensor and the API Microsoft released for it.
If you are not planning to do reconstruction, you can look into the AlignPointClouds function in the Sensor Fusion namespace. This should take care of it automatically, using methods similar to those in the answer given by #pnhgiol.
On the other hand, if you are looking at doing reconstruction as well as point cloud transforms, the Reconstruction class is what you are looking for. All of this can be found out about here.