I need to get the relative lateral coordinate of each vehicle (the distance from one side of the lane to the vehicle, i.e., Px).
I know that SUMO provides absolute x,y coordinates and distance traveled (Py).
Is there a way to get Px information at each timestep directly like Py?
This information is part of the raw dump (or netstate dump, see https://sumo.dlr.de/wiki/Simulation/Output/RawDump), but only if the sublane model is active. It is given as the deviation from the center line of the lane (which is always 0 if the sublane model is not active).
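For example, a run that enables the sublane model and writes the raw dump could look like this (scenario.sumocfg is a placeholder; 0.8 m is just one common choice of lateral resolution):

sumo -c scenario.sumocfg --lateral-resolution 0.8 --netstate-dump netstate.xml

With the sublane model active, each vehicle entry in the dump then carries a lateral offset attribute relative to the lane's center line (posLat in recent SUMO versions, if I recall correctly); offset it by half the lane width if what you actually need is the distance from a lane border.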
I have 20,000 polygons in a dataset. I need the Euclidean distance between all polygons, i.e., a 20,000 x 20,000 distance matrix in which, for each polygon, the distance to every other polygon is stored.
I have read in some other threads the recommendation to use the "Near" tool in ArcMap. However, this tool only calculates the distance to the NEAREST polygon, while I need the distance from ALL polygons to ALL polygons.
Is there any solution for this?
Near tool: Calculates distance and additional proximity information between the input features and the closest feature in another layer or feature class.
In order to calculate the distance between the centroids of your polygons, first make sure your map is in a projected coordinate system.
Then, make sure the centroid points are calculated (detailed step-by-step here: https://support.esri.com/en/technical-article/000009381 )
Export your centroid point attribute table as a DBF (Click on Options > Export.)
Add the table to your map. Right click on the new table, Display XY Data, select Longitude for the X and Latitude for Y, and select the map's coordinate system to create an events layer.
Then, use the Point Distance tool (Details here: https://desktop.arcgis.com/en/arcmap/10.3/tools/analysis-toolbox/point-distance.htm ). The event points are both the input and near features. The output will be a table displaying distance between all polygon centroids on the map.
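If it helps to see what that last step produces, the output table is just the pairwise planar distance between the centroid coordinates. A sketch of that computation, shown in Erlang only because that is the language used elsewhere on this page (the module name and the {Id, X, Y} input shape are hypothetical):

-module(centroid_distance).
-export([matrix/1]).

%% Pairwise Euclidean distances between projected (planar) centroids given
%% as {Id, X, Y} tuples; returns a flat list of {IdA, IdB, Distance}, which
%% is essentially what the Point Distance output table contains.
matrix(Centroids) ->
    [{IdA, IdB, dist(XA, YA, XB, YB)}
     || {IdA, XA, YA} <- Centroids,
        {IdB, XB, YB} <- Centroids,
        IdA =/= IdB].

dist(XA, YA, XB, YB) ->
    math:sqrt((XA - XB) * (XA - XB) + (YA - YB) * (YA - YB)).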
My Erlang module loads a GPX file and performs a certain function for every set of lat/long coordinates in the file. I want to increase the resolution, so to speak, of the GPX file by filling in more GPS points between the existing points. For example, the GPX file may have points every 100 feet, or even points at random intervals, and I may want to have points consistently every 10 feet. I don't want to actually modify the GPX file; I only want my script to calculate these midpoints at runtime.
I do not care about interpolating speed or anything else; only regular GPS coordinates.
As you can see in my function below, I am using the egpx module to extract lat/long points from a GPX file. I also have discovered modules such as geolib that can perform geographical calculations, but my limited knowledge in this area precludes me from solving this issue on my own.
In Erlang, how can I interpolate more GPS coordinates between two coordinates based on a predetermined interval, such as 10 feet?
EDIT: I found this question on Stack Overflow which may be pertinent, but I still need help understanding and applying the solution: How to generate coordinates in between two known points
get_gpx_points([], _) -> ok;
get_gpx_points([H|T], Acc) ->
    Lat = egpx:get_lat(H),                    % egpx extracts lat/long from GPX trackpoints
    Long = egpx:get_lon(H),
    % I want to interpolate more lat/long points here based on a set interval, like 10 feet
    LatList = io_lib:format("~.6f", [Lat]),   % format as a string with six decimal places
    LongList = io_lib:format("~.6f", [Long]),
    do_something(LatList, LongList),          % run a certain function for every coordinate
    get_gpx_points(T, Acc + 1).               % recurse until no more trackpoints remain
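A minimal sketch of what such interpolation could look like, assuming simple linear interpolation between consecutive trackpoints is good enough at this scale (the module, the function, and the rough metres-per-degree constants below are illustrative, not part of egpx):

-module(gpx_interp).
-export([interpolate/5]).

%% Linearly interpolate between two lat/long points (degrees), inserting
%% enough intermediate points that consecutive points are at most StepMeters
%% apart. Distances are approximated on a flat plane, which is reasonable
%% for steps on the order of 10 feet (about 3.05 m).
interpolate(Lat1, Lon1, Lat2, Lon2, StepMeters) ->
    MetersPerDegLat = 111320.0,                                   % rough average
    MetersPerDegLon = 111320.0 * math:cos(Lat1 * math:pi() / 180),
    DyM = (Lat2 - Lat1) * MetersPerDegLat,
    DxM = (Lon2 - Lon1) * MetersPerDegLon,
    Dist = math:sqrt(DxM * DxM + DyM * DyM),
    Steps = max(1, round(math:ceil(Dist / StepMeters))),
    [{Lat1 + (Lat2 - Lat1) * I / Steps,
      Lon1 + (Lon2 - Lon1) * I / Steps} || I <- lists:seq(0, Steps)].

Each call returns both endpoints plus the evenly spaced points in between (10 feet is about 3.048 m, so StepMeters would be 3.048); in get_gpx_points/2 one would interpolate between the current trackpoint and the next one before calling do_something/2 on each result.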
I have a set of GPS coordinates and I want to find the speed required for a UAV to travel between them. I plan to do this by calculating the distance in x, y, z and then dividing by the travel time to get a speed in m/s.
I know about the great-circle distance, but I assume it will be inaccurate since the points are all relatively close together (within 10 m)?
Is there an accurate way to do this?
For small distances you can use the haversine formula without a relevant loss of accuracy compared to, for example, Vincenty's formulae. Plus, it's designed to be accurate for very small distances. You can read up on this here if you are interested.
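For reference, a minimal haversine sketch (in Erlang, to match the code used elsewhere on this page; the module/function names and the mean Earth radius of 6,371 km are the usual textbook choices, not anything UAV-specific):

-module(gps_speed).
-export([haversine/4, speed/2]).

%% Great-circle distance in metres between two lat/long points in degrees,
%% using the haversine formula on a sphere of mean radius 6,371 km.
haversine(Lat1, Lon1, Lat2, Lon2) ->
    R = 6371000.0,
    ToRad = fun(Deg) -> Deg * math:pi() / 180 end,
    Phi1 = ToRad(Lat1),
    Phi2 = ToRad(Lat2),
    DPhi = ToRad(Lat2 - Lat1),
    DLambda = ToRad(Lon2 - Lon1),
    A = math:sin(DPhi / 2) * math:sin(DPhi / 2)
        + math:cos(Phi1) * math:cos(Phi2)
          * math:sin(DLambda / 2) * math:sin(DLambda / 2),
    2 * R * math:asin(math:sqrt(A)).

%% Average speed in m/s between two timed fixes {Lat, Lon, TimeSeconds}.
speed({Lat1, Lon1, T1}, {Lat2, Lon2, T2}) when T2 > T1 ->
    haversine(Lat1, Lon1, Lat2, Lon2) / (T2 - T1).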
You can do this by converting lat/long/alt into XYZ format for both points. Then figure out the rotation angles needed to move one of those points (usually the oldest) so that it would sit at lat=0, long=0, alt=0, and rotate the second position report (the newest point) by the same angles. If you do it all correctly, you will find X equals the east offset, Y equals the north offset, and Z equals the up offset. You can then use the Pythagorean theorem with the X and Y (east and north) offsets to determine the horizontal distance traveled. Normally, you just ignore the altitude differences and work with horizontal data only.
All of this assumes you are using accurate formulas to convert lat/lon/alt into XYZ. It also assumes you have enough precision in the lat/lon/alt values to be accurate. Approximations are not good if you want good results. Normally, you need about 6 decimal digits of precision in lat/lon values to compute positions down to the meter level of accuracy.
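If you want to go the lat/long/alt-to-XYZ route described above, a hedged sketch of the usual geodetic-to-ECEF conversion followed by the rotation into local east/north/up offsets looks like this (WGS84 constants; the module and function names are mine):

-module(enu).
-export([geodetic_to_ecef/3, enu_offset/2]).

%% WGS84 geodetic coordinates (degrees, metres) -> ECEF XYZ in metres.
geodetic_to_ecef(LatDeg, LonDeg, Alt) ->
    A = 6378137.0,                        % WGS84 semi-major axis
    F = 1 / 298.257223563,                % WGS84 flattening
    E2 = F * (2 - F),                     % first eccentricity squared
    Lat = LatDeg * math:pi() / 180,
    Lon = LonDeg * math:pi() / 180,
    SinLat = math:sin(Lat),
    N = A / math:sqrt(1 - E2 * SinLat * SinLat),
    {(N + Alt) * math:cos(Lat) * math:cos(Lon),
     (N + Alt) * math:cos(Lat) * math:sin(Lon),
     (N * (1 - E2) + Alt) * SinLat}.

%% East/north/up offsets (metres) of Point2 relative to Point1, both given
%% as {LatDeg, LonDeg, AltMetres}. The horizontal distance is then
%% math:sqrt(East * East + North * North).
enu_offset({Lat1, Lon1, Alt1}, {Lat2, Lon2, Alt2}) ->
    {X1, Y1, Z1} = geodetic_to_ecef(Lat1, Lon1, Alt1),
    {X2, Y2, Z2} = geodetic_to_ecef(Lat2, Lon2, Alt2),
    {DX, DY, DZ} = {X2 - X1, Y2 - Y1, Z2 - Z1},
    Phi = Lat1 * math:pi() / 180,
    Lam = Lon1 * math:pi() / 180,
    East = -math:sin(Lam) * DX + math:cos(Lam) * DY,
    North = -math:sin(Phi) * math:cos(Lam) * DX
            - math:sin(Phi) * math:sin(Lam) * DY
            + math:cos(Phi) * DZ,
    Up = math:cos(Phi) * math:cos(Lam) * DX
         + math:cos(Phi) * math:sin(Lam) * DY
         + math:sin(Phi) * DZ,
    {East, North, Up}.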
Keep in mind that this method doesn't work very well unless you have moved a reasonable distance (more than about 10 or 20 meters; more is better). There is enough noise in the GPS position reports that you are going to get jumpy velocity values, which you will need to filter further to get good accuracy. The math approach isn't the problem here; it's the inherent noise in the GPS position reports. When you have good reports, you will get good velocity.
A GPS receiver doesn't normally use this approach to determine velocity. It looks at the way the Doppler values change for each satellite and factors in the current position to work out the velocity. This works reasonably well when the vehicle is moving, and it is a much faster way to detect changes in velocity (for instance, to release a position clamp). The normal user doesn't have access to the internal Doppler values, and the math gets very complicated, so it's not something you can do yourself.
I have some Kinect data of somebody standing (reasonably) still and performing sets of punches. It is given to me as an x, y, z coordinate for each joint, of which there are 20, so I have 60 data points per frame.
I'm trying to perform a classification task on the punches, but I'm having some problems normalising my data. As you can see from the graph, there are sections with much higher 'amplitude' than the others; my belief is that this is due to how close the person was to the Kinect sensor when the readings were taken. (The graph is actually the first principal coefficient obtained by PCA for each frame; multiple sequences of the same punch are strung together in this graph.)
Looking back at the data files, it looks like those that are 'out' have a z coordinate (depth from the sensor) of ~2.7, whereas the others tend to hover around 3.3-3.6.
How can I use the depth values to normalise the sequences so that they are closer to each other? I've already tried differentiation to get the velocity; although it helps with normalisation, the output ends up too similar and makes it very hard to classify.
Edit: I should mention I am already using a normalization method by subtracting the hip position from each joint in an attempt to make the co-ordinates relative.
The Kinect can output some strange values when the tracked person is standing near the edges of its field of view. I would either completely ignore these data points or replace each one with an average of the previous two and next two values.
For example:
1,2,1,12,1,2,3
Replace 12 with (2 + 1 + 1 + 2) / 4 = 1.5
You can basically do this over the whole array of values; that way you get a smoother, more normalised line/graph.
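A minimal sketch of that neighbour averaging over a whole list of values (the module name and the spike threshold are placeholders you would tune for your data):

-module(despike).
-export([despike/2]).

%% Replace spikes with the average of the two previous and the two next
%% values. A value counts as a spike when it differs from that average by
%% more than Threshold; the first and last two values are left unchanged.
despike(Values, Threshold) ->
    T = list_to_tuple(Values),
    N = tuple_size(T),
    [despike_one(T, N, I, Threshold) || I <- lists:seq(1, N)].

despike_one(T, N, I, Threshold) when I > 2, I =< N - 2 ->
    V = element(I, T),
    Avg = (element(I - 2, T) + element(I - 1, T)
           + element(I + 1, T) + element(I + 2, T)) / 4,
    case abs(V - Avg) > Threshold of
        true  -> Avg;
        false -> V
    end;
despike_one(T, _N, I, _Threshold) ->
    element(I, T).

With the example above, despike([1,2,1,12,1,2,3], 5) returns [1,2,1,1.5,1,2,3].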
You can also use the clippedEdges value to determine whether one or more joints are outside the view.
Suppose you have a list of 2D points with an orientation assigned to them. Let the set S be defined as:
S={ (x,y,a) | (x,y) is a 2D point, a is an orientation (an angle) }.
Given an element s of S, we will indicate with s_p the point part and with s_a the angle part. I would like to know whether there exists an efficient data structure such that, given a query point q, it is able to return all the elements s in S such that
(dist(q_p, s_p) < threshold_1) AND (angle_diff(q_a, s_a) < threshold_2) (1)
where dist(p1, p2), with p1, p2 2D points, is the Euclidean distance, and angle_diff(a1, a2), with a1, a2 angles, is the difference between the angles (taken to be the smallest one). The data structure should be efficient with respect to insertion/deletion of elements as well as the search defined above. The number of vectors can grow up to 10,000 and more, but take this with a grain of salt.
Now suppose we change the above requirement: instead of using condition (1), we ask for all the elements of S such that, given a distance function d, d(q, s) < threshold. If I remember correctly, this last setup is called a range search. I don't know whether the first case can be transformed into the second.
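For concreteness, here is condition (1) with the angular wrap-around written out, as a brute-force scan (shown only to pin down the predicate, not as the efficient structure being asked for; module and function names are illustrative, angles in radians):

-module(pose_query).
-export([angle_diff/2, search/4]).

%% Smallest difference between two angles, result in [0, pi].
angle_diff(A1, A2) ->
    D = math:fmod(abs(A1 - A2), 2 * math:pi()),
    min(D, 2 * math:pi() - D).

%% All elements {X, Y, A} of S satisfying condition (1) for query {QX, QY, QA}.
search(S, {QX, QY, QA}, Threshold1, Threshold2) ->
    [{X, Y, A} || {X, Y, A} <- S,
                  math:sqrt((X - QX) * (X - QX) + (Y - QY) * (Y - QY)) < Threshold1,
                  angle_diff(QA, A) < Threshold2].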
For the distance search I believe the accepted best method is a binary space partitioning (BSP) tree. This can be stored as a series of bits: each two bits (for a 2D tree) or three bits (for a 3D tree) subdivides the space one more level, increasing the resolution.
Using a BSP, locating a set of objects to compare distances with is pretty easy. Just find the smallest set of squares or cubes which contain the edges of your distance box.
For the angle, I don't know of anything. I suppose that you could store each object in a second list or tree sorted by its angle. Then you would find every object at the proper distance using the BSP, every object at the proper angles using the angle tree, then do a set intersection.
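A hedged sketch of that "query each index, then intersect" idea: a coarse uniform grid stands in for the BSP cells and fixed-width angle buckets (taken modulo 2*pi) stand in for the angle-sorted structure; both are plain maps from a key to a list of element ids, and everything returned is only a candidate set that still has to be checked against condition (1) exactly.

-module(two_index).
-export([pos_key/3, angle_key/2, candidates/5]).

%% Grid cell for a position; CellSize should be >= threshold_1 so that the
%% 3 x 3 neighbourhood of the query cell covers the whole search radius.
pos_key(X, Y, CellSize) ->
    {trunc(math:floor(X / CellSize)), trunc(math:floor(Y / CellSize))}.

%% Angle bucket index in 0..NBuckets-1; the bucket width 2*pi/NBuckets
%% should be >= threshold_2 so that +/- one bucket covers the angle window.
angle_key(A, NBuckets) ->
    TwoPi = 2 * math:pi(),
    Norm = math:fmod(math:fmod(A, TwoPi) + TwoPi, TwoPi),   % into [0, 2*pi)
    trunc(math:floor(Norm / (TwoPi / NBuckets))) rem NBuckets.

%% PosIndex :: #{Cell => [Id]}, AngleIndex :: #{Bucket => [Id]}.
candidates({QX, QY, QA}, CellSize, NBuckets, PosIndex, AngleIndex) ->
    {CX, CY} = pos_key(QX, QY, CellSize),
    PosIds = lists:append([maps:get({CX + DX, CY + DY}, PosIndex, [])
                           || DX <- [-1, 0, 1], DY <- [-1, 0, 1]]),
    QB = angle_key(QA, NBuckets),
    AngIds = lists:append([maps:get(((QB + D) rem NBuckets + NBuckets) rem NBuckets,
                                    AngleIndex, [])
                           || D <- [-1, 0, 1]]),
    ordsets:intersection(ordsets:from_list(PosIds), ordsets:from_list(AngIds)).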
You have effectively described a "three-dimensional cylindrical space", i.e. a space that is locally three-dimensional but where one dimension is topologically cyclic. In other words, it is locally flat and may be modeled as the boundary of a four-dimensional object C4 in (x, y, z, w) defined by
z^2 + w^2 = 1
where
a = atan2(w, z) (i.e. arctan(w/z) taken in the correct quadrant)
With this model, the space defined by your constraints is a 2-dimensional cylinder wrapped "lengthwise" around a cross-section wedge, where the wedge wraps around the 4-d cylindrical space with an angle of 2 * threshold_2. This can be modeled using a "modified k-d tree" approach (a modified 3-d tree), where the data structure is not a tree but actually a graph (it has cycles). You can still partition this space into cells separated by hyperplanes, but traveling along the curve defined by (z, w) in the positive direction may reach a point that was also reached in the negative direction. The tree should therefore be modified so that it leads to these nodes from both directions, i.e. the edges are bidirectional in the z-w curve direction (the other directions are obviously still unidirectional).
These cycles do not change the effectiveness of the data structure in locating nearby points or allowing your constraint search. In fact, for the most part, those algorithms are only slightly modified (the simplest approach being to hold a visited node data structure to prevent cycles in the search - you test the next neighbors about to be searched).
This will work especially well for your criteria, since the region you define is effectively bounded by the axis-aligned, hyperplane-bounded cells of a k-d tree, and so at search termination the region that remains is, on average, populated over roughly a pi/4 fraction (about 79%) of its area.
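A small sketch of the embedding this answer is built on: map each angle a to the point (cos a, sin a) on the unit circle (the z, w above), so angular closeness becomes ordinary Euclidean closeness in (x, y, z, w) and an off-the-shelf k-d tree or range-search structure can be reused. The radius R below is a tuning knob of my own (not something from the answer) that trades positional against angular tolerance:

-module(pose_embed).
-export([embed/2, embedded_dist/3]).

%% Embed {X, Y, A} as a 4-d point; angles that differ by d end up at chord
%% distance 2 * R * sin(d / 2), which is monotone in d for d in [0, pi], so
%% a Euclidean range query in 4-d can over-approximate condition (1) and the
%% exact thresholds are re-checked on the returned candidates.
embed({X, Y, A}, R) ->
    {X, Y, R * math:cos(A), R * math:sin(A)}.

%% Euclidean distance between two {X, Y, A} elements in the embedded space.
embedded_dist(P, Q, R) ->
    {X1, Y1, Z1, W1} = embed(P, R),
    {X2, Y2, Z2, W2} = embed(Q, R),
    math:sqrt((X1 - X2) * (X1 - X2) + (Y1 - Y2) * (Y1 - Y2)
              + (Z1 - Z2) * (Z1 - Z2) + (W1 - W2) * (W1 - W2)).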