Different results from 2 identical GPS modules using the same program at the same location - gps

I have two identical GPS modules running the same program with antennas side by side, and I get different results. I don't understand why.
One example is 4211.41545 from the first unit and 4211.41481 from the second; the timestamp is the same.

You don't say what the values you quote are, so I will assume they are the Lat or Long values from the NMEA output data. If that is the case, then the difference between the values is 0.00064 minutes of arc. The maximum physical distance that this represents is around 1.2 metres (along a great circle). At 60 degrees north this would correspond to around 60 cm in an E/W direction.
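As a quick check of that arithmetic, here is a small sketch (assuming the quoted values are NMEA-style ddmm.mmmmm latitude fields; the 1852 m per arc-minute figure is approximate):

import math

def nmea_to_degrees(ddmm):
    # NMEA latitude/longitude fields are ddmm.mmmmm: degrees * 100 + minutes
    degrees = int(ddmm // 100)
    return degrees + (ddmm - degrees * 100) / 60.0

a, b = 4211.41545, 4211.41481                     # the two quoted readings
print(f"decimal degrees: {nmea_to_degrees(a):.7f} vs {nmea_to_degrees(b):.7f}")
diff_min = abs(a - b)                             # same degrees part, so this is arc-minutes
print(f"difference: {diff_min:.5f} arc-minutes")
print(f"~{diff_min * 1852:.2f} m along a meridian (1 arc-minute of latitude ~ 1852 m)")
print(f"~{diff_min * 1852 * math.cos(math.radians(60)):.2f} m E/W at 60 degrees north")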
You do not say how far apart the antennae are or whether there are any obstructions above the level of their horizons, which will introduce varying multipath signals that differ between the two antennae. The two receivers will not necessarily be sampling the satellite signals at the same instants, so they can have marginally different signal timings, resulting in different position results.
A typical consumer-grade GPS will have a CEP figure of 2.5 metres, meaning that 50% of the position fixes taken with an unobstructed sky view will lie within a 2.5 metre radius circle of the true position.
Taking this into account you should not expect any two adjacent GPS devices to give identical results.

Related

How to calculate drainage density of a basin using ArcGIS?

In theory classes, we all learnt that drainage density (DD) is the ratio of the total length of the stream network in a basin (L) to the basin area (A). However, when we try to determine the DD of some Indian river basins using ArcGIS, we run into some confusion. For instance,
We have to define a particular break value in ArcGIS to get streams of different orders. The lower the break value, the higher the maximum stream order, and therefore the higher the value of L, and vice versa. For example, when we set the break value to 500 for a particular basin, we obtain stream orders up to 7, but if we increase the break value to 1500, the maximum stream order reduces to 5 and the L value reduces with it. Thus, the same basin may yield two different DD values under the two aforesaid break values.
We also tried the theoretically smallest possible break value, i.e., >0, and obtained an extremely dense stream network with a high DD value.
So, my question is: what should the threshold break value be for a particular basin in order to get the DD?
From the literature, we found that there are five classes of DD with the following value ranges (km/km2): very coarse (<2), coarse (2-4), moderate (4-6), fine (6-8), and very fine (>8). However, for 20 river basins across different parts of India, we obtained DD values ranging from 1.03 to 1.29, which puts all of those basins in the very coarse category. But from our visual inspection (one sample basin attached below), that seems too low to us.
We would like some justification/clarification/comment on this.
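For reference, DD is just L/A; below is a minimal synthetic sketch (not an ArcGIS workflow - the flow-accumulation grid, cell size and break values are all invented) of how the break value changes the extracted stream length L and therefore DD:

import numpy as np

rng = np.random.default_rng(0)
flow_acc = rng.integers(0, 5000, size=(200, 200))    # synthetic flow-accumulation grid
cell_size_km = 0.09                                   # assumed 90 m cells
basin_area_km2 = flow_acc.size * cell_size_km ** 2

for break_value in (500, 1500):
    stream_cells = np.count_nonzero(flow_acc >= break_value)
    total_length_km = stream_cells * cell_size_km     # crude estimate: one stream cell ~ one cell width
    dd = total_length_km / basin_area_km2
    print(f"break value {break_value}: L ~ {total_length_km:.0f} km, DD ~ {dd:.2f} km/km2")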

What does latDist mean in SUMO traci.vehicle.changeSublane(vehID, latDist)?

I want to know more about the latDist parameter in traci.vehicle.changeSublane(vehID, latDist) beyond what SUMO says at https://sumo.dlr.de/docs/TraCI/Change_Vehicle_State.html#lane_change_mode_0xb6. Does it have a defined interval? Are the values it takes distances? Is there a threshold value? What does it mean to pass, for instance, "3.00" as latDist?
Best,
Ali
In SUMO's sublane model, every vehicle has a continuous lateral position, meaning it can be positioned freely within the boundaries of the edge, occupying one or more sublanes. This means a "lane change" is nothing more than a lateral movement. To make it independent of the actual sublane width (which has little relevance in reality), the offset to change by is given in metres rather than in lane (or sublane) numbers. So an offset of 3.0 means move 3 metres to the left (in a right-hand driven network).
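A minimal TraCI sketch (the config file name and vehicle id are placeholders, and the simulation is assumed to be started with a lateral resolution so the sublane model is active):

import traci

traci.start(["sumo", "-c", "scenario.sumocfg", "--lateral-resolution", "0.8"])
for step in range(100):
    traci.simulationStep()
    if step == 10 and "veh0" in traci.vehicle.getIDList():
        # latDist is a lateral offset in metres: positive moves the vehicle to the
        # left (in a right-hand network), negative moves it to the right.
        traci.vehicle.changeSublane("veh0", 3.0)
traci.close()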

SUPL MS-Assisted - what measurements are sent

In the MS-assisted case, it is the GPS receiver that sends the measurements for the SLP to calculate the position and respond. I understand the measurements sent include the ephemeris, iono, DGPS etc. plus the Doppler shift. Please let me know if my understanding is right.
Does the SET send the code (the entire data stream transmitted by the satellites) as is, or does it split it into the above components and send those?
All the assistance information in SUPL is encapsulated using the RRLP protocol (Radio Resource Location services (LCS) Protocol, for GSM), RRC (Radio Resource Control, for UMTS), TIA 801 (for CDMA 2000) or LPP (LTE Positioning Protocol, for LTE). I'm just looking at the RRLP standard, ETSI TS 101 527. The following part sounds interesting:
A.3.2.5 GPS Measurement Information Element
The purpose of the GPS Measurement Information element is to provide
GPS measurement information from the MS to the SMLC. This information
includes the measurements of code phase and Doppler, which enables the
network-based GPS method where position is computed in the SMLC. The
proposed contents are shown in table A.5 below, and the individual
fields are described subsequently.
In a subsequent section it is defined as:
reference frame - optional, 16 bits - the frame number of the last measured burst from the reference BTS modulo 42432
GPS TOW (time of week) - mandatory, 24 bits, unit of 1ms
number of satellites - mandatory, 4 bits
Then for each satellite the following set of data is transmitted:
satellite ID - 6 bits
C/No - 6 bits
Doppler shift - 16 bits, 0.2Hz unit
Whole Chips - 10 bits
Fractional Chips - 10 bits
Multipath Indicator - 2 bits
Pseudorange Multipath Error - 3+3 bits (mantissa/exponent)
I'm not familiar enough with GPS operation to understand all the parameters, but as far as I understand:
C/No is simply the carrier-to-noise ratio
Doppler shift - gives the frequency shift for a given satellite, obviously
Whole/Fractional Chips together give the phase (and thus satellite distance)
My understanding is that things like the almanac, ephemeris, iono and DGPS corrections are all known on the network side. They are not measured by the handset; as far as I know, those things are transferred from the network to the MS in MS-based mode.
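To make the bit layout quoted above more concrete, here is a rough sketch (hand-written from the field list, not generated from the RRLP ASN.1; the example values are invented) that packs one per-satellite measurement into the listed widths:

def pack_measurement(sat_id, c_no, doppler, whole_chips, frac_chips,
                     multipath, rms_mantissa, rms_exponent):
    # Field widths as quoted: 6 + 6 + 16 + 10 + 10 + 2 + 3 + 3 = 56 bits (7 bytes)
    fields = [(sat_id, 6), (c_no, 6), (doppler, 16), (whole_chips, 10),
              (frac_chips, 10), (multipath, 2), (rms_mantissa, 3), (rms_exponent, 3)]
    value = 0
    for field, width in fields:
        value = (value << width) | (field & ((1 << width) - 1))
    return value.to_bytes(7, "big")

# e.g. satellite 12, C/No 45 dB-Hz, Doppler 250 (i.e. 50 Hz in 0.2 Hz units), ...
print(pack_measurement(12, 45, 250, 511, 340, 0, 4, 2).hex())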
Hope that helps.
Measurements collected from MS-assisted location requests include:
Satellite ID
code phase - whole chips
code phase - fractional chips
Doppler
Signal strength
Multipath indicator
pseudorange RMS indicator
In addition, the GPS time of the measurements is also provided as one value (in milliseconds) for the time at which all measurements are valid.
In practice, the required fields that need to be accurate and correct are:
Satellite ID
code phase - whole chips
code phase - fractional chips
Doppler
The code phase values for each satellite are almost always used for the most accurate location calculation. Doppler values can be used to estimate a rough location but aren't usually accurate enough to really contribute to the final solution.
The other values for signal strength, multipath indication, and RMS indicator usually vary in meaning so much between vendors that they don't really provide much benefit for the position calculation. They would normally be used for things like weighting other values so that good satellites count more in the final position.
The network already knows (or should know) the ephemeris and ionospheric model. They are not measurements collected by the handset.

How to group nearby latitude and longitude locations stored in SQL

I'm trying to analyse data on cycle accidents in the UK to find statistical black spots. Here is an example of the data from another website: http://www.cycleinjury.co.uk/map
I am currently using SQLite to store ~100k lat/lon locations. I want to group nearby locations together. This task is called cluster analysis.
I would like to simplify the dataset by ignoring isolated incidents and instead only showing the origin of clusters where more than one accident has taken place in a small area.
There are 3 problems I need to overcome.
Performance - How do I ensure finding nearby points is quick? Should I use SQLite's implementation of an R-tree, for example?
Chains - How do I avoid picking up chains of nearby points?
Density - How do I take cyclist population density into account? There is a far greater population density of cyclists in London than in, say, Bristol, therefore there appears to be a greater number of black spots in London.
I would like to avoid 'chain' scenarios like this:
Instead I would like to find clusters:
London screenshot (I hand drew some clusters)...
Bristol screenshot - much lower density - the same program run over this area might not find any black spots if relative density was not taken into account.
Any pointers would be great!
Well, your problem description reads exactly like the DBSCAN clustering algorithm (Wikipedia). It avoids chain effects in the sense that it requires clusters to contain at least minPts objects.
As for the differences in density across regions, that is what OPTICS (Wikipedia) is supposed to solve. You may need to use a different way of extracting clusters though.
Well, ok, maybe not 100% - you may want single hotspots, not areas that are "density connected". Thinking of an OPTICS plot, I figure you are only interested in small but deep valleys, not in large valleys. You could probably use the OPTICS plot and scan for local minima of "at least 10 accidents".
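For example, with scikit-learn (rather than ELKI) DBSCAN can be run directly on latitude/longitude using a haversine metric; the 100 m radius, the min_samples value and the toy coordinates below are assumptions you would adapt to the accident data:

import numpy as np
from sklearn.cluster import DBSCAN

coords_deg = np.array([
    [51.5074, -0.1278],   # three points a few metres apart in London
    [51.5075, -0.1279],
    [51.5076, -0.1277],
    [53.4808, -2.2426],   # one isolated point in Manchester
])                        # replace with the ~100k accident locations

earth_radius_m = 6371000.0
eps_m = 100.0             # cluster radius of roughly 100 metres

db = DBSCAN(eps=eps_m / earth_radius_m, min_samples=2,
            metric="haversine", algorithm="ball_tree")
labels = db.fit_predict(np.radians(coords_deg))
print(labels)             # -1 marks noise, i.e. isolated incidents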
Update: Thanks for the pointer to the data set. It's really interesting. So I did not filter it down to cyclists, but right now I'm using all 1.2 million records with coordinates. I've fed them into ELKI for analysis, because it's really fast, and it actually can use the geodetic distance (i.e. on latitude and longitude) instead of Euclidean distance, to avoid bias. I've enabled the R*-tree index with STR bulk loading, because that is supposed to help get the runtime down a lot. I'm running OPTICS with Xi=.1, epsilon=1 (km) and minPts=100 (looking for large clusters only). Runtime was around 11 minutes, not too bad. The OPTICS plot of course would be 1.2 million pixels wide, so it's not really good for full visualization anymore. Given the huge threshold, it identified 18 clusters with 100-200 instances each. I'll try to visualize these clusters next. But definitely try a lower minPts for your experiments.
So here are the major clusters found:
51.690713 -0.045545 a crossing on A10 north of London just past M25
51.477804 -0.404462 "Waggoners Roundabout"
51.690713 -0.045545 "Halton Cross Roundabout" or the crossing south of it
51.436707 -0.499702 Fork of A30 and A308 Staines By-Pass
53.556186 -2.489059 M61 exit to A58, North-West of Manchester
55.170139 -1.532917 A189, North Seaton Roundabout
55.067229 -1.577334 A189 and A19, just south of this, a four lane roundabout.
51.570594 -0.096159 Manor House, Piccadilly Line
53.477601 -1.152863 M18 and A1(M)
53.091369 -0.789684 A1, A17 and A46, a complex construct with roundabouts on both sides of A1.
52.949281 -0.97896 A52 and A46
50.659544 -1.15251 Isle of Wight, Sandown.
...
Note, these are just random points taken from the clusters. It may be sensible to compute e.g. cluster center and radius instead, but I didn't do that. I just wanted to get a glimpse of that data set, and it looks interesting.
Here are some screenshots, with minPts=50, epsilon=0.1, xi=0.02:
Notice that with OPTICS, clusters can be hierarchical. Here is a detail:
First, your example is quite misleading. You have two different sets of data, and you don't control the data. If it appears in a chain, then you will get a chain out.
This problem is not exactly suitable for a database. You'll have to write code or find a package that implements this algorithm on your platform.
There are many different clustering algorithms. One, k-means, is an iterative algorithm where you look for a fixed number of clusters. k-means requires a few complete scans of the data, and voila, you have your clusters. Indexes are not particularly helpful.
Another, which is usually appropriate on slightly smaller data sets, is hierarchical clustering -- you put the two closest things together, and then build the clusters. An index might be helpful here.
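For instance, with scikit-learn (an assumption - any package with k-means will do) the whole fit is a few lines; the fake uniform points and the cluster count are placeholders:

import numpy as np
from sklearn.cluster import KMeans

points = np.random.default_rng(1).uniform([50.0, -6.0], [56.0, 2.0], (1000, 2))  # fake lat/lon
km = KMeans(n_clusters=20, n_init=10).fit(points)   # the number of clusters is fixed up front
print(km.cluster_centers_[:3])                      # candidate hotspot centres
print(np.bincount(km.labels_)[:3])                  # points assigned to each cluster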
I recommend though that you peruse a site such as kdnuggets in order to see what software -- free and otherwise -- is available.

Is there a formula to change a latitude and longitude into a single number?

Can you tell me if there is a formula to change a latitude and longitude into a single number?
I plan to use this for a database table in software that provides routing for deliveries. The table row would have that number as well as the postal address. The database table would be sorted in ascending numeric order so the software can figure out which address the truck would need to go to first, second etc.
Please can you respond showing VB or VB.Net syntax so I can understand how it works?
For example I would use the following numbers for the latitude and longitude:
Lat = 40.71412890
Long = -73.96140740
Additional Information:
I'm developing an Android app using Basic4Android. Basic4Android uses a VB or VB.Net syntax with SQLite as the database.
Part of this app will have route planning. I want to use this number as the first column in an SQLite table, and the other columns will be for the address. If I do a query within the app that sorts the rows in ascending numerical order, I will be able to figure out which postal addresses are closest to each other, so it will take less time for me to go from house to house.
For example, if the numbers were:
194580, 199300, 178221
I can go to postal address 178221 then to 194580 and finally to 199300 and I won't need to take the long way around town to do my deliveries after they were sorted.
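A minimal sketch of that table and sort (in Python/SQLite rather than Basic4Android; the column names and example addresses are mine):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deliveries (locnum INTEGER, address TEXT)")
conn.executemany("INSERT INTO deliveries VALUES (?, ?)",
                 [(194580, "12 High St"), (199300, "3 Mill Lane"), (178221, "78 Park Rd")])
for locnum, address in conn.execute(
        "SELECT locnum, address FROM deliveries ORDER BY locnum ASC"):
    print(locnum, address)   # visit order: 178221, 194580, 199300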
As an alternative, I would be happy if there was an easy way to call a web service that returns, say, a JSON response containing the single number when I send a postal address to the web site. Basic4Android does have HTTP services that can send requests to a web site.
A latitude and a longitude can each be represented as a 4-byte integer, such that the coordinates have an accuracy of about 3 cm, which is sufficient for most applications.
Steps to create one 8-byte value of type long from latitude and longitude:
1) Convert lat and lon to int: int iLat = (int) (lat * 1E7);
2) Use an 8-byte long value to store both 4-byte ints:
set the upper 4 bytes to the latitude and the lower 4 to the longitude.
Now you have an 8-byte long representing a point on the world with up to 3 cm accuracy.
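A sketch of those steps (in Python rather than Java/VB; the helper names are mine):

def pack_lat_lon(lat, lon):
    ilat = int(round(lat * 1e7)) & 0xFFFFFFFF     # scale to 1e-7 degrees, keep 32 bits
    ilon = int(round(lon * 1e7)) & 0xFFFFFFFF
    return (ilat << 32) | ilon                    # latitude in the upper 4 bytes

def unpack_lat_lon(packed):
    def signed(v):                                # restore the sign of each 32-bit half
        return v - (1 << 32) if v >= (1 << 31) else v
    return signed(packed >> 32) / 1e7, signed(packed & 0xFFFFFFFF) / 1e7

packed = pack_lat_lon(40.71412890, -73.96140740)
print(packed, unpack_lat_lon(packed))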
There are other, better solutions, such as ones that keep similar numbers for nearby locations, but these are more complex.
You can add them up, but it makes little sense.
For instance a total of "10" - 8 lat and 2 long would then be the same as "10" - 3 lat and 7 long.
You can concatenate them, maybe with a dash.
But why do either? They are both really bad choices. A delivery system would want real x-y coordinates, and if planning a route would want them kept separate in order to calculate things like Euclidean distances.
Is this a homework question? I doubt a delivery service is designing their service structure on SO. At least I hope not.
Based on AlexWien's answer this is a solution in JavaScript:
pairCoordinates = function (lat, lng) {
    // BigInt avoids the 32-bit truncation of JavaScript's bitwise operators.
    var iLat = BigInt(Math.round(lat * 1e7)) & 0xffffffffn;
    var iLng = BigInt(Math.round(lng * 1e7)) & 0xffffffffn;
    return (iLat << 32n) | iLng;
}
How about this:
(lat+90)*180+lng
From Tom Clarkson's comment in Geospatial Indexing with Redis & Sinatra for a Facebook App
If you want to treat location as "one thing", the best way to handle this is to create a data structure that contains both values. A Class for OO languages, or a struct otherwise. Combining them into a single scalar value has little value, even for display.
Location is a really rich problem space, and there are dozens of ways to represent it. Lat/Lon is the tip of the iceberg.
As always, the right answer depends on what you're using it for, which you haven't mentioned.
I have created a method of putting the latitude and longitude into one base-36 number which for now I'm calling a geohexa.
The method works by dividing the world into a 36 x 36 grid. The first character is a longitude and the second character is a latitude. The latitude and longitude those two characters represent is the midpoint of that 'rectangle'. You just keep adding characters, alternating between longitude and latitude. Eventually the geohexa, when converted back to a lat and lon, will be close enough to your original lat and lon.
Nine characters will typically get you within 5 meters of a randomly generated lat and lon.
The geohexa for London Bridge is hszaounu and for Tower Bridge is hszaqu88.
It is possible to sort the geohexa, and locations that are near each other will tend to be next to each other in a sorted list to some extent. However it by no means solves the travelling salesman problem!
The project, including a full explanation, implementations in Python, Java and JavaScript can be found here: https://github.com/Qarj/geohexa
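A rough sketch of the idea (my own reconstruction, not the code from the linked repo, so it may not reproduce the exact strings above):

DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def geohexa(lat, lon, length=9):
    lon_lo, lon_hi = -180.0, 180.0
    lat_lo, lat_hi = -90.0, 90.0
    out = []
    for i in range(length):
        if i % 2 == 0:                                  # even positions refine longitude
            step = (lon_hi - lon_lo) / 36
            idx = min(int((lon - lon_lo) / step), 35)
            lon_lo += idx * step
            lon_hi = lon_lo + step
        else:                                           # odd positions refine latitude
            step = (lat_hi - lat_lo) / 36
            idx = min(int((lat - lat_lo) / step), 35)
            lat_lo += idx * step
            lat_hi = lat_lo + step
        out.append(DIGITS[idx])
    return "".join(out)

print(geohexa(51.5079, -0.0877))   # a point near London Bridge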
You can use the Hilbert space filling curve to convert latitude,longitude into a single number: e.g., https://geocode.xyz/40.71413,-73.96141?geoit=xml 2222211311031 and https://geocode.xyz/40.71413,-73.96151?geoit=xml 2222211311026
The source code is here: https://github.com/eruci/geocode
In a nutshell:
Let X,Y be latitude,longitude
Truncate both to the 5th decimal place and convert to integers by multiplying by 100000
Let XY = X+Y and YX = X-Y
Convert XY,YX to binary, and merge them into XYX by alternating the bits
Convert XYX to decimal
Add an extra digit (1, 2, 3 or 4) to indicate when one or both of XY, YX are negative numbers.
Now you have a single number that can be converted back to latitude,longitude and which preserves all their positional properties.
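A hedged sketch of that recipe (my own reading of the steps above, not the code from the linked repo; the mapping of the 1-4 sign digit is my choice):

def interleave(a, b):
    out = 0
    for i in range(32):                               # merge two 32-bit values bit by bit
        out |= ((a >> i) & 1) << (2 * i + 1)
        out |= ((b >> i) & 1) << (2 * i)
    return out

def encode(lat, lon):
    x = int(round(lat * 100000))                      # 5 decimal places as integers
    y = int(round(lon * 100000))
    xy, yx = x + y, x - y
    sign_flag = 1 + (2 if xy < 0 else 0) + (1 if yx < 0 else 0)
    return interleave(abs(xy), abs(yx)) * 10 + sign_flag

print(encode(40.71413, -73.96141))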
I found I can get good results by geocoding the address without the house number, adding the resulting latitude and longitude together, and sorting the database table by that sum, followed by a second sort on the house number in ascending order.
I used this URL to get the numbers I needed to add together:
http://where.yahooapis.com/geocode?q=stedman+st,+lowell,+ma