gpsd TPV JSON data has 95% confidence errors (e.g. ept, epx, epy ...). How do you use the numbers? - gps

http://www.catb.org/gpsd/gpsd_json.html
Let's say I get
"alt":1343.127
"epv":32.321
in TPV data.
epv is "Estimated vertical error in meters, 95% confidence". Does this mean that, with 95% probability, the actual altitude is within 32.321 meters of the reported 1343.127 m?
The same question goes for the other error values, such as ept, epx, epy, epd ...
ept for time
epx for longitude
epy for latitude
epv for altitude
epd for track (heading)??
eps for speed
epc for climb

The error estimates define a "95% confidence interval": there is a 0.95 probability that the actual altitude in your example lies between roughly 1310.8 and 1375.4 meters (1343.127 ± 32.321).
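Turning a TPV report's error fields into intervals is just value ± error for each field. A minimal sketch, using the field names from the gpsd JSON protocol and the sample values from the question (the lat/lon/epx/epy numbers here are made-up placeholders):

```python
# Sketch: interpret gpsd TPV "95% confidence" error fields as intervals.
tpv = {"lat": 35.6895, "epy": 5.1,      # placeholder values
       "lon": 139.6917, "epx": 4.2,     # placeholder values
       "alt": 1343.127, "epv": 32.321}  # values from the question

def interval(value, err):
    """95%-confidence interval: the true value lies in
    [value - err, value + err] with probability ~0.95."""
    return (value - err, value + err)

lo, hi = interval(tpv["alt"], tpv["epv"])
print(f"altitude is between {lo:.1f} m and {hi:.1f} m (95% confidence)")
```

The same pattern applies to ept (seconds), epx/epy (meters east-west/north-south), eps (m/s) and so on; each is a half-width of a 95% interval around the reported value.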

GPS distance: Pythagoras on an equirectangular approximation vs Haversine formula: errors at different scales?

I'm trying to decide whether the extra CPU time of the more complex Haversine formula is worth it over the faster Pythagorean formula. While there seems to be a fairly unanimous answer along the lines of "you can use Pythagoras' formula for acceptable results over small distances, but Haversine is better", I cannot find even a vague definition of what "small distances" means.
This page, linked in the top answer to the very popular question Calculate distance between two latitude-longitude points? claims:
If performance is an issue and accuracy less important, for small distances Pythagoras' theorem can be used on an equirectangular projection:*
Accuracy is somewhat complex: along meridians there are no errors, otherwise they depend on distance, bearing, and latitude, but are small enough for many purposes*
the asterisk even says "Anyone care to quantify them?"
But this answer claims that the error is about 0.1% at 1000 km (though it doesn't cite any reference, just personal observation), and that for 4 km (even assuming the percentage doesn't shrink at much smaller distances) this would mean under 4 m of error, which for public-access GPS is around the best open-space GPS accuracy.
Now, I don't know what the average Joe thinks of as "small distances", but for me 4 km is definitely not a small distance (I'm thinking more of tens of meters), so I would be grateful if someone could link to or calculate a table of errors just like the one in this answer to Measuring accuracy of latitude and longitude? I assume the errors would be higher near the poles, so maybe choose three representative latitudes (5°, 45° and 85°?) and calculate the error with respect to the decimal-degree place.
Of course, I would also be happy with an answer that gives an exact meaning to "small distances".
Yes ... from 10 meters up to 1 km you're going to be very accurate using plain old Pythagoras' theorem. It's really ridiculous that nobody talks about this, especially considering how much computational power you save.
Proof:
Take the top of the earth, since it is the worst case: near the pole the view is a circle with the longitude lines intersecting in the middle.
Note that as you zoom in to an area as small as 1 km, even just 50 miles from the pole, what originally looked like a trapezoid with curved top and bottom borders looks essentially like a nearly perfect rectangle. In other words, we can assume rectilinearity at 1 km, and especially at a mere 10 m.
Now, it's true of course that longitude degrees are much shorter near the poles than at the equator. For example, any slack-jawed yokel can see that the rectangles made by the latitude and longitude lines grow taller, their aspect ratio increasing, as you get closer to the poles. In fact, the longitude distance is simply what it would be at the equator multiplied by the cosine of the latitude anywhere along the path. I.e., where "L" (longitude distance) and "l" (latitude distance) span the same number of degrees, it is:
LATcm = Latitude at *any* point along the path (because it's tiny compared to the earth)
L = l * cos(LATcm)
Thus, for 1 km or less (even near the poles) we can calculate the distance very accurately using Pythagoras' theorem like so:
Where: latitude1, longitude1 = coordinates of the start point
and: latitude2, longitude2 = coordinates of the end point
distance = sqrt((latitude2-latitude1)^2 + ((longitude2-longitude1)*cos(latitude1))^2) * 111,139
Where 111,139 (above) is approximately the number of meters in one degree of latitude (equivalently, one degree of longitude at the equator),
because we have to convert the result from degrees to meters.
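To put a number on the error, the equirectangular shortcut and the haversine formula can be compared directly. A minimal sketch (the constant of roughly 111,139 m per degree of latitude and the test coordinates are illustrative):

```python
import math

M_PER_DEG = 111_139  # approx. meters per degree of latitude

def equirect(lat1, lon1, lat2, lon2):
    """Pythagoras on an equirectangular projection; coordinates in degrees."""
    x = (lon2 - lon1) * math.cos(math.radians(lat1))
    y = (lat2 - lat1)
    return math.hypot(x, y) * M_PER_DEG

def haversine(lat1, lon1, lat2, lon2, r=6_371_000):
    """Great-circle distance on a sphere of radius r meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two points roughly 1 km apart at latitude 45 degrees
d1 = equirect(45.0, 6.0, 45.008, 6.003)
d2 = haversine(45.0, 6.0, 45.008, 6.003)
print(d1, d2, abs(d1 - d2) / d2)
```

At this scale the two results agree to well under 0.1%, and most of that residual comes from the choice of meters-per-degree constant rather than from the flat-earth approximation itself.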
A neat thing about this is that GPS systems usually take measurements every 10 m or less, which means you can stay very accurate over very large distances by summing the results of this equation, about as accurate as the Haversine formula. The super-tiny errors don't magnify as you sum the total, because they are a percentage that remains the same as the segments are added up.
The reality, however, is that the Haversine formula (which is very accurate) isn't difficult, but relatively speaking it will consume at least 3x more processor time, and up to 31x more computation according to this post: https://blog.mapbox.com/fast-geodesic-approximations-with-cheap-ruler-106f229ad016.
This formula came in useful for me when I was using a system (Google Sheets) that couldn't give me the significant digits necessary for the Haversine formula.

Rounding latitude and longitude to get within approximately 500 meters

I have over 350,000 pairs of latitude and longitude decimal values. These pairs represent locations in the US. The data is fairly precise, but I don't need this much precision.
I'm looking for recommendations on how to round these values so that I can reduce my locations from 350,000 to something smaller.
Rounding both values to the nearest hundredth produces 138k pairs.
Rounding both values to the nearest thousandth produces 320k pairs.
If I round to the 2nd decimal, I fear I'm going too coarse. If I round to the 3rd decimal, I feel like I still have too many pairs. Ideally, I would like to be within perhaps 400 to 600 meters of the actual lat/lon. I suppose rounding to the 3rd decimal is approaching that desired range, but slightly on the strict side.
Has anyone tried this, and maybe taken the approach of rounding latitude to the 2nd decimal and longitude to the 3rd decimal? Would you recommend one over the other? Again, most data in the continental US, with some in Alaska and Hawaii.
Keeping precision in your latitude values is more helpful than precision in longitude. At the equator the precision (in degrees) works out the same, but as you move north or south from there, the distance covered by a degree of longitude becomes less and less.
If you really need to stay within 500 meters, and you assume the rounding of each coordinate contributes equally to the distance error, you can round each value to about 354 meters (500 / sqrt(2) ≈ 354, since the two rounding errors combine at right angles).
354 meters is about 0.0033 degrees of latitude anywhere in the world. (There's less than a percent of variance in this depending on location.)
But longitude changes size much more dramatically:
In northern Alaska, at a latitude of 70 degrees N, 354 meters comes close to 0.01 degrees longitude, but in Hawaii, the same distance east-west equals 0.0033 degrees.
Instead of just rounding, could you group latitude to the nearest even thousandth? For longitude, group to the nearest hundredth if latitude is greater than 50; otherwise group to the nearest even thousandth.
It might be a pain to explain to people, but not too painful to code, I think.
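The scheme above is a couple of lines of code. A minimal sketch (the function name and test coordinates are illustrative):

```python
def snap(lat, lon):
    """Snap a point to the grid suggested above: latitude to the nearest
    even thousandth (steps of 0.002); longitude to the nearest hundredth
    above 50 degrees N, otherwise to the nearest even thousandth."""
    lat_s = round(lat * 500) / 500       # even thousandths = multiples of 0.002
    if lat > 50:
        lon_s = round(lon, 2)            # coarse east-west grid in the far north
    else:
        lon_s = round(lon * 500) / 500
    return lat_s, lon_s

print(snap(70.1234, -148.5678))  # northern Alaska: coarse longitude
print(snap(21.3069, -157.8583))  # Honolulu: fine longitude
```

Deduplicating the snapped pairs (e.g. with a set) then gives the reduced location list.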

How to count peaks above a specific value on a chart in LabVIEW: counting the number of hills (heart rate monitor)

I want to create some simple heart rate monitor in LabVIEW.
I have a sensor which gives me the heart waveform (upper graph): Waveform
The second (lower) graph shows the hills (0 = valley, 1 = hill); those hills are heartbeats (it is a voltage waveform). From this I want to get the number of hills, then multiply that number by 6 to get the heart rate per minute.
The measurement card I use is an NI USB-6009.
Any idea how to do that?
I can send a VI file if anyone is able to help.
You could use Threshold Peak Detector VI
This VI does not identify the locations or the amplitudes of peaks
with great accuracy, but the VI does give an idea of where and how
often a signal crosses above a certain threshold value.
You could also use Waveform Peak Detection VI
The Waveform Peak Detection VI operates like the array-based Peak
Detector VI. The difference is that this VI's input is a waveform data
type, and the VI has error cluster input and output terminals.
Locations displays the output array of the peaks or valleys, which is
still in terms of the indices of the input waveform. For example, if
one element of Locations is 100, that means that there is a peak or
valley located at index 100 in the data array of the input waveform.
Figure 6 shows you a method for determining the times at which peaks
or valleys occur.
NI has a great tutorial that should answer all your questions; it can be found here:
I had some fun recreating part of your exercise here. I simulated a square wave. In my sample of the square wave I know how many samples I have and the sampling frequency, so I can calculate how much time my data sample represents. I then count the number of positive edges in the sample, do some division to calculate beats/second, and multiply for beats/minute. The sampling frequency Fs and the number of samples N (#s) are required to calculate your beats-per-minute metric. Their uses are shown below.
The contrived VI
Does that lead you to a solution for your application?
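The edge-counting idea in that answer is language-independent. As a sketch of the same logic outside LabVIEW (this is an illustration of the approach, not a LabVIEW VI; the multiply-by-6 assumes a 10-second sample window as in the question):

```python
def count_rising_edges(samples, threshold):
    """Count the times the signal crosses from below to at-or-above threshold."""
    count = 0
    above = samples[0] >= threshold
    for v in samples[1:]:
        now = v >= threshold
        if now and not above:
            count += 1      # one rising edge = one heartbeat
        above = now
    return count

# Contrived 10-second signal: 12 square pulses
signal = ([0.0] * 5 + [1.0] * 5) * 12
beats = count_rising_edges(signal, 0.5)
bpm = beats * 6             # beats in 10 s -> beats per minute
print(beats, bpm)
```

Real sensor data is noisier than a clean square wave, so in practice you would low-pass filter first or add hysteresis (separate rising and falling thresholds) to avoid double-counting a jittery crossing.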

iBeacons: "Accuracy" unit

In what unit does the Core Location framework give the "accuracy" (distance) for iBeacons? To my knowledge it should be meters. But in my app I have placed beacons at a distance of 19 meters (63 feet), and the accuracy value from the framework sometimes comes out greater than 25.
The unit of CLBeacon.accuracy is in meters, but as you have witnessed, it is only a rough estimate. At short distances of 3 meters or less, the estimate will usually be within a meter. At longer distances it can be off by 10 meters or more.
This error is due to radio noise, multipath and attenuation. Estimation errors are a fundamental limitation of the technology, so you must set expectations appropriately.
Read more here: http://developer.radiusnetworks.com/2014/12/04/fundamentals-of-beacon-ranging.html
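To see why the estimate degrades with distance: beacon ranging is typically derived from received signal strength (RSSI) via a log-distance path-loss model. Apple does not document CoreLocation's exact formula, so the sketch below is an illustration of the general technique only; the `measured_power` and path-loss exponent values are assumptions:

```python
def estimate_distance(rssi, measured_power=-59, n=2.0):
    """Illustrative log-distance path-loss estimate (NOT Apple's formula).
    measured_power: expected RSSI in dBm at 1 m (assumed calibration value).
    n: path-loss exponent (~2 in free space, higher indoors)."""
    return 10 ** ((measured_power - rssi) / (10 * n))

# A few dB of multipath noise at long range swings the estimate by meters
print(estimate_distance(-85))      # nominal reading at ~20 m
print(estimate_distance(-82))      # same beacon, +3 dB of noise
```

Because distance is exponential in RSSI, the same few dB of radio noise that is negligible at 3 m translates into many meters of error at 20 m, which matches the behavior described in the question.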

Calculating distance in m in xyz between GPS coordinates that are close together

I have a set of GPS coordinates and I want to find the speed required for a UAV to travel between them, by calculating the distance in x, y, z and then dividing by the travel time to get m/s.
I know the great-circle distance, but I assume this will be inaccurate since the points are all relatively close together (within 10 m)?
Is there an accurate way to do this?
For small distances you can use the haversine formula without a relevant loss of accuracy in comparison to, for example, Vincenty's formula. Plus, it's designed to be accurate for very small distances. You can read up on this here if you are interested.
You can do this by converting lat/lon/alt into XYZ format for both points. Then figure out the rotation angles to move one of those points (usually the oldest) so that it would sit at lat=0, lon=0, alt=0, and rotate the second position report (the newest point) by the same angles. If you do it all correctly, you will find X equals the east offset, Y equals the north offset, and Z equals the up offset. You can use the Pythagorean theorem with the X and Y (east and north) offsets to determine the horizontal distance traveled. Normally you just ignore the altitude differences and work with horizontal data only.
All of this assumes you are using accurate formulas to convert lat/lon/alt into XYZ. It also assumes you have enough precision in the lat/lon/alt values to be accurate. Approximations are not good if you want good results. Normally, you need about 6 decimal digits of precision in lat/lon values to compute positions down to the meter level of accuracy.
Keep in mind that this method doesn't work very well if you haven't moved far; you want at least about 10 or 20 meters between points, and more is better. There is enough noise in GPS position reports that you will get jumpy velocity values, which you will need to filter further to get good accuracy. The math approach isn't the problem here; it's the inherent noise in the GPS position reports. When you have good reports, you will get good velocity.
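The XYZ conversion described above is the standard geodetic-to-ECEF transform followed by a rotation into local east/north/up (ENU) axes. A self-contained sketch on the WGS-84 ellipsoid (the sample fixes and the 1-second interval are made-up):

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0            # semi-major axis, meters
E2 = 6.69437999014e-3    # first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    """lat/lon in degrees, alt in meters -> Earth-centered Earth-fixed XYZ."""
    phi, lam = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1 - E2 * math.sin(phi) ** 2)
    x = (n + alt) * math.cos(phi) * math.cos(lam)
    y = (n + alt) * math.cos(phi) * math.sin(lam)
    z = (n * (1 - E2) + alt) * math.sin(phi)
    return x, y, z

def enu_offset(lat0, lon0, alt0, lat1, lon1, alt1):
    """East/north/up offset in meters of point 1 relative to point 0."""
    x0, y0, z0 = geodetic_to_ecef(lat0, lon0, alt0)
    x1, y1, z1 = geodetic_to_ecef(lat1, lon1, alt1)
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    phi, lam = math.radians(lat0), math.radians(lon0)
    east = -math.sin(lam) * dx + math.cos(lam) * dy
    north = (-math.sin(phi) * math.cos(lam) * dx
             - math.sin(phi) * math.sin(lam) * dy + math.cos(phi) * dz)
    up = (math.cos(phi) * math.cos(lam) * dx
          + math.cos(phi) * math.sin(lam) * dy + math.sin(phi) * dz)
    return east, north, up

# Two fixes roughly 9 m apart, 1 second apart -> horizontal speed in m/s
e, n, u = enu_offset(48.0, 11.0, 500.0, 48.00008, 11.0, 500.0)
speed = math.hypot(e, n) / 1.0
print(e, n, u, speed)
```

As the answer notes, you need about 6 decimal digits in the lat/lon inputs for meter-level results; with fewer digits the ENU offsets are dominated by quantization rather than actual motion.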
A GPS receiver doesn't normally use this approach to determine velocity. It looks at how the Doppler values change for each satellite and factors in the current position to determine velocity. This works reasonably well when the vehicle is moving, and it is a much faster way to detect changes in velocity (for instance, to release a position clamp). The normal user doesn't have access to the internal Doppler values, and the math gets very complicated, so it's not something you can do yourself.