I can't really understand what accuracy means. I know that proximity is a range bucket for the iBeacon (immediate, near, far, unknown) based on the strength of the signal, but what about accuracy?
The doc says
The accuracy of the proximity value, measured in meters from the
beacon
So does it mean that it's a value that tells you how many meters you are away from the iBeacon, or is it just a value that tells you whether you're close to one of the proximity zones?
Accuracy is an estimate of the distance in meters to the beacon. This is only a very rough estimate based on Bluetooth signal strength (RSSI) and varies quite a bit due to radio noise.
Due to the error, Apple recommends it be used only to determine the relative positions of beacons when multiple are visible.
You can read more about how this works here: http://developer.radiusnetworks.com/2014/12/04/fundamentals-of-beacon-ranging.html
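The common way such a distance estimate is derived is a log-distance path-loss model based on RSSI and the beacon's calibrated transmit power. This is a rough sketch of that idea, not Apple's actual (undocumented) formula; the function name and defaults are illustrative:

```python
def estimated_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Rough distance estimate (meters) from a log-distance path-loss model.

    tx_power is the calibrated RSSI at 1 m (a value iBeacons advertise);
    path_loss_exponent is ~2.0 in free space, higher in cluttered rooms.
    """
    if rssi == 0:
        return -1.0  # invalid reading
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))
```

With these defaults, an RSSI equal to the calibrated 1 m power gives 1 m, and each 20 dB drop multiplies the estimate by 10, which is why small amounts of radio noise translate into large swings in the reported accuracy.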
Related
I'm busy with an app for rapid recording of GPS positions. I've integrated the records with Google Maps, and it is clear that a few records, though not all of them, are quite far off - up to 200 m out, measured using Google Earth. This is probably due to GPS accuracy (maybe the GPS wasn't on for long enough, not enough satellites, etc.). I can work with this, but I would like to report on the accuracy.
My question is: is there a property that returns the GPS accuracy (perhaps as HDOP/EPE in meters) in the Delphi FireMonkey location sensor for Android, or can one access it in another way? From what I can see this may only be possible on iOS, but then I would like to know how many of the GPS apps (GPS Essentials, Locus Maps) do it. Is it a FireMonkey limitation? LocationSensor.Accuracy looks like the value I'm after, but that is an input?
Any advice will be appreciated! All I want to do is set a threshold to warn the user of possible inaccurate readings so he/she can wait a few seconds for better accuracy.
I have tried changing the LocationSensor.accuracy property, but as stated, I want an output from the GPS, not an input.
I was trying to find a way to calibrate a magnetometer attached to a vehicle, as the figure-8 method of calibration is not really possible on a vehicle.
Also, removing the magnetometer, calibrating it and refitting it won't give exact results, as fixing it back to the vehicle introduces more hard-iron distortion than it was calibrated for without the vehicle environment.
My device also has an accelerometer and GPS. Can I use accelerometer or GPS data (these are calibrated) to automatically calibrate the magnetometer?
Given that you are not happy with the results of off-vehicle calibration, I doubt that accelerometer and GPS data will help you a lot unless they are measured many times to average out the noise (although technically it really depends on the precision of the sensors, so if you have a 0.001% accelerometer you might get very good data out of it and compensate for the inaccuracy of the GPS data).
From the question, I assume you want just 2D data and you'll be using the Earth's magnetic field as a source (as otherwise, GPS wouldn't help). You might be better off renting a vehicle rotation stand for a day - it will have a steady, well-known angular velocity, and you can record the magnetometer data for a long period of time (say an hour, over 500 rotations or so) and then process it by averaging out any noise. Your vehicle will produce a different magnetic field while the engine is off, idling, and running, so you might want to do three different experiments (or more, to deduce the effect of engine RPM on the magnetic field it produces). Also, if the magnetometer is located close to the passengers, you will have additional influences from them and their devices. If a rotation stand is not available (or not affordable), you can do a calibration experiment with the GPS (whether to use the accelerometers too will depend on their precision) as follows:
- find a large, flat, empty paved surface with no underground magnetic sources (walk around with your magnetometer to check)
- put the vehicle into a turn on this surface and fix the steering wheel
- use the cruise control to fix the speed
- wait for a couple of circles to ensure they are equal
- make a recording of 100 circles (or 500 to get better precision) and then average the GPS noise out
You can do this at different speeds to get the engine's magnetic field influence at different RPMs.
I performed a similar procedure to calibrate the optical sensor on the steering wheel, to build a model of vehicle angular rotation from the steering wheel angle and current speed. That does not produce very accurate results, due to the tires slipping differently on different surfaces, but it should work okay for your problem.
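Once you have the averaged circle recordings, the hard-iron offset is just the center of the circle that the 2D magnetometer samples trace out. A minimal sketch of estimating it with a linear least-squares circle fit (the Kasa method; `hard_iron_offset` is an illustrative helper name, not part of any library):

```python
import numpy as np

def hard_iron_offset(mx, my):
    """Estimate the hard-iron offset as the center of the best-fit circle
    through 2D magnetometer samples (Kasa least-squares fit).

    Fits x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F); the circle center
    is (-D/2, -E/2) and subtracting it from the raw samples removes the
    constant hard-iron bias.
    """
    mx, my = np.asarray(mx, float), np.asarray(my, float)
    A = np.column_stack([mx, my, np.ones_like(mx)])
    b = -(mx**2 + my**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), radius
```

Averaging many circles before the fit (as suggested above) reduces the influence of momentary disturbances; note this only corrects hard iron, not soft-iron (ellipse) distortion.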
I have a set of data which includes positions of a car and the signal level of an unknown emitter. I have to estimate the distance based on this. Basically, signal level varies inversely with the square of distance, but when we include effects like multipath and reflections, we need a different equation. Here comes the Hata-Okumura model, which can give us the path loss based on distance. However, the distance is unknown, as I don't know where the emitter is. I only have access to different lat/long sets and the received signal level.
What I am asking is: could you guys please guide me to techniques which would help me estimate the distance based on current position and signal strength? All I am asking for is guidance towards a technique which might be useful.
I have looked into How to calculate distance from Wifi router using Signal Strength?, but he has 3 fixed WiFi signals and can use the FSPL model. However, in an urban environment that does not work.
Since the car is moving, using any diffraction model would be very difficult. The multipath environment is constantly changing due to the moving car, and any reflection/diffraction model requires well-known object geometry around the car.

In your problem you have the moving car's position time series [x(t), y(t)], which is known. You also have a time series of rough measurements of the distance between the car and the emitter, [r(t)], of unknown position. You need to solve for the stationary unknown emitter position (X, Y). So you have many noisy measurements with two unknown parameters to estimate. This is a classic least squares estimation problem. You can formulate

r(ti) = sqrt((x(ti) - X)^2 + (y(ti) - Y)^2)

and feed your data into this equation to do a least squares estimation. The data is obviously noisy due to multipath, but the emitter is stationary, so over time during the estimation process the noise can be more or less smoothed out.
Least Squares Estimation
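A minimal sketch of that estimation with a hand-rolled Gauss-Newton solver (NumPy only; `locate_emitter` is an illustrative helper name, and in practice you would feed in your real [x(t), y(t), r(t)] series rather than the synthetic values used below):

```python
import numpy as np

def locate_emitter(xs, ys, rs, x0=(1.0, 1.0), iters=100):
    """Estimate a stationary emitter position (X, Y) from known car
    positions (xs, ys) and noisy range estimates rs by Gauss-Newton
    nonlinear least squares on r_i = sqrt((x_i - X)^2 + (y_i - Y)^2)."""
    xs, ys, rs = (np.asarray(a, float) for a in (xs, ys, rs))
    p = np.asarray(x0, float)
    for _ in range(iters):
        dx, dy = xs - p[0], ys - p[1]
        pred = np.hypot(dx, dy)
        pred = np.where(pred == 0, 1e-9, pred)  # avoid division by zero
        resid = pred - rs
        # Jacobian of the residuals with respect to (X, Y)
        J = np.column_stack([-dx / pred, -dy / pred])
        step, *_ = np.linalg.lstsq(J, -resid, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-9:
            break
    return p
```

The geometry matters: the car's track has to curve around or past the emitter for (X, Y) to be well constrained; range measurements taken along a single straight line leave a mirror-image ambiguity.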
Most GPS systems report "accuracy" in units of meters, with the figure varying over orders of magnitude. What does this figure mean? How can it be translated to an error function for estimation, i.e. the probability of an actual position given the GPS reading and its reported accuracy?
According to the Wikipedia article on GPS accuracy, a reading down to 3 meters can be achieved by precisely timing the radio signals arriving at the receiver. This seems to correspond with the tightest error margin reported by e.g. an iPhone. But that wouldn't account for external signal distortion.
It sounds like an error function should have two domains, with a gentle linear slope out to the reported accuracy and then a polynomial or exponential increase further out.
Is there a better approach than to tinker with it? Do different GPS chipset vendors conform to any kind of standard meaning, or do they all provide only some kind of number for the sake of feature parity?
The number reported is usually called HEPE, Horizontal Estimated Position Error. In theory, 67% of the time the measurement should be within HEPE of the true position, and 33% of the time the measurement should be in horizontal error by more than the HEPE.
In practice, no one checks HEPE's very carefully, and in my experience, HEPE's reported for 3 or 4 satellite fixes are much larger than they need to be. That is, in my experience 3 satellite fixes are accurate to within a HEPE distance much more than 67% of the time.
The "assumed" error distribution is circular gaussian. So in principle you could find the right ratios for a circular gaussian and derive the 95% probability radius and so on. But to have these numbers be meaningful, you would need to do extensive statistical testing to verify that indeed you are getting around 95%.
The above are my impressions from working in the less accuracy-sensitive parts of GPS over the years. Conceivably, people who work on using GPS for aircraft landing may have a better sense of how to predict errors and error rates, but the techniques and methods they use are likely not available in consumer GPS devices.
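Under the circular-Gaussian assumption above, the radial error follows a Rayleigh distribution, so scaling a HEPE to other confidence radii is a one-liner. A sketch (assuming HEPE really is the 67% radius, which, as noted, is rarely verified in practice):

```python
import math

def radius_for_probability(hepe, p_target, p_hepe=0.67):
    """Scale a HEPE (the p_hepe containment radius) to the radius that
    contains p_target of fixes, assuming circular-Gaussian errors.

    Uses the Rayleigh CDF  P(r <= R) = 1 - exp(-R^2 / (2 sigma^2)).
    """
    sigma = hepe / math.sqrt(-2.0 * math.log(1.0 - p_hepe))
    return sigma * math.sqrt(-2.0 * math.log(1.0 - p_target))
```

For example, a 95% radius works out to roughly 1.64 times the 67% HEPE, but that figure is only as trustworthy as the Gaussian assumption, which is why the extensive statistical testing mentioned above would still be needed.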
I am working on an iOS application using location services. Having a background in experimental physics, I am wondering what exactly horizontalAccuracy in a location found in locationManager:didUpdateToLocation:fromLocation: stands for. The documentation is a bit sparse...
I assume that the accuracy gives a confidence interval based on a gaussian (or poisson?) distribution. Thus, with a certain probability, the actual position is within a circle with a radius of horizontalAccuracy, but could as well be outside that area. The question is then: how big is that probability? If horizontalAccuracy corresponds to 1σ, I'd have a probability of 68% to be within that circle with horizontalAccuracy, but looking the other way around, in nearly one third of the cases, the actual position will be outside that area. Thus, in certain cases, I'd rather use 2σ (2*horizontalAccuracy) or even 3σ (3*horizontalAccuracy) to calculate with.
To put it short: is there any indication somewhere, which confidence interval horizontalAccuracy has?
Comment to all who respond "Apple says it is within":
Well - the measurement cannot be exact. It must have a certain level of uncertainty. If you repeat the measurement very often, you will get a distribution of results - probably a gaussian distribution. This gaussian has a certain width, which corresponds to the level of uncertainty of the measurements. Measuring the position more often will reduce the uncertainty and thus increase accuracy, but it will never give you a definite interval that the actual position is guaranteed to be in. You will only get a probability. But if the accuracy is 3 sigma, we have 99.7% - which is close to certain.
To put it short - I doubt the documentation from Apple.
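The averaging argument in this comment can be illustrated with a toy simulation (synthetic 1D Gaussian noise, not real GPS data; the function name is illustrative): the spread of the averaged fix shrinks like sigma/sqrt(n).

```python
import random
import statistics

def mean_position_spread(sigma=10.0, n_samples=50, trials=2000, seed=1):
    """Simulate repeated 1D position fixes with Gaussian noise of width
    sigma, average each batch of n_samples fixes, and return the stdev
    of those averages - which shrinks roughly as sigma / sqrt(n_samples)."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_samples))
        for _ in range(trials)
    ]
    return statistics.stdev(means)
```

With sigma = 10 m and 50 fixes per average, the spread of the averages comes out near 10/sqrt(50), about 1.4 m, matching the claim that more measurements narrow the distribution without ever making it a hard boundary.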
I have been looking for the same information and could not find any answers. The only pointer I have, is that on Android, they are using 1σ:
http://developer.android.com/reference/android/location/Location.html#getAccuracy%28%29
To all the non-believers, this link also explains a little bit how the accuracy thing works.
My guess is, the same is true on iOS, but there is no way to be sure - except for asking the guy who wrote the code ;)
Edit:
After some playing around and checking location updates vs. physical location it seems like it is more likely 3σ on iOS. There are two observations that lead me to believe that is true:
On Android locations that come from WiFi triangulation are usually reported as having an accuracy between 20 and 50 meters. On iOS it's between 65 and 165 meters.
When measuring the distance between a reported location and the device's physical location, it has been within the reported accuracy every time so far.
The iOS documentation doesn't specify the probability of containment, but Android reports a one-sigma horizontal accuracy, which they define to represent a 68% probability that the true location is within the circle.
Their explanation is that location errors follow a normal distribution, and therefore +/- one-sigma represents 68% probability. However, 68% is the probability for a one-dimensional normal distribution. In two dimensions, a one-sigma error represents 39% probability of containment within a circle (the distance error follows a Rayleigh distribution, a.k.a. a chi distribution with two degrees of freedom).
There are two possible explanations.
The circle truly represents 68% probability of containment, in which case the Android developers have scaled the one-dimensional sigma by a factor of about 1.5 so that the circle happens to represent 68%. In this case, their choice of 68% is completely arbitrary.
The circle actually represents 39% probability of containment. In this case, their description would be correct if you replaced a one-dimensional gaussian with a two-dimensional one and its associated probability.
I think the second explanation is more likely.
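The numbers behind this argument follow directly from the Rayleigh CDF and are easy to check (a short sketch; the function names are illustrative):

```python
import math

def containment_probability(k):
    """Probability that a 2D circular-Gaussian error lies within k * sigma
    of the truth (the radial error is Rayleigh distributed):
    P(r <= k * sigma) = 1 - exp(-k^2 / 2)."""
    return 1.0 - math.exp(-k * k / 2.0)

def sigma_multiple_for(p):
    """Inverse: the multiple of sigma whose circle contains probability p."""
    return math.sqrt(-2.0 * math.log(1.0 - p))
```

Evaluating these gives the figures quoted above: one sigma contains about 39% in 2D (versus 68% in 1D), and the multiple of sigma needed to reach 68% containment is about 1.51, the scaling factor mentioned in the first explanation.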
iOS: https://developer.apple.com/library/ios/documentation/CoreLocation/Reference/CLLocation_Class/index.html#//apple_ref/occ/instp/CLLocation/horizontalAccuracy
Android: http://developer.android.com/reference/android/location/Location.html#getAccuracy%28%29
This denotes the accuracy level of the location. For example, a horizontalAccuracy of 0 means high accuracy, while a horizontalAccuracy of 500 means low accuracy.
The location services provider updates the location based on the consolidated best value from cellular, WiFi (when a WiFi connection is available) and GPS. So the location value will oscillate depending on coverage. You can filter it using this horizontalAccuracy.
A horizontal accuracy of X indicates that your horizontal position can be X meters off. Remember, location can be determined using GPS, cell tower triangulation, or WiFi location data. CLLocationManager gives you the most accurate location from these 3 methods, and there is a chance it may be off by at most X meters.
In what way is the documentation sparse?
The radius of uncertainty for the location, measured in meters. (read-only)
The location’s latitude and longitude identify the center of the circle, and this value indicates the radius of that circle. A negative value indicates that the location’s latitude and longitude are invalid.
So your location is within the circle. It isn't outside the circle, or the radius would be bigger. Your assumption about confidence intervals is incorrect.