I have an Adafruit Ultimate GPS module which I am trying to fuse with a BNO055 IMU sensor. I'm trying to follow this Kalman-filtering example: https://github.com/slobdell/kalman-filter-example. Although most of his code is pretty clear, I looked at his input JSON file (https://github.com/slobdell/kalman-filter-example/blob/master/pos_final.json) and saw that he's getting velocity north, velocity east and velocity down from the GPS module. I looked at the NMEA messages and none of them seem to give me that. What am I missing? How can I get these directional velocities?
Thanks!
pos_final.json is not the input file, but the output file. The input file is taco_bell_data.json and is found in the tar.gz archive. It contains the following variables:
"timestamp": 1.482995526836e+09,
"gps_lat": 0,
"gps_lon": 0,
"gps_alt": 0,
"pitch": 13.841609,
"yaw": 225.25635,
"roll": 0.6795258,
"rel_forward_acc": -0.014887575,
"rel_up_acc": -0.025188839,
"abs_north_acc": -0.0056906715,
"abs_east_acc": 0.00010974275,
"abs_up_acc": 0.0040153866
He measures position with a GPS and orientation/acceleration with an accelerometer. The NED velocities that are found in pos_final.json are estimated by the Kalman filter. That's one of the main tasks of a Kalman filter (and other observers): to estimate unknown quantities.
A GPS will often output velocities, but they will be relative to the body of the object. You can convert the body-relative velocities to NED velocities if you know the orientation of the body (roll, pitch and yaw). Say you have a drone moving at heading 030°, and the GPS reports a forward velocity of 1 m/s. The drone then has the following North velocity:
vel_north = 1 m/s * cos(30°) ≈ 0.87 m/s
and the following East velocity:
vel_east = 1 m/s * sin(30°) = 0.5 m/s
This doesn't take roll and pitch into account. To include roll and pitch, take a look at rotation matrices or quaternions on Wikipedia.
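As a rough sketch (not code from the linked example), assuming the GPS gives you a body-frame velocity vector [forward, right, down] and the IMU gives roll, pitch and yaw in degrees, the full rotation could look something like this:

import numpy as np

def body_to_ned(vel_body, roll_deg, pitch_deg, yaw_deg):
    """Rotate a body-frame velocity [forward, right, down] into NED.

    Uses the common aerospace ZYX (yaw-pitch-roll) convention; check your
    IMU's documentation, since conventions differ between devices.
    """
    r, p, y = np.radians([roll_deg, pitch_deg, yaw_deg])
    cr, sr = np.cos(r), np.sin(r)
    cp, sp = np.cos(p), np.sin(p)
    cy, sy = np.cos(y), np.sin(y)

    # Body -> NED rotation matrix (transpose of the NED -> body matrix)
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr               ],
    ])
    return R @ np.asarray(vel_body, dtype=float)

# With zero roll and pitch this reproduces the numbers above:
# body_to_ned([1.0, 0.0, 0.0], 0, 0, 30) -> [0.866, 0.5, 0.0]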
The velocities are usually found in the VTG telegram the GPS outputs, but it is not always output: the GPS has to have that feature and it has to be enabled. The RMC telegram can also be used, since it contains speed over ground and course over ground.
The velocities from the GPS are often very noisy, which is why a Kalman filter is typically used instead of just converting the body-relative velocities to NED velocities with the method above. The GPS velocities will work fine at higher speeds, though.
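As a rough sketch, assuming the RMC sentence follows the common field layout (speed over ground in knots in field 7, true course in degrees in field 8), you could extract north/east velocities like this:

import math

KNOTS_TO_MS = 0.514444  # 1 knot in m/s

def ned_velocity_from_rmc(rmc_sentence):
    """Small sketch: pull speed/course out of an RMC telegram and convert
    them to north/east velocities. No checksum or validity handling;
    a proper NMEA parser is preferable in real code."""
    fields = rmc_sentence.split(",")
    speed_ms = float(fields[7]) * KNOTS_TO_MS    # speed over ground, knots
    course_rad = math.radians(float(fields[8]))  # course over ground, degrees true
    vel_north = speed_ms * math.cos(course_rad)
    vel_east = speed_ms * math.sin(course_rad)
    return vel_north, vel_east

# Commonly used example sentence:
print(ned_velocity_from_rmc(
    "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"))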
I'm successfully using pitch detection features of ml5:
tutorial: https://ml5js.org/reference/api-PitchDetection/
model: https://cdn.jsdelivr.net/gh/ml5js/ml5-data-and-models/models/pitch-detection/crepe/
The issue:
No pitch above roughly 2000 Hz is detected. I tried multiple devices and checked that the sounds are visible on sonograms, so it does not seem to be a mic issue.
I assumed it might be a result of sampling-rate limitations / resampling done by the library, since the Nyquist frequency (the maximum "recordable" frequency) is half the sampling rate.
I hosted the ml5 sources locally and tried modifying the PitchDetection class.
There I saw that the audio seems to be resampled to 1024 Hz for performance reasons. This does not sound right, though: if I'm not mistaken, that would only allow detection of frequencies up to 512 Hz. I am definitely missing something (or a lot).
I tried fiddling with the rates, but increasing the value to, say, 2048 causes an error:
Error when checking : expected crepe_input to have shape [null,1024] but got array with shape [1,2048].
My question is:
Is there something in the ml5 PitchDetection class I can modify or configure (perhaps a different model) to detect frequencies higher than 2000 Hz using the CREPE model?
After more investigation, it turns out the CREPE model itself only supports pitches up to ~1997 Hz (as seen in the code) or 1975.5 Hz (as stated in the paper).
The paper about CREPE:
https://arxiv.org/abs/1802.06182
States:
The 360 pitch values are denoted as c_1, c_2, ..., c_360 and are selected so that they cover six octaves with 20-cent intervals between C1 and B7, corresponding to 32.70 Hz and 1975.5 Hz
The JS implementation has this mapping, which maps the 360 intervals to the 0 to 1997 Hz range:
const cent_mapping = tf.add(tf.linspace(0, 7180, 360), tf.tensor(1997.3794084376191))
This means that, short of retraining the model, I'm probably out of luck using it for now.
Edit:
After a good night's sleep I found a simple solution which works for my simple application.
In essence, I resample my audio buffer so that its pitch is halved (one octave down). CREPE then detects a 440 Hz pitch as 220 Hz, and I just need to multiply the result by 2.
The result is still more consistently correct than the YIN algorithm for my real-time, noisy application.
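A minimal sketch of the idea (my own illustration, not the ml5 internals; detect_pitch is a placeholder for whatever detector is in use):

import numpy as np

def detect_with_octave_shift(samples, sample_rate, detect_pitch):
    """Time-stretch the buffer by 2 so every tone appears an octave lower
    to a detector that assumes a fixed sample rate, run the detector,
    then double the reported frequency.

    detect_pitch(samples, sample_rate) -> Hz stands in for the pitch
    detector (CREPE via ml5 in my case); it is assumed to interpret the
    buffer at the same nominal sample rate as before.
    """
    samples = np.asarray(samples, dtype=np.float32)
    n = len(samples)
    # Linear interpolation to twice the number of samples: the same
    # waveform now spans twice as many samples, i.e. half the pitch.
    stretched = np.interp(
        np.linspace(0.0, n - 1, 2 * n), np.arange(n), samples
    ).astype(np.float32)
    raw_estimate = detect_pitch(stretched, sample_rate)  # sees pitch / 2
    return 2.0 * raw_estimate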
At first blush this presumably means:
(1) looking only at lower IR frequencies,
(2) selecting an IR frequency cut-off for the low-frequency buckets of the u/v FFT grid,
(3) once we have that, deriving the power distribution (squares of amplitudes) for the IR range of frequency buckets the camera supports,
(4) fitting that distribution against the Rayleigh-Jeans classical black-body radiation formula (a rough fitting sketch follows this list):
(https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Jeans_law#Other_forms_of_Rayleigh%E2%80%93Jeans_law)
(5) assigning a temperature of 'best fit'.
The units for B(ν, T) are power per unit surface area, per unit solid angle, per unit frequency, for a body at equilibrium temperature T.
Of course, this leaves many details out, such as (6) cancelling the background, etc., but one could perhaps use the opposite-facing camera to assist with that. Where buckets do not straddle the temperature of interest, (7) use a one-sided distribution to derive an inferred Gaussian curve to fit the Rayleigh-Jeans curve at that derived central frequency ν, for the measured temperature T.
Finally, (8) check whether this procedure can consistently distinguish a high from a low surface temperature, and (9) check whether it can consistently identify a 'fever' temperature (say, 101 Fahrenheit / 38 Celsius) when pointed at a forehead.
If all that can be done, (10) Voila! a body fever detector
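As a rough, purely hypothetical illustration of steps (3)-(5), assuming per-frequency power values could be obtained from the sensor at all: the fit itself is simple, because the Rayleigh-Jeans law B(ν, T) = 2 ν² k_B T / c² is linear in T.

import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.998e8          # speed of light, m/s

def fit_temperature(freqs_hz, measured_power):
    """Least-squares fit of B(nu, T) = 2 nu^2 k T / c^2 to measured
    per-frequency power values.

    Because B is linear in T, the best fit has a closed form:
    T = sum(m_i * b_i) / sum(b_i^2) with b_i = 2 nu_i^2 k / c^2.
    The measurements would need to be in absolute radiance units for the
    result to be a physical temperature; this is purely illustrative.
    """
    nu = np.asarray(freqs_hz, dtype=float)
    m = np.asarray(measured_power, dtype=float)
    basis = 2.0 * nu**2 * K_B / C**2
    return float(np.dot(m, basis) / np.dot(basis, basis))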
So can those who are capable fill us in on whether this is possible to do, for eventual posting on an app store as a free COVID-19-safe body-temperature app? I have a strong sense there are quite a few people out there who could verify this in a week or two!
It appears that the analog signal assumed in (1) and (2) is not available through the Android Camera2 digital interface.
The Android raw image stream, that is, uncompressed YUV, is already encoded: Y is a (green-weighted) monochrome luminance channel, and U and V are the blue and red shifts from zero used to turn that monochrome image into colour.
The original analog frequency / energy signal is not immediately accessible, so the adaptation is not possible (yet).
I'm trying to use a VectorNav VN100 IMU to map a path through an underground tunnel (GPS denied environment) and am wondering what is the best approach to take to do this.
I get lots of data points from the VN100; these include orientation/pose (Euler angles, quaternions) and acceleration and gyroscope values in three dimensions. The acceleration and gyro values are given in raw and filtered formats, where the filtered outputs have been filtered by an onboard Kalman filter.
In addition to IMU measurements I also measure GPS-RTK coordinates in three dimensions at the start and end-points of the tunnel.
How should I approach this mapping problem? I'm quite new to this area and do not know how to extract position from the acceleration and orientation data. I know acceleration can be integrated once to give velocity, and that in turn can be integrated again to get position, but how do I combine this data with the orientation data (quaternions) to get the path?
In robotics, mapping means representing the environment using a perception sensor (like a 2D or 3D laser scanner, or cameras).
Once you have the map, it can be used by the robot to know its location (localization). The map is also used to find a path between locations, to move from one place to another (path planning).
In your case you need a perception sensor to get a better location estimate. With only an IMU you can track the position using an Extended Kalman Filter (EKF), but it drifts quickly.
The Robot Operating System (ROS) has an EKF implementation you can refer to.
OK, so I came across a solution that gets me somewhat closer to my goal of finding the path travelled underground. Although it is by no means the final solution, I'm posting my algorithm here in the hope that it helps someone else.
My method is as follows:
Rotate the acceleration vector A = [Ax, Ay, Az] output by the VectorNav VN100 into the North, East, Down frame by multiplying it by the quaternion Q = [q0, q1, q2, q3] that the VectorNav outputs. How to multiply a vector by a quaternion is outlined in this other post.
Basically you take the acceleration vector and add a fourth component to act as the scalar term, then multiply by the quaternion and its conjugate (N.B. the scalar terms in both quaternions should be in the same position; in this case the scalar quaternion term is the first term, so a zero scalar term should be added to the start of the acceleration vector), e.g. A = [0, Ax, Ay, Az]. Then perform the following multiplication:
A_ned = Q A Q*
where Q* is the conjugate of the quaternion (the i, j and k terms are negated).
Integrate the rotated acceleration vector to get the velocity vector: V_ned
Integrate the Velocity vector to get the position in north, east, down: R_ned
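A minimal sketch of steps 1-3 (my own illustration; it assumes gravity has already been removed from the acceleration, timestamps are in seconds, and the quaternion is scalar-first and normalized):

import numpy as np

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = [q0, q1, q2, q3] (scalar first),
    i.e. the vector part of q * [0, v] * q_conjugate."""
    q0, q1, q2, q3 = q
    u = np.array([q1, q2, q3], dtype=float)
    v = np.asarray(v, dtype=float)
    return (q0 * q0 - u @ u) * v + 2.0 * (u @ v) * u + 2.0 * q0 * np.cross(u, v)

def integrate_path(times, quats, accels_body):
    """Rotate body-frame accelerations into NED and integrate twice
    (trapezoidal rule) to get velocity and position."""
    a_ned = np.array([quat_rotate(q, a) for q, a in zip(quats, accels_body)])
    t = np.asarray(times, dtype=float)
    dt = np.diff(t)
    v_ned = np.zeros_like(a_ned)
    r_ned = np.zeros_like(a_ned)
    for k in range(1, len(t)):
        v_ned[k] = v_ned[k - 1] + 0.5 * (a_ned[k - 1] + a_ned[k]) * dt[k - 1]
        r_ned[k] = r_ned[k - 1] + 0.5 * (v_ned[k - 1] + v_ned[k]) * dt[k - 1]
    return v_ned, r_ned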
There is substantial drift in the velocity and position due to sensor bias. This can be corrected for somewhat if you know the start and end velocities and the start and end positions. In this case the start and end velocities were zero, so I used this to correct the drift in the velocity vector.
Uncorrected Velocity
Corrected Velocity
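A minimal sketch of that kind of correction, assuming a simple linear de-trend of each velocity component so the final velocity is forced back to zero:

import numpy as np

def remove_velocity_drift(times, v_ned):
    """Subtract a per-axis linear ramp so the velocity ends at zero,
    assuming the true start and end velocities are both zero."""
    t = np.asarray(times, dtype=float)
    v_ned = np.asarray(v_ned, dtype=float)
    frac = (t - t[0]) / (t[-1] - t[0])        # 0 at start, 1 at end
    return v_ned - np.outer(frac, v_ned[-1])  # ramp from 0 up to the end error

The corrected velocity can then be re-integrated to get a drift-reduced position.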
My final comparison between the IMU position and the GPS is shown here (read: there's still a long way to go).
GPS-RTK data vs VectorNav IMU data
Now I just need to come up with a sensor fusion algorithm to try to improve the position estimation...
I'm trying to track the distance a user has moved over time in my application using the GPS. I have the basic idea in place, so I store the previous location and when a new GPS location is sent I calculate the distance between them, and add that to the total distance. So far so good.
There are two big issues with this simple implementation:
Since the GPS is inaccurate, when the user moves the GPS points will not form a straight line but more of a "zig-zag" pattern, making it look like the user has moved farther than they actually have.
There is also an accuracy problem. If the phone just lies on the table and polls GPS positions, the answer is usually a couple of metres different every time, so you see the metres start accumulating even when the phone is lying still.
Both of these make the tracking useless, of course, since the number I'm providing is nowhere near accurate enough.
But I guess this problem is solvable, since there are a lot of fitness trackers and similar products out there that do track distance from GPS. I guess they do some kind of interpolation between the GPS values or something like that? I guess that won't be 100% accurate either, but probably good enough for my usage.
So what I'm after is basically an algorithm where I can put in my GPS positions and get as good an approximation of the distance travelled as possible.
Note that I cannot presume that the user will follow roads, so I cannot use the Google Distance Matrix API or similar for this.
This is a common problem with the position data that is produced by GPS receivers. A typical consumer-grade receiver that I have used has a position accuracy defined as a CEP of 2.5 metres. This means that for a stationary receiver in a "perfect" sky-view environment, over time 50% of the position fixes will lie within a circle with a radius of 2.5 metres. If you look at the position that the receiver reports, it appears to wander at random around the true position, sometimes moving a number of metres away from its true location. If you simply integrate the distance moved between samples, then you will get a very large apparent distance travelled for a stationary device.
A simple algorithm that I have used quite successfully for a vehicle odometer function is as follows:
// Current_Position is assumed to be updated asynchronously
// each time the receiver delivers a new fix.
for(;;)
{
    Stored_Position = Current_Position ;

    do
    {
        Distance_Moved = Distance_Between( Current_Position, Stored_Position ) ;
    } while ( Distance_Moved < MOVEMENT_THRESHOLD ) ;

    Cumulative_Distance += Distance_Moved ;
}
The value of MOVEMENT_THRESHOLD will have an effect on the accuracy of the final result. If the value is too small then some of the random wandering performed by the stationary receiver will be included in the final result. If the value is too large then the path taken will be approximated to a series of straight lines each of which is as long as the threshold value. The extra distance travelled by the receiver as its path deviates from this straight line segment will be missed.
The accuracy of this approach, when compared with the vehicle odometer, was pretty good. How well it works for a pedestrian would have to be tested. The problem with people is that they can make much sharper turns than a vehicle, resulting in larger errors from the straight-line approximation. There is also the perennial problem of sky-view obscuration and signal multipath caused by buildings, vehicles, etc., which can induce positional errors of tens of metres.
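For completeness, a rough Python sketch of the same idea (my own translation, not tested against the original), assuming fixes arrive as (latitude, longitude) pairs in degrees and using the haversine formula as Distance_Between:

import math

EARTH_RADIUS_M = 6371000.0
MOVEMENT_THRESHOLD_M = 10.0   # tune this against your receiver's noise

def distance_between(p1, p2):
    """Haversine great-circle distance in metres between (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def accumulate_distance(fixes):
    """fixes is an iterable of (lat, lon) tuples, one per GPS update."""
    cumulative = 0.0
    stored = None
    for current in fixes:
        if stored is None:
            stored = current
            continue
        moved = distance_between(stored, current)
        if moved >= MOVEMENT_THRESHOLD_M:   # ignore jitter below the threshold
            cumulative += moved
            stored = current
    return cumulative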
How do I calculate altitude from GPS latitude and longitude values? What is the exact mathematical equation to solve this problem?
It is possible, for a given lat/lon, to determine the height of the ground (above sea level, or above the reference ellipsoid).
But since the earth's surface, mountains, etc. do not follow a mathematical formula, there are laser scans, performed by satellites, that measured such a height, e.g. every 30 metres.
So there exist files where you can look up such a height.
This is called a Digital Elevation Model, or DEM for short.
Read more here: https://en.wikipedia.org/wiki/Digital_elevation_model
Such files are huge, so very few applications use that approach.
Many just take the altitude value as delivered by the GPS receiver.
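A rough sketch of such a lookup, assuming a 3-arc-second SRTM .hgt tile (a 1201 x 1201 grid of big-endian signed 16-bit integers covering one degree of latitude and longitude, named after its south-west corner, e.g. N47E013.hgt):

import numpy as np

def srtm3_elevation(hgt_path, lat, lon):
    """Nearest-neighbour elevation lookup in one SRTM3 .hgt tile.
    The tile is assumed to cover the 1-degree cell containing (lat, lon),
    stored from its northern edge downwards; -32768 marks voids."""
    grid = np.fromfile(hgt_path, dtype=">i2").reshape((1201, 1201))
    lat0, lon0 = np.floor(lat), np.floor(lon)      # south-west corner of the tile
    row = int(round((1.0 - (lat - lat0)) * 1200))  # row 0 is the northern edge
    col = int(round((lon - lon0) * 1200))
    return int(grid[row, col])

# e.g. srtm3_elevation("N47E013.hgt", 47.5, 13.3)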
You can find some charts with altitude data, like Maptech's. Each pixel has corresponding latitude, longitude and altitude/depth information.
As @AlexWien said, these files are huge and most of them must be bought.
If you are interested in using these files, I can help you with C++ code to read them.
You can calculate the geocentric radius, i.e. the radius of the reference ellipsoid which is used as the basis for the GPS altitude. It can be calculated from the GPS latitude with this formula:
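A sketch of the standard geocentric-radius expression for the WGS84 ellipsoid, which is presumably the formula meant here:

import math

WGS84_A = 6378137.0        # equatorial radius, metres
WGS84_B = 6356752.314245   # polar radius, metres

def geocentric_radius(lat_deg):
    """Geocentric radius R(phi) of the WGS84 ellipsoid at latitude phi:
    R = sqrt(((a^2 cos(phi))^2 + (b^2 sin(phi))^2) /
             ((a  cos(phi))^2 + (b  sin(phi))^2))"""
    phi = math.radians(lat_deg)
    a, b = WGS84_A, WGS84_B
    num = (a**2 * math.cos(phi))**2 + (b**2 * math.sin(phi))**2
    den = (a * math.cos(phi))**2 + (b * math.sin(phi))**2
    return math.sqrt(num / den)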
Read more about this at Wikipedia.