I am trying to understand the algorithms used in GIS/GPS, and I am writing a program to explore them. The issue is test data, which needs to simulate the reception from satellites on an arbitrary day.
The test data should:
- be a continuous stream from several sources,
- vary in the number of sources (more satellites means more accuracy in the calculation),
- carry a unique timestamp per message, but need not be received in timestamp order.
How should I go about it? Is there a software pattern for this? Any toolkits that do this sort of live "broadcasting"?
[PS: I would be using either gcc or Go, so something specific to those would be more useful.]
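Not an official toolkit, but here is a minimal sketch in Go of one way to structure such a simulator: one goroutine per satellite, each stamping its messages in order but delivering them through a randomly delayed send, so they arrive out of timestamp order. The `Fix` type and the timing constants are made-up placeholders for whatever your real payload looks like.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Fix is a hypothetical broadcast message: a source ID plus the
// unique timestamp it was stamped with when "broadcast".
type Fix struct {
	SatID     int
	Timestamp time.Time
}

// satellite emits one Fix per tick. The random sleep before the
// channel send simulates propagation/queueing delay, so messages
// arrive out of timestamp order even though each is stamped in order.
func satellite(id int, out chan<- Fix, stop <-chan struct{}) {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case t := <-ticker.C:
			go func(ts time.Time) {
				time.Sleep(time.Duration(rand.Intn(50)) * time.Millisecond)
				out <- Fix{SatID: id, Timestamp: ts}
			}(t)
		case <-stop:
			return
		}
	}
}

func main() {
	out := make(chan Fix, 64)
	stop := make(chan struct{})
	// Start with 4 satellites; start and stop goroutines at runtime
	// to vary the number of sources.
	for id := 1; id <= 4; id++ {
		go satellite(id, out, stop)
	}
	for i := 0; i < 20; i++ {
		fix := <-out
		fmt.Printf("sat %d  ts=%s\n", fix.SatID, fix.Timestamp.Format("15:04:05.000"))
	}
	close(stop)
}
```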
I'm currently investigating the WebRTC Statistics API, specifically the identifiers lastPacketReceivedTimestamp and estimatedPlayoutTimestamp. My aim for this is to evaluate when exactly the WebRTC API receives an RTP packet of video data and when exactly that packet is utilised to render a frame of video.
I can convert the values for lastPacketReceivedTimestamp from High Resolution Time to human-readable format, but I am struggling to do so with estimatedPlayoutTimestamp values.
Example outputs for lastPacketReceivedTimestamp are 1648396983645, 1648396984656, 1648396985657, 1648396986656 - these convert well on https://www.epochconverter.com/.
Example outputs for estimatedPlayoutTimestamp are 3857385783571, 3857385784570, 3857385785580, 3857385786570 - these do not convert well, instead reading as many years in the future.
Am I misunderstanding what the values of estimatedPlayoutTimestamp are? I thought they would just be the timestamp of when each packet is used in a render, but this does not appear to be the case. How should I go about finding when exactly each packet is used to render a frame of WebRTC video?
Thanks in advance!
estimatedPlayoutTimestamp is defined as an NTP timestamp in the specification. NTP timestamps start on January 1st 1900, which makes the value appear roughly 70 years in the future if you treat it as being based on the Unix epoch (1970).
Note that chrome://webrtc-internals currently does the calculation wrong as well...
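In other words, subtract the 2,208,988,800-second offset between the NTP epoch (1900) and the Unix epoch (1970). For the example values above: 3857385783571 ms − 2208988800000 ms = 1648396983571 ms, which lands on the same day as the lastPacketReceivedTimestamp values. A minimal sketch in Go:

```go
package main

import (
	"fmt"
	"time"
)

// Offset between the NTP epoch (1900-01-01) and the Unix epoch
// (1970-01-01): 70 years including 17 leap days, in milliseconds.
const ntpToUnixOffsetMs int64 = 2208988800 * 1000

func main() {
	// One of the estimatedPlayoutTimestamp values from the question.
	ntpMs := int64(3857385783571)
	unixMs := ntpMs - ntpToUnixOffsetMs
	fmt.Println(time.UnixMilli(unixMs).UTC()) // same day as lastPacketReceivedTimestamp
}
```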
I have a set of 12 IMUs mounted on the end effector of my robot arm, which I read using a microcontroller to determine its movement. With my controller I can read two sensors simultaneously using direct memory access. After acquiring the measurements, I would like to fuse them to compensate for sensor error and generate a more reliable reading than a single sensor would give.
After some research, my understanding is that I can use a Kalman filter to reach my desired outcome. However, I still have the problem that the sensor values all have different timestamps: I can read only two at a time, and even if both timestamps within a pair are synchronized perfectly, the next pair will have a different timestamp, if only in the µs range.
Now, I know controls engineering principles but am completely new to the topic of sensor fusion, and Google presents me with too many results to find a solution in a reasonable amount of time.
Hence my question: can anybody point me in the right direction by naming a keyword I should search for, or literature I should work through, to better understand the topic?
Thank you!
The topic you are dealing with is not an easy one. Have a look at multi-rate Kalman filters.
The idea is that you design a different Kalman filter for each combination of sensors that can be available at the same time, use the matching filter whenever you have data from those sensors, and pass the system state between the various filters.
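Here is a rough illustration of the shared-state idea in Go, reduced to a 1-D state so it fits here: the filter predicts the shared state forward to each pair's timestamp, then applies whichever measurements arrived. All noise values are made-up placeholders; a real IMU fusion filter would have a multi-dimensional state and per-sensor models.

```go
package main

import "fmt"

// A 1-D Kalman filter tracking a single value x with variance p.
// Each sensor pair arrives with its own timestamp; we predict the
// shared state forward to that timestamp, then apply the update.
type kf struct {
	x, p float64 // state estimate and its variance
	q    float64 // process noise per second
	t    float64 // timestamp of the current estimate (seconds)
}

// predict propagates the state to time t (constant-value model,
// so only the uncertainty grows with the elapsed time).
func (f *kf) predict(t float64) {
	f.p += f.q * (t - f.t)
	f.t = t
}

// update fuses one measurement z with variance r.
func (f *kf) update(z, r float64) {
	k := f.p / (f.p + r) // Kalman gain
	f.x += k * (z - f.x)
	f.p *= 1 - k
}

func main() {
	f := kf{x: 0, p: 1, q: 0.01}
	// Measurement pairs with slightly different timestamps, as when
	// the DMA reads two IMUs at a time.
	type meas struct{ t, z, r float64 }
	for _, m := range []meas{
		{0.000, 1.02, 0.04}, {0.000, 0.97, 0.04},
		{0.001, 1.05, 0.09}, {0.001, 0.99, 0.09},
	} {
		f.predict(m.t)
		f.update(m.z, m.r)
	}
	fmt.Printf("fused estimate: %.3f (var %.4f)\n", f.x, f.p)
}
```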
I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value. At some point the osmocom Sink will hit a maximum value and stop driving the attached HackRF to output stronger signals.
I'm trying to figure out what that maximum value is.
I've looked through the documentation at a number of different sites, for both the HackRF One and the osmocom source, and can't find an answer. I tried looking through the source code itself, but couldn't see any clear indication, although I may have missed something.
http://sdr.osmocom.org/trac/wiki/GrOsmoSDR
https://github.com/osmocom/gr-osmosdr
I also thought of deriving the value empirically, but didn't trust my equipment to get a precise measure of when the block started to hit the rails.
Any ideas?
Thanks
Friedman
I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value.
No, the complex samples $z$ must satisfy $|\Re\{z\}| \le 1$ and $|\Im\{z\}| \le 1$, because the osmocom sink/the underlying drivers and devices map that $-1$ to $+1$ range to the range of the I and Q DAC values.
You're right, though, that it's hard to measure empirically, because the output amplifiers typically go into nonlinearity close to the maximum DAC outputs, and on top of that everything is frequency-dependent, so e.g. $0.5 + j0.5$ at 400 MHz doesn't necessarily produce the same electrical field strength as $0.5 + j0.5$ at 1 GHz.
That's true for all non-calibrated SDR devices (which, aside from your typical multi-10k-dollar signal generator, means all of them, unless you calibrate for all frequencies of interest yourself).
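So in practice you scale your signal chain so the samples stay inside that range, or clip as a last resort. A trivial sketch of the constraint (in Go rather than the C++ of gr-osmosdr, purely for illustration):

```go
package main

import "fmt"

// clip limits I and Q to the [-1, +1] range the osmocom sink maps
// onto the DAC; anything outside is wasted headroom at best.
func clip(i, q float64) (float64, float64) {
	lim := func(v float64) float64 {
		if v > 1 {
			return 1
		}
		if v < -1 {
			return -1
		}
		return v
	}
	return lim(i), lim(q)
}

func main() {
	i, q := clip(2.5, -0.3)
	fmt.Println(i, q) // 1 -0.3
}
```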
I am new to labview and I need help.
I am using a myRIO with a gyroscope, and when I display the gyroscope values I get noise.
My question is: how can I implement a lowpass filter to reduce the noise in the X, Y and Z rates of the gyroscope?
I searched a lot, but I did not understand how to determine the sampling frequency or the low and high cutoff frequencies.
Thank you so much.
If your data is noisy, you should try to fix the problem before you digitize it. If a physical low-pass filter will do the trick, install one. The better the signal before the DAQ, the better the data will be once it's digitized.
Some other signal-conditioning considerations: reduce the length of wire from the gyroscope to the DAQ to only what's necessary; if possible, eliminate sources of noise from the environment (like any large rotating magnets; seriously, I once helped someone who was complaining about noise while using an unshielded wire next to an MRI machine); and if you're going to add any signal conditioning, try to amplify close to your sensor.
If you still would like to filter in software, there's an example included with LabVIEW that demonstrates both the point-by-point VIs and the array-based VIs. It's called PtByPt and Array Based Filter.vi and can be found in the Example Finder under Analysis, Signal Processing and Mathematics >> Filtering and Conditioning.
Please install this free toolkit from ni.com: http://sine.ni.com/nips/cds/view/p/lang/en/nid/212733
It contains examples and a good ready-to-use application showing how to use the myRIO gyroscope and how to do proper DSP.
The sampling frequency is how fast you sample; look for this value in the ADC settings. For the low and high cutoffs, experiment with the values. Doing an FFT on your signal may help you determine the spectral density and decide where to cut.
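If you want a concrete starting point: a single-pole IIR low-pass is about the simplest software filter, and its coefficient follows directly from the sampling frequency and the cutoff. A sketch in Go (the 200 Hz sample rate and 5 Hz cutoff are made-up example numbers, not myRIO specifics):

```go
package main

import (
	"fmt"
	"math"
)

// lowpass returns a single-pole IIR low-pass filter:
//   y[n] = y[n-1] + alpha*(x[n] - y[n-1]),
// with alpha derived from the sampling rate fs and cutoff fc.
func lowpass(fs, fc float64) func(float64) float64 {
	dt := 1.0 / fs
	rc := 1.0 / (2 * math.Pi * fc)
	alpha := dt / (rc + dt)
	var y float64
	return func(x float64) float64 {
		y += alpha * (x - y)
		return y
	}
}

func main() {
	// e.g. a gyroscope sampled at 200 Hz, cutoff at 5 Hz; run one
	// filter instance per axis (X, Y, Z).
	fx := lowpass(200, 5)
	for _, x := range []float64{0.1, 3.0, 0.2, 0.1, 0.15} { // noisy spike
		fmt.Printf("%.3f ", fx(x))
	}
	fmt.Println()
}
```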
I'm using GPS units and mobile computers to track individual pedestrians' travels. I'd like to "clean" the incoming GPS signal in real time to improve its accuracy. Also, after the fact, and not necessarily in real time, I would like to "lock" individuals' GPS fixes to positions along a road network. Does anyone have techniques, resources, algorithms, or existing software to suggest on either front?
A few things I am already considering in terms of signal cleaning:
- drop fixes for which num. of satellites = 0
- drop fixes for which speed is unnaturally high (say, 600 mph)
And in terms of "locking" to the street network (which I hear is called "map matching"):
- lock to the nearest network edge based on root mean squared error
- when fixes are far away from road network, highlight those points and allow user to use a GUI (OpenLayers in a Web browser, say) to drag, snap, and drop on to the road network
Thanks for your ideas!
I assume you want to "clean" your data to remove erroneous spikes caused by dodgy readings. This is a basic DSP process. There are several approaches you could take; it depends how clever you want it to be.
At a basic level, yes, you can just look for really large figures, but what is a really large figure? Sure, 600 mph is fast, but not if you're on Concorde. While you are looking for a value which is "out of the ordinary", you are effectively hard-coding "ordinary". A better approach is to examine past data to determine what "ordinary" is, and then look for deviations: calculate the variance of the data over a small local window, and exclude the current data point if its z-score exceeds some threshold.
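A sketch of that idea in Go, assuming you keep a small window of recently accepted values (the window size and the threshold are arbitrary choices to tune):

```go
package main

import (
	"fmt"
	"math"
)

// zscoreOutlier reports whether x deviates from the window's mean
// by more than thresh standard deviations. window holds recent
// "good" speeds (or any other signal you want to screen).
func zscoreOutlier(window []float64, x, thresh float64) bool {
	n := float64(len(window))
	var mean float64
	for _, v := range window {
		mean += v
	}
	mean /= n
	var variance float64
	for _, v := range window {
		variance += (v - mean) * (v - mean)
	}
	variance /= n
	sd := math.Sqrt(variance)
	if sd == 0 {
		return x != mean
	}
	return math.Abs(x-mean)/sd > thresh
}

func main() {
	speeds := []float64{3.1, 2.9, 3.4, 3.0, 3.2} // recent walking speeds, mph
	fmt.Println(zscoreOutlier(speeds, 600, 3))   // true: drop the fix
	fmt.Println(zscoreOutlier(speeds, 3.3, 3))   // false: keep it
}
```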
One note: you should use 3 as the minimum number of satellites, not 0. A GPS receiver needs at least three sources to calculate a horizontal location. Every GPS I have used includes a status flag in the data stream; fewer than 3 satellites is reported as "bad" data in some way.
You should also consider "stationary" data. How will you handle the pedestrian standing still for some period of time? Perhaps waiting at a crosswalk or interacting with a street vendor?
Depending on what you plan to do with the data, you may need to suppress those extra data points or average them into a single point or location.
You mention this is for pedestrian tracking, but you also mention a road network. Pedestrians can travel a lot of places where a car cannot, and, indeed, which probably are not going to be on any map you find of a "road network". Most road maps don't have things like walking paths in parks, hiking trails, and so forth. Don't assume that "off the road network" means the GPS isn't getting an accurate fix.
In addition to Andrew's comments, you may also want to consider interference factors such as multipath, and how they show up in your incoming GPS data stream, e.g. the HDOP values in the GSA sentence of NMEA 0183. In my own GPS controller software, I allow user-specified rejection criteria against a range of QA-related parameters.
I also tend to work on a moving-window principle here: consider rejecting data that represents a spike based on the surrounding data in the same window.
Read the position fix indicator to see if the signal is valid (it's one of the fields in the $GPGGA sentence if you parse raw NMEA strings). If it's 0, ignore the message.
Besides that you could look at the combination of HDOP and the number of satellites if you really need to be sure that the signal is very accurate, but in normal situations that shouldn't be necessary.
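A sketch in Go of pulling those fields out of a raw GGA sentence (checksum validation omitted for brevity):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ggaQuality extracts the fix quality, satellite count, and HDOP
// from a raw $GPGGA sentence (fields 6, 7 and 8 in NMEA 0183).
func ggaQuality(sentence string) (fix, sats int, hdop float64, err error) {
	f := strings.Split(strings.TrimSpace(sentence), ",")
	if len(f) < 9 || !strings.HasSuffix(f[0], "GGA") {
		return 0, 0, 0, fmt.Errorf("not a GGA sentence")
	}
	fix, _ = strconv.Atoi(f[6])
	sats, _ = strconv.Atoi(f[7])
	hdop, _ = strconv.ParseFloat(f[8], 64)
	return fix, sats, hdop, nil
}

func main() {
	s := "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
	fix, sats, hdop, _ := ggaQuality(s)
	if fix == 0 {
		fmt.Println("invalid fix, ignore message")
		return
	}
	fmt.Printf("fix=%d sats=%d hdop=%.1f\n", fix, sats, hdop)
}
```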
Of course it doesn't hurt to do some sanity checks on GPS signals (a code sketch of these checks follows the list):
- latitude between -90 and 90;
- longitude between -180 and 180 (or E/W, N/S, 0..90 and 0..180 if you're reading raw NMEA strings);
- speed between 0 and 255 (for normal cars);
- the distance to the previous measurement (based on lat/lon) matches roughly with the indicated speed;
- the time difference with the system time is not larger than some x (unless the system clock cannot be trusted or relies on GPS synchronisation :-) ).
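A sketch of these checks in Go; `fixData`, the km/h speed unit, the 50 km/h tolerance, and the haversine helper are illustrative choices, not a standard:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// fixData is a hypothetical decoded GPS fix used to illustrate the checks.
type fixData struct {
	Lat, Lon float64 // degrees
	SpeedKmh float64
	Time     time.Time
}

// sane applies the sanity checks from the list above. prev may be
// nil for the first fix; maxClockSkew is the allowed difference
// against the system clock.
func sane(f, prev *fixData, maxClockSkew time.Duration) bool {
	if f.Lat < -90 || f.Lat > 90 || f.Lon < -180 || f.Lon > 180 {
		return false
	}
	if f.SpeedKmh < 0 || f.SpeedKmh > 255 {
		return false
	}
	if d := time.Since(f.Time); d > maxClockSkew || d < -maxClockSkew {
		return false
	}
	if prev != nil {
		if dt := f.Time.Sub(prev.Time).Hours(); dt > 0 {
			// Implied speed from the distance to the previous fix
			// should roughly match the reported speed.
			implied := haversineKm(prev.Lat, prev.Lon, f.Lat, f.Lon) / dt
			if math.Abs(implied-f.SpeedKmh) > 50 { // generous tolerance
				return false
			}
		}
	}
	return true
}

// haversineKm is the great-circle distance between two lat/lon points.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	const r = 6371 // Earth radius, km
	rad := math.Pi / 180
	dLat := (lat2 - lat1) * rad
	dLon := (lon2 - lon1) * rad
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(lat1*rad)*math.Cos(lat2*rad)*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * r * math.Asin(math.Sqrt(a))
}

func main() {
	f := &fixData{Lat: 52.1, Lon: 5.2, SpeedKmh: 4, Time: time.Now()}
	fmt.Println(sane(f, nil, 5*time.Second)) // true
}
```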
To do map matching, you basically iterate through your road segments and check which segment is the most likely match given your current position, direction and speed, possibly also taking previous GPS measurements and matches into account.
If you're not doing a real-time application, or if a delay in feedback is acceptable, you can even look into the "future" to see which segment is the most likely.
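As a starting point, here is a minimal nearest-segment matcher in Go, scoring by perpendicular distance in projected coordinates only; a real matcher would also weight heading, speed and continuity with previous matches, as described above:

```go
package main

import (
	"fmt"
	"math"
)

type point struct{ x, y float64 }

// segment is one road edge in projected (planar) coordinates.
type segment struct {
	name string
	a, b point
}

// distToSegment is the distance from p to the closest point on s.
func distToSegment(p point, s segment) float64 {
	dx, dy := s.b.x-s.a.x, s.b.y-s.a.y
	len2 := dx*dx + dy*dy
	t := 0.0
	if len2 > 0 {
		t = ((p.x-s.a.x)*dx + (p.y-s.a.y)*dy) / len2
		t = math.Max(0, math.Min(1, t))
	}
	cx, cy := s.a.x+t*dx, s.a.y+t*dy
	return math.Hypot(p.x-cx, p.y-cy)
}

// matchSegment returns the nearest segment to p.
func matchSegment(p point, roads []segment) segment {
	best, bestD := roads[0], math.Inf(1)
	for _, s := range roads {
		if d := distToSegment(p, s); d < bestD {
			best, bestD = s, d
		}
	}
	return best
}

func main() {
	roads := []segment{
		{"Main St", point{0, 0}, point{100, 0}},
		{"2nd Ave", point{0, 0}, point{0, 100}},
	}
	fmt.Println(matchSegment(point{40, 3}, roads).name) // Main St
}
```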
Doing all of that properly is an art in itself, and this space is too short to go into it deeply.
It's often difficult to decide with 100% confidence which road segment somebody is on. For example, if there are two parallel roads that are equally close to the current position, it becomes a matter of creative heuristics.