I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value. At some point the osmocom Sink will hit a maximum value and stop driving the attached HackRF to output stronger signals.
I'm trying to figure out what that maximum value is.
I've looked through the documentation at a number of different sites, for both the HackRF One and the osmocom blocks, and can't find an answer. I tried looking through the source code itself, but couldn't see any clear indication, although I may have missed something.
http://sdr.osmocom.org/trac/wiki/GrOsmoSDR
https://github.com/osmocom/gr-osmosdr
I also thought of deriving the value empirically, but didn't trust my equipment to get a precise measure of when the block started to hit the rails.
Any ideas?
Thanks
Friedman
I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value.
No, the complex samples z must satisfy |Re(z)| ≤ 1 and |Im(z)| ≤ 1,
because the osmocom sink/the underlying drivers and devices map that -1 – +1 range to the range of the I and Q DAC values.
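For illustration, here's a minimal transmit sketch under that constraint (the device string, frequency, and gain values below are my own placeholder choices, not anything the osmocom documentation prescribes): the signal source amplitude stays below 1.0 so neither I nor Q ever exceeds full scale, and output power is adjusted through the sink's gain setting instead.

    # Hypothetical sketch, assuming a HackRF One is attached and gr-osmosdr is installed.
    from gnuradio import gr, analog
    import osmosdr

    samp_rate = 2e6

    class tx_tone(gr.top_block):
        def __init__(self):
            gr.top_block.__init__(self, "tx_tone")
            # amplitude 0.8 leaves headroom below the +/-1.0 full-scale range
            src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 0.8)
            sink = osmosdr.sink(args="hackrf=0")
            sink.set_sample_rate(samp_rate)
            sink.set_center_freq(433.92e6)   # placeholder frequency
            sink.set_gain(14)                # raise output power here, not by
                                             # pushing samples past 1.0
            self.connect(src, sink)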
You're right, though, that it's hard to measure empirically, because typically the output amplifiers go into nonlinearity close to the maximum DAC outputs, and on top of that, everything is frequency-dependent, so e.g. 0.5+j0.5 at 400 MHz doesn't necessarily produce the same electrical field strength as 0.5+j0.5 at 1 GHz.
That's true for all non-calibrated SDR devices (which, aside from your typical multi-10k-Dollar Signal Generator, is everything, unless you calibrate for all frequencies of interest yourself).
In GNU radio I am trying to use the frequency of one signal to generate another signal of a different frequency. Here is the flow diagram that I am using:
I generate a 50 kHz signal with a signal source block and feed this into a Log Power FFT block. I used the Argmax block to find the FFT bin with the most power and multiply that with a constant. I want to use this result as the input to the complex vco block to generate another signal with a different frequency. All vectors have a length of 4096.
However, looking at the output of the complex QT Gui Time Sink block, the output of the vco is always zero. This is strange to me because using a float QT Gui Time Sink to look at the output of the multiply block (which is also going to the input of the vco block), the result is 50,000 as expected. Why am I only getting zero out of the vco?
Also, my sample rate is set to 1M. I am assuming because of the vector length of 4096 that the sample rate out of the Argmax block will be 1M/4096 = 244. Is this correct?
I am running gnu radio companion on windows 10.
The proposed solution is not a solution. Please don't abuse the signal probe, which is really just that: a probe for slow debugging or purely visual purposes. Every time I use it myself, I see how architecturally bad it is, and I personally think the project should remove it from the block library altogether.
Now, instead of just saying "probe is bad, do something else", let's analyse where your flow graph falls short:
your frequency estimation depends on the argmax of a block that was meant for pure visualization purposes. No, the output rate is not (sampling rate / FFT length); the output rate is roughly the frame rate (but not exactly: that block is terrible and mixes "sample times" with "wall clock times"). Don't do that. If you need something like that, use the FFT block followed by "complex to magnitude squared". You don't even want the logarithm; you're just looking for a maximum.
Instead of looking for a maximum absolute value in an FFT, which is inherently a quantizing frequency estimator, use something that actually gives you an oscillation. There's multiple ways you can do that with a PLL!
your VCO solution probably does what it's programmed to do. You just use an inadequate sensitivity!
The sampling rate you assume in your time sinks is totally off, which is probably why you have the impression of a constant output – it just changes so slowly that you'll not notice.
So, instead, I propose one of the following (either / or):
Use the PLL Freq Det. Feed the output of that into the VCO. Don't scale with a constant, but simply apply the proper sensitivity. Sensitivity is the factor between "input amplitude" and "phase advance per sample on the output in radians". (A sketch of this option follows after this list.)
Use the PLL Carrier recovery. Use a resampler, or some other mathematical method, to generate the new frequency. You haven't told us how that other frequency relates to the input frequency, so I can't give you concrete advice.
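For the first option, a minimal sketch (the sample rate, loop bandwidth and the simulated 50 kHz source are my own stand-ins, not values taken from your flow graph): the PLL Freq Det block outputs the instantaneous frequency in radians per sample, and feeding that straight into the VCO with sensitivity equal to the sample rate reproduces the same frequency; changing the sensitivity, rather than scaling the signal with a constant, then shifts the synthesised frequency.

    # Hedged sketch: PLL Freq Det -> VCO, no Argmax, no probe.
    from gnuradio import gr, blocks, analog
    import math, time

    samp_rate = 1e6

    class freq_follower(gr.top_block):
        def __init__(self):
            gr.top_block.__init__(self, "freq_follower")
            # stand-in for the real input: a 50 kHz tone
            src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 50e3, 1.0)
            throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
            # outputs the tracked frequency in radians per sample
            pll = analog.pll_freqdet_cf(2 * math.pi / 100,
                                        2 * math.pi * 100e3 / samp_rate,
                                        -2 * math.pi * 100e3 / samp_rate)
            # sensitivity = samp_rate: an input of w rad/sample advances the VCO
            # phase by w radians per sample, i.e. the same frequency comes out
            vco = blocks.vco_c(samp_rate, samp_rate, 1.0)
            sink = blocks.null_sink(gr.sizeof_gr_complex)
            self.connect(src, throttle, pll, vco, sink)

    if __name__ == "__main__":
        tb = freq_follower()
        tb.start()
        time.sleep(1)
        tb.stop()
        tb.wait()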
Also notice that this very much suggests this being a case of "I'm trying to recreate an analog approach in digital"; that might be a good approach, but in many cases it's not.
If I might be so brazen: describe why you need to generate that other frequency, and for which purpose, in a post on https://dsp.stackexchange.com or to the GNU Radio mailing list discuss-gnuradio@gnu.org (sign up here). This is really only barely a programming problem; it's really a signal processing problem. And there's a lot of people out there eager to help you find an appropriate solution that actually tackles your problem!
It looks like a better solution was to probe the output of the multiplier using a probe signal block along with a Function Probe Block to create a new variable. This variable could then be used as the frequency value in a separate Signal Source Block that is used to generate the new signal. This flow diagram seems to satisfy the original intended purpose: new flow diagram
I'm going to start off by saying that I'm very new to SDR and GNU Radio. This may be a dumb question, but I have been googling and testing things for about two months now trying to get this to work without success. Any help or pointers would be appreciated!
I'm attempting to use GNU Radio 3.8 to transfer a file using differential QPSK. I've tried to follow the tutorials on the wiki as well as several similar academic papers I found on the internet (which also seem to be based on the wiki tutorial). None of them worked on their own, but by combining what actually works from each one, I managed to create a flowgraph, sans hardware, that does indeed send and receive the data from a file. Here's the Flowgraph and here is a screenshot of the results. The results show the four constellation points, and the data from the file source matches up perfectly with the data having gone through the entire transmit+receive chain. In the simulation I have a throttle block and a channel model block where the LimeSDR Source and LimeSDR Sink blocks would be. So far so good (at least as far as I can tell).
When I actually start transmitting this signal with the SDR, the received data no longer matches up with what is transmitted. Here's the flowgraph I've been using for the transmission. I added a protocol formatter and some FEC blocks that I could have removed for this illustration, but the point is that simply looking at what bits are going into the modulator vs. what's being recovered, the two do not match up. The constellation looks good (as far as I can tell) but the bits are all wrong. Here's a screenshot showing the bits being transmitted. You'll notice in the screenshot of the transmitted signal that the signal has a repeating series of three flat-top "1"s surrounded on both sides by a period of "0"s (at times 1.5 ms and 3.5 ms). This is a screenshot of the received bits. At times 1 ms and 3 ms you can see that it has significantly more transitions between 1 and 0 than it should.
So at this point I'm stumped. The simulation worked but the real world test does not. I've messed around with the RRC filter properties a significant amount. I have no clue if the values I have chosen are correct as I have not found a tutorial or explanation on how to do so. I just looked at some of the example flowgraphs and made some guesses as to how those values were derived and applied those guesses to my use case. It worked well in the simulation so I thought it would be fine in the real world test. I've tried a variety of samples per symbol but my goal is for a 4800 bit per second transfer speed, and using different samples per symbol didn't help anyway. What should I change in order to get this to work?
Bonus question: The constellation object has QPSK and DQPSK, and the constellation modulator has a differential checkbox. What is the best practice combination of selections to get a differential QPSK modulation?
I have three inputs to Merge Signals arriving at different times. The output of Merge Signals appears to wait for all the signals and then output them together. What I want is to have an output for every signal (on the current output) as soon as it arrives.
For example: if I write 1 as the initial value and 5, 5, 5 in all three numerics, with a 3 sec time delay, I will have 6, 7, and 16 in target 1, target 2, and target 3, and overall 16 on the current output. I don't want that to appear all at once on the current output; I want it to appear with the same time layout as it does in the targets.
Please see the attached photo.
Can anyone help me with that?
Thanks.
All nodes in LabVIEW fire when all their inputs arrive. This language uses synchronous data flow, not asynchronous (which is the behavior you were describing).
The output of Merge Signals is a single data structure that contains all the input signals — merged, like the name says. :-)
To get the behavior you want, you need some sort of asynchronous communication. In older versions of LabVIEW, I would tell you to create a queue refnum and go look at examples of a producer/consumer pattern.
But in LabVIEW 2016 and later, right-click on each of the tunnels coming out of your flat sequence and choose "Create>>Channel Writer...". In the dialog that appears, choose the Messenger channel. Wire all the outputs of the new nodes together. This creates an asynchronous wire, which draws very differently from your regular wires. On the wire, right-click and choose "Create>>Channel Reader...". Put the reader node inside a For Loop and wire a 3 to the N terminal. Now you have the behavior that as each block finishes, it will send its data to the loop.
Move the Write nodes inside the Flat Sequence if you want to guarantee the enqueue order. If you wait and do the Writes outside, you'll sometimes get out-of-order data (i.e. when the data generation nodes happen to run quickly).
Side note: I (and most LabVIEW architects) would strongly recommend you avoid using sequence structures as much as possible. They're a bad habit to get into; there's plenty written online about their disadvantages.
I'm working on a project connected to hardware, especially sensors and actuators. Both sensors and actuators can be digital or analog. Examples of all four types:
digital sensor: returns one of several possible states (e.g. "presence"/"absence"/"can't find out")
analog sensor: returns any value from a range, but results are rounded according to a scale interval (e.g. a temperature sensor returns any value from 0 to 60 Celsius with one-digit precision, for example 18.5 C)
digital actuator: can be set to one of several possible states (e.g. a motor that opens a window can set the window "open"/"half-open"/"closed")
analog actuator: can take any value from a range, and the value is also rounded according to a scale interval (e.g. fan rotations per second can be from 0 to 10, for example 9.5)
The question is how I can represent these classes without violating OOP principles, or at least make the design clear and logical.
One of my solutions is on the picture:
But I think this solution is "ugly"
Next solution is here:
This solution is also bad, because of redundant attributes and not uniform methods' names.
What do you suggest? Maybe there exists design pattern that solves my problem?
Thanks for help!
Finally I came to a solution.
In my concrete case, the Sensor and Actuator classes are just abstractions for data about the current value and the socket port number. So I decided to get rid of the difference and make one class for both sensors and actuators, while discrete and continuous devices have their own classes.
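As a rough sketch of what that can look like in code (class and attribute names here are illustrative, not from the actual project): one Device class holds the shared state, and the discrete/continuous split lives in the subclasses.

    # Hypothetical sketch of the design described above.
    class Device:
        def __init__(self, port):
            self.port = port      # socket port number
            self.value = None     # current value

    class DiscreteDevice(Device):
        def __init__(self, port, states):
            super().__init__(port)
            self.states = states  # e.g. ("open", "half-open", "closed")

        def set_value(self, state):
            if state not in self.states:
                raise ValueError("unknown state: %s" % state)
            self.value = state

    class ContinuousDevice(Device):
        def __init__(self, port, minimum, maximum, step):
            super().__init__(port)
            self.minimum, self.maximum, self.step = minimum, maximum, step

        def set_value(self, value):
            if not (self.minimum <= value <= self.maximum):
                raise ValueError("value out of range")
            # round to the scale interval, e.g. 0.5 for the fan example
            self.value = round(value / self.step) * self.step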
I would solve this by using two different operations to return either analog or digital values. The reason is simply that you will not really mix them. If you read temperatures you will call readAnalog() to get the value, and for a switch you would call readBinary(). This way you can make reading a float from a binary device raise an exception.
Also, I would not mix this in a Device. Actuators and sensors are very different and the only thing they might inherit is the plastic housing (and even that seems strange).
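If it helps, here is a small sketch of that separation (names and types are purely illustrative): sensors expose typed read operations, the mismatched one raises, and actuators live in their own hierarchy.

    # Hypothetical sketch of the readAnalog()/readBinary() suggestion.
    class Sensor:
        def read_analog(self):
            raise TypeError("not an analog sensor")

        def read_binary(self):
            raise TypeError("not a binary sensor")

    class TemperatureSensor(Sensor):
        def read_analog(self):
            return 18.5  # placeholder for the real hardware read

    class PresenceSensor(Sensor):
        def read_binary(self):
            return True  # placeholder for the real hardware read

    class WindowActuator:
        # actuators are deliberately not part of the Sensor hierarchy
        STATES = ("open", "half-open", "closed")

        def set_state(self, state):
            if state not in self.STATES:
                raise ValueError(state)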
I'm using GPS units and mobile computers to track individual pedestrians' travels. I'd like to in real time "clean" the incoming GPS signal to improve its accuracy. Also, after the fact, not necessarily in real time, I would like to "lock" individuals' GPS fixes to positions along a road network. Have any techniques, resources, algorithms, or existing software to suggest on either front?
A few things I am already considering in terms of signal cleaning:
- drop fixes for which num. of satellites = 0
- drop fixes for which speed is unnaturally high (say, 600 mph)
And in terms of "locking" to the street network (which I hear is called "map matching"):
- lock to the nearest network edge based on root mean squared error
- when fixes are far away from road network, highlight those points and allow user to use a GUI (OpenLayers in a Web browser, say) to drag, snap, and drop on to the road network
Thanks for your ideas!
I assume you want to "clean" your data to remove erroneous spikes caused by dodgy readings. This is a basic DSP process. There are several approaches you could take; it depends how clever you want it to be.
At a basic level, yes, you can just look for really large figures, but what is a really large figure? Yes, 600 mph is fast, but not if you're on Concorde. While you are looking for a value which is "out of the ordinary", you are effectively hard-coding "ordinary". A better approach is to examine past data to determine what "ordinary" is, and then look for deviations. You might want to consider calculating the variance of the data over a small local window and then checking whether the z-score of your current data is greater than some threshold, and if so, excluding it.
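A minimal sketch of that moving-window idea (window size and threshold below are arbitrary choices for illustration): keep the last few speed readings, and reject a new one whose z-score against that window exceeds a threshold.

    # Hedged sketch: local-window z-score outlier rejection for speed readings.
    from collections import deque
    import statistics

    class SpeedOutlierFilter:
        def __init__(self, window=20, z_threshold=3.0):
            self.window = deque(maxlen=window)
            self.z_threshold = z_threshold

        def is_outlier(self, speed):
            if len(self.window) >= 5:
                mean = statistics.mean(self.window)
                stdev = statistics.pstdev(self.window) or 1e-9
                if abs(speed - mean) / stdev > self.z_threshold:
                    return True   # reject; keep it out of the window
            self.window.append(speed)
            return False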
One note: you should use 3 as the minimum satellites, not 0. A GPS needs at least three sources to calculate a horizontal location. Every GPS I have used includes a status flag in the data stream; less than 3 satellites is reported as "bad" data in some way.
You should also consider "stationary" data. How will you handle the pedestrian standing still for some period of time? Perhaps waiting at a crosswalk or interacting with a street vendor?
Depending on what you plan to do with the data, you may need to suppress those extra data points or average them into a single point or location.
You mention this is for pedestrian tracking, but you also mention a road network. Pedestrians can travel a lot of places where a car cannot, and, indeed, which probably are not going to be on any map you find of a "road network". Most road maps don't have things like walking paths in parks, hiking trails, and so forth. Don't assume that "off the road network" means the GPS isn't getting an accurate fix.
In addition to Andrew's comments, you may also want to consider interference factors such as multipath, and how they are affected in your incoming GPS data stream, e.g. HDOPs in the GSA line of NMEA0183. In my own GPS controller software, I allow user specified rejection criteria against a range of QA related parameters.
I also tend to work on a moving window principle in this regard, where you can consider rejecting data that represents a spike based on surrounding data in the same window.
Read the posfix to see if the signal is valid (somewhere in the $GPGGA sentence if you parse raw NMEA strings). If it's 0, ignore the message.
Besides that you could look at the combination of HDOP and the number of satellites if you really need to be sure that the signal is very accurate, but in normal situations that shouldn't be necessary.
Of course it doesn't hurt to do some sanity checks on GPS signals (a sketch follows the list):
latitude between -90..90;
longitude between -180..180 (or E..W, N..S, 0..90 and 0..180 if you're reading raw NMEA strings);
speed between 0 and 255 (for normal cars);
distance to the previous measurement (based on lat/lon) matches roughly with the indicated speed;
time difference with system time not larger than x (unless the system clock cannot be trusted or relies on GPS synchronisation :-) );
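A sketch of those checks in code (the Fix fields and the 255 km/h ceiling are illustrative; adapt them to your receiver's output): each incoming fix is tested against the list above, including a rough consistency check between the reported speed and the distance covered since the previous fix.

    # Hedged sketch of the sanity checks listed above.
    from dataclasses import dataclass
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class Fix:
        lat: float        # decimal degrees
        lon: float        # decimal degrees
        speed_kmh: float
        timestamp: float  # seconds
        quality: int      # e.g. the GGA fix-quality field, 0 = invalid

    def haversine_km(a, b):
        """Great-circle distance between two fixes in kilometres."""
        dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
        h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(h))

    def is_plausible(fix, prev=None, max_speed_kmh=255):
        if fix.quality == 0:
            return False
        if not (-90 <= fix.lat <= 90 and -180 <= fix.lon <= 180):
            return False
        if not (0 <= fix.speed_kmh <= max_speed_kmh):
            return False
        if prev is not None:
            dt_h = (fix.timestamp - prev.timestamp) / 3600.0
            if dt_h > 0:
                implied = haversine_km(prev, fix) / dt_h
                if implied > 2 * max_speed_kmh:  # distance disagrees with speed
                    return False
        return True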
To do map matching, you basically iterate through your road segments, and check which segment is the most likely for your current position, direction, speed and possibly previous GPS measurements and matches.
If you're not doing a realtime application, or if a delay in feedback is acceptable, you can even look into the 'future' to see which segment is the most likely.
Doing all that properly is an art by itself, and this space here is too short to go into it deeply.
It's often difficult to decide with 100% confidence on which road segment somebody resides. For example, if there are 2 parallel roads that are equally close to the current position it's a matter of creative heuristics.
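As a toy illustration of that iteration (planar coordinates and a pure nearest-distance criterion; a real matcher would also weigh heading, speed and previous matches as described above):

    # Hedged sketch: snap a point to the nearest road segment.
    def closest_point_on_segment(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return a
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        return (ax + t * dx, ay + t * dy)

    def match_to_network(point, segments):
        """segments: iterable of ((x1, y1), (x2, y2)) road edges."""
        def dist2(q):
            return (q[0] - point[0]) ** 2 + (q[1] - point[1]) ** 2
        return min(((seg, closest_point_on_segment(point, *seg)) for seg in segments),
                   key=lambda item: dist2(item[1]))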