I'm trying to implement BPSK demodulation in GNU Radio using a USRP N210, but I'm not quite getting the results I was hoping for.
I'm using the "Symbol Sync" block for timing synchronization and the "Costas Loop" block for carrier recovery, as you can see in the flowgraph below:
The results I'm getting are the following; as you can see, I'm getting two separate lines where I should be getting only two dots:
I think there might be errors in my parameters. Do you have any idea what could be wrong?
I would say that this is a good first approximation of a BPSK receiver.
You do not show how your transmitter is organized, so many things have to be assumed and there is no clear answer to your questions, only recommendations.
As mentioned in the comments, your sample rate is too low and cannot be achieved by the USRP. Even 200 ksps is close to the limit of the radio: the ADC clock rate of the N210 is 100 MHz, so to deliver 200 ksps the radio has to decimate by a factor of 500, and the maximum decimation factor for the N210 is 512. In other words, 200 ksps is already right at the edge of what the radio can do.
There may also be a frequency difference between TX and RX that is too large for the Costas loop to compensate, so this may be the source of the problem.
You are using a shaping filter in the transmitter, but there is no matched filter in the receiver. The matched filter could be incorporated into the timing synchronization (the symbol clock), but I would suggest splitting matched filtering and synchronization into separate blocks.
Finally, at the end of the receiver you need a demodulator; see the sketch below.
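Here's a minimal sketch of such a receive chain in Python (GNU Radio 3.8-era API). The sample rate, symbol rate, and RRC roll-off are assumptions and must match your transmitter; the null source/sink stand in for your USRP source and actual data sink:

```python
from gnuradio import gr, blocks, digital, filter
from gnuradio.filter import firdes

samp_rate = 1e6    # assumed RX sample rate
sym_rate = 250e3   # assumed symbol rate
sps = int(samp_rate / sym_rate)

# Matched RRC filter as its own block; roll-off must match the TX shaping filter.
rrc_taps = firdes.root_raised_cosine(1.0, samp_rate, sym_rate, 0.35, 11 * sps)

tb = gr.top_block()
src = blocks.null_source(gr.sizeof_gr_complex)   # stand-in for the USRP source
matched = filter.fir_filter_ccf(1, rrc_taps)
sync = digital.symbol_sync_cc(                   # timing synchronization
    digital.TED_ZERO_CROSSING, sps, 0.045, 1.0, 1.0, 1.5, 1,
    digital.constellation_bpsk().base(), digital.IR_MMSE_8TAP, 128, [])
costas = digital.costas_loop_cc(0.02, 2)         # order 2 for BPSK
demod = digital.constellation_decoder_cb(        # the missing demodulator/slicer
    digital.constellation_bpsk().base())
snk = blocks.null_sink(gr.sizeof_char)
tb.connect(src, matched, sync, costas, demod, snk)
```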
In GNU Radio I am trying to use the frequency of one signal to generate another signal of a different frequency. Here is the flow diagram that I am using:
I generate a 50 kHz signal with a Signal Source block and feed it into a Log Power FFT block. I use the Argmax block to find the FFT bin with the most power and multiply that by a constant. I want to use this result as the input to the Complex VCO block to generate another signal with a different frequency. All vectors have a length of 4096.
However, looking at the output of the complex QT GUI Time Sink block, the output of the VCO is always zero. This is strange to me, because when I use a float QT GUI Time Sink to look at the output of the Multiply block (which also goes to the input of the VCO block), the result is 50,000 as expected. Why am I only getting zero out of the VCO?
Also, my sample rate is set to 1M. I am assuming that, because of the vector length of 4096, the sample rate out of the Argmax block will be 1M/4096 ≈ 244. Is this correct?
I am running GNU Radio Companion on Windows 10.
The proposed solution is not a solution. Please don't abuse the signal probe, which is really just that: a probe for slow, debugging, or purely visual purposes. Every time I use it myself, I see how architecturally bad it is, and I personally think the project should remove it from the block library altogether.
Now, instead of just saying "probe is bad, do something else", let's analyse where your flowgraph falls short:
Your frequency estimation depends on the argmax of a block that was meant for pure visualization purposes. No, the output rate is not (sample rate / FFT length); the output rate is roughly the frame rate (and not even exactly that: the block is terrible and mixes "sample time" with "wall-clock time"). Don't do that. If you need something like that, use the FFT block followed by Complex to Mag^2. You don't even want the logarithm; you're just looking for a maximum.
Instead of looking for the maximum magnitude in an FFT, which is an inherently quantizing frequency estimator, use something that actually gives you an oscillation. There are multiple ways you can do that with a PLL!
Your VCO solution probably does what it's programmed to do. You're just using an inadequate sensitivity!
The sampling rate you assume in your time sinks is totally off, which is probably why you have the impression of a constant output; it just changes so slowly that you don't notice.
So, I propose one of the following instead:
Use the PLL Freq Det block. Feed its output into the VCO. Don't scale with a constant, but simply apply the proper sensitivity. Sensitivity is the factor between the input amplitude and the phase advance per sample on the output, in radians. (See the sketch after this list.)
Use the PLL Carrier Recovery block. Use a resampler, or some other mathematical method, to generate the new frequency. You haven't told us how that other frequency relates to the input frequency, so I can't give you concrete advice.
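To illustrate the first option, here's a minimal Python sketch. The 1 Msps rate and the 50 kHz test tone are assumptions; the key detail is the VCO sensitivity, set to the sample rate so that "radians per sample" at the input becomes the same frequency at the output:

```python
import math
from gnuradio import gr, blocks, analog

samp_rate = 1e6

tb = gr.top_block()
src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 50e3, 1.0)
# pll_freqdet_cf outputs its frequency estimate in radians per sample.
pll = analog.pll_freqdet_cf(
    2 * math.pi / 100,                 # loop bandwidth
    2 * math.pi * 100e3 / samp_rate,   # max expected frequency (rad/sample)
    -2 * math.pi * 100e3 / samp_rate)  # min expected frequency (rad/sample)
# vco_c sensitivity is rad/s per input unit; using samp_rate here makes the
# phase advance per output sample equal to the input value.
vco = blocks.vco_c(samp_rate, samp_rate, 1.0)
snk = blocks.null_sink(gr.sizeof_gr_complex)
tb.connect(src, pll)
tb.connect(pll, vco)
tb.connect(vco, snk)
```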
Also notice that this very much suggests a case of "I'm trying to recreate an analog approach in digital"; that might be a good approach, but in many cases it's not.
If I may be so brazen: describe why you need to generate that other frequency, and for which purpose, in a post on https://dsp.stackexchange.com or on the GNU Radio mailing list (discuss-gnuradio@gnu.org). This only barely is a programming problem; it really is a signal processing problem, and there are a lot of people out there eager to help you find an appropriate solution that actually tackles your problem!
It looks like a better solution was to probe the output of the multiplier using a Probe Signal block along with a Function Probe block to create a new variable. This variable could then be used as the frequency value in a separate Signal Source block that generates the new signal. This flow diagram seems to satisfy the original intended purpose: new flow diagram
I'm going to start off by saying that I'm very new to SDR and GNU Radio. This may be a dumb question, but I have been googling and testing things for about two months now trying to get this to work without success. Any help or pointers would be appreciated!
I'm attempting to use GNU Radio 3.8 to transfer a file using differential QPSK. I've tried to follow the tutorials on the wiki as well as several similar academic papers I found on the internet (which also seem to be based on the wiki tutorial). None of them worked on their own, but by combining what actually works from each one, I managed to create a flowgraph, sans hardware, that does indeed send and receive the data from a file. Here's the Flowgraph and here is a screenshot of the results. The results show the four constellation points, and the data from the file source matches up perfectly with the data having gone through the entire transmit+receive chain. In the simulation I have a Throttle block and a Channel Model block where the LimeSDR Source and LimeSDR Sink blocks would be. So far so good (at least as far as I can tell).
When I actually start transmitting this signal with the SDR, the received data no longer matches what is transmitted. Here's the flowgraph I've been using for the transmission. I added a protocol formatter and some FEC blocks that I could have removed for this illustration, but the point is that simply looking at what bits go into the modulator versus what's being recovered, the two do not match up. The constellation looks good (as far as I can tell) but the bits are all wrong. Here's a screenshot showing the bits being transmitted. You'll notice in the screenshot of the transmitted signal that it has a repeating series of three flat-top "1's" surrounded on both sides by a period of "0's" (at times 1.5 ms and 3.5 ms). This is a screenshot of the received bits. At times 1 ms and 3 ms you can see that it has significantly more transitions between 1 and 0 than it should.
So at this point I'm stumped. The simulation worked, but the real-world test does not. I've messed around with the RRC filter properties a significant amount. I have no clue whether the values I have chosen are correct, as I have not found a tutorial or explanation of how to choose them; I just looked at some of the example flowgraphs, made some guesses as to how those values were derived, and applied those guesses to my use case. That worked well in the simulation, so I thought it would be fine in the real-world test. I've tried a variety of samples-per-symbol values, but my goal is a 4800 bit per second transfer speed, and using different samples per symbol didn't help anyway. What should I change in order to get this to work?
Bonus question: the Constellation Object has QPSK and DQPSK options, and the Constellation Modulator has a Differential checkbox. What is the best-practice combination of selections to get differential QPSK modulation?
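For reference, one commonly recommended combination is a plain (non-differential) QPSK constellation object with the modulator's Differential checkbox enabled, so that differential encoding is applied exactly once. A hedged sketch using the Python hier block that the Constellation Modulator wraps (the parameter values are assumptions):

```python
from gnuradio import digital

# Plain QPSK constellation; the differential step lives in the modulator.
const = digital.constellation_qpsk().base()

mod = digital.generic_mod(   # what the Constellation Modulator instantiates
    constellation=const,
    differential=True,       # differential encoding enabled here only
    samples_per_symbol=4,    # assumed
    excess_bw=0.35,          # assumed RRC roll-off
    verbose=False,
    log=False)
```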
I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value. At some point the osmocom Sink will hit a maximum value and stop driving the attached HackRF to output stronger signals.
I'm trying to figure out what that maximum value is.
I've looked through the documentation at a number of different sites, for both the HackRF One and the osmocom blocks, and can't find an answer. I tried looking through the source code itself, but couldn't see any clear indication, although I may have missed something.
http://sdr.osmocom.org/trac/wiki/GrOsmoSDR
https://github.com/osmocom/gr-osmosdr
I also thought of deriving the value empirically, but didn't trust my equipment to get a precise measure of when the block started to hit the rails.
Any ideas?
Thanks
Friedman
I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value.
No, the complex samples z must satisfy |Re(z)| ≤ 1 and |Im(z)| ≤ 1, because the osmocom sink, via the underlying drivers and devices, maps that −1 to +1 range to the range of the I and Q DAC values.
You're right, though, that it's hard to measure empirically, because typically the output amplifiers go into nonlinearity close to the maximum DAC outputs, and on top of that everything is frequency-dependent, so e.g. 0.5 + j0.5 at 400 MHz doesn't necessarily produce the same electrical field strength as 0.5 + j0.5 at 1 GHz.
That's true for all non-calibrated SDR devices (which, aside from your typical multi-10k-dollar signal generator, is everything, unless you calibrate for all frequencies of interest yourself).
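In practice, the takeaway is to scale your samples so that |Re(z)| and |Im(z)| stay at or below 1.0 before they reach the sink. A minimal sketch; the tone frequency, center frequency, and 0.8 headroom factor are arbitrary assumptions:

```python
from gnuradio import gr, blocks, analog
import osmosdr

samp_rate = 2e6
tb = gr.top_block()
# A unit-amplitude complex tone already reaches |Re| = |Im| = 1.0 at its
# peaks, so scale it down to keep some headroom below the DAC rails.
src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 1.0)
scale = blocks.multiply_const_cc(0.8)
sink = osmosdr.sink(args="hackrf=0")
sink.set_sample_rate(samp_rate)
sink.set_center_freq(433.92e6)   # assumed test frequency
tb.connect(src, scale, sink)
```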
I'm trying to make an embedded thingy that detects the presence of a 19kHz tone from an electret microphone. I have a multistage bandpass filter/preamp hooked into the ADC of a microcontroller, and am trying to figure out the best way to digitally condition the signal in order to detect the presence of the tone.
I have implemented a Goertzel filter to look for the frequency of interest. My ADC takes 400 samples at a frequency of 4000 kHz, then the micro processes the block and adds the result to a 100-point moving average. Looking at the terminal output after each block, I can definitely see an overall jump in the numbers when the transmitter is turned on. However, there's a lot of noise in the power readings when it is on, and the noise floor in the room I'm in keeps changing, too. I am not sure how to tune the thresholding level and filter out all of this noise.
I've tried a few things, but they all seem to be pretty noisy as the baseline of my signal drifts all over the place:
Preprocessing the block with Hamming/Blackman windows
Ratio of total received block power to band power in filter output
Ratio of power in the band of interest (19 kHz) to a band outside of, but near, the band of interest (18.5 kHz)
EDIT: I've done some more reading since posting this. Is calculating (2*Ew)/(N*Et), where Ew is the output from my filter and Et is the sum of the squares in my block, the best way to do this test?
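For reference, here's how that statistic could be computed for one block; plain Python rather than microcontroller C, with hypothetical parameters:

```python
import math

def goertzel_power(samples, fs, f_target):
    """Squared magnitude of the Goertzel output for one block (Ew)."""
    n = len(samples)
    k = round(n * f_target / fs)        # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def relative_tone_power(samples, fs, f_target):
    """The (2*Ew)/(N*Et) test statistic from the edit above."""
    n = len(samples)
    ew = goertzel_power(samples, fs, f_target)
    et = sum(x * x for x in samples)    # total block energy (Et)
    return 2.0 * ew / (n * et) if et else 0.0
```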
Any advice on how to deal with this, and/or a better method of signal extraction?
Thanks!
I am new to LabVIEW and I need help.
I am using a myRIO with a gyroscope, and when I display the gyroscope values I get noise.
My question is: how can I implement a low-pass filter to reduce the noise in the X, Y, and Z rates of the gyroscope?
I have searched a lot, but I did not understand how to determine the sampling frequency or the low and high cutoff frequencies.
Thank you so much.
If your data is noisy, you should try to fix the problem before you digitize the data. If a physical low-pass filter will do the trick, install one. The better the signal before the DAQ, the better the data will be once it's digitized.
Some other signal-conditioning considerations: reduce the length of wire from the gyroscope to the DAQ to only what's necessary; if possible, eliminate any sources of noise from the environment (like any large rotating magnets: seriously, I once helped someone who was complaining about noise when they were using an unshielded wire next to an MRI machine); and if you're going to add any signal conditioning, try to amplify close to your sensor.
If you still would like to filter in software, there's an example included with LabVIEW that demonstrates both the point-by-point VIs and the array-based VIs. It's called PtByBp and Array Based Filter.vi and can be found in the Example Finder under Analysis, Signal Processing and Mathematics >> Filtering and Conditioning.
Please install this free toolkit from ni.com: http://sine.ni.com/nips/cds/view/p/lang/en/nid/212733
It contains examples and good, ready-to-use applications showing how to use the myRIO gyroscope and how to do proper DSP.
The sampling frequency is how fast you sample; look for this value in the ADC settings. For the low and high cutoffs, play with those values. Doing an FFT on your signal may help you determine its spectral density and decide where to cut. A minimal filtering sketch follows.
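To make that concrete, here is a single-pole low-pass sketch, written in Python but directly reproducible with the point-by-point filter VIs mentioned above. The 100 Hz sample rate and 5 Hz cutoff are placeholders; use your actual gyro read rate:

```python
import math

fs = 100.0   # sampling frequency: how often you read the gyro (Hz), assumed
fc = 5.0     # cutoff frequency: rates above this get attenuated (Hz), assumed
alpha = 1.0 - math.exp(-2.0 * math.pi * fc / fs)   # smoothing coefficient

def lowpass_step(state, new_sample):
    """One point-by-point update; call once per reading, per axis."""
    return state + alpha * (new_sample - state)

# Keep one filter state per axis and update it with every new sample.
x_f = y_f = z_f = 0.0
for x_raw, y_raw, z_raw in [(0.5, -1.2, 0.1)] * 10:   # stand-in rate readings
    x_f = lowpass_step(x_f, x_raw)
    y_f = lowpass_step(y_f, y_raw)
    z_f = lowpass_step(z_f, z_raw)
```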