Simple QPSK transmitter, large pulsating sidelobes - gnuradio

I have a simple flowgraph for a QPSK transmitter with a USRP.
After execution, there are large sidelobes that pulsate.
During the periods of large sidelobes, there is a drop in the amplitude of the main lobe.
There are no such pulsations if I build a similar transmitter in Matlab.
I suspect discontinuities in the source.
Comments and advice are appreciated.

Your pool of random data is far too short; you'll see data periodicity in the spectrum very quickly, and it might be that this is exactly what is happening. So try num_samples = 2**20 instead.
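For instance, a longer pool can be generated once and repeated; a minimal sketch, assuming the stock GNU Radio vector source (the random-byte data here is just for illustration):

import numpy as np
from gnuradio import blocks

num_samples = 2**20  # much longer random pool, as suggested above
data = np.random.randint(0, 256, num_samples, dtype=np.uint8)
src = blocks.vector_source_b(data.tolist(), repeat=True)  # loop over the pool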
You can observe your transmit spectrum yourself before even transmitting it: use the Qt GUI frequency sink or waterfall sink with an FFT length that corresponds to the FFT length you use in gqrx.
Your sample rate is at the lowest end of all possible sampling rates. Here, the roll-off of the interpolation filters inside the USRP will definitely show. Don't do that to yourself. Use sps = 16 and samp_rate = 1e6 instead.
Make sure you're not getting any underruns in your transmitter, nor overruns in your receiver. If that happens at these incredibly low sampling rates, something is wrong with your computer setup.

The changes make no difference. The following is with 2**20 samples, a 1 MHz sample rate and 20 samples per symbol. There is no underrun.
At a 5 MHz sample rate I start receiving underruns.

I found the problem and a solution.
The problem is that the level of the signal after the modulator is too strong for the USRP's input. After the modulator, the absolute value of the signal reaches 9. I don't know the maximum signal level the USRP expects; I presume something like 1 peak-to-peak.
The solution is to restrict the level by multiplying with a constant. With constant = 0.5 there are still distortions; a value of 0.2 is OK.
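In Python terms the fix amounts to something like this; a minimal sketch, not the actual flowgraph (the variable names are mine, but multiply_const_cc is the stock GNU Radio block):

from gnuradio import blocks

# Scale the modulated signal so its magnitude stays well below 1.0,
# since full scale at the UHD sink input is +/-1.0.
scale = blocks.multiply_const_cc(0.2)
# tb.connect(modulator, scale, usrp_sink)  # wiring inside the flowgraph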
Here is the new flowgraph:

Related

GNURadio Companion Blocks for Z-Wave using RTL-SDR dongle

I'm using a generic RTL-SDR dongle to receive frames of the Z-Wave protocol, with real Z-Wave devices. I use scapy-radio and I've also downloaded EZ-Wave. However, neither of them implements blocks for all Z-Wave data rates, modulations and codings. I've received some frames using the original EZ-Wave solution, but I assume I can't receive frames at all data rates, codings and modulations with it. Now I'm trying to build on their blocks to implement all of them.
The Z-Wave protocol uses these modulations, data rates and codings:
9.6 kbps - FSK - Manchester
40 kbps - FSK - NRZ
100 kbps - GFSK - NRZ
These are my actual blocks (not able to receive anything at all right now):
For example, I will explain my view on blocks for receiving at
9.6 kbps - FSK - Manchester
RTL-SDR Source
variable center_freq = 869500000
variable r1_freq_offset = 800e3
Ch0: Frequency: center_freq - r1_freq_offset, so I've got 868.7 MHz on the RTL-SDR Source block.
Frequency Xlating FIR Filter
Center frequency = -800 kHz, to get a frequency of 868.95 MHz (Europe). To be honest, I'm not sure why I do this and I need an explanation. I'm trying to configure these blocks according to EZ-Wave's implementation of the blocks for 40 kbps FSK NRZ (as I assume). They use a 2M sample rate and different configurations, which I did not understand.
Taps = firdes.low_pass(1,samp_rate_1,samp_rate_1/2,5e3,firdes.WIN_HAMMING). I don't understand what the transition bw should be (5e3 in my case).
Sample rate = 19.2e3, because the data rate/baud is 9.6 kbps and, according to the Nyquist–Shannon sampling theorem, the sampling rate should be at least double the data rate, so 2*9.6 = 19.2. So I'm trying to resample the default 2M from the source down to 19.2k.
Simple squelch
I use the default value (-40) and I'm not sure if I should change this or not.
Quadrature Demod
should do the FSK demodulation, and I use the default value of gain. I'm not sure if this is the right way to do FSK demodulation.
Gain = samp_rate_1/(2*math.pi*20e3/8.0)
Low Pass Filter
Sample rate = 19.2k to use the same new sample rate
Cutoff Freq = 9.6k; I assume this according to https://nccgroup.github.io/RFTM/fsk_receiver.html
Transition width = 4.8k (half the 9.6k cutoff)
Clock Recovery MM
Most of the parameters are default.
Omega = 2, because omega = samp_rate/baud = 19.2k/9.6k
Binary Slicer
is for getting the binary code of the signal
Zwave PacketSink 9.6
should do the Manchester decoding.
I would like to ask what I should change in my blocks to achieve proper reception of Z-Wave frames at all data rates, modulations and codings. When I start receiving, I'm able to see messages from my devices on the FFT sink and Waterfall sink. The message debug doesn't print packets (like the original EZ-Wave solution does), but only:
Looking for sync : 575555aa
Looking for sync : 565555aa
Looking for sync : aa5555aa
What should the value of frame_shift_register be, according to the C code for Manchester decoding (ZWave PacketSink 9.6)? I've seen a similar post, but this is a bit different and, to be honest, I'm stuck here.
I will be grateful for any help.
Let's look at the GFSK case. First of all, the sampling rate of the RTL source, 2 Mbaud, is pretty high. For the maximum data rate, 100 kbps GFSK, a sample rate of say 400~500 kbaud will do just fine. There is also the power squelch block. This block prevents signals below a certain threshold from passing. This is not good, because it filters out low-power signals that may contain information. There is also the sample-rate issue between the lowpass filter and the MM clock recovery block. The output of the symbol recovery block should be 100 kbaud (because for GFSK, sample rate = symbol rate). Using the omega value of 2 and working backward, the input to the MM block should be 200 kbaud. But the lowpass filter produces samples at 2 Mbaud, 10 times more than expected. You have to do proper decimation.
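To make the rate bookkeeping concrete, here is the arithmetic from that paragraph as a sketch (the numbers are the ones discussed above):

source_rate = 2e6    # RTL-SDR output
symbol_rate = 100e3  # GFSK: output of symbol recovery = symbol rate
omega = 2            # MM clock recovery samples per symbol
mm_input_rate = omega * symbol_rate            # 200k expected at the MM input
decimation = int(source_rate / mm_input_rate)  # = 10
print(decimation)  # set this as the decimation of the filter before the MM block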
I implemented a GFSK receiver once for our CubeSat. Timing recovery was done by the PFB block, which is more reliable than the MM one. You can find the paper here:
https://www.researchgate.net/publication/309149646_Software-defined_radio_transceiver_for_QB50_CubeSat_telemetry_and_telecommand?_sg=HvZBpQBp8nIFh6mIqm4yksaAwTpx1V6QvJY0EfvyPMIz_IEXuLv2pODOnMToUAXMYDmInec76zviSg.ukZBHrLrmEbJlO6nZbF4X0eyhFjxFqVW2Q50cSbr0OHLt5vRUCTpaHi9CR7UBNMkwc_KJc1PO_TiGkdigaSXZA&_sgd%5Bnc%5D=1&_sgd%5Bncwor%5D=0
Some more details on the receiver could also be found here:
GFSK modulation/demodulation with GNU Radio and USRP
M.
I appreciate your answer; I've changed my sample rates. Now I'm still working on 9.6 kbps, FSK demodulation and Manchester decoding. Currently, the output from my M&M clock recovery looks like this:
I would like to ask what you think about this signal. As I said, it should be FSK demodulation, and then I should use Manchester decoding. Do I still need to use the PFB block? Primarily I have to do 9.6 kbps, FSK and Manchester, so I will look at 100 kbps GFSK NRZ if there is some time left.
Sample rate is 1M because of RTL-SDR dongle limitations (225001 to 300000 and 900001 to 3200000).
Current blocks:
I don't understand:
Taps of the Frequency Xlating FIR Filter: firdes.low_pass(1,samp_rate_1,40e3,20e3,firdes.WIN_HAMMING)
Cutoff Freq and Transition Width of the Low Pass Filter
Clock Recovery M&M as well, so consider its values "random".
ClockRecovery Output:
I was trying to use the PFB block according to your work on ResearchGate. However, I was unsuccessful, because I still don't understand all the science behind clock recovery.
Doing the low-pass filtering twice is because the original Z-Wave blocks from scapy-radio for 40 kbps, FSK and NRZ coding are made like this (and it works):
So I thought it would just be a matter of changing a few parameters and the decoder (Zwave PacketSink 9.6).
I also uploaded my current blocks here.
Moses Browne Mwakyanjala, I'm also trying to implement that thing according to your work.
Maybe there is a problem with the clock recovery and Manchester decoding. Manchester coding uses the transitions 0->1 and 1->0 to encode 0s and 1s. How can I properly configure the clock recovery to achieve the correct sample rate and transitions for Manchester decoding? The Manchester decoder (Z-Wave PacketSink 9.6) is able to find the preamble but ends up only looking for sync.
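For what it's worth, here is the Manchester step as I understand it, in code form; a sketch only, and the 01/10 polarity is an assumption that may be inverted for Z-Wave:

def manchester_decode(chips):
    # Decode pairs of chips (half-bits) into bits; assumes the stream is
    # already aligned on symbol boundaries by the clock recovery.
    bits = []
    for first, second in zip(chips[0::2], chips[1::2]):
        if first == second:
            break  # coding violation: no mid-bit transition
        bits.append(1 if (first, second) == (0, 1) else 0)
    return bits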
I would also like to ask you where I can find my modulation index "h" mentioned in your work.
Thank you

How can I calculate the Noise Floor in GNU Radio Companion?

To my understanding, the noise floor of each USRP may be different. I want to know how I can calculate the noise floor without going into the FFT plot and spotting it manually. I want to know if there is a block in GNU Radio that will calculate this, or a chain of blocks I can use to calculate it. Please provide a block diagram in your answer (block 1 ---> block 2 ---> ...etc.).
For my application, let's say I have a QT GUI frequency sink that is showing all noise at the moment. I want to calculate the noise floor so that I have a value that represents "no signal present", i.e. noise. Once I have this value, I plan to set a threshold 5 dB higher than the noise floor to indicate that a signal has been detected. I've been able to more or less eyeball the average noise value from the QT GUI frequency sink, but that's not good enough for me. I want to be able to calculate it, rather than having to look at the plot to update the noise value every time I change USRPs.
For instance:
You can see the average noise value here is around -55 dB. I want to calculate this without having to eyeball it. This way, when a signal gets transmitted at (in this example) 0 Hz, the power will rise above the floor and I can see that a signal was detected.
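One way to compute that number instead of eyeballing it is to average the FFT bin powers over time and take the median across bins; a sketch in Python/numpy (the helper name is mine, not a stock GRC block; in a flowgraph a rough equivalent chain would be Complex to Mag^2 ---> Moving Average ---> Log10):

import numpy as np

def noise_floor_db(iq, fft_len=1024, n_ffts=100):
    # Average the windowed FFT power over many frames, then take the
    # median across bins so an occasional signal doesn't bias the floor.
    # Assumes iq holds at least fft_len * n_ffts samples.
    psd = np.zeros(fft_len)
    for i in range(n_ffts):
        seg = iq[i * fft_len:(i + 1) * fft_len]
        psd += np.abs(np.fft.fft(seg * np.hanning(fft_len))) ** 2
    return 10 * np.log10(np.median(psd / n_ffts))

# threshold = noise_floor_db(samples) + 5  # flag anything 5 dB above the floor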

How to achieve a higher PWM frequency?

I am using the libraries provided by the C18 compiler to open and set the duty cycle for PWM usage. I noticed that the maximum PWM frequency I can get with a 100% duty cycle is about 13.5 kHz; the lower the duty cycle, the higher the PWM frequency. How can I achieve a higher PWM frequency while keeping a 100% duty cycle? Is it possible to get more than 13.5 kHz? I just can't figure out what I'm missing; maybe someone can help. I am using a PIC18F87J1.
Here is the C18 C Compiler Libraries
Here is PIC18F87J1 datasheet
Here is a snippet of the code I am using regarding PWM.
TRISCbits.TRISC1 = 0;                                 // make RC1 (CCP2 pin) an output
OpenTimer2(TIMER_INT_OFF & T2_PS_1_1 & T2_POST_1_1);  // Timer2: no interrupt, 1:1 prescale/postscale
OpenPWM2(0x03ff);   // period argument -- but OpenPWM2() takes a char, so 0x03ff overflows
SetDCPWM2(255);     // duty-cycle argument -- SetDCPWM2() takes a 10-bit value
Your help is appreciated, thanks!
For a start, you have the parameters to the two functions reversed: OpenPWM2() takes a char value less than 256, and SetDCPWM2() takes a 10-bit number.
That said, you have chosen the largest value (255) which gives the lowest frequency. As the datasheet explains, the Open() function takes a value for the period as the parameter. A higher frequency is equivalent to a shorter period, and vice versa.
Lastly, why would you want a duty cycle of 100%? That is the same as having the pin always high. In that case, frequency doesn't matter at all. Just turn the pin on and don't use PWM at all.
You haven't said what you are driving with this PWM, but generally speaking, having the frequency too high can cause problems. It can produce radio interference, cause overheating, and so on.
Your question indicates you misunderstand the purpose of PWM and what the term refers to, so here is a tl;dr.
PWM simulates a voltage between 0 and Vcc by rapidly turning a pin high and low. The simulated voltage is proportional to time_high/(time_high + time_low). The percent of time the pin is at Vcc is called the duty cycle. (So 100% duty is always on, giving Vcc volts. 0% duty is always off, giving 0 V.)
The rate at which this on/off cycle repeats is called the PWM frequency. If the frequency is too low (the period too long), the load will see the pin voltage fluctuating. The goal is to run the PWM fast enough to smooth out the voltage the load sees, but not so fast as to cause other problems. The available frequencies are appropriate for most applications. Also note that setting the frequency high (period small) will reduce the accuracy of the duty cycle. This is explained in the datasheet. The reason is basically that the duty cycle must ultimately be converted to clock ticks on versus clock ticks off; the faster the frequency, the fewer ways there are to divide the clock ticks in each cycle.
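As a worked example of that tradeoff: on the PIC18 CCP module the PWM period is (PR2 + 1) * 4 * Tosc * prescale, so the period value passed to Open() sets both the frequency and how many duty-cycle steps remain. A quick check in Python (the 8 MHz Fosc is an assumption; substitute your actual clock):

def pwm_frequency(pr2, fosc=8e6, prescale=1):
    # PIC18 CCP PWM: period = (PR2 + 1) * 4 * Tosc * prescale
    return fosc / (4.0 * prescale * (pr2 + 1))

print(pwm_frequency(0xFF))  # ~7.8 kHz: large period value, low frequency
print(pwm_frequency(0x3F))  # ~31 kHz: higher frequency, but only
                            # 4*(PR2+1) = 256 duty steps remain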

Calculating the maximum physical rate (Nyquist performance limitation) of an ADC onboard a microcontroller

I'm trying to evaluate the maximum physical rate (Nyquist performance limit) of the A/Ds integrated on board various PIC microcontrollers.
However, to do the calculation requires parameters that I'm not finding explicitly stated in the datasheets, specifically Tacq, Fosc, TAD, and divisor parameters.
I've proceeded by making some assumptions, but it would be helpful to have a sanity check -- am I doing the maximum physical rate calculations correctly?
For illustration purposes only, I've taken the simplest possible PIC10F220 that has an ADC. This is to focus specifically on the interpretation of Tacq, Fosc, TAD, and divisor parameters, and not to suggest that any practical functionality could be implemented on this very basic chip. (This is to Clifford's points in the comments below.)
Calculation:
Nyquist Performance Analysis of PIC10F220
- Runs at clock speed of 8MHz.
- Has an instruction cycle of 0.5us [4 clock steps per instruction]
So:
- Get Tacq = 6.06 us [acquisition time for ADC, assuming chip temp. = 50 °C]
[from datasheet p34]
- Set Fosc := 8MHz [? should this be internal clock speed ?]
- Set divisor := 4 [? assuming this is 4 from 4 clock steps per CPU instruction ?]
- This gives TAD = 0.5us [TAD = 1/(Fosc/divisor) ]
- Get conversion time is 13*TAD [from datasheet p31]
- This gives conversion time 6.5 us
- So ADC duration is 12.56 us [? Tacq + 13*TAD]
Assuming 10 instructions for a simple load/store/threshold done in real-time before the next sample (this is just a stub -- the point is the rest of the calculation):
- This adds another 5 us [0.5 us per instruction]
- To give a total ADC and handling time of 17.56 us [ 12.56 us + 5 us ]
- before the sampling loop repeats [? Again Tacq ? + 13*TAD + handling ]
- If this is correct, then the max sampling rate is 56.9 ksps [ 1/ total time ]
- So the Nyquist frequency for this sampling rate is 28 kHz. [1/2 sampling rate]
Which means the (theoretical) performance of this system --- chip's A/D with the hypothetical real-time handling code --- is for signals that are bandlimited to 28 kHz.
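The same arithmetic in script form, as a check (using the values assumed above):

fosc = 8e6
tad = 1 / (fosc / 4)        # 0.5 us with divisor 4
tacq = 6.06e-6              # datasheet acquisition time at 50 C
conversion = 13 * tad       # 6.5 us
handling = 10 * 0.5e-6      # 10 instructions at 0.5 us each
total = tacq + conversion + handling  # 17.56 us per sample
print(1 / total, 1 / total / 2)       # ~56.9 ksps, Nyquist ~28.5 kHz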
Is this a correct assignment / interpretation of the data sheet in obtaining Tacq, Fosc, TAD, and divisor parameters and using them to obtain the maximum physical rate, or Nyquist performance limit, of this chip?
Thanks,
You're not going to be able to do much processing in 8 instructions, but assuming you're just doing something simple like storing the incoming samples to a buffer, or detecting a threshold, then your analysis looks good.
The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or dsPIC30F3014 (with 12b A/D).
That is an important distinction; the dsPIC ADC supports ping-pong DMA transfers of multiple channels simultaneously, so it can minimise the effective software overhead per sample. That makes the calculation a somewhat different one. You need to determine, from the sample rate and the DMA buffer size, the time between sample buffer interrupts; that is how much processing time you have to deal with each buffer. If you are using Microchip's DSP library, it gives precise cycle-time formulae for each algorithm, and block processing is considerably more efficient than sample-by-sample processing.
My last project was on a dsPIC33 with two channels sampled at 48 kHz and 32-word sample buffers (giving 667 us to process each pair of buffers). The software processing was therefore entirely independent of the sampling, since by using DMA they take place simultaneously.
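That budget is simply the buffer length divided by the sample rate; with the figures above:

samples_per_buffer = 32
sample_rate = 48e3
time_per_buffer = samples_per_buffer / sample_rate  # ~667 us per buffer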

How to analyse 'noisiness' of an array of points

I have done an FFT (see my earlier posting if you are interested!) and got a result, which helps me. I would like to analyse the noisiness/spikiness of an array (actually a VB.NET collection of Single). Um, how to explain...
When the signal is good, the FFT power result is 512 data points (frequency buckets) with low values in all but maybe 2 or 3 entries, and a decent range (i.e. the peak is high relative to the noise value in the nearly empty buckets). So when graphed, we have a nice big spike in those few buckets.
When the signal is poor/noisy, the data value spread (max to min) is low, and there's proportionally higher noise in many more buckets.
What's a good, computationally non-intensive way of analysing the noisiness of this data set? Would some kind of statistical method, standard deviations or something, help?
The key is defining what is noise and what is signal, for which modelling assumptions must be made. Often an assumption is made of white noise (constant power per frequency band) or noise of some other power spectrum, and that model is fitted to the data. The signal to noise ratio can then be used to measure the amount of noise.
Fitting a noise model depends on the nature of your data: if you know that the real signal will have no power in the high frequency components, you can look there for an indication of the noise level, and use the model to predict what the noise will be at the lower frequency components where there is both signal and noise. Alternatively, if your signal is constant in time, taking multiple FFTs at different points in time and comparing them to get a standard deviation for each frequency band can give the level of noise present.
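The multiple-FFT approach from that paragraph might look like this; a sketch in Python/numpy, assuming the signal really is roughly constant over the observation:

import numpy as np

def per_bin_noise(signal, fft_len=512, n_blocks=16):
    # Take several FFTs over time and compute the standard deviation of
    # each frequency bin; a high deviation marks a noisy bin.
    # Assumes len(signal) >= fft_len * n_blocks.
    frames = signal[:fft_len * n_blocks].reshape(n_blocks, fft_len)
    spectra = np.abs(np.fft.fft(frames, axis=1))
    return spectra.std(axis=0)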
I hope I'm not patronising you to mention the issues inherent with windowing functions when performing FFTs: these can have the effect of introducing spurious "noise" into the frequency spectrum which is in fact an artifact of the periodic nature of the FFT. There's a tradeoff between getting sharp peaks and 'sideband' noise - more here www.ee.iitm.ac.in/~nitin/_media/ee462/fftwindows.pdf
Calculate a standard deviation, and then decide on the threshold that will indicate noise. In practice this is usually easy, and it allows you to easily tweak the "noise level" as needed.
There is a nice single-pass stddev algorithm in Knuth. Here is a link that describes an implementation.
Standard Deviation
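For reference, the Knuth/Welford single-pass algorithm looks like this (a standard formulation, not tied to the linked implementation):

def running_stddev(values):
    # Single-pass (Welford) mean and sample standard deviation.
    n, mean, m2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return (m2 / (n - 1)) ** 0.5 if n > 1 else 0.0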
calculate the signal to noise ratio
http://en.wikipedia.org/wiki/Signal-to-noise_ratio
You could also check the stddev for each point, and if it's under some level you choose then the signal is good; otherwise it's not.
Wouldn't the spike be treated as a noise glitch in SNR, an outlier to be discarded, as it were?
If it's clear from the time-domain data that there are such spikes, then they will certainly create a lot of noise in the frequency spectrum. Choosing to ignore them is a good idea, but unfortunately the FFT can't accept data with 'holes' in it where the spikes have been removed. There are two techniques to get around this. The 'dirty trick' method is to set the outlier sample to the average of the two samples on either side, and compute the FFT with a full set of data.
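The 'dirty trick' in code form; illustrative only, with is_outlier standing in for whatever spike test you use:

import numpy as np

def patch_outliers(samples, is_outlier):
    # Replace flagged spikes with the average of their neighbours so the
    # FFT still gets a gap-free input.
    out = np.asarray(samples, dtype=float).copy()
    for i in range(1, len(out) - 1):
        if is_outlier(out[i]):
            out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out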
The harder but more correct method is to use a Lomb Normalised Periodogram (see the book 'Numerical Recipes' by W. H. Press et al.), which does a similar job to the FFT but can cope with missing data properly.