How to slow down a file source in GNU Radio? - gnuradio

I'm attempting to unpack bytes from an input file in GNU Radio Companion into a binary bitstream. My problem is that the Unpack K Bits block works at the same sample rate as the file source. So by the time the first bit of byte 1 is clocked out, byte 2 has already been loaded. How do I either slow down the file source or speed up the Unpack K Bits block? Is there a way I can tell GNU Radio Companion to repeat each byte from the file source 8 times?
Note that "after pack" is displaying 4 times as much data as "before pack".

My problem is that the Unpack K Bits block works at the same sample rate as the file source
No it doesn't. Unpack K Bits is an interpolating block. In your case the interpolation is 8: for every input byte, 8 new bytes are produced.
The result is right, but the time scale of your sink is wrong. You have to change the sampling rate at the second GUI Time Sink to fit the true sampling rate of the flowgraph after the Unpack K Bits.
So instead of 32e3 it should be 8*32e3.
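To see why the output stream really does carry 8× as many items, here is a plain-Python sketch of what the Unpack K Bits block does for k = 8 (not GNU Radio code; bit order shown MSB-first for illustration):

```python
# Plain-Python sketch of GNU Radio's "Unpack K Bits" for k = 8: every input
# byte yields 8 output bytes (one per bit), so the output stream carries
# 8x as many items as the input -- an interpolation of 8.

def unpack_k_bits(data, k=8):
    out = []
    for byte in data:
        for i in range(k - 1, -1, -1):   # MSB first
            out.append((byte >> i) & 1)
    return out

samples_in = [0xA5, 0x3C]
samples_out = unpack_k_bits(samples_in)

print(len(samples_out))   # 16 items out for 2 items in
print(samples_out[:8])    # bits of 0xA5 -> [1, 0, 1, 0, 0, 1, 0, 1]
```

Since the item rate after the block is 8× the file source's rate, the second Time Sink has to be told the higher rate so its time axis is labeled correctly.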

Manos' answer is very good, but I want to add to this:
This is a common misunderstanding for people who have just started doing digital signal processing down at the sample level:
GNU Radio itself has no notion of sampling rate. The term sampling rate is only used by certain blocks, e.g. to calculate the shape of a sine (in the case of the signal source, via the normalized frequency f_signal/f_sample), or to calculate the times or frequencies written on display axes (as in your case).
"Slowing down" would mean "making the computer process samples more slowly", but that doesn't change the signal itself.
All you need to do is match what you want the displaying sink to show as time units with what you configure it to do.

Related

Simple QPSK transmitter, large pulsating sidelobes

I have a simple flowgraph for a QPSK transmitter with a USRP.
After execution, there are large sidelobes that pulsate.
During the periods of large sidelobes, there is a drop in the amplitude of the main lobe.
There are no such pulsations if I build a similar transmitter in Matlab.
I suspect discontinuities in the source.
Comments and advice are appreciated.
Your pool of random data is far too short; you'll see the data's periodicity in the spectrum very quickly, and it might be that this is exactly what happens. So, try num_samples = 2**20 instead.
You can observe your transmit spectrum yourself before even transmitting it: use the Qt GUI frequency sink or waterfall sink with an FFT length that corresponds to the FFT length you use in gqrx.
Your sample rate is at the low end of all possible sampling rates. Here, the roll-off of the interpolation filters inside the USRP will definitely show. Don't do that to yourself. Use sps = 16, samp_rate = 1e6 instead.
Make sure you're not getting any underruns in your transmitter, nor overruns in your receiver. If that happens at these incredibly low sampling rates, something is wrong with your computer setup.
The changes make no difference. The following is with 2**20 samples, a 1 MHz sample rate and 20 samples per symbol. There is no underrun.
At a 5 MHz sample rate I start getting underruns.
I found the problem and a solution.
The problem is that the level of the signal after the modulator is too strong for the USRP input. After the modulator, the absolute value of the signal reaches 9. I don't know the maximum signal level the USRP expects; I presume something like 1 peak-to-peak.
The solution is to restrict the level by multiplying with a constant. With constant = 0.5 there are still distortions; a value of 0.2 is OK.
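In GRC this is just a Multiply Const block in front of the USRP sink; the sketch below (plain Python, toy sample values, not a GNU Radio block) shows the idea of deriving the constant from the measured peak so the magnitude stays below 1.0, which is what UHD sinks expect for complex float samples:

```python
# Hedged sketch: pick a scaling constant from the measured peak so the
# modulator output fits the USRP's expected range (|sample| <= 1.0).
# "raw" is a toy stand-in for modulator output whose magnitude peaks at 9.

def safe_scale(samples, headroom=0.9):
    peak = max(abs(s) for s in samples)
    k = headroom / peak               # constant for the Multiply Const block
    return k, [k * s for s in samples]

raw = [9.0 + 0.0j, -4.5 + 4.5j, 1.0 - 2.0j]
k, scaled = safe_scale(raw)

print(round(k, 2))                          # 0.1
print(max(abs(s) for s in scaled) <= 1.0)   # True
```

The exact constant (0.5 vs 0.2 in the answer) was found empirically; the point is only that the peak must end up at or below full scale.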
Here is the new flowgraph:

Big Oh! algorithms running in O(4^N)

For algorithms running in
O(4^N)
if we triple the size, the time is multiplied by what number?
This is an interesting question because while equivalent questions for runtimes like Θ(n) or Θ(n^3) have clean answers, the answer here is a bit more nuanced.
Let's start with a simpler question. We have an algorithm whose runtime is Θ(n^2), and on a "sufficiently large" input the runtime is T seconds. What should we expect the runtime to be once we triple the size of the input? To answer this, let's imagine, just for simplicity's sake, that the actual runtime of this function is closely approximated by cn^2, and let's have k be the "sufficiently large" input size we plugged into it. Then, plugging in 3k, we see that the runtime is
c(3k)^2 = 9ck^2 = 9(ck^2) = 9T.
That last step follows because the cost of running the algorithm on an input of size k is T, meaning that ck^2 = T.
Something important to notice here - tripling the size of the input does not change the fact that the runtime here is Θ(n^2). The runtime is still quadratic; we're just changing how big the input is.
More generally, for any algorithm whose runtime is Θ(n^m) for some fixed constant m, the runtime will grow by roughly a factor of 3^m if you triple the size of the input. That's because
c(3k)^m = 3^m * ck^m = 3^m * T.
But something interesting happens if we try performing this same analysis on a function whose runtime is Θ(4^n). Let's imagine that we ran this algorithm on some input of size k and it took T time units to finish. Then running this algorithm on an input of size 3k will take time roughly
c * 4^(3k) = c * 4^k * 4^(2k) = T * 4^(2k) = 16^k * T.
Notice how we aren't left with a constant multiple of the original cost, but rather something that's 16^k times bigger. In particular, that means that the amount by which the algorithm slows down will depend on how big the input is. For example, the slowdown going from input size 10 to input size 30 is a factor of 16^10, while the slowdown going from input size 30 to input size 90 is a staggering 16^30. For what it's worth, 16^30 = 2^120, which is on the order of 10^36.
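The slowdown factor can be checked numerically with exact integer arithmetic, using a toy cost model T(n) = c * 4^n (c is an arbitrary constant; the ratio cancels it out):

```python
# Numeric check of the tripling analysis: for a polynomial runtime the
# slowdown is a constant (9 for n^2), while for 4^n it is 16^n, which
# depends on the input size n itself.

c = 7  # arbitrary constant factor; it cancels in every ratio below

def T(n):
    return c * 4**n

# Polynomial contrast: tripling n multiplies an n^2 runtime by exactly 9.
assert (c * (3 * 5)**2) // (c * 5**2) == 9

# Exponential case: the slowdown factor grows with n.
for n in (5, 10, 30):
    assert T(3 * n) // T(n) == 16**n

print(T(30) // T(10))   # 16**10 = 1099511627776
```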
And, intuitively, that makes sense. Exponential functions grow at a rate proportional to how big they already are, so the change in runtime from doubling or tripling the size of the input will itself depend on the size of that input.
And, as above, notice that the runtime is not now Θ(4^(3n)). The runtime is still Θ(4^n); we're just changing which inputs we're plugging in.
So, to summarize:
The runtime of the function slows down by a factor of 4^(2n) = 16^n if you triple the size of the input n. This means that the slowdown depends on how big the input is.
The runtime of the function stays at Θ(4^n) when we do this. All that's changing is where we're evaluating the 4^n.
Hope this helps!
The time complexity of an algorithm describes how its runtime grows as the input size grows. If the input size triples, we simply have a new value for the input size; the growth rate itself does not change.
Hence, the time complexity of the algorithm remains the same, i.e. O(4^N).

GNURadio Companion Blocks for Z-Wave using RTL-SDR dongle

I'm using a generic RTL-SDR dongle for receiving frames of the Z-Wave protocol, with real Z-Wave devices. I'm using scapy-radio and I've also downloaded EZ-Wave. However, neither of them implements blocks for all Z-Wave data rates, modulations and codings. I've received some frames using the original EZ-Wave solution, but I assume I can't receive frames at all data rates, codings and modulations. Now I'm trying to build on their blocks to implement all of them.
The Z-Wave protocol uses these modulations, data rates and codings:
9.6 kbps - FSK - Manchester
40 kbps - FSK - NRZ
100 kbps - GFSK - NRZ
These are my current blocks (not able to receive anything at all right now):
For example, I will explain my view on blocks for receiving at
9.6 kbps - FSK - Manchester
RTL-SDR Source
variable center_freq = 869500000
variable r1_freq_offset = 800e3
Ch0: Frequency: center_freq - r1_freq_offset, so I've got 868.7 MHz on the RTL-SDR Source block.
Frequency Xlating FIR Filter
Center frequency = -800 kHz to get the frequency 868.95 MHz (Europe). To be honest, I'm not sure why I do this, and I'd appreciate an explanation. I'm trying to implement these blocks according to the EZ-Wave implementation of the blocks for 40 kbps FSK NRZ (as I assume). They use a sample rate of 2M and different configurations, which I did not understand.
Taps = firdes.low_pass(1, samp_rate_1, samp_rate_1/2, 5e3, firdes.WIN_HAMMING). I don't understand what the transition bandwidth should be (5e3 in my case).
Sample rate = 19.2e3, because the data rate (baud) is 9.6 kbps and, according to the Nyquist–Shannon sampling theorem, the sampling rate should be at least double the data rate, so 2 * 9.6 = 19.2. So I'm trying to resample the default 2M from the source down to 19.2 kHz.
Simple squelch
I use the default value (-40) and I'm not sure if I should change it or not.
Quadrature Demod
should do the FSK demodulation; I use the default value for the gain. I'm not sure if this is the right way to do FSK demodulation.
Gain = 2*(samp_rate_1)/(2*math.pi*20e3/8.0)
Low Pass Filter
Sample rate = 19.2k to use the same new sample rate
Cutoff Freq = 9.6k; I assume this according to https://nccgroup.github.io/RFTM/fsk_receiver.html
Transition width = 4.8k, which is half the data rate.
Clock Recovery MM
Most of the parameters are default.
Omega = 2, because omega = samp_rate/baud = 19.2k/9.6k.
Binary Slicer
is for getting binary code of signal
Zwave PacketSink 9.6
should do the Manchester decoding.
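As a quick sanity check of the rates described in the chain above (plain arithmetic, not GNU Radio; the variable names mirror the blocks, and the 2M source rate is the one stated earlier):

```python
# Rate sanity check for the 9.6 kbps chain sketched above.

source_rate = 2e6        # RTL-SDR source as described
baud = 9.6e3             # Z-Wave R1 data rate
samp_rate_1 = 19.2e3     # target rate after the Frequency Xlating FIR Filter

# The xlating filter would have to decimate 2 MHz down to 19.2 kHz:
decimation = source_rate / samp_rate_1
print(decimation)        # 104.1666... -> not an integer!

# GNU Radio's decimating filters take an integer decimation factor, so a
# source rate that is an integer multiple of 19.2 kHz (e.g. 1.92 MHz), or
# two-stage decimation, avoids fractional resampling.

# Clock Recovery MM: omega is samples per symbol at its input.
omega = samp_rate_1 / baud
print(omega)             # 2.0
```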
I would like to ask what I should change in my blocks to achieve proper reception of Z-Wave frames at all data rates, modulations and codings. When I start receiving, I'm able to see messages from my devices in the FFT sink and Waterfall sink, but the Message Debug doesn't print packets (like the original EZ-Wave solution does), only
Looking for sync : 575555aa
Looking for sync : 565555aa
Looking for sync : aa5555aa
which should be the value of frame_shift_register, according to the C code for Manchester decoding (ZWave PacketSink 9.6). I've seen a similar post, however this is a bit different and, to be honest, I'm stuck here.
I will be grateful for any help.
Let's look at the GFSK case. First of all, the sampling rate of the RTL source, 2M, is pretty high. For the maximum data rate, 100 kbps GFSK, a sample rate of say 400-500k will do just fine. There is also the power squelch block: it prevents signals below a certain threshold from passing. This is not good, because it filters out low-power signals that may contain information. There is also a sample-rate mismatch between the lowpass filter and the MM clock recovery block. The output of the symbol recovery block should be 100 kbaud (because for GFSK, sample rate = symbol rate). Using the omega value of 2 and working backward, the input to the MM block should be 200 kbaud. But the lowpass filter produces samples at 2 Mbaud, 10 times more than expected. You have to do proper decimation.
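Working backward through the GFSK chain as the answer describes, the required decimation falls out of simple arithmetic (rates only, not GNU Radio code):

```python
# Rate bookkeeping for the 100 kbps GFSK chain: start from the symbol rate
# the MM block must output and work backward to the needed decimation.

symbol_rate = 100e3          # 100 kbps GFSK: output sample rate == symbol rate
omega = 2                    # samples per symbol at the MM clock recovery input
mm_input_rate = omega * symbol_rate
print(mm_input_rate)         # 200000.0

source_rate = 2e6            # RTL-SDR source as configured
decimation = source_rate / mm_input_rate
print(decimation)            # 10.0 -> the lowpass filter should decimate by 10
```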
I implemented a GFSK receiver once for our CubeSat. Timing recovery was done by the PFB block, which is more reliable than the MM one. You can find the paper here:
https://www.researchgate.net/publication/309149646_Software-defined_radio_transceiver_for_QB50_CubeSat_telemetry_and_telecommand?_sg=HvZBpQBp8nIFh6mIqm4yksaAwTpx1V6QvJY0EfvyPMIz_IEXuLv2pODOnMToUAXMYDmInec76zviSg.ukZBHrLrmEbJlO6nZbF4X0eyhFjxFqVW2Q50cSbr0OHLt5vRUCTpaHi9CR7UBNMkwc_KJc1PO_TiGkdigaSXZA&_sgd%5Bnc%5D=1&_sgd%5Bncwor%5D=0
Some more details on the receiver could also be found here:
GFSK modulation/demodulation with GNU Radio and USRP
M.
I appreciate your answer; I've changed my sample rates. Now I'm still working on 9.6 kbps FSK demodulation and Manchester decoding. Currently, the output from my M&M clock recovery looks like this:
I would like to ask what you think about this signal. As I said, it should be FSK demodulated and then Manchester decoded. Do I still need the PFB block? Primarily I have to do 9.6 kbps FSK with Manchester, so I will look at 100 kbps GFSK NRZ if there is some time left.
Sample rate is 1M because of RTL-SDR dongle limitations (supported rates are 225001 to 300000 and 900001 to 3200000).
Current blocks:
I don't understand:
the taps of the Frequency Xlating FIR Filter: firdes.low_pass(1, samp_rate_1, 40e3, 20e3, firdes.WIN_HAMMING)
the Cutoff Freq and Transition Width of the Low Pass Filter
the Clock Recovery M&M as well, so consider its values "random".
ClockRecovery Output:
I was trying to use the PFB block according to your work on ResearchGate. However, I was unsuccessful, because I still don't understand all the science behind clock recovery.
I do the low-pass filtering twice because the original Z-Wave blocks from scapy-radio for 40 kbps FSK with NRZ coding are built like this (and it works):
So I thought it would just be a matter of changing a few parameters and the decoder (Zwave PacketSink 9.6).
I also uploaded my current blocks here.
Moses Browne Mwakyanjala, I'm also trying to implement that thing according to your work.
Maybe there is a problem with the clock recovery and the Manchester decoding. Manchester coding uses the transitions 0->1 and 1->0 to encode 0s and 1s. How can I properly configure the clock recovery to achieve the correct sample rate and transitions for the Manchester decoding? The Manchester decoder (Z-Wave PacketSink 9.6) is able to find the preamble but then ends up only looking for sync.
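To illustrate the transition question, here is a minimal Manchester decoder sketch (plain Python; the polarity convention is an assumption here, 1 -> [0, 1] and 0 -> [1, 0] in IEEE 802.3 style, so invert it if the sync never matches). Note the clock recovery must output one sample per half-bit, i.e. twice the bit rate:

```python
# Toy Manchester decoder: each data bit is a pair of half-bit samples.
# Convention ASSUMED here: 1 -> (0, 1), 0 -> (1, 0); real Z-Wave/G.9959
# may use the opposite polarity.

def manchester_decode(halfbits):
    bits = []
    for i in range(0, len(halfbits) - 1, 2):
        pair = (halfbits[i], halfbits[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            bits.append(None)   # invalid pair: clock misaligned by a half-bit
    return bits

encoded = [0, 1, 1, 0, 1, 0, 0, 1]        # encodes 1, 0, 0, 1
print(manchester_decode(encoded))          # [1, 0, 0, 1]

# A stream that is off by one half-bit produces invalid pairs -- one way a
# decoder ends up stuck "looking for sync".
print(manchester_decode(encoded[1:]))      # [None, 1, None]
```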
I would like to also ask you, where can I find my modulation index "h" mentioned in your work?
Thank you

CRC of input data shorter than poly width

I'm in the process of writing a paper during my studies on implementing CRC in Excel with VBA.
I've created a fairly straightforward, modular algorithm that uses Ross Williams' parametrized CRC model.
It works flawlessly for any length of polynomial and any combination of parameters except one: when the length of the input data is shorter than the width of the polynomial and an initial value ("INIT") is chosen that has any bits set "past" the length of the input data.
Example:
Input Data: 0x4C
Poly: 0x1021
Xorout: 0x0000
Refin: False
Refout: False
If I choose no INIT, or any INIT of the form 0x##00, I get the same checksum as the online CRC generators. If any bit of the last two hex digits is set - like 0x0001 - my result is invalid.
I believe the question boils down to "How is the register initialized if only one byte of input data is present for a two byte INIT parameter?"
It turns out I was misled by (or I may very well have misinterpreted) the explanation of how to use the INIT parameter on the sunshine2k website.
The INIT value must not be XORed with the first n input bits per se (n being the width of the register / cropped poly / checksum), but must only be XORed in after the n 0-bits have been appended to the input data.
This distinction does not matter when the input data is n bits or longer, but it does matter when the input data is too short.

gnu radio - bit rate

I have a probably very stupid/simple question for GnuRadio users.
I have a Random Source as a source of bits [-1, 1], and I want to multiply every bit with a cosine to make a BPSK modulator.
The problem is that the bits are generated as fast as possible (they don't have anything in common with samp_rate). Within one period of the cosine, many bits are generated by the Random Source.
The question is: how can I slow down the bit-rate generation?
Thanks for any help
(I don't want to use DPSK Mod :))
Strictly speaking, you cannot delay the generation of bits. However, you can increase the duration of each symbol. This can be done with the Repeat block of GNU Radio. This block takes a parameter called interpolation that corresponds to the number of times an input item is repeated at the output.
So you find the period of your cosine in samples, let's say p. Each random bit produced by the Random Source block is then repeated p times by the Repeat block. This way you increase the duration of your random symbol. Then you pass the resulting samples to the multiply block of your flowgraph.
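A plain-Python sketch of what the Repeat block does (the rates here are illustrative, not from the question):

```python
# What GNU Radio's Repeat block does: each input item is emitted
# `interpolation` times, stretching one random symbol over a full cosine
# period of p samples before the multiply.

def repeat(items, p):
    out = []
    for x in items:
        out.extend([x] * p)
    return out

samp_rate = 32000
f_cos = 1000                  # illustrative cosine frequency
p = samp_rate // f_cos        # cosine period in samples: 32 samples/symbol

symbols = [1, -1, 1]          # Random Source output, mapped to +/-1
stretched = repeat(symbols, p)

print(len(stretched))                    # 96 = 3 symbols * 32 samples each
print(stretched[:3], stretched[32:35])   # [1, 1, 1] [-1, -1, -1]
```

Multiplying `stretched` element-wise with a sampled cosine then gives the BPSK waveform, with each symbol lasting exactly one cosine period.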