I'm currently building a satellite ground station which will be used to control our CubeSat in the coming months. The modulation scheme is GFSK and the baud rate is 9600. Before trying to communicate with the satellite, I wanted to run some tests with a USRP board. First I connected the TX and RX blocks directly to each other in the flowgraph, and with that flowgraph I was able to send and receive a PNG file.
However, when I connect the TX and RX paths to my USRP B210, TX/RX as the transmit sink and RX2 as the receive source, as shown below, I receive no data even though the two ports are carefully connected to each other with RF cables and attenuators.
Below are the assumptions I made when building the second flowgraph. Please tell me if I'm on the right path.
Transmitter side: The Packet Encoder and GFSK Mod blocks use 20 samples per symbol. The baud rate is 9600, so the sample rate is 20 * 9600 = 192 kS/s. Since the symbol rate expected by the satellite is baud_rate = 9600, I included a rational resampler and set the UHD symbol rate to baud_rate. Is my logic correct?
GFSK Mod and Demod: For both of these blocks I calculated the sensitivity as S = pi * modulation_index / samples_per_symbol. The default BT value of 0.5 is used. Are my calculations sound? Is there a link to documentation for the GFSK blocks? My derivations are based on the GFSK Python source code, which is a poor substitute for documentation.
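To make the arithmetic in the two points above concrete, this is roughly the calculation I am doing; the modulation index value below is only an example, not necessarily the one our satellite expects:

```python
# Rough sanity check of the TX rates and the GFSK Mod sensitivity described above.
# The modulation index h is an assumed example value.
from math import pi

baud_rate = 9600                  # symbol rate expected by the satellite
sps = 20                          # samples per symbol in the GFSK Mod block
samp_rate = sps * baud_rate       # 20 * 9600 = 192000 S/s flowgraph rate

h = 1.0                           # modulation index (example value)
sensitivity = pi * h / sps        # S = pi * modulation_index / samples_per_symbol

print(samp_rate)                  # 192000
print(sensitivity)                # ~0.157 rad/sample for h = 1
```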
Packet Encoder/Decoder: The output of the Packet Decoder is null, even though the GFSK Demod block gives some kind of output, which is rather meaningless. Is this normal? What is the meaning of the threshold variable and why is its value -1?
I'm a newbie in GNU Radio as well as GFSK in general, so please point me to any further references.
Thanks in advance.
Moses.
I was finally able to solve the problem. All I did was re-implement the GFSK demod directly in GRC. If you look into the source of gfsk.py, you will find that the blocks used are Quadrature Demod --> M&M Clock Recovery --> Binary Slicer, which can easily be connected in GRC directly. As Marcus suggested in my other thread, GFSK demodulation with Xlating filter in GNU Radio, I replaced the M&M Clock Recovery block with a PFB Clock Sync block. My flowgraph is shown below.
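For reference, here is a minimal sketch of that chain written out in plain Python instead of GRC; the file names are placeholders and the clock-recovery parameters are simply the gfsk.py defaults, not necessarily the values I ended up using:

```python
# Minimal sketch of the GFSK demod chain from gfsk.py, rebuilt by hand
# (Quadrature Demod -> M&M Clock Recovery -> Binary Slicer).
# Parameter values are illustrative; tune them for your own link.
from math import pi
from gnuradio import gr, analog, digital, blocks

class gfsk_rx_chain(gr.top_block):
    def __init__(self, samp_rate=192e3, baud=9600, h=1.0):
        gr.top_block.__init__(self, "GFSK demod by hand")
        sps = samp_rate / baud                        # 20 samples per symbol
        sensitivity = pi * h / sps                    # must match the TX side

        src    = blocks.file_source(gr.sizeof_gr_complex, "captured_iq.bin")  # placeholder input
        fm     = analog.quadrature_demod_cf(1.0 / sensitivity)
        # gfsk.py defaults; I later swapped this block for a PFB Clock Sync as described above
        clkrec = digital.clock_recovery_mm_ff(sps, 0.25 * 0.175**2, 0.5, 0.175, 0.005)
        slicer = digital.binary_slicer_fb()
        sink   = blocks.file_sink(gr.sizeof_char, "demod_bits.bin")           # placeholder output

        self.connect(src, fm, clkrec, slicer, sink)

if __name__ == "__main__":
    gfsk_rx_chain().run()
```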
Even if I cannot answer all of your questions, here are some thoughts:
When using hardware devices, the Throttle block MUST be removed from the flowgraph. The hardware device is now responsible for rate limiting. Mixing a hardware device and the Throttle block may break the real-time behaviour of your flowgraph required by the device; the UHD driver should then report underflow or overflow messages.
Are you sure that the USRP can support the requested sampling rate? You may also need to change the master_clock_rate of the device if the requested sampling rate is not an integer decimation of the clock. If this is not possible, consider some kind of resampling.
EDIT: The B200 cannot provide a 192e3 sampling rate with the default clock. You can set the master_clock_rate to 19.2e6; the hardware will then apply the proper decimation. The master_clock_rate can be changed either via the device-specific arguments or via the Clock Rate field of the UHD Sink/Source blocks that is present in the latest GNU Radio versions.
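A minimal sketch of doing this from Python via the device arguments, assuming a single-channel B200 (the address string and channel setup are illustrative):

```python
# Sketch: request a 19.2 MHz master clock so that 192 kS/s is an integer
# decimation (19.2e6 / 192e3 = 100). Device address details are illustrative.
from gnuradio import uhd

usrp_sink = uhd.usrp_sink(
    "master_clock_rate=19.2e6",                        # device arguments
    uhd.stream_args(cpu_format="fc32", channels=[0]),
)
usrp_sink.set_samp_rate(192e3)
print(usrp_sink.get_samp_rate())   # should report 192000.0 if the rate was accepted
```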
Related
I am trying to build an OFDM flowgraph with a USRP, as shown below.
Flowgraph
The flowgraph runs without any errors. However, there is no received signal at the receiver end, as shown in results (1) and results (2).
It seems that the USRP does not get any data/signal from the OFDM Transmitter, and the signal shown in the Rx spectrum is just noise.
Any recommendation to solve this problem?
Your sampling rate is too low. The USRPs you're using probably aren't actually running at that rate; look for the warnings at the beginning of your flowgraph execution about other sampling rates being used (a small sanity check is sketched after this list).
The gain is probably too low for over-the-air transmission.
The analog bandwidth settings make no sense. Since this seems to be a network-connected USRP: only very few of these have adjustable frontend bandwidths, anyway, but using a frontend bandwidth much larger than your sampling rate makes no sense.
Most importantly: This is communications, with a random channel, noise, interferers and device imperfections. It's very normal for packets to get lost.
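To see which rate the hardware actually picked, you can ask the UHD block directly; a rough sketch (device arguments left empty on purpose, fill in your own):

```python
# Sketch: compare the requested sampling rate with what the USRP actually uses.
from gnuradio import uhd

usrp_src = uhd.usrp_source(
    "",                                               # your device arguments here
    uhd.stream_args(cpu_format="fc32", channels=[0]),
)
usrp_src.set_samp_rate(1e6)                           # requested rate (example)
print("actual rate:", usrp_src.get_samp_rate())       # rate the hardware settled on
```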
I have two USRP B200 connected to two Raspberry Pi's that I want to communicate via AX.25. Here are the flowgraphs:
TX:
RX:
They work well and are able to communicate. However, if I change samp_rate to 200k on both the TX and the RX, the RX is no longer able to receive the messages. However, if I use Direwolf with an RTL-SDR, I am able to receive the messages sent at 200k. Can anybody help me figure out how to receive the data sent at 200k?
Thanks!
This is a really outdated version of GNU Radio. gr-ax25 has been ported to GNU Radio 3.8 and 3.9; please use a modern GNU Radio (3.8 or 3.9). That solves a lot of problems you'd otherwise run into later on.
Also, as UHD will print to the console, 150 kS/s is lower than any sampling rate that any USRP actually supports, and hence is substituted with a different one, which works because both ends of the communication do the same!
It's still not "proper". You need use a higher rate (recommendation: 1 MS/s) and use a resampler to go down to the very low AX.25 rates; the "rational resampler" block will do that!
Then, you misconfigured both your NBFM TX and RX: your quadrature rate is not the 48 kHz you configured, but your actual sampling rate / 4.
Of course, as long as both ends of the communication are "wrong in the same way", they have a chance of working together. But any actually correct receiver implementation won't be able to make out what you meant to send.
I'd recommend you look into the apps/ subfolder of dl1ksv's gr-ax25. It contains a properly set up APRS transceiver for a sampling rate of 192 kS/s; I'd recommend you replace the fcdproplus blocks with your UHD USRP Source/Sink blocks and use a rational resampler, so that at the USRP Source you decimate by 16 and at the USRP Sink you interpolate by 16. Your sampling rate for both Sink and Source would then be 192000*16.
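A sketch of the two resamplers that would sit next to the USRP blocks (default low-pass taps, just to show the ratios):

```python
# Sketch: USRP runs at 192000*16 = 3.072 MS/s, the AX.25 flowgraph at 192 kS/s.
from gnuradio import filter as gr_filter

chan_rate = 192000               # rate the gr-ax25 APRS example is built for
usrp_rate = chan_rate * 16       # 3.072 MS/s at the USRP

# RX path: decimate by 16 right after the USRP Source
rx_resamp = gr_filter.rational_resampler_ccc(interpolation=1, decimation=16)

# TX path: interpolate by 16 right before the USRP Sink
tx_resamp = gr_filter.rational_resampler_ccc(interpolation=16, decimation=1)
```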
Currently I am working with Ettus Research's N310 on the implementation of different PSK modulation schemes. I am interested in measuring the bit error rate (BER) for each scheme when I transmit data between two USRPs located next to each other. I am using GNU Radio Companion for the software development. In the case of the BPSK transceiver, I am using a standard configuration of a Vector Source and a Constellation Modulator to create the PSK symbols, which are transmitted at 2.45 GHz using VERT2450 antennas. These antennas work in the frequency ranges 2.4-2.5 GHz and 4.9-5.9 GHz. Since my desktop computer has only one Ethernet port, I am using a NetGear GS108 switch, which has 16 Gbps bandwidth and a forwarding rate of 10 Mbps per port. The current software setup is shown in the following figure:
I am using as input a vector of only zeros, since I am interested in verifying that my transceiver correctly detects one constellation point. However, I am getting continuous jumps between the constellation points, as you can see from the picture on the left side. I have several questions about my setup:
What is the correct baud rate for each modulation scheme? That is, how many symbols per second should I use for BPSK, QPSK, 8PSK and 16-QAM?
Since the USRP N310 has a default sample rate of 125 MS/s, and my desktop machine can only deal with 5 MS/s, I have a decimation rate of 25 (sample_rate_usrp / sample_rate_desktop). What value should I assign to the sps (samples per symbol) parameter in each block of the transceiver?
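To be explicit about the arithmetic I am assuming (the sps value here is just an example, since that is exactly what I am asking about):

```python
# Relation between the rates in my setup; sps = 4 is an example value only.
master_rate = 125e6                       # N310 converter rate
host_rate = 5e6                           # what my desktop can sustain
decim = master_rate / host_rate           # 25, applied inside the USRP

sps = 4                                   # samples per symbol (example)
symbol_rate = host_rate / sps             # symbols per second seen on air
print(decim, symbol_rate)                 # 25.0 1250000.0
```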
When is the CMA equalizer necessary? Since the USRPs are in static positions, there are no frequency changes due to the Doppler effect. Consequently, an equalizer should not be necessary. Why is this reasoning not correct? When I removed the equalizer, the constellation diagram appeared as a circle.
Does the Polyphase Clock Synchronization block really synchronize the received signal with the transmitted signal, or can I remove it and replace it with an equalizer?
I would really appreciate it if someone could help me bring some light to all of these questions.
Thanks in advance
See my response at https://lists.gnu.org/archive/html/discuss-gnuradio/2020-08/msg00172.html
The 'correct' baud rate is anything you want to use.
You need to check the minimum sample rate for the N310.
The CMA Equalizer is optional under your conditions. I left it out of the BPSK to simplify the flowgraph.
The Polyphase Clock Sync Block recovers the timing of the received signal. The equalization is for fading and is a separate function.
It looks like you're modulating with 8 SPS but then demodulating with 16 SPS -- 8:1 on the PCSync and 2:1 on the CMA eq.
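As a sketch of what a consistent 8-SPS receive chain could look like (loop and tap parameters here are illustrative, not taken from your flowgraph):

```python
# Sketch: keep the samples-per-symbol budget consistent. With 8 SPS out of the
# modulator, the PFB clock sync can take all 8:1 (osps = 1) and the CMA
# equalizer then runs at 1 sample per symbol, instead of splitting 8:1 + 2:1.
import math
from gnuradio import digital, filter as gr_filter

sps = 8
nfilts = 32
rrc_taps = gr_filter.firdes.root_raised_cosine(
    nfilts, nfilts * sps, 1.0, 0.35, 11 * sps * nfilts)

clock_sync = digital.pfb_clock_sync_ccf(
    sps, 2 * math.pi / 100, rrc_taps, nfilts, nfilts / 2, 1.5, 1)   # osps = 1

cma = digital.cma_equalizer_cc(15, 1.0, 0.01, 1)   # sps here must match osps above
```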
I am working on a project to transmit and receive binary data using QPSK modulation and demodulation in GNU Radio via an SDR (BladeRF x40). Here is the sketch of the task to be implemented.
The flowgraph is simple and works when the intent is not to use the bladeRF, i.e. when it only modulates and demodulates binary data, as the image shows.
But the problem arises when using the osmocom source and sink (i.e. a QPSK transceiver via the BladeRF x40).
A few important questions and problems regarding the setup:
On the receiver side, when the osmocom source (the received signal) is inspected directly with an FFT plot, it shows no signal. How can this be made to work successfully?
Theoretically, QPSK modulation is mapping plus up-conversion, but in GNU Radio the QPSK Mod block only performs the mapping and no up-conversion. Is the up-conversion taken care of by the osmocom sink block (since it is where the transmit frequency is set)? Or is the up-conversion done separately by multiplying the QPSK Mod block output with a sinusoid before the osmocom sink? Precisely how is up-conversion done in GNU Radio for such a task?
If I only do modulation and demodulation, without transmitting and receiving on the SDR platform, then I must up-convert and down-convert separately, as I understand it. Even then, I am unable to recover the binary data; here is the attachment for it too. Kindly correct me for any misplacement or misuse of the blocks and recommend any changes needed in the flowgraph in the image.
I recommend that you visit the following link (if you have not seen it yet). When we talk about digital modulations that are transmitted/received by a USRP, HackRF, etc., recovering the signal is not as easy as with FM or AM.
I do not fully understand your question because I have just entered the world of SDR, but the SDR sinks (UHD or osmocom) are where the RF frequency (MHz/GHz) is configured.
If you want to simulate the TX/RX process, you do not need to configure the RF frequency, because the signal will not be transmitted over the air. You will be working at baseband.
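If you really want to see the up-conversion in a simulation, a rough sketch is to multiply the baseband samples with a complex sinusoid; with real hardware the osmocom/UHD sink does this at the configured RF frequency. File names and the offset frequency below are placeholders:

```python
# Sketch: model "up-conversion" in simulation by mixing the baseband QPSK
# samples with a complex sinusoid inside the sample rate.
from gnuradio import gr, analog, blocks

samp_rate = 1e6
offset    = 100e3    # simulated carrier offset (example value)

tb  = gr.top_block()
src = blocks.file_source(gr.sizeof_gr_complex, "qpsk_baseband.iq")   # placeholder
lo  = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, offset, 1, 0)
mix = blocks.multiply_cc()
snk = blocks.file_sink(gr.sizeof_gr_complex, "qpsk_shifted.iq")      # placeholder

tb.connect(src, (mix, 0))
tb.connect(lo,  (mix, 1))
tb.connect(mix, snk)
```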
Hi, I'm using the following RF module
http://www.apogeekits.com/rf_receiver_module_rx433.htm
on an embedded board with a PIC16F628A. Sadly, I realized that the signal strength output is analog, and I couldn't come up with any ideas for reading the RSSI off the pin because, well, my PIC is digital, duh!
My basic idea was
To get the RSSI value from my Receiver
Send it to the PIC
Link the PIC to a PC via RS232
Plot a graph of time vs RSSI of the receiver (so I can make out how close my TX is to my RX)
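For step 4, this is roughly the kind of script I had in mind on the PC side; the serial port name, baud rate and line format are assumptions, since I haven't got the data out of the PIC yet:

```python
# Sketch of the PC end of the plan (steps 3 and 4): read RSSI values that the
# PIC prints over RS-232, one per line, and plot them against time.
import time
import serial                     # pyserial
import matplotlib.pyplot as plt

port = serial.Serial("/dev/ttyS0", 9600, timeout=1)   # adjust to your setup
times, rssi = [], []

start = time.time()
while len(rssi) < 500:            # collect 500 samples, then plot
    line = port.readline().strip()
    if not line:
        continue
    rssi.append(int(line))        # assumes the PIC sends a raw reading per line
    times.append(time.time() - start)

plt.plot(times, rssi)
plt.xlabel("time (s)")
plt.ylabel("RSSI (raw reading)")
plt.show()
```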
I thought it was bloody brilliant at first, but I've hit a dead end here. Any ideas on getting the RSSI data from this receiver to my PC would be nice.
Thanks in Advance
You can get a PIC that has an integrated ADC for sampling the analog signal. Or, you can use an external ADC chip to do the conversion. You would connect that to your PIC using SPI or I2C.
The simplest thing to do is obviously to use a more appropriate microcontroller - one with an ADC! There are many (most), including PICs (though that wouldn't be my first choice).
Attaching an external SPI or I2C ADC might be a bit tedious since your part has no SPI or I2C peripheral, so you'd have to bit-bang it. If you do that, use an SPI part - it's simpler. Your sample rate will suffer and may end up being a bit jittery if you are not careful.
Another solution is to use a voltage-controlled PWM, then use the timer input capture to time the pulse width. That will give you good regularity and potentially good resolution. You can get a chip (example) to do that, or roll your own. That last option requires a triangle-wave input as well as the measured (control) voltage, but on the same site...
In a similar vein, you could use a low-frequency VCO (example) and use its output to clock one of the timers, then use a second timer to periodically sample the first and reset it. The count will relate to the voltage, though not necessarily in a linear relationship; linearisation could be done on the PIC or at the receiving PC. I'd go for the latter: your micro will be poor at arithmetic (performance-wise), even integer arithmetic, especially if it involves division.
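If you do the linearisation at the PC end, a sketch of the idea (the calibration points below are made up; you'd measure your own):

```python
# Sketch of doing the linearisation on the PC side: map the raw counts coming
# from the PIC back to a voltage using a small calibration table.
import numpy as np

# counts measured for known input voltages (hypothetical calibration data)
cal_counts = np.array([120, 260, 430, 650, 930])
cal_volts  = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

def counts_to_volts(counts):
    """Piecewise-linear interpolation of the calibration table."""
    return np.interp(counts, cal_counts, cal_volts)

print(counts_to_volts(500))   # ~1.66 V for this made-up table
```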