I am trying to set up five USRP1 devices with daughterboards for 2.4 and 5 GHz.
Some of them are broken and some work properly, but I don't know which is which. I am trying to send a QAM-modulated symbol sequence: I pass it from a file source to both a USRP sink and an FFT sink.
I have been looking for articles and tutorials on how the sample rates relate and how to set them up, but I can't work out what I am missing. Could somebody please help with the schemes?
128 MS/s is not a rate that is possible with the USRP1. The console will contain a UHD warning that a different, possible rate was chosen instead (most likely 8 MS/s).
Now, you contradict that rate by having a "Throttle" block in your flow graph. That block's only job is to slow down the average rate at which samples are let through – and that is something your "USRP Sink" already does. In fact, modern versions of GRC will warn you that using a Throttle block in the same flow graph as a hardware sink or source is a bad idea.
Now, you'll say "OK, if the USRP sink only needs to consume 8 MS/s, and my interpolator makes 128 MS/s out of my nominally 1 MS/s flow (really, signals within GNU Radio don't have a sampling rate), then that's got to be fast enough to satisfy the 8 MS/s demand!".
But the fact is that a 128× interpolator is a really CPU-intensive thing, and the resulting rate might not be that high after all, made even worse by the choppy nature of how Throttle works.
In fact, your interpolator is totally unnecessary. The USRP internally has proper interpolators for integer fractions of its 64 MS/s master clock rate, which means you can set the USRP Sink to a sampling rate of 1 MS/s and connect the file source to it directly, as sketched below.
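For illustration, a minimal sketch of that direct connection, assuming GNU Radio 3.8+ with gr-uhd; the file name, center frequency and gain are placeholders:

    from gnuradio import blocks, gr, uhd

    class qam_tx(gr.top_block):
        def __init__(self):
            gr.top_block.__init__(self)
            samp_rate = 1e6  # 1 MS/s; the USRP interpolates to its 64 MS/s clock itself
            src = blocks.file_source(gr.sizeof_gr_complex, "symbols.dat", True)
            sink = uhd.usrp_sink(
                ",".join(("", "")),
                uhd.stream_args(cpu_format="fc32", channels=[0]),
                "",
            )
            sink.set_samp_rate(samp_rate)  # UHD warns and picks the nearest possible rate
            sink.set_center_freq(2.45e9, 0)
            sink.set_gain(20, 0)
            self.connect(src, sink)  # no interpolator and no Throttle in between

    if __name__ == "__main__":
        qam_tx().run()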
I am trying to build an OFDM flow graph with a USRP, as shown below
Flowgraph
The flowgraph runs without any errors. However, there is no received signal at the receiver end, as shown in results(1) and results(2).
It seems that the USRP does not get data/signal from the OFDM Transmitter, and the signal shown in the Rx spectrum is just noise.
Any recommendation to solve this problem?
Your sampling rate is too low. The USRPs you're using probably aren't actually running at that rate - look for the warnings at the beginning of your flow graph execution about other sampling rates being used (a sketch for checking this follows after this list).
The gain is probably too low for over-the-air transmission.
The analog bandwidth settings make no sense. Since this seems to be a network-connected USRP: only very few of those have an adjustable frontend bandwidth anyway, and a frontend bandwidth much larger than your sampling rate is pointless.
Most importantly: This is communications, with a random channel, noise, interferers and device imperfections. It's very normal for packets to get lost.
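On the first point, a quick sanity check (a sketch assuming UHD's Python API, pyuhd, is installed; the address is a placeholder):

    import uhd

    # Ask the device which rate it actually settled on.
    usrp = uhd.usrp.MultiUSRP("addr=192.168.10.2")  # placeholder address
    usrp.set_rx_rate(1e6)
    print("requested 1 MS/s, got %.0f S/s" % usrp.get_rx_rate())
    # If these differ, your flow graph and the hardware disagree about timing.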
I have two USRP B200 connected to two Raspberry Pi's that I want to communicate via AX.25. Here are the flowgraphs:
TX:
RX:
They work well and are able to communicate. However, if I change samp_rate to 200k on both the TX and RX, the RX is no longer able to receive the messages sent. With Direwolf and an RTL-SDR, however, I am able to receive messages sent at 200k. Can anybody help me receive the data sent at 200k?
Thanks!
This is a really outdated version of GNU Radio. gr-ax25 has been ported to GNU Radio 3.8 and 3.9; please use one of those modern versions – that solves a lot of problems you'd otherwise run into later on.
Also, as UHD will print to the console, 150 kS/s is lower than any sampling rate that any USRP actually supports, and is hence substituted with a different one – which works, because both ends of the communication do the same!
It's still not "proper". You need to use a higher rate (recommendation: 1 MS/s) and a resampler to get down to the very low AX.25 rates; the "Rational Resampler" block will do that!
Then, you misconfigured both your NBFM TX and RX – your quadrature rate is not the 48 kHz you configured, but your actual sampling rate / 4.
Of course, as long as both ends of the communication are "wrong in the same way", they have a chance of working together. But any actually correct receiver implementation won't be able to make out what you meant to send.
I'd recommend you look into the apps/ subfolder of dl1ksv's gr-ax25. That contains a properly set up APRS transceiver for a sampling rate of 192 kS/s. I'd recommend replacing the fcdproplus blocks with your UHD USRP source/sink blocks and using a rational resampler, so that at the USRP Source you decimate by 16 and at the USRP Sink you interpolate by 16. Your sampling rate for both Sink and Source would then be 192000*16 = 3.072 MS/s, as sketched below.
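A sketch of that resampler wiring in GNU Radio's Python API (variable names are mine; the filter taps are left to the default design):

    from gnuradio import filter

    usrp_rate = 192000 * 16  # 3.072 MS/s at the USRP
    ax25_rate = 192000       # what the gr-ax25 flow graphs expect

    # USRP Source (3.072 MS/s) -> decimate by 16 -> 192 kS/s receive chain
    rx_resamp = filter.rational_resampler_ccc(interpolation=1, decimation=16)

    # 192 kS/s transmit chain -> interpolate by 16 -> USRP Sink (3.072 MS/s)
    tx_resamp = filter.rational_resampler_ccc(interpolation=16, decimation=1)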
The N210 is connected to the RF frontend, which gets configured using the GNU Radio Companion.
I can see the signal with the FFT plot; I need the received signal (USRP2 output) as digital numbers. The usrp_sense_spectrum.py outputs the power and noise floor as digital numbers as well.
I would appreciate any help from your side.
Answer from the USRP/GNU Radio mailing lists:
Dear Abs,
you've asked this question on discuss-gnuradio and already got two
answers. In case you've missed those, and to avoid that people tell
you what you already know:
Sylvain wrote that, due to a large number of factors contributing to
what you see as digital amplitude, you will need to calibrate
yourself, using exactly the system you want to use to measure power:
You mean you want the signal power as a dBm value ?
Just won't happen ... Too many things in the chain, you'd have to
calibrate it for a specific freq / board / gain / temp / phase of the
moon / ...
And I explained that if you have a mathematical representation of how
your estimator works, you might be able to write a custom estimator
block for both of your values of interest:
I assume you already have definite formulas that define the estimator for these two numbers.
Unless you can directly "click together" that estimator in GRC, you will most likely have to implement it.
In many cases, doing this in Python is rather easy (especially if you come from a Python or MATLAB background),
so I'd recommend reading at least the first 3 chapters of
https://gnuradio.org/redmine/projects/gnuradio/wiki/Guided_Tutorials
If these answers didn't help you out, I think it would be wise to
explain what these answers are lacking, instead of just re-posting the
same question.
Best regards, Marcus
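To illustrate the estimator-block suggestion from that reply, a minimal sketch in Python (uncalibrated, as Sylvain points out; the block name and vector length are arbitrary choices):

    import numpy as np
    from gnuradio import gr

    class power_estimator(gr.sync_block):
        """Emit one relative power value (dB, uncalibrated) per input vector."""
        def __init__(self, vec_len=1024):
            gr.sync_block.__init__(self,
                name="power_estimator",
                in_sig=[(np.complex64, vec_len)],
                out_sig=[np.float32])

        def work(self, input_items, output_items):
            # Mean of |x|^2 over each vector of samples
            power = np.mean(np.abs(input_items[0]) ** 2, axis=1)
            output_items[0][:] = 10.0 * np.log10(power + 1e-20)
            return len(output_items[0])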
I suggest that you write a Python application and stream raw UDP bytes from the USRP block. Simply add a UDP Sink block and connect it to the output of the UHD: USRP Source block. Select an appropriate port and stream to 127.0.0.1 (localhost).
Now, in your Python application, open a listening UDP socket on the same port and receive the data. Each sample from the UHD: USRP Source is a complex pair of single-precision floats. This means 8 bytes per sample. The I float comes first, followed by the Q float.
Note that you need to pay special attention to the Payload Size field in the UDP Sink. Since you are streaming over localhost, you can use a very large value here; I suggest something like 1024*8, which means each packet will contain 1024 IQ pairs. A receiver along these lines is sketched below.
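A minimal receiver sketch for that setup (the port number is an arbitrary choice):

    import socket
    import numpy as np

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 52001))  # same port as the UDP Sink

    while True:
        payload, _ = sock.recvfrom(1024 * 8)  # matches the Payload Size field
        # Interleaved single-precision I/Q pairs parse directly as complex64
        iq = np.frombuffer(payload, dtype=np.complex64)
        print("%d samples, mean power %.4f" % (len(iq), np.mean(np.abs(iq) ** 2)))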
I suggest you first connect a Signal Source and just pipe a sine wave over the UDP socket into your Python or C application. This will let you verify that you are getting the float bytes right. Make sure to check for glitches due to overflowing buffers (this will be your biggest problem).
Please comment or update your post if you have further questions.
I'm using a u-blox NEO-M8N-0-01 GNSS module.
This module supports up to 5 Hz GPS+GLONASS and 10 Hz GPS-only.
However, when I try to change the measurement rate (via UBX-CFG-RATE in the Messages view), I can only increase it to 5 Hz (measurement period = 200 ms). Any value below 200 ms is rejected (the box turns pink).
This happens even if I only produce the NMEA GxGGA message.
The way I restricted it to GPS-only was via UBX-CFG-GNSS.
Has anyone encountered this issue?
Thanks in advance
Roi Yozevitch
You don't say how you are setting the rate, but going by your description I'm assuming you are using the u-blox u-center software.
There is a simple explanation for this issue and a simple solution: their software has a bug (or wasn't updated to match the final specification of the part).
The solution is to not use u-center: it's the PC software that's complaining, not the receiver. The receiver itself doesn't care what the spec sheet says; it will try its best to run at whatever rate you request.
Sending commands directly (see the sketch at the end of this answer), I've managed to get a fairly reliable 10 Hz GPS+GLONASS fix rate. There is the occasional missing point, but most of the time it keeps up.
Running GPS-only, you can go faster than 10 Hz. If you play with the settings and restrict it to 8 channels, 18-19 Hz is fairly reliable. Unfortunately, 20 Hz is pushing it too far: you end up getting positions at 10 Hz.
Obviously, when running at these update rates, make sure that your baud rate is high enough to cope with the requested messages and rate.
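For reference, a sketch of sending UBX-CFG-RATE directly with pyserial (the serial port and baud rate are assumptions; adjust both for your setup):

    import struct
    import serial

    def ubx_msg(msg_class, msg_id, payload):
        # Frame: 0xB5 0x62, class, id, little-endian length, payload, Fletcher-8 checksum
        body = struct.pack("<BBH", msg_class, msg_id, len(payload)) + payload
        ck_a = ck_b = 0
        for byte in body:
            ck_a = (ck_a + byte) & 0xFF
            ck_b = (ck_b + ck_a) & 0xFF
        return b"\xb5\x62" + body + bytes([ck_a, ck_b])

    # CFG-RATE (class 0x06, id 0x08): measRate = 100 ms (10 Hz), navRate = 1, timeRef = UTC
    cfg_rate = ubx_msg(0x06, 0x08, struct.pack("<HHH", 100, 1, 0))

    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:  # placeholder port
        port.write(cfg_rate)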
I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer:
Use the signal generator to generate a pulse at some varying frequency.
A random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Set the scope to persist/collect mode.
Set the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well and is unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the trace. If there are no outliers, keep repeating the experiment with faster traces to get a good image of the slope. That should be good for some speculative conclusions in your paper.
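For the offline analysis, a minimal sketch assuming you have exported the measured latencies to a text file (the file name is a placeholder):

    import numpy as np

    latencies = np.loadtxt("latencies.txt")  # one response time per line, in seconds

    mean = latencies.mean()
    softness = latencies.std()  # the "SD as softness" figure above
    print("mean %.1f us, SD %.1f us, worst %.1f us"
          % (mean * 1e6, softness * 1e6, latencies.max() * 1e6))

    # Slow outliers are what disqualify a system as "hard"
    outliers = latencies[latencies > mean + 5 * softness]
    print("%d slow outliers" % len(outliers))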
If, for your application, say, a delta of 1 µs is okay and you measure 0.5 µs, it's all cool.
Anyway, you can publish the results (possibly in the academic sense, but certainly on the web).
Link from this question to the paper when you've written it.
Hard real-time has more to do with how your software works than with the hardware on its own. When asking whether something is hard real-time, the question must be applied to the complete system (hardware, RTOS and application): hard vs. soft real-time is a system design issue.
Under loading that exceeds its specification, even a hard real-time system will fail (hopefully with a proper failure indication), while a soft real-time system under low loading can deliver hard real-time results. How much processing must happen on time, and how much pre/post-processing can be deferred, is the real key to hard vs. soft real-time.
In some real-time applications some data loss is not a failure; it just has to stay below a certain level. Again, that is a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data starts to get lost; a sketch of such a counter follows below. But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases and you have a different hard real-time limit.
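A sketch of such a counting application, assuming the test PC stamps each packet with a sequence number (the port and packet layout are my choices):

    import socket
    import struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))  # arbitrary port

    expected = None
    lost = 0
    while True:
        data, _ = sock.recvfrom(64)
        (seq,) = struct.unpack("<I", data[:4])  # 32-bit sequence number
        if expected is not None and seq != expected:
            lost += seq - expected  # a gap means the board failed to keep up
        expected = seq + 1
        if seq % 10000 == 0:
            print("seq %d, lost so far %d" % (seq, lost))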
This board running a bare-bones scheduler will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of my cycle; if you run a full RTOS, timestamp acquisition and transition instead. Save your max time and average the values over a second. If your average is around 50% of the cycle budget and your max is within 20% of your average, you are OK; if not, it is time to refactor your application. As your application grows, the cycle time will grow, so you can monitor the effect of every software change on your cycle time. A sketch of this monitor follows below.
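A rough sketch of that monitor, in Python for illustration (on a bare-metal target you would read a free-running hardware timer instead; the 10 ms budget and the work stub are placeholders):

    import time

    def do_cycle_work():
        time.sleep(0.004)  # stand-in for the application's real cycle work

    budget = 0.010  # assumed 10 ms cycle budget
    total = max_cycle = 0.0
    count = 0

    while True:
        start = time.monotonic()
        do_cycle_work()
        elapsed = time.monotonic() - start

        total += elapsed
        max_cycle = max(max_cycle, elapsed)
        count += 1
        if count == 100:  # report roughly once per second at 10 ms cycles
            avg = total / count
            print("avg %.0f%% of budget, max %.1f ms"
                  % (100 * avg / budget, max_cycle * 1e3))
            total = max_cycle = 0.0
            count = 0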
Another way is to use a hardware timer to generate a cyclic interrupt. If you are on time, you reset the timer; if you miss the deadline, the interrupt handler signals a failure. This only warns you once your application is already taking too long, but since it relies on hardware and interrupts, you can't miss it.
These solutions also remove the need to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, catching timing problems as soon as they are introduced rather than at the end.
Hope this helps
I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run a proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the system's specification.
Again, if you are using standard ports, you can easily generate those on a PC, as sketched below. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
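Generating the classic stimuli is a few lines of numpy (a sketch; the sample rate is an assumption, and the output path (sound card, DAC or digital port) is up to you):

    import numpy as np

    fs = 48000                # assumed output sample rate
    t = np.arange(fs) / fs    # one second of samples

    spike = np.zeros(fs)
    spike[fs // 2] = 1.0                                  # single-sample impulse
    step = np.where(t >= 0.5, 1.0, 0.0)                   # step at t = 0.5 s
    sines = [np.sin(2 * np.pi * f * t) for f in (10, 100, 1000)]  # test tones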
Now, that won't say anything about the OS being real-time - it could be running vanilla Linux or even Windows CE and still produce good, stable results in these tests if the hardware is fast enough.
So you need to simulate heavy and varying loads on the processor, memory and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If the latency stays constant, it's hard real-time. If it doesn't increase above an acceptable limit under any load and input signal type, it's soft real-time. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if your hardware is fast enough.