tc (netem) - reported loss is different from set loss - UDP

I'm doing some benchmarking on some links, and I'm using the tc command to simulate packet loss across WANs.
What I have found is that after issuing the command to drop 15% of packets, the reported packet loss is different for different streams within my VoIP analysis.
The setup is simple: two clients attached to a server through an L2 switch. There is normally minimal packet loss, with only around 50 machines on this switch.
I'm looking for factors that could explain this; are there facets of the 'tc' command I am not familiar with?
http://i.stack.imgur.com/OBzxH.png [screenshot of reported packet loss]
[edit]
The 26.1% is the loss I caused; it is the other two I am curious about.
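For context, a netem rule of the sort presumably used to inject the 15% loss looks like the sketch below; the interface name is an assumption, and the commands are simply shelled out to tc from Python:

```python
# Hedged sketch: apply and later remove 15% random packet loss with netem.
# "eth0" is an assumed interface name; run with root privileges.
import subprocess

IFACE = "eth0"

# add 15% random loss on egress
subprocess.check_call(["tc", "qdisc", "add", "dev", IFACE, "root", "netem", "loss", "15%"])

# ... run the VoIP test ...

# remove the rule again
subprocess.check_call(["tc", "qdisc", "del", "dev", IFACE, "root", "netem"])
```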

It turns out that Wireshark was causing my problem. When a VoIP stream changed to, say, hold music, the stream was switched over to the hold music source on the server, and when the user was taken off hold the sequence numbers had jumped by a few hundred, making Wireshark think there was packet loss.

Related

GNU Radio and bladeRF on Raspberry Pi (simple FSK system)

I am having a problem porting a GNU Radio setup from a PC (Windows 10, USB 3) to a Raspberry Pi 2 (USB 2). I think USB bandwidth and CPU should not be a problem (only around 30% utilization while running). Essentially it looks like the RPi is 'pausing' during transmission, while the PC is not. The receiver is running on the PC in both cases. I am including a pic of what I see after the FSK demod when running the transmitter on the PC vs the Pi (circled 'pause' area), as well as a picture of my (admittedly sloppy) schematic. Any help/tips are greatly appreciated.
[images: GNU Radio schematic; received signals]
Edit: It appears it may actually be processing limitations. Switching from 9400 baud to 2400 baud makes the issue go away. If anyone has experience with GNU Radio: am I doing anything overly inefficient, or should I just drop the comm rate?
The first thing I would do would be to lower your sample rates.
You don't need 1.5Ms/s if you are going to keep only the lowest 32k in your low pass filter.
Then you could do the same for your second stage after the quadrature demod if that's not enough (by the way, the sample rate of your second low-pass filter does not seem to match the actual sample rate of the stage, which is still 1.5 Ms/s if I'm not mistaken).
Anyway, GNU Radio uses a lot of processing power, so try not to use a sampling rate way above what you actually need ;)
In your case, you could cut the incoming sample rate down to 64k (say 80k for safety). Processing 18 times fewer samples might do the trick :)
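As a rough illustration of that advice, here is a minimal decimating low-pass stage in the GNU Radio Python API (3.7-style block names; the placeholder signal source and the exact rates are assumptions, not the poster's actual flowgraph):

```python
# Sketch: cut a 1.5 Ms/s stream down to ~83 kS/s before further processing,
# keeping only the lowest ~32 kHz, so downstream blocks do ~18x less work.
from gnuradio import gr, blocks, analog, filter
from gnuradio.filter import firdes

class decimating_frontend(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        samp_rate = 1.5e6        # rate coming from the SDR source
        decim = 18               # 1.5 Ms/s / 18 ~= 83.3 kS/s

        # placeholder for the real bladeRF/osmocom source
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 10e3, 1.0)

        taps = firdes.low_pass(1.0, samp_rate, 32e3, 8e3)  # keep lowest 32 kHz
        lpf = filter.fir_filter_ccf(decim, taps)           # filter *and* decimate
        head = blocks.head(gr.sizeof_gr_complex, 1000000)  # stop after 1M samples
        sink = blocks.null_sink(gr.sizeof_gr_complex)

        self.connect(src, lpf, head, sink)

if __name__ == '__main__':
    decimating_frontend().run()
```

The same idea applies after the quadrature demod: resample down to a small multiple of the symbol rate rather than carrying 1.5 Ms/s through the whole chain.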

LWIP PBUF, extra bytes when sending UDP?

I am using lwIP in an application that needs high data rates. I allocate 4 pbufs once, store their addresses, and with some hardware magic fill them one after another, then tell the program that the buffer is ready and the software sends it as UDP packets. However, after some time, when I sniff the packets I see about 60 extra bytes in each packet; they look like extra UDP headers, but inside the payload.
Any workaround/suggestion?
On my project at work, we had pbuf corruption that was causing a similar kind of issue. We were using multiple MACs of different types from Xilinx, and there was unhappiness in the pbuf department.
What I would recommend is that you turn on full lwIP debug for the IP layer and possibly the UDP layer. Then manually trim the prints to something manageable that still reproduces the problem (lwIP has a minimum print level; you can use this to help filter out warnings versus serious prints). In our case we would get UDP or IP layer checksum errors, and they were a sign of bad stuff. It is also helpful to test in only one direction at a time, to limit the possibilities of something going wrong in each direction. We used the iperf examples from Xilinx and expanded on them; these were helpful when troubleshooting the issue.
BTW, 4 pbufs is nothing... When I have looked at Ethernet traffic, there is a ton of stuff going on, overhead, etc. There are tons of potential problems, from too few ARP table entries and on... Four pbufs is ridiculously low; if you are that strapped for memory, I feel sorry for you trying to use lwIP. That just sounds like a nightmare.
Also, be careful that prints are typically blocking, so they will mess up performance. It's wise to replace the lwIP debug prints with a non-blocking routine that you know will not clog up your real-time performance.

Retrieve data from USRP N210 device

The N210 is connected to the RF frontend, which gets configured using the GNU Radio Companion.
I can see the signal with the FFT plot; I need the received signal (usrp2 output) as digital numbers. The usrp_sense_spectrum.py script outputs the power and noise_floor as digital numbers as well.
I would appreciate any help from your side.
Answer from the USRP/GNU Radio mailing lists:
Dear Abs,
you've asked this question on discuss-gnuradio and already got two
answers. In case you've missed those, and to avoid that people tell
you what you already know:
Sylvain wrote that, due to a large number of factors contributing to
what you see as digital amplitude, you will need to calibrate
yourself, using exactly the system you want to use to measure power:
You mean you want the signal power as a dBm value ?
Just won't happen ... Too many things in the chain, you'd have to
calibrate it for a specific freq / board / gain / temp / phase of the
moon / ...
And I explained that if you have a mathematical representation of how
your estimator works, you might be able to write a custom estimator
block for both of your values of interest:
I assume you already have definite formulas that define the estimator for these two numbers.
Unless you can directly "click together" that estimator in GRC, you will most likely have to implement it.
In many cases, doing this in Python is rather easy (especially if you come from a python or matlab background),
so I'd recommend reading at least the first 3 chapters of
https://gnuradio.org/redmine/projects/gnuradio/wiki/Guided_Tutorials
If these answers didn't help you out, I think it would be wise to
explain what these answers are lacking, instead of just re-posting the
same question.
Best regards, Marcus
I suggest that you write a Python application and stream raw UDP bytes from the USRP block. Simply add a UDP Sink block and connect it to the output of the UHD: USRP Source block. Select an appropriate port and stream to 127.0.0.1 (localhost).
Now, in your Python application, open a listening UDP socket on the same port and receive the data. Each sample from the UHD: USRP Source is a complex pair of single-precision floats. This means 8 bytes per sample. The I float is first, followed by the Q float.
Note that you need to pay special attention to the Payload Size field in the UDP Sink. Since you are streaming to localhost, you can use a very large value here. I suggest you use something like 1024*8. This means that each packet will contain 1024 IQ pairs.
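A minimal receiving-side sketch along these lines (the port number is an assumption; the layout follows the description above, little-endian float32 I then Q):

```python
# Sketch: receive raw IQ samples that a GRC UDP Sink streams to localhost.
import socket
import struct

PORT = 2000             # must match the port set in the UDP Sink block (assumed here)
PAYLOAD = 1024 * 8      # 1024 complex samples, 8 bytes each (float32 I, float32 Q)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", PORT))

while True:
    data, _ = sock.recvfrom(PAYLOAD)
    n = len(data) // 8
    # unpack 2*n little-endian float32 values: I0, Q0, I1, Q1, ...
    floats = struct.unpack("<%df" % (2 * n), data)
    iq = [complex(floats[2 * i], floats[2 * i + 1]) for i in range(n)]
    print("received %d samples, first = %s" % (n, iq[0]))
```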
I suggest you first connect a Signal Source and just pipe a sin() wave over the UDP socket into your Python or C application. This will allow you to verify that you are getting the float bytes correct. Make sure to check for glitches due to overflowing buffers (this will be your biggest problem).
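A test-sender sketch for that sanity check (GNU Radio 3.7-style Python API; host, port, and rates are assumptions and must match the receiver above):

```python
# Sketch: pipe a complex sine wave over UDP instead of the USRP source,
# so the byte layout on the socket can be verified first.
from gnuradio import gr, blocks, analog

class sine_over_udp(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        samp_rate = 32000
        src = analog.sig_source_c(samp_rate, analog.GR_SIN_WAVE, 1000, 0.5)
        throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        # payload of 1024*8 bytes = 1024 complex samples per datagram, as above
        sink = blocks.udp_sink(gr.sizeof_gr_complex, "127.0.0.1", 2000, 1024 * 8, True)
        self.connect(src, throttle, sink)

if __name__ == '__main__':
    sine_over_udp().run()
```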
Please comment or update your post if you have further questions.

Enabling 10 Hz sampling rate in Ublox modules

I'm using ublox NEO-M8N-0-01 GNSS module.
This module supports up to 5Hz GPS+GLONASS and 10 Hz GPS only.
However, when I try to change the sampling rate (via UBX-CFG-RATE in the messages view) I can only increase it to 5 Hz (Measurement period = 200ms). Any value below 200ms is impossible (changes the box to pink).
It happens even if I only produce NMEA message GxGGA.
The way I made it GPS-only was via UBX-CFG-GNSS.
Has anyone encountered this issue?
Thanks in advance
Roi Yozevitch
You don't say how you are setting the rate; however, going by your description, I'm assuming you are using the u-blox u-center software.
There is a simple explanation for this issue and a simple solution: their software has a bug in it (or wasn't updated to match the final specification of the part).
The solution is to not use u-center; it's the PC software that's complaining, not the receiver. The receiver itself doesn't care what the spec sheet says, it will try its best to run at whatever rate you request.
Sending commands directly, I've managed to get a fairly reliable 10 Hz GPS+GLONASS. There is the occasional missing point, but most of the time it keeps up.
Running GPS only, you can get faster than 10 Hz. If you play with the settings and restrict it to 8 channels, 18-19 Hz is fairly reliable. Unfortunately 20 Hz is pushing it too far; you end up getting positions at 10 Hz.
Obviously when running at these update rates make sure that your baud rate is high enough to cope with the requested messages and rate.
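For concreteness, here is a hedged sketch of what "sending commands directly" can look like: framing a UBX-CFG-RATE request for a 100 ms (10 Hz) measurement period and writing it to the receiver with pyserial. The serial port name and baud rate are assumptions.

```python
# Sketch: build and send UBX-CFG-RATE (class 0x06, id 0x08) asking for a
# 100 ms measurement period, i.e. 10 Hz, bypassing u-center entirely.
import serial
import struct

def ubx_packet(msg_class, msg_id, payload):
    """Frame a UBX message: sync chars, class, id, length, payload, Fletcher-8 checksum."""
    body = struct.pack("<BBH", msg_class, msg_id, len(payload)) + payload
    ck_a = ck_b = 0
    for b in body:
        ck_a = (ck_a + b) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    return b"\xb5\x62" + body + bytes([ck_a, ck_b])

# payload: measRate=100 ms, navRate=1 cycle, timeRef=1 (GPS time)
cfg_rate = ubx_packet(0x06, 0x08, struct.pack("<HHH", 100, 1, 1))

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:  # assumed port and baud
    port.write(cfg_rate)
```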

WasapiLoopbackCapture to WaveOut

I'm using WasapiLoopbackCapture to capture sound coming from my speakers, then using OnDataAvailable to send it to another device, and I'm attempting to play the data sent using the WaveOut class and a BufferedWaveProvider, just adding a sample every time data arrives from my client via OnDataAvailable. I'm having problems with the sound. The closest I've managed to get it working is:
Not syncing the wave format of the client and the server, just sending data and adding it to the buffer. The problem is this stutters very badly, even though I checked the buffered size and it holds 51 seconds. I even have to increase the buffer size, which eventually overflows anyway.
I tried syncing the Wave format and I just get clicks but have no problem with buffer size. I also tried making sure that at least a second was stored in the buffer but that had zero effect.
If anyone could point me in the right direction that would be great.
Uncompressed audio takes up a lot of space on a network. On my machine the WasapiLoopbackCapture object produces 32-bit (IeeeFloat) stereo samples at 44100 samples per second, for around 2.7Mbit/sec total raw bandwidth. Once you factor in TCP packet overheads and so on, that's quite a lot of data you're transferring.
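For reference, the raw-rate arithmetic behind that figure (using the format values quoted above) is simply:

```python
# Raw bandwidth of a 32-bit IEEE float, stereo, 44100 Hz capture
sample_rate = 44100
channels = 2
bits_per_sample = 32

raw_bps = sample_rate * channels * bits_per_sample
print(raw_bps / 1e6, "Mbit/s")                # ~2.82 Mbit/s (decimal)
print(raw_bps / (1024.0 * 1024.0), "Mib/s")   # ~2.69, i.e. the 'around 2.7' figure
```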
The first thing I would suggest, though, is that you plug in some profiling code at each step in the process to get an idea of where your bottlenecks are happening. How fast is data arriving from the capture device? How big are your packets? How long does it take to service each call to your OnDataAvailable event handler? How much data are you sending per second across the network? How fast is the data arriving at the client? Measure each of these and you get a much better idea of where the bottlenecks are.
Try building a simulated server that reads data from a wave file in various WaveFormats (channels, bits per sample and sample rate) and simulates sending that data across the network to the client. You might find that the problem goes away at lower bandwidth. And if bandwidth is the issue, compression might be the solution.
If you're using a single-threaded model, and servicing each OnDataAvailable event takes longer than the recording interval (i.e. the time between expected calls to OnDataAvailable), then there's going to be a data-loss issue. Multiple threads can help with this: one to get the data from the audio system, another to process and send the data. But you can end up in the same position, losing data because you're not dealing with it quickly enough. When that happens it's handy to know about it, because it indicates a problem in the program. Find out when and where it happens; overflows in the input, processing, or output buffers all have different potential causes and need different attention.