Consistent increases and decreases in iPerf TCP throughput - testing

I am using iPerf to test the Wi-Fi performance of my router. I have set up two computers, an HP ZBook (client) and a MacBook Pro (server), to exercise this connection. The client is connected directly to the router via LAN and the server is connected to the router via Wi-Fi.
My iPerf script sets the TCP window size and sends data from the client to the server for a fixed duration. The throughput reported on my server climbs to the expected level for a few seconds, then drops to a very low level for a few seconds, at fairly regular intervals. This happens for every Wi-Fi configuration on my router (various bands, 802.11 protocols and channel bandwidths), in both noisy and clean environments. Can anyone suggest a possible reason for this? Is this how the Wi-Fi protocol works, or is this a problem with iPerf?
The iperf version is iperf3 v3.0.11 (Windows 64-bit) on the client and iperf3 v3.0.1 (Mac OS X) on the server.
Client OS: Windows 10
Server OS: Mac OS X El Capitan v 10.11.5
I have run a TCP test as well as two UDP tests (with bandwidth set to 1.05 Mbps and 150 Mbps) and attached the output screenshots. Wi-Fi config: 802.11ac, 40 MHz, 5 GHz.
jPerf depiction of my iPerf script for a 180-second test case on 5 GHz, 80 MHz, 802.11ac
Testing screenshots: https://imageshack.com/a/SktM/1

Please include the iperf output for both client and server, and use interval reports (-i). Also include the output of iperf -v and the operating system information. Then make a run with UDP and capture the reports on both client and server. If you're using Linux on the client with iperf 2.0.9, the -e (enhanced reports) option can provide even more information.
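For example, something along these lines (the server address is a placeholder):
$ iperf3 -v                                        (version and build information)
$ iperf3 -s -i 1                                   (server side, one-second interval reports)
$ iperf3 -c 192.168.1.10 -i 1 -t 60                (client side, TCP)
$ iperf3 -c 192.168.1.10 -i 1 -t 60 -u -b 150M     (client side, UDP at 150 Mbit/s)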
Wi-Fi throughput problems are like somebody showing up in an emergency room with a fever: there are many different factors that have to be looked at.
Bob

Related

replaying multicast UDP packet capture via tcpreplay not being seen by client

I'm having no joy in getting a replayed UDP multicast packet to be "seen" by a client program on a different machine.
Details:
I have two machines on my local (wired) network connected through one unmanaged switch. One machine (running tcpreplay) is running Ubuntu 20.04, the other machine is running Windows 10.
On the Windows machine I have a Python program I wrote which listens for UDP multicast packets on port 5110 (this is dictated by the source of the UDP stream which is a commercial program). When I run the commercial program, my Python code correctly consumes the incoming packets and all seems to be working fine. I have a lot of work yet to do on the contents of those packets after they are received, but that isn't important for this issue.
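For reference, the listener is essentially a minimal multicast receiver along these lines (the group address here is a placeholder; the real one is dictated by the commercial program):

import socket
import struct

MCAST_GRP = "239.1.1.1"  # placeholder; the real group address comes from the commercial program
MCAST_PORT = 5110

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))  # listen on all interfaces

# Join the multicast group; 0.0.0.0 lets the OS pick the interface,
# which matters on a machine with more than one NIC.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GRP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, address = sock.recvfrom(1024)
    print(f"received {len(data)} bytes from {address}")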
So, moving forward, I decided it would be great to be able to work on the Python code without the commercial program always running in the background hogging resources. I figured if I could capture a snippet of the UDP multicast traffic from that program, I should be able to replay it at leisure without having to run that resource hog.
So, on the Windows machine, I captured a UDP multicast packet stream using Wireshark and saved to a pcap file which I then copied to the Ubuntu machine.
I then attempted to replay that pcap file (on the Ubuntu machine) as follows:
$ sudo tcpreplay -i enp5s0 single.pcap
To my disappointment, my Python program (on the Windows machine) did not receive the incoming packets.
Back on the Windows machine, I fired up Wireshark again and captured the "replayed" packet coming from the Ubuntu machine - so it appears the packet did make it out of my Ubuntu machine and into my Windows one. The contents of both the source packet (sent by tcpreplay) and the received packet (grabbed by Wireshark) appear identical - including the source and destination MAC addresses and the checksums. A diff on the byte contents of each packet yields no differences.
However, my Python program still stoically sits there waiting at:
data, address = sock.recvfrom(1024)
Here on Stack Overflow, I did find this thread, which seems to describe an identical problem; however, none of the solutions presented there helped (including changing the rp_filter parameter). I also saw mention of a Windows program, "Colasoft PacketPlayer", which I tried, running on the same machine as my Python client. This had the same apparent result (i.e. no joy). I had not initially tried that route because I was concerned about generating the packet on the same machine that is listening for it. (As an aside, I did also capture the replayed packet from Colasoft PacketPlayer, and it too appears identical to the source packet.)
At this point I'm out of ideas and am reaching out to the community for possible next steps.

Using iperf3 for measuring UDP throughput on STM32 board

I have an STM3220G-EVAL board with an STM32F207 MCU. I've loaded the lwIP-based UDP echo server sample application (from the CubeMX archive). This app uses port 7. I've tried to use iperf3 in client mode (on Windows), but it failed to work with the board (though Echotool successfully worked as a client). Can iperf3 work with a custom UDP echo server?
Short answer: Not really. The iperf3 client and server need to communicate with each other over a control channel that is set up before the test starts. This allows them to exchange test parameters, ending conditions, and so on. If you wanted to make an iperf3 server on your embedded system, it would need to speak the (not very well documented) control protocol used by the iperf3 client.
iperf version 2 doesn't use a control channel; it might work for your application if all you need to do is send UDP packets to your board.
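For example, something like this might be all you need (the board address is a placeholder; port 7 matches the echo server):
$ iperf -c 192.168.0.20 -u -p 7 -b 1M -t 10
Note that the iperf 2 client still expects a results report back from a real iperf server at the end of a UDP test, so the final summary won't be meaningful against a plain echo server; the datagrams themselves will be sent regardless.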
Bruce.

UDP on iperf Windows 7 shows only 2.5 Mbps

I tried to run an iperf test between two Windows 7 laptops, with one hosting an ad-hoc network. Specifically, I wanted to see whether there was a visible difference in throughput between the built-in PCI Wi-Fi card and a USB Wi-Fi adapter.
Unfortunately, under both conditions I saw a total speed of only 2.5 Mbps.
Is Windows throttling my UDP bandwidth in some way, or is iperf 2 not compatible with Windows 7?
I tested from a Windows 7 to a Windows 10 PC as well and saw the same issue.
A Wireshark trace shows almost no retries, and most of the packets appear to be using 802.11n rates of 58.5 Mbps and above.
However, it appeared that data was being sent in bursts.
This image shows the graph of packets sent
I couldn't find any information on this. I will try using iperf 3 in the meantime, and also test the performance via a standard AP and update this question.
This is a screenshot of the cmd prompt
Thanks in advance!
Update: iperf3 showed much higher speeds, 60 Mbps or so. I'll probably need to read up on the differences between the two tools; I don't really understand why this difference exists.
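One difference I've seen mentioned that may be relevant: iperf 2 sends UDP at a fixed target bitrate, roughly 1 Mbit/s unless raised explicitly with -b, so a run along these lines may be needed to offer real load (the address is a placeholder):
$ iperf -c 192.168.137.1 -u -b 100M -i 1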

Simulate poor bandwidth in a testing environment (Mac OS X)?

We have a customized Flash/HTML5 video player we use for users on our site. I'm currently fleshing out the experience for users who have 'suboptimal' bandwidth; basically, we'd like the client-side code to be able to detect a poor user experience due to excessive buffering. I would like to test this "poor bandwidth" handling code in my local development environment.
Does anyone know of good techniques for simulating "poor bandwidth" in a local environment for testing purposes?
More specifically I have my local browser connecting to a virtual machine with instances of uWSGI, nginx, and python/django and I would like to be able to inject arbitrary amounts of delay into the delivery of content from these systems. (I'm primarily concerned with doing this with nginx, which does the video content delivery/streaming).
EDIT: It may be relevant that the dev environment is Mac OS X.
Just use nginx's configuration.
While OS X Lion's Network Link Conditioner works as expected, it's still annoying to use when I'm really just trying to test a subset of a web app's behavior (i.e., the slow-video-buffering handling).
As such, I've found it much more convenient to set rate limiting in my nginx.conf file, e.g.:
location ~ /files/(.*\.(mp4|m4v|mov))$ {
    ...
    limit_rate 50k;  # limit the download rate per connection to 50 kB/s (nginx rates are in bytes per second)
    ...
}
EDIT: See the nginx HttpCoreModule docs.
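To sanity-check the cap, fetch a file and watch the average rate curl reports (the path here is illustrative; use whatever your location block serves):
$ curl -o /dev/null http://localhost/files/sample.mp4
The progress meter's average download speed should settle near the configured limit_rate.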
FreeBSD is an ancestor of Mac OS, so you can use the powerful built-in firewall called ipfw.
It can be used in many different cases, for example to simulate low bandwidth. Use your own loopback address (127.0.0.1) or a remote server (8.8.8.8 in this example).
We build a video-interviewing web application, so I'd like to share our experience of simulating a bad connection; see the example below:
$ sudo su
$ ipfw show                                             # list the current rules
$ ipfw pipe 1 config delay 600ms bw 256Kbit/s           # create a dummynet pipe with added latency and a bandwidth cap
$ ipfw add pipe 1 ip from any to 8.8.8.8 dst-port 80    # send matching traffic through the pipe
$ ipfw flush                                            # remove all rules when done
An ipfw pipe lets you simulate a slow and unstable connection using delay, bw, and even a loss probability (plr) to simulate packet losses.
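For example, adding a loss rate to the pipe above (plr takes a probability between 0 and 1):
$ ipfw pipe 1 config delay 600ms bw 256Kbit/s plr 0.05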
I just found the Mac OS X Network Link Conditioner but I'm not yet sure it works on loopback, which it would need to for my purposes.
EDIT: This seems to work on loopback, so it solves my problem! This is probably the way to go if you're on OS X 10.7.
I'm using the program NetLimiter to simulate "poor bandwidth". It's not free, but it has a trial version that works well. It's only for Windows, though :(

Slow DNS lookup on iOS simulator

I'm using NSURLConnection to access a web service (on a .local host). When I access the host by hostname, I'm seeing a delay of 5+ seconds, but when I access it by IP, the connection completes almost instantly.
Running the app on an actual iPhone, instead of the simulator, does not show any delays at all (testing was done on the same network connection). So this seems to be a problem specific to the iOS Simulator or OS X.
I'm able to simulate the problem using the following terminal commands:
$ nslookup webservice.myhost.local (which is fast)
$ dscacheutil -q host -a name webservice.myhost.local (shows the delay)
Analyzing the network traffic of the dscacheutil command in Wireshark, I see several standard query AAAA requests, which are marked red and get an empty response. Once these are done, I see a standard query A request whose response contains the correct IP address. The AAAA requests take up about 5 seconds, which would explain the delay.
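For reference, the two lookups can also be reproduced individually with dig, though dig queries the configured DNS server directly (bypassing the system resolver and mDNS), so behavior for .local names may differ:
$ dig AAAA webservice.myhost.local
$ dig A webservice.myhost.local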
Does the web service perhaps have IPv6 enabled, while the simulator can't use IPv6?
I see this on OS X, for example, when running a local IPv4-only DNS service: if I run dig @localhost, it hangs for a few seconds until the initial IPv6 connection times out, and then it tries IPv4.
This answer solved the problem for me. (Create an IPv6 ::1 loopback entry to go along with each 127.0.0.1.)
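In practice that means an /etc/hosts along these lines (using the hostname from the question above):
127.0.0.1  webservice.myhost.local
::1        webservice.myhost.local
With the ::1 entry present, the AAAA query is answered immediately instead of timing out.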
For anyone else who stumbles across this issue: I had to disable IPv6 on my machine to avoid the hang in the simulator while the IPv6 lookup fails. I did so following these instructions: https://discussions.apple.com/message/18097613#18097613
Which were to:
"To disable IPv6 in OS X Lion, you will need to use the Terminal.
Applications > Utilities > Terminal
To determine all of your Mac's network interfaces, issue the following command: networksetup -listallnetworkservices
To disable IPv6 for wireless, issue the following command: networksetup -setv6off Wi-Fi
To disable IPv6 for Ethernet, issue the following command: networksetup -setv6off Ethernet
To re-enable IPv6, use -setv6automatic instead"