I am trying to understand the difference between iperf (version 2.0.8b) and iperf3 (version 3.15), which report different network bandwidths between the same two VMs when run with the same parameters.
When I run "iperf -s" on the server and "iperf -c <server-ip> -t 30 -P 8" on the client, I get a bandwidth of about 45 Gb/s.
But when I run "iperf3 -s" on the server and "iperf3 -c <server-ip> -t 30 -P 8" on the client, I get only 25 Gb/s, a difference of 20 Gb/s.
Any idea what could be the cause of this big difference? What are the main differences between iperf and iperf3?
Thanks a lot
NKD
There are a couple of possible reasons for the difference. One is that iperf2 has a multi-threaded design, which may well perform better on parallel tests (-P 8) than iperf3, which runs as a single thread. Another is that iperf3's TCP window size might be set too small, and you might need to make it larger with the -w option.
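For example, a run along these lines keeps the eight parallel streams and requests a larger socket buffer with -w (the server address and the 512K value are placeholders, not tuned recommendations):
iperf3 -s                                    # on the server
iperf3 -c <server-ip> -t 30 -P 8 -w 512K     # on the client: 30 s, 8 streams, larger window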
More information on the comparative use of iperf2 and iperf3 can be found here:
http://fasterdata.es.net/performance-testing/network-troubleshooting-tools/throughput-tool-comparision/
I have a program that connects to a remote machine via SSH. I want to upload and run a binary on that machine. In order to do that I need to know what OS it is (I will support Linux, Mac and probably Windows), and what CPU architecture (I will probably only support x86_64, but it would be good to be able to detect others and print a sensible error, if this is possible).
It doesn't look like the SSH protocol itself provides any of this information. Is there a simple, ROBUST way to do this? With as few hacks as possible (no hairy Bash scripts!).
The best thing I can think of is to try running uname -s -m, and whatever the Windows equivalent is and parse the results.
The SSH protocol doesn't provide any information about the remote system except its protocol version. However, vendors often include an identifying string in the protocol banner. For example, if you run nc gitlab.com 22 </dev/null | head -n 1, you can tell that GitLab runs Ubuntu.
However, not all remote systems provide this information, so for a reliable test you'll probably need to log into the system. As mentioned, you can run uname on Unix systems and cmd /c ver on Windows systems to find out what OS you're on. Note that the latter will not work if you log into a MinGW-based bash on Windows, since the /c will be rewritten as C:\; you'll need to double the slash (cmd //c ver) or use uname there.
I'm not aware of a single command that will work on all systems, so you'll probably have to make multiple shell requests. You are probably better off doing this with an SSH library, since the OpenSSH binary will print any banner from the remote side whether you want it or not, and that banner can get mixed up with the command output you're trying to parse.
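If you do end up running commands over SSH, a minimal sketch (assuming an OpenSSH client and a placeholder host name "remotehost") could look like this, trying the POSIX route first and falling back to the Windows version banner:
out=$(ssh remotehost 'uname -s -m' 2>/dev/null)    # e.g. "Linux x86_64" or "Darwin arm64"
if [ -n "$out" ]; then
  echo "Unix-like: $out"
else
  echo "Windows: $(ssh remotehost 'cmd /c ver' 2>/dev/null)"    # only reached if uname produced nothing
fi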
I am using iPerf to test the Wi-Fi performance of my router. I have set up two computers, an HP ZBook (client) and a MacBook Pro (server), to demonstrate this connection. The client is connected directly to the router via LAN and the server is connected to the router via Wi-Fi.
My iPerf script sets the TCP window size and sends data from the client to the server for fixed time limits. The throughput reported on my server rises to the expected rate for a few seconds and then drops to a very low rate for a few seconds, at fairly regular intervals, for all Wi-Fi configurations on my router (various bands, 802.11 protocols and channel bandwidths) as well as in noisy and clean environments. Can anyone suggest a possible reason for this? Is this how the Wi-Fi protocol works? Or is this a problem with iPerf?
The iperf versions are iperf3 v3.0.11 on the client (Windows, 64-bit) and iperf3 v3.0.1 on the server (Mac OS X).
Client OS: Windows 10
Server OS: Mac OS X El Capitan v 10.11.5
I have run a TCP test as well as two UDP tests (with the bandwidth set to 1.05 Mbps and 150 Mbps) and attached the output screenshots. Wi-Fi config: 802.11ac, 40 MHz, 5 GHz
jPerf depiction of my iPerf script for a 180 second test case on 5GHz, 80MHz, 802.11ac
Testing screenshots: https://imageshack.com/a/SktM/1
Please include the iperf output for both client and server and use interval reports (-i). Also include the output of iperf -v and the operating-system information, and make a run with UDP, capturing the reports on both client and server. If you're using Linux on the client with iperf 2.0.9, the -e (enhanced reports) option can provide even more information.
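With the asker's iperf3 binaries, that could look something like this (the server address is a placeholder and the UDP rate is only an illustrative value):
iperf3 -s -i 1                               # server, 1-second interval reports
iperf3 -c <server> -i 1 -t 60                # TCP run from the client
iperf3 -c <server> -u -b 100M -i 1 -t 60     # UDP run at a fixed offered rate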
Wi-Fi throughput problems are like somebody having a fever in an Emergency Room. There are many different factors that have to be looked at.
Bob
We have a WildFly 8.2 instance running on a virtualized Ubuntu 14.04 behind a firewall (against DoS attacks, etc.) in a DMZ. (About 1,200-3,000 requests per hour.)
With Safari, the download of some resource files often fails (about every second attempt; see screenshot, all files are stored locally), while there is rarely a problem with other browsers (Chrome, Firefox).
Is there any plausible cause why there is a different behavior with Safari than with other browsers?
Has anybody ever had similar problems, maybe related to some firewall setting?
Is there any other hint as to where we could start looking for the cause of the problem (implementation, router, lack of resources, ...)?
I know the question is a little imprecise; that's probably why it was voted down. But I'll post the solution anyway.
On the server side we ran:
tcpdump -i eth0 -n -A dst port 80 | grep 'specificUrlPath'
On another (client) machine we issued:
curl -X POST http://hostname/specificUrlPath
This showed that the request did not always reach the server's network interface, so we knew there had to be a problem with the network in between.
The cause of the problem was that NAT was switched on on the router for the server machine, and the NAT implementation was evidently not able to handle that many connections. As soon as NAT was switched off, everything worked as it should.
I am not sure why requests from some browsers were more likely to be served than others, but I guess this is also down to the specific router software.
We have a customized Flash/HTML5 video player we use for users on our site. I'm currently fleshing out the experience for users who have 'suboptimal' bandwidth--basically we'd like the client side code to be able to detect poor user experience due to excessive buffering. I would like to test this "poor bandwidth" handling code in my local development environment.
Does anyone know of good techniques for simulating "poor bandwidth" in a local environment for testing purposes?
More specifically I have my local browser connecting to a virtual machine with instances of uWSGI, nginx, and python/django and I would like to be able to inject arbitrary amounts of delay into the delivery of content from these systems. (I'm primarily concerned with doing this with nginx, which does the video content delivery/streaming).
EDIT: It may be relevant that the dev environment is Mac OS X.
Just use nginx's configuration.
While OS X Lion's Network Link Conditioner works as expected, it's still annoying to use when I'm really just trying to test a subset of a web app's behavior--i.e., the slow-video-buffering handling.
As such, I've found it much more convenient to set rate limiting in my nginx.conf file, e.g.:
location ~ /files/(.*\.(mp4|m4v|mov))$ {
...
limit_rate 50k; # <-- limit the download rate to 50 kilobytes/s per connection (nginx's unit here is bytes, not bits)
...
}
EDIT: See the nginx HttpCoreModule docs.
FreeBSD is an ancestor of Mac OS X, so you can use its built-in, powerful firewall, ipfw.
It can be used in many different cases, for example to simulate low bandwidth. You can target your own loopback address (127.0.0.1) or a remote server (8.8.8.8 in the example below).
We build a video-interviewing web application, so I'd like to share our experience of simulating a bad connection; see the example below:
$ sudo ipfw show                                               # list current rules
$ sudo ipfw pipe 1 config delay 600ms bw 256kbit/s             # create a dummynet pipe: 600 ms delay, 256 kbit/s
$ sudo ipfw add pipe 1 tcp from any to 8.8.8.8 dst-port 80     # send matching traffic through the pipe
$ sudo ipfw flush                                              # remove the rules when you're done
An ipfw pipe allows you to simulate a slow and unstable connection using delay, bw and even plr (packet loss rate) to simulate packet loss.
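For example, a hypothetical 10% packet loss could be added to the same pipe with something like:
$ sudo ipfw pipe 1 config delay 600ms bw 256kbit/s plr 0.1     # plr takes a value between 0.0 and 1.0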
I just found the Mac OS X Network Link Conditioner but I'm not yet sure it works on loopback, which it would need to for my purposes.
EDIT: This seems to work on loopback, so it seems to solve my problem! This is probably the way to go if you're on OS X 10.7
I'm using the program NetLimiter to simulate "poor bandwidth". It's not free, but it has a trial version that works well. It's only for Windows, though :(
I run this command on ubuntu:
./ab -n 2000 -c 10 http://localhost:7000/index.html
and each time I get a different number for "Time per request" (a huge difference, e.g. 0.66 ms in one run and 0.17 ms in the next).
Why is the result unstable and how can I measure the actual performance of the Apache server?
If just the first request is slower than the following ones and later results tend to be faster, then it's very likely some kind of cache that speeds up the responses. In the simplest case this is just the OS's disk cache.
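One way to reduce the noise (just a sketch; the request counts are arbitrary) is to do a short warm-up run first and then a longer measured run, since averaging over more requests gives more stable numbers:
./ab -n 100 -c 10 http://localhost:7000/index.html > /dev/null    # warm-up: populate caches, discard output
./ab -n 20000 -c 10 http://localhost:7000/index.html              # measured run: more requests, steadier averages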
If you're benchmarking on a virtual machine, you probably will not get very credible results:
http://communities.vmware.com/docs/DOC-5581
Benchmarking, Profiling on Virtual Machines
Here are some general best practices for benchmarking web servers:
http://www.cyberciti.biz/tips/howto-performance-benchmarks-a-web-server.html