How to generate traffic faster on iperf?

I'm using a TCP implementation that creates multiple subflows, and I'm trying to test it with iperf. The problem is that the server doesn't fill all the subflows, so I can't test it properly. My question is:
How can I get iperf to generate (more) traffic faster?

TCP flow behaviour depends on kernel modules related to TCP, and also on the TCP congestion control algorithm used by the kernel.
There are some TCP kernel parameters.
After modifying them, it worked for me (I am using Ubuntu kernel 4.10.3):
echo 0 > /sys/module/tcp_cubic/parameters/hystart
echo 0 > /sys/module/tcp_cubic/parameters/hystart_detect
Try that first; it worked well for me. There are also some other parameters, which I am listing below; check the values of these parameters against the kernel version you are using:
echo 150 > /proc/sys/net/ipv4/tcp_pacing_ca_ratio
echo 900 > /proc/sys/net/ipv4/tcp_pacing_ss_ratio
I tested my throughput with the above values, and they improved my TCP performance in a multi-client environment.
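For reference, iperf itself can also be pushed to generate more traffic at once by running several parallel streams with a larger socket buffer. This is only a sketch (it assumes iperf3 on the client and a placeholder <server-ip>); whether parallel iperf streams actually exercise your implementation's subflows depends on how it maps connections to subflows:
# After applying the kernel tuning above, generate more traffic from the client:
#   -P 8  runs 8 parallel streams
#   -w 4M requests a larger socket buffer
#   -t 60 runs the test for 60 seconds
iperf3 -c <server-ip> -P 8 -w 4M -t 60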

Related

What is the simplest way to emulate a bidirectional UDP connection between two ports on localhost?

I'm adapting code that used a direct connection between udp://localhost:9080 and udp://localhost:5554 to insert ports 19080 and 15554. On one side, 9080 now talks and listens to 19080 instead of directly to 5554. Similarly, 5554 now talks and listens to 15554. What's missing is a bidirectional connection between 19080 and 15554. All the socat examples I've seen seem to ignore this simplest of cases in favor of specialized ones of limited usefulness.
I previously seemed to have success with:
sudo socat UDP4:localhost:19080 UDP4:localhost:15554 &
but I found that it may have been due to a program bug that bypassed the connection. It no longer works.
I've also been given tentative suggestions to use a pair of more cryptic commands that likewise don't work:
sudo socat UDP4-RECVFROM:19080,fork UDP4-SENDTO:localhost:15554 &
sudo socat UDP4-RECVFROM:15554,fork UDP4-SENDTO:localhost:19080 &
and additionally seem to overcomplicate the manpage statement that "Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them."
I can see from Wireshark that both sides are correctly using their respective sides of the connection to send UDP packets, but neither side is receiving what the other side has sent, due to the opacity of socat used in either of these ways.
Has anyone implemented this simplest of cases simply, reproducibly, and unambiguously? It was suggested to me as a way around writing my own emulator to pass packets back and forth between the ports, but the time spent getting socat to cooperate could likewise be put to better use.
You use fixed ports, and you do not specify whether one direction initiates the transfers.
Therefore the datagram addresses are preferable. Something like the following command should do the trick:
socat \
UDP-DATAGRAM:localhost:9080,bind=localhost:19080,sourceport=9080 \
UDP-DATAGRAM:localhost:5554,bind=localhost:15554,sourceport=5554
Only the 5-digit port numbers belong in the socat commands. The connections from or to 9988, 9080, and 5554 are direct existing connections. I only need socat for the emulated connections that would exist if an actual appliance existed.
I haven't tested this, but it seems possible that the two 'more cryptic' commands cause an undesirable loop. Modifying the destination ports as shown below may help achieve your objective. This may not be viable for your application, as you may need to adjust your receive sockets accordingly.
sudo socat UDP4-RECVFROM:19080,fork UDP4-SENDTO:localhost:5554 &
sudo socat UDP4-RECVFROM:15554,fork UDP4-SENDTO:localhost:9080 &
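Whichever relay you run, you can sanity-check it from two extra shells before wiring up the real endpoints. This is an untested sketch using socat's UDP-RECV and UDP-DATAGRAM addresses; adjust the hosts and ports to match the relay variant you chose:
# Terminal 1: stand in for the service on 5554 and print whatever arrives.
socat -u UDP-RECV:5554 STDOUT
# Terminal 2: send one test datagram into the relay from source port 9080.
echo "test" | socat -u STDIN UDP-DATAGRAM:localhost:19080,bind=localhost:9080
If terminal 1 prints what terminal 2 sent, the relay is forwarding in that direction; repeat with the ports swapped to check the other direction.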

ping command with UDP client-server

I am confused about the usage of the ping command on Mininet. When I implement a UDP server and client and execute them with Mininet, do I have to use the ping command to measure packet loss, delay, etc.? Or is ping not used for measuring the statistics of a UDP server and client?
Are you asking how to implement your own ping?
Ping is simply a tool that uses ICMP datagrams to measure point-to-point network latencies and other such things.
https://www.ietf.org/rfc/rfc792.txt
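If what you actually want is packet loss, delay, and jitter for the UDP traffic itself rather than for ICMP, one common approach (a sketch, not something prescribed above; it assumes iperf is installed on the Mininet hosts and uses a placeholder <server-ip>) is iperf in UDP mode:
# On one Mininet host, start a UDP server:
iperf -s -u
# On another host, send UDP for 10 seconds at 1 Mbit/s;
# the server-side report includes packet loss and jitter:
iperf -c <server-ip> -u -b 1M -t 10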

Using ServiceStack Redis with Twemproxy

I've been using ServiceStack PooledRedisClientManager with success. I'm now adding Twemproxy into the mix and have 4 Redis instances fronted with Twemproxy running on a single Ubuntu server.
This has caused problems with light load tests (100 users) connecting to Redis through ServiceStack. I've tried the original PooledRedisClientManager and BasicRedisClientManager; both give the error No connection could be made because the target machine actively refused it.
Is there something I need to do to get these two to play nice together? This is the Twemproxy config:
alpha:
  listen: 0.0.0.0:12112
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  timeout: 400
  server_retry_timeout: 30000
  server_failure_limit: 3
  server_connections: 1000
  servers:
   - 0.0.0.0:6379:1
   - 0.0.0.0:6380:1
   - 0.0.0.0:6381:1
   - 0.0.0.0:6382:1
I can connect to each one of the Redis server instances individually; it just fails going through Twemproxy.
I haven't used twemproxy before, but I would say your list of servers is wrong. I don't think you are using 0.0.0.0 correctly.
Your servers would need to be (for your local testing):
servers:
 - 127.0.0.1:6379:1
 - 127.0.0.1:6380:1
 - 127.0.0.1:6381:1
 - 127.0.0.1:6382:1
You use 0.0.0.0 in the listen directive to tell twemproxy to listen on all available network interfaces on the server. This means twemproxy will try to listen on:
the loopback address 127.0.0.1 (localhost),
your private IP (e.g. 192.168.0.1), and
your public IP (e.g. 134.xxx.50.34).
When you are specifying servers, the server config needs to know the actual address it should connect to; 0.0.0.0 doesn't make sense there, it needs a real value. So when you come to use different Redis machines, you will want to use the private IP of each machine, like this:
servers:
 - 192.168.0.10:6379:1
 - 192.168.0.13:6379:1
 - 192.168.0.14:6379:1
 - 192.168.0.27:6379:1
Obviously your IP addresses will be different. You can use ifconfig to determine the IP of each machine, though it may be worth using hostnames if your IPs are not statically assigned.
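Once the server addresses point at real interfaces, it is worth verifying the proxy independently of ServiceStack. A quick sketch, assuming redis-cli is installed and twemproxy is listening on port 12112 as in the config above:
# Issue a SET/GET pair through twemproxy rather than directly against a Redis instance:
redis-cli -h 127.0.0.1 -p 12112 set twemproxy:test hello
redis-cli -h 127.0.0.1 -p 12112 get twemproxy:test
If these commands fail with a connection error while direct connections to 6379-6382 work, the problem is in the twemproxy configuration rather than in the ServiceStack client.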
Update:
As you have said you are still having issues, I would make these recommendations:
Remove auto_eject_hosts: true. If you were getting some connectivity and then after a while ended up with none, it's because something caused twemproxy to decide there was something wrong with the Redis hosts and to eject them.
So eventually when your ServiceStack client connects to twemproxy, there will be no hosts to pass the request onto and you get the error No connection could be made because the target machine actively refused it.
Do you actually have enough RAM to stress test your local machine this way? You are running at least 4 instances of Redis, which require real memory to store the values; twemproxy consumes a large amount of memory to buffer the requests it passes to Redis, and this memory pool is never released (see here for more information). Your ServiceStack app will consume memory, more so in Debug mode. You'll probably have Visual Studio or another IDE open, the stress test application, and your operating system. On top of all that there will likely be background processes and other applications you haven't closed.
A good practice is to try to run tests on isolated hardware as far as possible. If it is not possible, then the system must be monitored to check the benchmark is not impacted by some external activity.
You should read the Redis article here about benchmarking.
As you are using this in a localhost situation, use the BasicRedisClientManager, not the PooledRedisClientManager.
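On the benchmarking point above, redis-benchmark can also be pointed at the twemproxy port directly rather than at a single Redis instance. A sketch, assuming redis-benchmark is installed and twemproxy is listening on 12112 as in the config:
# 10,000 SET/GET operations from 50 concurrent clients, routed through twemproxy:
redis-benchmark -h 127.0.0.1 -p 12112 -t set,get -n 10000 -c 50
Keep in mind that twemproxy only supports a subset of Redis commands, so restrict the benchmark to commands such as SET and GET.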

UDP Health Check

So we have an application that makes UDP calls and sends packets. However, since no responses are given for UDP calls, how can we ensure that the service is up, that the port is open, and that things are getting stored?
The only thought we have right now is to send in test packets and ensure they are getting saved out to the db.
So my overall question is: is there a better, easier way to ensure that our UDP calls are succeeding?
On the listening host, you can validate that the port is open with netstat. For example, if your application uses UDP port 68, you could run:
# Grep for :<port> from netstat output.
$ netstat -lnu | grep :68
udp 0 0 0.0.0.0:68 0.0.0.0:*
You could also send some test data to your application, and then check your database to verify that the fixture data made it into your database. That doesn't mean it always will be, just that it's working at the time of the test.
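For the "send some test data" approach, a throwaway datagram can be generated from the shell. This is a sketch, assuming OpenBSD netcat and reusing the UDP port 68 example above; adjust the port and payload to your application:
# Send one uniquely tagged test datagram to the service:
echo "healthcheck-$(date +%s)" | nc -u -w1 localhost 68
# ...then query your database for that tag to confirm the packet was received and stored.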
Ultimately, the problem is that UDP packets are best-effort, and not guaranteed. So unless you can configure your logging platform to send some sort of acknowledgment after data is received and/or written, then you can't guarantee anything. The very nature of UDP is that it leaves acknowledgments (if any) to the application layer.
We took a different approach: we are checking to make sure the calls made it to the db. It's easy enough to query a table and ensure records are in there. If there are no recent records, we know something is wrong. CodeGnome had a good idea, just not the route we went. Thanks!

How to measure bandwidth of an SSH tunnel?

I'm running an SSH tunnel with OpenSSH on Linux using the Python subprocess module.
I want to find out how many bytes were sent and received through that SSH tunnel.
How can I find out?
ssh(1) provides no mechanism for this. The OS does not provide a mechanism for this. Packet sniffing with e.g. tcpdump(1) is an option, but it would probably require root privileges, and it would only be approximate if ssh(1) connections are made to the remote peer outside of your application. iptables accounting would give you similar tradeoffs, but with probably much less overhead than tcpdump(1).
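For the iptables accounting route, a rule with no jump target simply counts matching packets. A sketch, assuming the tunnel runs over TCP port 22 to a placeholder <server-ip>; adjust the port if your tunnel uses a non-standard one:
# Add counting rules for traffic to and from the SSH server:
iptables -I OUTPUT -p tcp -d <server-ip> --dport 22
iptables -I INPUT -p tcp -s <server-ip> --sport 22
# Later, read the packet and byte counters (exact byte counts with -x):
iptables -L OUTPUT -v -n -x | grep <server-ip>
iptables -L INPUT -v -n -x | grep <server-ip>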
If you don't mind being very approximate, you could keep track of all the data you send to and read from your subprocess. ssh(1) will compress data before encrypting it, so you might over-estimate the amount of data sent, but ssh(1) will also have some overhead for re-keying, channel control, message authenticity, and so on, so it might even come close for 'average' data.
Of course, if a router along the way decides to drop every other packet, your TCP stack will send twice the data, maybe more.
Very approximate indeed.
You could measure the raw ssh transfer with something like pv:
ssh user#remote -t "cat /dev/urandom" | pv > /dev/null
ssh user#remote -t "pv > /dev/null" < /dev/urandom
(you could try with /dev/zero - but if you are using ssh compression, you'd get a very unreal transfer rate.)