iperf: Meaning of columns in UDP measurement - pandas

I call iperf automatically in a Python script for the server and the client and save the output in a CSV file:
server.cmd('iperf -s -p 5202 -u -t 50 -y C > result/iperf_server_output.csv')
client.cmd('iperf -c 10.1.1.2 -p 5202 -i 1 -u -b 100m -t 20 -y C > result/iperf_client_output.csv')
result/iperf_server_output.csv stays empty, and result/iperf_client_output.csv looks like this:
20220921142402,10.1.1.1,55922,10.1.1.2,5202,1,0.0-1.0,12502350,100018800,0.000,0,8505,0.000,0
20220921142403,10.1.1.1,55922,10.1.1.2,5202,1,1.0-2.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142404,10.1.1.1,55922,10.1.1.2,5202,1,2.0-3.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142405,10.1.1.1,55922,10.1.1.2,5202,1,3.0-4.0,12500880,100007040,0.000,0,8504,0.000,0
20220921142406,10.1.1.1,55922,10.1.1.2,5202,1,4.0-5.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142407,10.1.1.1,55922,10.1.1.2,5202,1,5.0-6.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142408,10.1.1.1,55922,10.1.1.2,5202,1,6.0-7.0,12502350,100018800,0.000,0,8505,0.000,0
20220921142409,10.1.1.1,55922,10.1.1.2,5202,1,7.0-8.0,12497940,99983520,0.000,0,8502,0.000,0
20220921142410,10.1.1.1,55922,10.1.1.2,5202,1,8.0-9.0,12500880,100007040,0.000,0,8504,0.000,0
20220921142411,10.1.1.1,55922,10.1.1.2,5202,1,9.0-10.0,12500880,100007040,0.000,0,8504,0.000,0
20220921142412,10.1.1.1,55922,10.1.1.2,5202,1,10.0-11.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142413,10.1.1.1,55922,10.1.1.2,5202,1,11.0-12.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142414,10.1.1.1,55922,10.1.1.2,5202,1,12.0-13.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142415,10.1.1.1,55922,10.1.1.2,5202,1,13.0-14.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142416,10.1.1.1,55922,10.1.1.2,5202,1,14.0-15.0,12500880,100007040,0.000,0,8504,0.000,0
20220921142417,10.1.1.1,55922,10.1.1.2,5202,1,15.0-16.0,12500880,100007040,0.000,0,8504,0.000,0
20220921142418,10.1.1.1,55922,10.1.1.2,5202,1,16.0-17.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142419,10.1.1.1,55922,10.1.1.2,5202,1,17.0-18.0,12499410,99995280,0.000,0,8503,0.000,0
20220921142420,10.1.1.1,55922,10.1.1.2,5202,1,18.0-19.0,12500880,100007040,0.000,0,8504,0.000,0
20220921142421,10.1.1.1,55922,10.1.1.2,5202,1,19.0-20.0,12500880,100007040,0.000,0,8504,0.000,0
20220921142421,10.1.1.1,55922,10.1.1.2,5202,1,0.0-20.0,250005840,100001255,0.000,0,-1,-0.000,0
Now I want to plot the bitrate with pandas, but I don't know what each column means.
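For reference, iperf2's -y C report uses a fixed field order, which as far as I can tell is: timestamp, source address, source port, destination address, destination port, transfer ID, interval, transferred bytes, bits per second, jitter (ms), lost datagrams, total datagrams, loss percentage, out-of-order count. A minimal pandas sketch under that assumption (the column names are labels I chose, not anything iperf emits):
import pandas as pd
import matplotlib.pyplot as plt
# Assumed iperf2 -y C field order for a UDP client report.
cols = ["timestamp", "source_address", "source_port",
        "destination_address", "destination_port", "transfer_id",
        "interval", "transferred_bytes", "bits_per_second",
        "jitter_ms", "lost_datagrams", "total_datagrams",
        "lost_percent", "out_of_order"]
df = pd.read_csv("result/iperf_client_output.csv", names=cols)
df = df.iloc[:-1]  # drop the trailing 0.0-20.0 summary row
plt.plot(df["bits_per_second"] / 1e6)
plt.xlabel("interval")
plt.ylabel("bitrate (Mbit/s)")
plt.show()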

Related

tcpdump with -w -C -G and -z options

I'm trying to take continuous traces which are written to files that are limited by both duration (-G option) and size (-C option). The files are automatically named with the -w option, and finally the files are compressed with the -z gzip option. Altogether what I have is:
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S.pcap -s 0 -C 100 -G 3600 -Z root -z gzip &
The problem is that with the -C option, the current file count is appended onto the name, so I wind up with files ending in: .pcap2.gz .pcap3.gz .pcap4.gz, etc. I would much prefer to have them end as: _2.pcap.gz _3.pcap.gz _4.pcap.gz, etc.
But if I remove .pcap from the -w option, I wind up with 2.gz 3.gz 4.gz
This could work if I could pass options in the -z command, like -z "gzip -S .pcap.gz", so that gzip itself appends the .pcap, or if I could use an alias like pcap_gzip="gzip -S .pcap.gz" and then -z pcap_gzip. Neither approach seems to work, though; the latter produces this error: compress_savefile:execlp(gzip -S pcap.gz, /home/me/pcaps/MyTrace_2018-08-07_105308_27): No such file or directory
I encountered the same problem today, on CentOS 6. I found your question, but the answer did not work for me.
In fact, it only needs a slight adjustment: write the absolute paths of both the capture file and the script to be executed, for example:
tcpdump -i em1 ... -s 0 -G 10 -w '/home/Svr01_std_%Y%m%d_%H%M%S.pcap' -Z root -z /home/pcapup2arcive.sh
I found out that although the alias doesn't work, I was able to put the same commands in a script and invoke the script via tcpdump -z.
pcap_gzip.sh:
#!/bin/bash
gzip -S .pcap.gz "$@"
Then:
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S -s 0 -C 100 -G 3600 -Z root -z pcap_gzip.sh &

How to enable sshpass output to console

When using scp and entering the password interactively, the file copy progress is printed to the console, but there is no console output when using sshpass in a script to scp files.
$ sshpass -p [password] scp [file] root@[ip]:/[dir]
It seems sshpass is suppressing or hiding the console output of scp. Is there a way to enable the sshpass scp output to console?
After
sudo apt-get install expect
the file send-files.exp works as desired:
#!/usr/bin/expect -f
spawn scp -r $FILES $DEST
match_max 100000
expect "*?assword:*"
send -- "12345\r"
expect eof
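If you would rather stay in Python, the same interaction can be scripted with the third-party pexpect module; a rough sketch where the scp arguments and password are placeholders:
import sys
import pexpect
child = pexpect.spawn("scp -r ./files user@host:/dest")  # placeholder command
child.logfile_read = sys.stdout.buffer  # echo scp's progress to the console
child.expect("assword:")                # matches "Password:" and "password:"
child.sendline("12345")
child.expect(pexpect.EOF, timeout=600)  # wait for the copy to finish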
Not exactly what was desired, but better than silence:
SSHPASS="12345" sshpass -e scp -v -r $FILES $DEST 2>&1 | grep -v debug1
Note that -e is considered a bit safer than -p.
Output:
Executing: program /usr/bin/ssh host servername, user username, command scp -v -t /src/path/dst_file.txt
OpenSSH_6.6.1, OpenSSL 1.0.1i-fips 6 Aug 2014
Authenticated to servername ([10.11.12.13]:22).
Sending file modes: C0600 590493 src_file.txt
Sink: C0600 590493 src_file.txt
Transferred: sent 594696, received 2600 bytes, in 0.1 seconds
Bytes per second: sent 8920671.8, received 39001.0
In this way:
output=$(sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root 2>&1)
echo "Output = $output"
you redirect the console output into the variable output.
Or, if you only want to see the console output of the scp command, just add the -v flag to your sshpass command:
sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root

TAR over two hops

I need to create a tar archive on a remote server and ship it to a local folder.
Once the tar file is created, I can easily fetch it with scp.
The problem is the first step: creating the tar on the remote server, which is accessible only through another remote server (a bastion host).
Here is the command I'm using currently:
timestamp="20160226-085856"
ssh bastion_server -t ssh remote_server "sudo su -c \"cp -r /etc/nginx /home/ubuntu/backup/nginx_26Feb && cd /home/ubuntu/backup && tar -C /home/ubuntu/backup -cf backup_nginx-$timestamp.tar ./nginx_26Feb\" "
Here is the error I am getting:
su: invalid option -- 'r'
Usage: su [options] [LOGIN]
Any help here would be great.
Give it a try without the fancy sudo su -c. Using sudo -s should be enough:
ssh bastion_server -t ssh remote_server "sudo -s cp -r /etc/nginx \
/home/ubuntu/backup/nginx_26Feb && cd /home/ubuntu/backup && \
tar -C /home/ubuntu/backup -cf backup_nginx-$timestamp.tar ./nginx_26Feb"
Or better, set up a proper two-hop ~/.ssh/config:
Host bastion
    Hostname bastion_server

Host remote
    Hostname remote_server
    ProxyCommand ssh -W %h:%p bastion
and then just run
ssh remote sudo su -c "cp -r /etc/nginx /home/ubuntu/backup/nginx_26Feb \
&& cd /home/ubuntu/backup && tar -C /home/ubuntu/backup -cf \
backup_nginx-$timestamp.tar ./nginx_26Feb"
Without the fancy escaping and stuff.
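If the whole thing is driven from Python anyway, paramiko can express the same two-hop setup; a minimal sketch with hostnames, user, and the remote command as placeholders:
import paramiko
# Hop 1 is a local ssh ProxyCommand, just like the ~/.ssh/config above;
# hop 2 is the paramiko connection itself.
proxy = paramiko.ProxyCommand("ssh -W remote_server:22 bastion_server")
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote_server", username="ubuntu", sock=proxy)
stdin, stdout, stderr = client.exec_command(
    "tar -C /home/ubuntu/backup -cf /home/ubuntu/backup/backup_nginx.tar ./nginx_26Feb")
print(stderr.read().decode())  # surface any tar errors
client.close()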

find and replace with variables

I'm trying to find and replace with variables in sed, but it doesn't work.
Here is the code. I need to append -C -w 10% -c 5% -p /u0 to the end of a matching line, and I don't know how to handle the dashes (-). Any ideas? Thank you.
OLD=$(command[check_disk]=/usr/local/nagios/libexec/check_disk -w 10% -c 5% -p / -p /var -p /tmp -p /home -p /boot -p /usr -A -e)
NEW=$(command[check_disk]=/usr/local/nagios/libexec/check_disk -w 10% -c 5% -p / -p /var -p /tmp -p /home -p /boot -p /usr -A -e -C -w 10% -c 5% -p /u0)
sed -i "s/$OLD/$NEW/" /home/scripts/nrpe.cfg
Try this (assumes bash):
OLD='command[check_disk]=/usr/local/nagios/libexec/check_disk -w 10% -c 5% -p / -p /var -p /tmp -p /home -p /boot -p /usr -A -e'
NEW='command[check_disk]=/usr/local/nagios/libexec/check_disk -w 10% -c 5% -p / -p /var -p /tmp -p /home -p /boot -p /usr -A -e -C -w 10% -c 5% -p /u0'
oldEscaped=$(sed 's/[^^]/[&]/g; s/\^/\\^/g' <<<"$OLD")
newEscaped=$(sed 's/[\\&/]/\\&/g' <<<"$NEW")
sed -i "s/$oldEscaped/$newEscaped/" /home/scripts/nrpe.cfg
Your first problem was that you mistook $(...) for a string-quoting mechanism. It is not: it performs command substitution, i.e. it executes the enclosed command and replaces the construct with the command's output.
To assign literal strings, simply use single quotes as above.
Your second problem was that you can't blindly pass strings to sed's s (string-substitution) command, because certain characters have special meaning to sed, so to use them literally they have to be escaped - the most obvious problem being the / instances in the strings, which get mistaken for the delimiters of the s/.../.../ command.
Therefore, two auxiliary sed commands are used to perform the requisite escaping:
sed 's/[^^]/[&]/g; s/\^/\\^/g' <<<"$OLD" escapes the old string so that none of its characters can be mistaken for the regex delimiter or special regular-expression characters.
sed 's/[\\&/]/\\&/g' <<<"$NEW" escapes the new string so that none of its characters can be mistaken for the regex delimiter or backreferences (such as &, or \1).
Finally, note that it's better not to use all-uppercase shell variable names such as $OLD, so as to avoid conflicts with environment variables.
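As an aside, if stepping outside sed is an option, a purely literal replacement avoids the escaping problem altogether; a Python sketch reusing the strings and path from the question:
# str.replace() is literal, so neither string needs escaping.
old = ('command[check_disk]=/usr/local/nagios/libexec/check_disk '
       '-w 10% -c 5% -p / -p /var -p /tmp -p /home -p /boot -p /usr -A -e')
new = old + ' -C -w 10% -c 5% -p /u0'
path = '/home/scripts/nrpe.cfg'
with open(path) as f:
    text = f.read()
with open(path, 'w') as f:
    f.write(text.replace(old, new))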

Copy all keys from one db to another in redis

Instead of moving them, I want to copy all my keys from one particular db to another.
Is this possible in Redis, and if so, how?
If you can't use MIGRATE COPY because of your Redis version (2.6), you might want to copy each key separately, which takes longer but doesn't require you to log in to the machines themselves, and allows you to move data from one database to another.
Here's how I copy all keys from one database to another (but without preserving TTLs):
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=0
target_host=localhost
target_port=6379
target_db=1
#copy all keys without preserving ttl!
redis-cli -h $source_host -p $source_port -n $source_db keys \* | while read key; do
    echo "Copying $key"
    redis-cli --raw -h $source_host -p $source_port -n $source_db DUMP "$key" \
        | head -c -1 \
        | redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" 0
done
Keys are not going to be overwritten; if you want that, delete those keys before copying, or simply flush the whole target database before starting.
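The same DUMP/RESTORE idea is arguably cleaner from Python with the redis-py client, since the raw bytes never pass through a pipe (the head -c -1 above only exists to strip the trailing newline redis-cli prints). A sketch; note that replace=True, unlike the loop above, does overwrite existing keys and needs Redis >= 3.0:
import redis
src = redis.Redis(host="localhost", port=6379, db=0)
dst = redis.Redis(host="localhost", port=6379, db=1)
for key in src.scan_iter("*"):
    payload = src.dump(key)  # serialized value as raw bytes
    if payload is not None:
        dst.restore(key, 0, payload, replace=True)  # ttl=0 means no expiry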
This copies all keys from database number 0 to database number 1 on localhost:
redis-cli --scan | xargs redis-cli migrate localhost 6379 '' 1 0 copy keys
If you use the same server/port you will get a timeout error, but the keys seem to copy successfully anyway. See GitHub Redis issue #1903.
redis-cli -a $source_password -p $source_port -h $source_ip -n $dbname keys \* | while read key; do
    echo "Copying $key"
    redis-cli --raw -a $source_password -h $source_ip -p $source_port -n $dbname DUMP "$key" \
        | head -c -1 \
        | redis-cli -x -a $destination_password -h $destination_IP -p $destination_port RESTORE "$key" 0
done
Latest solution:
Use the RIOT open-source command line tool provided by Redislabs to copy the data.
Reference: https://developer.redis.com/riot/riot-redis/cookbook.html#_performing_migration
GitHub project link: https://github.com/redis-developer/riot
How to install: https://developer.redis.com/riot/riot-redis/
# Source Redis db
SH=test1-redis.com
SP=6379
# Target Redis db
TH=test1-redis.com
TP=6379
# Copy from db0 to db1 (standalone Redis db, Or cluster mode disabled)
#
riot-redis -h $SH -p $SP --db 0 replicate -h $TH -p $TP --db 1 --batch 10000 \
--scan-count 10000 \
--threads 4 \
--reader-threads 4 \
--reader-batch 500 \
--reader-queue 2000 \
--reader-pool 4
RIOT is quicker, supports multithreading, and works well for cross-environment Redis data copies (AWS ElastiCache, Redis OSS, and Redislabs).
Not directly. I would suggest using the ever-convenient redis-rdb-tools package (from Sripathi Krishnan) to extract the data from a normal rdb dump and reinject it into another instance.
See https://github.com/sripathikrishnan/redis-rdb-tools
As far as I understand, you need to copy keys from a particular DB (e.g. 5) to another particular DB, say 10. If that is the case, you can use redis database dumper (https://github.com/r043v/rdd). Although the documentation says it has a switch (-d) to select a database, that didn't work for me, so here is what I did:
1.) Edit the rdd.c file and look for the int main(int argc, char *argv[]) function
2.) Change the DB to the one you need
3.) Compile the src with make
4.) Dump all keys using ./rdd -o "save.rdd"
5.) Edit the rdd.c file again and change the DB
6.) Run make again
7.) Import by using ./rdd "save.rdd" -o insert -s "IP" -p "Port"
I know this is old, but for those of you coming here from Google:
I just published a command line interface utility to npm and github that allows you to copy keys that match a given pattern (even *) from one Redis database to another.
You can find the utility here:
https://www.npmjs.com/package/redis-utils-cli
Try using DUMP to first dump all the keys, and then RESTORE them.
If you are migrating keys inside the same Redis instance, you can use the built-in MOVE command (with pipelining for more speed):
#!/bin/bash
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=4
target_db=0
total=$(redis-cli -h $source_host -p $source_port -n $source_db keys \* | sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | wc -c)
#copy all keys without preserving ttl!
time redis-cli -h $source_host -p $source_port -n $source_db keys \* | \
    sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | \
    pv -s $total | \
    redis-cli -h $source_host -p $source_port -n $source_db >/dev/null
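The same approach from Python with redis-py, batching the MOVE commands through a pipeline (connection details are placeholders; MOVE only works within a single Redis instance and returns False for keys that already exist in the target db):
import redis
r = redis.Redis(host="localhost", port=6379, db=4)  # source db, as above
pipe = r.pipeline(transaction=False)
for key in r.scan_iter("*"):
    pipe.move(key, 0)  # queue MOVE <key> 0 (the target db)
results = pipe.execute()
print(f"moved {sum(results)} of {len(results)} keys")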