How to craft specific packets on a Mininet host to generate massive Packet-In messages - openflow

I am wondering how to generate massive numbers of Packet-In messages to the controller, in order to test the response time of an SDN controller in a Mininet environment.
Can you give me some advice on it?

You could use iperf to send packets, like this:
$ iperf -c <server_ip> -F <file_to_send>
You can also specify the amount of time to transmit for:
$IPERF_TIME (-t, --time)
The time in seconds to transmit for. Iperf normally works by repeatedly sending an array of len bytes for time seconds. Default is 10 seconds. See also the -l and -n options.
Here is a nice reference for iperf: https://iperf.fr/.
If you would like to use Scapy, try this:
from scapy.all import IP, TCP, send

# build a single TCP packet carrying an arbitrary payload and send it
data = "University of Network blah blah"
a = IP(dst="129.132.2.21")/TCP()/data
send(a)
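If the goal is specifically a flood of Packet-In messages, note that a switch only punts packets to the controller on a flow-table miss. A rough sketch along those lines (the UDP port, payload, count and inter values are arbitrary choices, not tested values) keeps randomizing the destination address so that, with a reactive controller, most packets miss the installed flows:
from scapy.all import IP, UDP, RandIP, send

# Each packet is built with a fresh random destination, so a reactive
# controller keeps seeing table misses and the switch keeps generating
# Packet-In events.
pkt = IP(dst=RandIP())/UDP(dport=12345)/"x"
send(pkt, count=100000, inter=0, verbose=False)
You would run such a script from inside a Mininet host (for example via the Mininet CLI with h1 python <script>.py) while measuring the Packet-In arrival rate on the controller side.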

Related

Redis mass insertion: protocol vs inline commands

For my task I need to load a large amount of data into Redis as quickly as possible. It looks like this article is right about my case: https://redis.io/topics/mass-insert
The article starts by giving an example of using multiple inline SET commands with redis-cli. Then it proceeds to generating the Redis protocol directly and again uses it with redis-cli. It doesn't explain the reasons or benefits of using the Redis protocol.
Using the Redis protocol is a bit harder and it generates a bit more traffic. I wonder: what are the reasons to use the Redis protocol rather than simple one-line commands? Probably, despite the data being larger, it is easier (and faster) for Redis to parse?
Good point.
Only a small percentage of clients support non-blocking I/O, and not
all the clients are able to parse the replies in an efficient way in
order to maximize throughput. For all these reasons the preferred way
to mass import data into Redis is to generate a text file containing
the Redis protocol, in raw format, in order to call the commands
needed to insert the required data.
What I understand is that by using the Redis protocol directly you are effectively emulating a client, which benefits from the points highlighted above.
Based on the docs you provided, I tried these scripts:
test.rb
def gen_redis_proto(*cmd)
  proto = ""
  proto << "*"+cmd.length.to_s+"\r\n"
  cmd.each{|arg|
    proto << "$"+arg.to_s.bytesize.to_s+"\r\n"
    proto << arg.to_s+"\r\n"
  }
  proto
end

(0...100000).each{|n|
  STDOUT.write(gen_redis_proto("SET","Key#{n}","Value#{n}"))
}
test_no_protocol.rb
(0...100000).each{|n|
  STDOUT.write("SET Key#{n} Value#{n}\r\n")
}
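For illustration (my own example, not taken from the original post), the first entry test.rb writes is the raw protocol string (shown here with \r\n written out):
*3\r\n$3\r\nSET\r\n$4\r\nKey0\r\n$6\r\nValue0\r\n
i.e. an array of 3 bulk strings, each prefixed by its byte length, while test_no_protocol.rb writes the plain inline command SET Key0 Value0.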
ruby test.rb > 100k.txt
ruby test_no_protocol.rb > 100k_no_prot.txt
time cat 100k.txt | redis-cli --pipe
time cat 100k_no_prot.txt | redis-cli --pipe
I've got these results:
teixeira: ~/stackoverflow $ time cat 100k.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.168s
user 0m0.025s
sys 0m0.015s
(5 file(s), 6.6Mb)
teixeira: ~/stackoverflow $ time cat 100k_no_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.433s
user 0m0.026s
sys 0m0.012s

Not able to establish Oracle SQL session from within a BASH script

#!/bin/bash
#Oracle DB Info for NEXT
HOST="1.2.3.4"
PORT="5678"
SERVICE="MYDB"
DB_USER=$(whoami)
DB_PASS=$(base64 -d ~/.passwd)
DB_SCHEMA="my_db"
#Section for all of our functions.
function SQLConnection(){
sqlplus "$DB_USER"/"$DB_PASS"#"$HOST":"$PORT"/"$SERVICE"
}
function Connected(){
SQLConnection <<EOF
select sys_context('USERENV','SERVER_HOST') from dual;
EOF
}
function GetJMS(){
SQLConnection <<EOF
set echo on timing on lines 200 pages 100
select pd.destination from ${DB_SCHEMA}.pd_notification pd where pd.org_id = '$ORGID';
EOF
}
TODAY=$(date +"%A %B %d, %Y")
read -r -p $'\n\nWhat is the ORG ID? ' ORGID
read -r -p $'\n\nWhat is the REMOTE QUEUE MANAGER NAME? ' RQM
read -r -p $'\n\nWhat is the IP address of the REMOTE QUEUE MANAGER? ' CONN
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' PORT
echo -en "* $(whoami)\n* $TODAY\n* MQ Setup $ORGID\n\nDEFINE +\n\tCHANNEL('$RQM.LQML') +\n\tCHLTYPE(SDR) +\n\tCONNAME('$CONN($PORT)') +\n\tXMITQ('BUF.2.$ORGID.XMQ')\n\tCHAUTH(TLS_RSA_WITH_AES_256_CBC_SHA256)\n\nDEFINE +\n\tCHANNEL('LQML.$RQM') +\n\tCHLTYPE(RCVR) +\n\tTRPTYPE(TCP)\n\nDEFINE +\n\tQLOCAL('$RQM') +\n\tTRIGDATA('LQML.$RQM') +\n\tINITQ('SYSTEM.CHANNEL.INITQ') +\n\tTRIGGER USAGE(XMITQ)\n\n" > ~/mqsetup.mqsc
CONNECTED=$(Connected | awk 'NR==16')
echo -en "\n\nHello From: $CONNECTED\n\n"
for JMSDESTINATION in $(GetJMS | awk 'NR>=16&&NR<=24{print $1}')
do
read -r -p $'\n\nWhich REMOTE QUEUE NAME matches with this ${JMSDESTINATION}?' RNAME
QDESC=$(echo "$JMSDESTINATION" | tr '.' ' ' | tr '[[:upper:]]' '[[:lower:]]')
echo -en "\n\nDEFINE +\n\tQR($JMSDESTINATION) +\n\t\tREPLACE DESCR('$ORGID $QDESC Queue') +\n\t\tREPLACE MAXDEPTH(5000) +\n\t\tXMITQ('BUF.2.$ORGID.XMQ') +\n\t\tRNAME('$RNAME') +\n\t\tRQMNAME('$RQM')" >> ~/mqsetup.mqsc
done
Here is the script I've built, hoping to automate the setup of IBM MQ Queues and Channels. My problem is that outside this script, I can establish an SQL Session without an issue, directly from the shell, provided I input the variables seen in the script. I can call the functions and everything returns just as I'd hope it would. When I run the exact same things from within the script, I get timeout errors ... the "Hello From" is blank, which tells me there is no DB connection.
I'm totally stumped as to why it all works great from outside the script, but inside it times out.
I appreciate the eyes and the help!
You're overwriting a variable value. You have this at the top of the script:
PORT="5678"
but then later on you do:
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' PORT
which overwrites your 5678 value with whatever is entered there. That port may not be listening on the DB server at all, or may be doing something else, or if you don't enter a value it'll default to port 1521 when you connect. But either way the connection is going to fail, either quickly or slowly depending on the port state (e.g. slower maybe if a firewall blocks it).
If you test the connection by adding a Connected call before the read calls (as I initially did), then it seems to work fine; but the connections after the reads don't work because the port value they try to connect to is now wrong.
Use a different name for the two variables, e.g. RQ_PORT for the second one - both in its read command and the subsequent creation of the ~/mqsetup.mqsc file.
You may also find it useful to add the -l flag to your SQL*Plus call so that if the connection fails for some reason it won't re-prompt for credentials, which in some circumstances can make the script appear to hang until you hit enter a few times.
Not directly relevant to the problem, but when automating anything like this I usually also use the -s flag to suppress the banners (which can vary between environments); and if you're only interested in capturing query output I'd usually set headings and/or pagination off, and feedback off, and generally set SQL*Plus up to generate as little noise as possible - it makes parsing out the interesting bits easier.
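As a rough sketch of those two suggestions combined (illustrative only, not a tested drop-in patch; only the renamed variable and the extra SQL*Plus flags differ from the original script):
# read the MQ port into its own variable so the DB PORT is untouched
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' RQ_PORT

function SQLConnection(){
    # -L: attempt the logon only once; -S: silent mode, suppress banners
    sqlplus -L -S "$DB_USER"/"$DB_PASS"@"$HOST":"$PORT"/"$SERVICE"
}
You would then also use $RQ_PORT instead of $PORT in the CONNAME('$CONN($PORT)') part of the echo that writes ~/mqsetup.mqsc.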

Redis benchmarking for HMSET, HGETALL with a data size

Can someone let me know how I can use redis-benchmark to benchmark HMSET and HGETALL with a fixed data size (the -d option in redis-benchmark)? I am using Redis 3.2.5.
I have gone through this answer and tried the command below:
root@cache-server1:~# redis-benchmark -h a.b.c.d -p XXXX hmset hgetall myhash rand_int rand_string -d 2048
====== hmset hgetall myhash rand_int rand_string -d 2048 ======
10000 requests completed in 0.11 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.64% <= 1 milliseconds
100.00% <= 1 milliseconds
89285.71 requests per second
But looking at the output, it seems it is using only a 3-byte payload.
If it is not possible via redis-benchmark can someone suggest some other alternative?
The payload is only 3 bytes (the default) because the -d is taken as part of the command. The command must be the last argument, and all switches must precede it.
Besides that, you can't use redis-benchmark to run two custom commands. Also, the -d option is only applicable to predefined tests (the ones that run by default or with the -t option) and has no meaning if the user specifies the command used in the benchmark.
If you have a specific benchmarking flow that you want to test, the best thing you can do is mock it with any client that you're comfortable with.
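For example, here is a minimal sketch with the Python redis-py client (assuming redis-py 3.5 or later; the host, port, key names, field count and payload size are placeholders) that times HMSET-style writes and HGETALL reads with a fixed value size:
import time
import redis

r = redis.Redis(host="a.b.c.d", port=6379)    # point at your server
value = "x" * 2048                            # fixed 2048-byte payload per field
requests = 10000

start = time.time()
for n in range(requests):
    # hset with a mapping is the redis-py (>= 3.5) spelling of HMSET
    r.hset("myhash:%d" % n, mapping={"f1": value, "f2": value})
print("HMSET:   %.0f req/s" % (requests / (time.time() - start)))

start = time.time()
for n in range(requests):
    r.hgetall("myhash:%d" % n)
print("HGETALL: %.0f req/s" % (requests / (time.time() - start)))
Note that this uses a single synchronous connection, so the numbers won't be directly comparable to redis-benchmark's 50 parallel clients; wrap the calls in a pipeline or run several client processes if you need concurrency.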

How can I get the disk IO trace with actual input values?

I want to generate a trace file of disk I/O, but the problem is that I need the actual data being written, along with the timestamp, logical address, access block size, etc.
I've been trying to solve the problem by using "blktrace | blkparse" with "iozone" in an Ubuntu VirtualBox environment, but it does not seem to work.
There is an option in blkparse for setting the output format to show the packet data, -f "%P", but it does not print anything.
Below are the commands that I use:
$> sudo blktrace -a issue -d /dev/sda -o - | blkparse -i - -o ./temp/blktrace.sda.iozone -f "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n"
$> iozone -w -e -s 16M -f ./mnt/iozone.dummy -i 0
In the format string "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n", everything else is printed fine, but the "%P" is not printed at all.
Does anyone know why the packet data is not displayed?
Or does anyone know another way to get the disk I/O trace together with the actual data?
As far as I know, blktrace does not capture the actual data; it only captures the metadata. One way to capture the real data is to write your own kernel module. Some students at FIU.edu did that in this paper:
"I/O deduplication: Utilizing content similarity to ..."
I would also ask this question on the linux-btrace mailing list:
http://vger.kernel.org/majordomo-info.html

How can I use the value of mp2t.af.pcr as a Tshark field?

I have a wireshark capture that contains an RTP multicast stream (plus some other incidental data).
Using a Tshark command like the following, I can produce a CSV of the RTP timestamp compared with the packet capture time:
tshark.exe -r "capture.pcap" -Eseparator=, -Tfields -e rtp.timestamp -e frame.time_epoch -d udp.port==5000,rtp
This decodes the UDP packets as RTP, and successfully prints out the two fields as expected.
Now, my question: The payload of the RTP stream is an MPEG2 Transport Stream, and I also want to print the PCR value (if there is one) alongside the packet and RTP timestamps.
In Wireshark I can see the PCR being decoded correctly; however, using a command like the following:
tshark.exe -r "HBO HD CZ.pcap" -Eseparator=, -Tfields -e rtp.timestamp -e frame.time_epoch -e mp2t.af.pcr -d udp.port==5000,mp2t
...only prints out a "1" if there is a PCR present, not the actual value. I have also checked the .pcr_flag field to confirm that the two are not mixed up, but I still see the same result.
The documentation seems to call mp2t.af.pcr a "Label"; does this mean that Tshark is not able to use it as a field? Is there a way to generate a CSV with these values?
(What part of the documentation calls it a "Label"? That's a somewhat odd description of a named field.)
The problem is that the "base(XXX)*300 + ext(YYY)" value Wireshark shows is calculated at dissection time and only displayed: the field itself isn't given an integral type, but a type that doesn't carry a value. Arguably it should be an FT_UINT64 field with an actual value, so that you can filter on it and print it in TShark.
Please file an enhancement request for this on the Wireshark Bugzilla.