tcp stream replay tool - testing

I'm looking for a tool for recording and replaying one side of a TCP stream for testing.
I see tools which record the entire TCP stream (both server and client) for testing firewalls and such, but what I'm looking for is a tool which would record just the traffic submitted by the client (with timing information) and then resubmit it to the server for testing.

Due to the way TCP handles retransmissions, sequence numbers, SACK and windowing, this could be a more difficult task than you imagine.
Typically people use tcpreplay for packet replay; however, it doesn't support synchronizing TCP sequence numbers. Since you need a bidirectional TCP stream (and this requires synchronized sequence numbering), use one of the following options:
If this is a very interactive client/server protocol, you could use scapy to strip the TCP payloads out of your stream and parse them for timing and interactivity. Then, using this information, open a new TCP socket to your server and write that data into it. Parsing the original stream with scapy can be tricky if you run into TCP retransmissions and windowing dynamics. Writing the bytes into a new TCP socket does not require dealing with sequence numbering yourself; the OS takes care of that.
If this is a simple stream and you can do without timing (or want to insert timing information manually), you can use wireshark to get the raw bytes from a TCP stream without worrying about parsing with scapy. Once you have the raw bytes, write them into a new TCP socket (accounting for interactivity as required); again, the OS handles the sequence numbering. A rough sketch of this approach appears after this list.
If your stream is strictly text commands (not html or xml), such as a telnet session, an Expect-like solution could be easier than the aforementioned parsing. In this solution you would not open a TCP socket directly from your code; instead, use expect to spawn a telnet (or whatever) session and replay the text commands with send/expect. Your expect library and the underlying OS take care of sequence numbering.
If you're testing a web service, I suspect it would be much easier to simulate a real web client clicking through links with Selenium or Splinter. Your HTTP library and the underlying OS take care of sequence numbering in the new stream.
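To make the first two options concrete, here is a rough Python sketch under simplifying assumptions: a single client-to-server stream in the capture, no retransmissions or reordering, and no timing or interactivity. The file name, host, port and client address are placeholders, not anything from the original question.

import socket
from scapy.all import rdpcap, IP, TCP

TARGET_HOST = "192.0.2.10"    # placeholder: server to replay against
TARGET_PORT = 8080            # placeholder: server port
CLIENT_IP = "192.0.2.20"      # placeholder: client address in the capture

# concatenate the client-to-server payload bytes from the capture
payload = b""
for pkt in rdpcap("capture.pcap"):
    if IP in pkt and TCP in pkt and pkt[IP].src == CLIENT_IP:
        payload += bytes(pkt[TCP].payload)

# write the bytes into a fresh connection; the OS performs the handshake
# and handles all sequence numbering
sock = socket.create_connection((TARGET_HOST, TARGET_PORT))
sock.sendall(payload)
print(sock.recv(4096))
sock.close()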

Take a look at WirePlay (code.google.com/p/wireplay or github.com/abhisek/wireplay), which promises to replay either the client or server side of a captured TCP session, adjusting the SYN/ACK sequence numbers as required.
I don't know if there are any binary builds available; you'll need to compile it yourself.
Note that I have not tried this myself yet, but I am looking into it.

Yes, it is a difficult task to implement such a tool.
I started implementing this kind of tool two years ago, and it is mature now.
Try it; maybe you will find it is the tool you are looking for.
https://github.com/wangbin579/tcpcopy

I wanted something similar, so I worked with scapy for a bit and came up with a solution that worked for me. My goal was to replay the client portion of a captured pcap file; I was interested in getting responses from the server, not necessarily with timings. Below is my scapy solution. It is by no means tested or complete, but it did what I wanted it to do. Hopefully it's a good example of how to replay a TCP stream using scapy.
from scapy.all import *
import sys

# NOTE - This script assumes that there is only 1 TCP stream in the PCAP file
# and that you wish to replay the role of the client

# acks
ACK = 0x10
# client closing the connection
RSTACK = 0x14

def replay(infile, inface):
    recvSeqNum = 0
    first = True
    targetIp = None
    # send will put the correct src ip and mac in;
    # this assumes that the client portion of the stream is being replayed
    for p in rdpcap(infile):
        if 'IP' in p and 'TCP' in p:
            ip = p[IP]
            eth = p[Ether]
            tcp = p[TCP]
            if targetIp is None:
                # figure out the target ip we're interested in
                targetIp = ip.dst
                print(targetIp)
            elif ip.dst != targetIp:
                # don't replay a packet that isn't addressed to our target ip
                continue
            # delete checksums so that they are recalculated
            del ip.chksum
            del tcp.chksum
            if tcp.flags == ACK or tcp.flags == RSTACK:
                tcp.ack = recvSeqNum + 1
            if first or tcp.flags == RSTACK:
                # don't expect a response from these
                sendp(p, iface=inface)
                first = False
                continue
            # send the packet and wait for the server's reply
            rcv = srp1(p, iface=inface)
            recvSeqNum = rcv[TCP].seq

def printUsage(prog):
    print("%s <pcapPath> <interface>" % prog)

if __name__ == "__main__":
    if 3 != len(sys.argv):
        printUsage(sys.argv[0])
        sys.exit(1)
    replay(sys.argv[1], sys.argv[2])

Record a packet capture of the full TCP client/server communication. Then you can use tcpliveplay to replay just the client side of the communication to a real server. tcpliveplay will generate new sequence numbers, IP addresses, MAC addresses, etc., so the communication will flow properly.

Related

ReceiveBufferSize - Download from Server with High Latency

I have a very basic file download that connects from the UK to the US. The file is about 10 MB and the connection is fast, but the latency is 80 ms.
Since we have high latency, is there any way to reduce the acknowledgement window at the TCP layer to reduce the chattiness that occurs?
' Download a Large PDF
Using client As New System.Net.WebClient()
    Dim url As String = "doc url"
    Dim beginTime As DateTime = DateTime.Now
    client.Credentials = System.Net.CredentialCache.DefaultCredentials
    client.DownloadFile(url, "TMP.ZIP")
    logWriter.WriteLine("6 MB ZIP File" & "," & (DateTime.Now - beginTime).TotalMilliseconds)
End Using
is there any way to reduce the acknowledgement window at the TCP layer to reduce the chattiness that occurs?
At the application level you don't have much control over what happens in the network layers; this is all handled by lower-level APIs. The .NET Framework only provides types built on top of these APIs to ease implementation.
That being said, acknowledgements are what make TCP reliable: they guarantee that all the data made it to the other side of the connection. You could switch to UDP, which doesn't use acknowledgements, meaning you could never verify whether the data was received successfully. That's great for real-time communication (balancing speed against quality), but not for sending files, where we want the whole file transferred, not just 96% of it. The only way to verify that the receiver got the full file is to have it notify us that the packets actually arrived.
To me this sounds like an infrastructure/network-architecture issue, so I personally would not attempt to fix it at the application level. Using the WebClient class, and thus TCP, is the right choice for your use case. In my eyes you've done your job and implemented the file download correctly.
If you have the possibility, I would try to put a content delivery network (CDN) between the server and the client, so the CDN can offload the download and upload to an edge server close to your physical location.
If this isn't possible, you could get creative and, for example, set up a second server in the UK that synchronizes itself with the files on the server in the US. For time and money reasons I wouldn't do this and would just take the latency as-is; 80 ms is acceptable to me.

What is the correct method to receive UDP data from several clients synchronously?

I have 1 server and several (maybe up to 20) clients. All clients send UDP datagrams at random times. Each datagram is quite short (about 10 B), but I must make sure all the data from each client is received correctly.
If I let all clients send datagrams to the same port, and client B sends its datagram at the exact time the server is receiving data from client A, it seems the server will miss the data from client A.
So what's the correct method to do this job? Do I need to create a listener for each of the 20 clients?
When you bind a UDP socket to a port, the networking stack allocates a buffer for a finite number of incoming UDP packets, so (assuming you call recv() in a relatively timely manner) no incoming packets should get lost.
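To make that concrete, here is a minimal Python sketch (the port number is an arbitrary example): a single UDP socket serves any number of clients, because the kernel queues each incoming datagram until recvfrom() drains it.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))          # one socket for all 20 clients

while True:
    data, addr = sock.recvfrom(1024)  # blocks until the next datagram
    print("got %d bytes from %s" % (len(data), addr))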
If you want to see your buffer sizes in the terminal, take a look at:
/proc/sys/net/core/rmem_default for receive
and
/proc/sys/net/core/wmem_default for send
I think the default buffer size on Linux is 131071B.
On Linux, you can change the UDP buffer size (e.g. to 26214400) by (as root):
sysctl -w net.core.rmem_max=26214400
You can also make it permanent by adding this line to /etc/sysctl.conf:
net.core.rmem_max=26214400
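You can also ask for a larger buffer per socket instead of system-wide; a small Python sketch (Linux caps the requested value at net.core.rmem_max, and getsockopt() reports it doubled because the kernel reserves bookkeeping overhead):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 26214400)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))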
Since each packet is only 10 B, this shouldn't be a problem.
If you are still worried about packet loss, you could implement a protocol where your client waits for an ACK from the server and resends otherwise. Many protocols use such a feature, but this is only possible if timing allows it; for streaming data, for example, it is not useful because there is no time to resend.
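A hypothetical stop-and-wait version of that idea in Python (the ACK payload, timeout and retry count are assumptions, not part of any standard):

import socket

def send_reliably(sock, data, server, retries=5):
    # resend the datagram until the server echoes b"ACK" or we give up
    sock.settimeout(0.5)
    for _ in range(retries):
        sock.sendto(data, server)
        try:
            reply, _ = sock.recvfrom(16)
            if reply == b"ACK":
                return True
        except socket.timeout:
            continue  # no ACK in time, resend
    return False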
Or consider using TCP (if it is an option).

Handling Telnet negotiation

I'm trying to implement a Telnet client using C++ with Qt as the GUI.
I have no idea how to handle the telnet negotiations.
Every telnet command is preceded by IAC, e.g.
IAC WILL SUPPRESS_GO_AHEAD
The following is how I handle the negotiation:
Search for the IAC character in the received buffer
According to the command and option, respond to the request
My questions are described as follows:
It seems that the telnet server won't wait for a client response after a negotiation command is sent,
e.g. it sends two or more commands without waiting for a client response:
IAC WILL SUPPRESS_GO_AHEAD
IAC WILL ECHO
How should I handle such a situation? Handle both requests or just the last one?
What will the option values be if I don't respond to the requests? Are they set to defaults?
Why isn't the IAC character (255) treated as data instead of as a command?
Yes, it is allowed to send out several negotiations for different options without synchronously waiting for a response after each of them.
Actually, it's important for each side to continue (possibly after some timeout, if you did decide to wait for a response) even if it didn't receive a reply: there are legitimate situations according to the RFC where there shouldn't or mustn't be a reply, and the other side might also simply ignore the request for whatever reason, which you have no control over.
You need to consider both negotiation requests the server sent, as they are both valid requests (you may choose to deny one or both, of course).
I suggest you handle both of them (whatever "handling" means in your case) as soon as you notice them, so as not to risk getting the server stuck if it decides to wait for your replies.
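For illustration, a minimal Python sketch of a "refuse by default" reply policy; a real client would accept the options it supports and track per-option state (one robust scheme for that is described in the next paragraph):

IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254

def respond(cmd, opt):
    if cmd == WILL:    # server offers to enable an option: decline it
        return bytes([IAC, DONT, opt])
    if cmd == DO:      # server asks us to enable an option: refuse
        return bytes([IAC, WONT, opt])
    return b""         # WONT/DONT for an already-disabled option: no reply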
One possible way to go about it is presented by Daniel J. Bernstein in RFC 1143. It uses a finite state machine (FSM) and is quite robust against negotiation loops.
A compliant server (the same goes for a compliant client) defaults all negotiable options to WON'T and DON'T (i.e. disabled) at the start of the connection, and doesn't consider an option enabled until a DO or WILL request has been acknowledged by a WILL or DO reply, respectively.
Not all servers (or clients, for that matter) behave properly, of course, but you cannot anticipate all the ways a peer might misbehave, so just assume that all options are disabled until enabling them was requested and the reply was positive.
I'll assume here that what you're actually asking is how the server is going to send you a byte of 255 as data without you misinterpreting it as an IAC control sequence (and vice versa, how you should send a byte of 255 as data to the server without it misinterpreting it as a telnet command).
The answer is simply that instead of a single byte of 255, the server (and your client in the opposite direction) sends IAC followed by another byte of 255, so in effect doubling all values of 255 that are part of the data stream.
Upon receiving an IAC followed by 255 over the network, your client (and the server in the opposite direction) must replace that with a single data byte of 255 in the data stream it returns.
This is also covered in RFC 854.
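A small Python sketch of that escaping; note it applies only to the data portion of the stream, after telnet commands have been parsed out:

IAC = bytes([255])

def escape_outgoing(data):
    # double every 0xFF so the peer doesn't read it as a command
    return data.replace(IAC, IAC + IAC)

def unescape_incoming(data):
    # collapse IAC IAC back into a single data byte of 255
    return data.replace(IAC + IAC, IAC)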

UDP Client and Server Buffer Agreement

Hi, I am writing a program that will send a file from client to server over a UDP socket, using different packet sizes, for example 512 B, 1 KB and 2 KB, and I don't want to use a fixed buffer size in the receiver (server). I need some Java code that will allow both server and client to agree upon a packet size before the transfer starts. Many thanks.
Don't forget that UDP packets may be fragmented, duplicated and lost. There is a whole bunch of things to take care of, starting with retransmission of lost packets.
I hate to give a "don't do this" kind of answer, but for this one, just use TCP. And if you want some user-level "packets", you can have them with TCP as well: prefix each one with its length, and that's enough (see the sketch below).
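A sketch of that framing in Python; the 4-byte big-endian length header is one common choice, not the only one, and the same scheme translates directly to Java with DataOutputStream.writeInt and DataInputStream.readFully:

import struct

def send_packet(sock, payload):
    # 4-byte big-endian length header, then the payload
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    # loop because TCP is a byte stream: recv() may return partial data
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-packet")
        buf += chunk
    return buf

def recv_packet(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)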

Boost ASIO iostream random delays on reading

I have a client talking to a server over TCP via localhost. The server uses a Boost ASIO iostream in blocking mode. It accepts incoming connections, reads the request, sends the response and closes the socket. The problem is that the server sometimes has a random delay of 10-200 milliseconds on the first read via getline. I've set the TCP_NODELAY flag on both the server's and the client's sockets. What can be the reason for these delays? I know that I should use select before reading from the socket, but I didn't expect such a large delay via localhost.
Here is the relevant part of server's code:
asio::io_service io_service;
ip::tcp::endpoint endpoint(bindAddress, 80);
ip::tcp::acceptor acceptor(io_service, endpoint);
for (;;)
{
    ip::tcp::iostream stream;
    acceptor.accept(*stream.rdbuf(), peer);
    ip::tcp::no_delay no_delay(true);
    stream.rdbuf()->set_option(no_delay);
    string str;
    getline(stream, str); // at this line I get random delays
    // the main part of the code
}
I get around 200 requests/second; the delay happens several times per minute.
netstat -m shows that there are enough buffers.
UPDATE:
It looks like the problem of client, not server: Apache HttpClient random delays under high requests/second
Answering this question for the sake of closing it out.
Apache HttpClient random delays under high requests/second
Apache's ab(1) also has "saw-tooth"-like performance because it dispatches -c connections that it monitors via select(2); only once all of those connections have returned does it dispatch another -c connections. The alternate (and better) approach would be to establish a new connection and re-add its file descriptor to ab(1)'s select(2) array as each one completes, to make sure -c connections are always actively processing.
I've seen ab(1) give some very misleading results because one connection out of a thousand hung (still not a good thing, but it skews results very negatively when testing through a load balancer).