TCP reliability testing

I am working on a project where I need to test whether TCP's reliability mechanisms are actually kicking in or not.
For example, I want to check something like this:
1) I have a client which makes a request for a resource, say using HTTP.
2) Initially the whole TCP three-way handshake happens and a GET request is placed.
3) The server responds with the appropriate response.
I am trying to tweak the contents from the server side (I am taking care of the TCP stack).
In such a case I want to test whether the server is sending the packets reliably or not.
For example, I want to simulate an environment where the client doesn't ACK the TCP payload it received, forcing the server to resend the data after some time.
Are there any tools to test something like this?
Thanks in advance
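One way to provoke this situation with stock Linux tools (a rough sketch; eth0, port 80, and the 30% loss rate are placeholders, and it assumes plain HTTP rather than HTTPS) is to inject artificial packet loss on the client with tc netem, so that some of its ACKs never reach the server, and then watch the server retransmit with tcpdump:
# On the client: drop 30% of outgoing packets on eth0, ACKs included.
$ tc qdisc add dev eth0 root netem loss 30%
# On the server: watch the HTTP connection for retransmitted segments.
$ tcpdump -i eth0 -n 'tcp port 80'
# On the client, when finished: remove the impairment.
$ tc qdisc del dev eth0 root netem
tcpdump (or Wireshark) on the server side will show the resent segments, which is the reliability behaviour you want to confirm.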

Related

Can sending a UDP packet to a closed server port cause problems?

I'm writing a log proxy program. The logs are not important, so I decided to send them as UDP packets over a Unix domain socket (UDS) locally. But my program can't know whether the server is up or not, so it will just send to the destination without caring whether anything is bound there.
I wonder whether this behavior could cause problems, for example a full receive buffer. I would guess the OS drops the UDP packet because it knows the destination is not bound.
Is there anything else I should consider?
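You can also just watch what happens on a Linux box when a datagram hits a port nothing is bound to (a rough sketch; port 9999 is a placeholder, and this uses UDP over loopback rather than a Unix domain socket):
# In one terminal, watch for ICMP errors on the loopback interface.
$ tcpdump -i lo -n icmp
# In another, send a datagram to a port with no listener.
$ echo "some log line" | nc -u -w1 127.0.0.1 9999
The kernel discards the datagram and, by default, answers with an ICMP "port unreachable", so nothing piles up in any receive buffer while the server is down.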

JMeter and Connect Times for SSL Connections

For a benchmarking test, I have a very basic test setup wherein I have a single user looping 100 times (loop delay 100 ms) hitting an https endpoint (GET) with the HttpClient4 implementation; keep-alive has been turned on.
In the test results, I have observed a pattern wherein every 5th/6th request the connect metric is higher, as if a full SSL handshake is occurring (check the image below). I am a bit confused by this; any ideas on what's going on here and why the connect times are higher every nth request?
[UPDATE]
I was able to troubleshoot this issue a bit further today after turning on access logs on the load balancer (target of this test) and I can see a pattern wherein JMeter seems to be switching the ports on the client side every few requests - the frequency matches the pattern observed previously with the JMeter test results.
This should probably explain the elevated connect times; now the question is why does JMeter switch the port?
This could be keep-alive, it certainly was for my issue. Firstly make sure it's enabled on the sampler. Then there's also this JMeter setting to say how long to keep connections alive for.
httpclient4.time_to_live
I've set it to 120000 in jmeter.properties, but according to the docs the user.properties file should be used. I know jmeter.properties with a setting of 120000 worked for me.
I set the value high to see if it is an http keep alive causing the port switch. Whatever you set it to you need to ensure the client you are emulating does the same.
Since you do get some quick results, I would guess it is a short timer somewhere rather than the server side not allowing keep-alive at all. Wireshark can help you pinpoint this, as it could be the server side resetting the connection after a certain time. The config above extends the client-side time, which may get you the information you need; if not, have a look at the server-side equivalent, which will vary depending on what is serving the endpoint.
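For reference, a minimal sketch of what that looks like in user.properties (120000 is the value mentioned above; the unit is milliseconds, so whatever you choose should match the client you are emulating):
# Keep pooled connections alive for up to 2 minutes before forcing a new one.
httpclient4.time_to_live=120000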

UDP Health Check

So we have an application that makes UDP calls and sends packets. However, since no responses are given for UDP calls, how can we ensure that the service is up, the port is open, and that things are getting stored?
The only thought we have right now is to send in test packets and ensure they are getting saved out to the db.
So my overall question is: is there a better, easier way to ensure that our UDP calls are succeeding?
On the listening host, you can validate that the port is open with netstat. For example, if your application uses UDP port 68, you could run:
# Grep for :<port> from netstat output.
$ netstat -lnu | grep :68
udp        0      0 0.0.0.0:68              0.0.0.0:*
You could also send some test data to your application, and then check your database to verify that the fixture data made it in. That doesn't mean it always will, just that the path was working at the time of the test.
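A concrete sketch of that check (the host, port, and payload are placeholders; assumes netcat is available):
# Send a uniquely tagged test datagram to the service.
$ echo "healthcheck-$(date +%s)" | nc -u -w1 logs.example.com 68
# Then query the database for that "healthcheck-<timestamp>" value to confirm it was received and stored.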
Ultimately, the problem is that UDP packets are best-effort, and not guaranteed. So unless you can configure your logging platform to send some sort of acknowledgment after data is received and/or written, then you can't guarantee anything. The very nature of UDP is that it leaves acknowledgments (if any) to the application layer.
We took a different approach: we are checking to make sure the calls made it to the db. It's easy enough to query a table and ensure records are in there; if there are no recent ones, we know something is wrong. CodeGnome had a good idea, just not the route we went. Thanks!

Why is SNMP usually run over UDP and not TCP/IP?

This morning, there were big problems at work because an SNMP trap didn't "go through" because SNMP is run over UDP. I remember from the networking class in college that UDP isn't guaranteed delivery like TCP/IP. And Wikipedia says that SNMP can be run over TCP/IP, but UDP is more common.
I get that some of the advantages of UDP over TCP/IP are speed, broadcasting, and multicasting. But it seems to me that guaranteed delivery is more important for network monitoring than broadcasting ability. Particularly when there are serious high-security needs. One of my coworkers told me that UDP packets are the first to be dropped when traffic gets heavy. That is yet another reason to prefer TCP/IP over UDP for network monitoring (IMO).
So why does SNMP use UDP? I can't figure it out and can't find a good reason on Google either.
UDP is actually expected to work better than TCP in lossy networks (or congested networks). TCP is far better at transferring large quantities of data, but when the network fails it's more likely that UDP will get through. (in fact, I recently did a study testing this and it found that SNMP over UDP succeeded far better than SNMP over TCP in lossy networks when the UDP timeout was set properly). Generally, TCP starts behaving poorly at about 5% packet loss and becomes completely useless at 33% (ish) and UDP will still succeed (eventually).
So the right thing to do, as always, is pick the right tool for the right job. If you're doing routine monitoring of lots of data, you might consider TCP. But be prepared to fall back to UDP for fixing problems. Most stacks these days can actually use both TCP and UDP.
As for sending TRAPs, yes, TRAPs are unreliable because they're not acknowledged. However, SNMP INFORMs are an acknowledged version of an SNMP TRAP. Thus, if you want to know that the notification receiver got the message, use INFORMs. Note that TCP does not solve this problem, as it only provides a transport-level acknowledgment that the message reached the remote TCP stack; there is no assurance that the notification-receiving application actually got it. SNMP INFORMs do application-level acknowledgement and are much more trustworthy than assuming a TCP ACK means the application got it.
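With the Net-SNMP command-line tools, for instance, you can send an INFORM instead of a TRAP via snmptrap's -Ci option; it is retried until acknowledged or until it times out (a sketch; the community string and receiver address are placeholders, and the OID is the standard coldStart notification):
# Send an acknowledged INFORM rather than a fire-and-forget TRAP.
$ snmptrap -v 2c -Ci -c public receiver.example.com '' 1.3.6.1.6.3.1.1.5.1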
If systems sent SNMP traps via TCP they could block waiting for the packets to be ACKed if there was a problem getting the traffic to the receiver. If a lot of traps were generated, it could use up the available sockets on the system and the system would lock up. With UDP that is not an issue because it is stateless. A similar problem took out BitBucket in January although it was syslog protocol rather than SNMP--basically, they were inadvertently using syslog over TCP due to a configuration error, the syslog server went down, and all of the servers locked up waiting for the syslog server to ACK their packets. If SNMP traps were sent over TCP, a similar problem could occur.
http://blog.bitbucket.org/2012/01/12/follow-up-on-our-downtime-last-week/
Check out O'Reilly's writings on SNMP: https://library.oreilly.com/book/9780596008406/essential-snmp/18.xhtml
One advantage of using UDP for SNMP traps is that you can direct UDP to a broadcast address, and then field them with multiple management stations on that subnet.
The use of traps with SNMP is considered unreliable. You really should not be relying on traps.
SNMP was designed to be used as a request/response protocol. The protocol details are simple (hence the name, "simple network management protocol"). And UDP is a very simple transport. Try implementing TCP on your basic agent - it's considerably more complex than a simple agent coded using UDP.
SNMP get/getnext operations have a retry mechanism - if a response is not received within timeout then the same request is sent up to a maximum number of tries.
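With the Net-SNMP snmpget tool, for example, the timeout and retry count are exposed directly (a sketch; the community string and agent address are placeholders):
# Ask for sysUpTime.0, waiting 2 seconds per attempt and retrying up to 5 times.
$ snmpget -v 2c -c public -t 2 -r 5 agent.example.com 1.3.6.1.2.1.1.3.0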
Usually, when you're doing SNMP, you're on a company network, you're not doing this over the long haul. UDP can be more efficient. Let's look at (a gross oversimplification of) the conversation via TCP, then via UDP...
TCP version:
client sends SYN to server
server sends SYN/ACK to client
client sends ACK to server - socket is now established
client sends DATA to server
server sends ACK to client
server sends RESPONSE to client
client sends ACK to server
client sends FIN to server
server sends FIN/ACK to client
client sends ACK to server - socket is torn down
UDP version:
client sends request to server
server sends response to client
Generally, the UDP version succeeds since it's on the same subnet, or not far away (i.e. on the company network).
However, if there is a problem with either the initial request or the response, it's up to the app to decide what to do. A. Can we get by with a missed packet? If so, who cares, just move on. B. Do we need to make sure the message gets through? Simple, just redo the whole exchange: client sends request to server, server sends response to client. The application can include a sequence number, so that if the recipient ends up receiving both copies it knows it's really the same message being sent again.
This same technique is why DNS is done over UDP. It's much lighter weight and generally it works the first time because you are supposed to be near your DNS resolver.
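dig exposes the same retry knobs if you want to see this in action (a sketch; the resolver address is a placeholder):
# Query over UDP, waiting 2 seconds per attempt and making at most 3 attempts.
$ dig +time=2 +tries=3 @192.0.2.53 example.com A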

Where the datagrams are if a client does not listen to a UDP port?

Suppose a client sends a number of datagrams to a server through my application. If my application on the server side stops working and cannot receive any datagrams, but the client still continues to send more datagrams to the server over UDP, where do those datagrams go? Will they stay in the server's OS buffer (or something)?
I ask this question because I want to know whether, if a client sends 1000 datagrams (1 KB each) to a PC over the internet, those 1000 datagrams will go through the internet (consuming bandwidth) even if no one is listening for that data.
If the answer is yes, how can I stop this from happening? I mean, if the server stops functioning, how can I use UDP to find that out and stop any further sending?
Thanks
I ask this question because I want to know whether, if a client sends 1000 datagrams (1 KB each) to a PC over the internet, those 1000 datagrams will go through the internet (consuming bandwidth) even if no one is listening for that data.
Yes.
If the answer is yes, how can I stop this from happening? I mean, if the server stops functioning, how can I use UDP to find that out and stop any further sending?
You need a protocol-level control loop, i.e. you need to implement a protocol on top of UDP to take care of this situation. UDP isn't connection-oriented, so it is up to the application that uses UDP to account for this failure mode.
UDP itself does not provide any facility to determine whether a message was successfully received by the other side or not. You would need TCP to establish a reliable connection, and only after that send the data over UDP.
The lowest-overhead solution would be a keep-alive type thing like jdupont suggested. You could also change to TCP, which provides this facility for you.
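A rough sketch of such a keep-alive probe with stock tools (everything here is hypothetical: the port, the host name, and the assumption that you run a small echo responder next to the real service):
# Server side: a trivial UDP echo responder alongside the real service.
$ socat UDP-LISTEN:5001,fork EXEC:/bin/cat
# Client side: send a probe and wait up to 2 seconds for the echo before sending real traffic.
$ echo PROBE | nc -u -w2 server.example.com 5001
If the probe comes back, carry on sending; if it doesn't, back off and retry the probe later instead of pouring datagrams at a dead host.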