This morning, there were big problems at work because an SNMP trap didn't "go through" because SNMP is run over UDP. I remember from my networking class in college that UDP doesn't guarantee delivery the way TCP does. And Wikipedia says that SNMP can be run over TCP, but UDP is more common.
I get that some of the advantages of UDP over TCP are speed, broadcasting, and multicasting. But it seems to me that guaranteed delivery is more important for network monitoring than broadcasting ability, particularly when there are serious high-security needs. One of my coworkers told me that UDP packets are the first to be dropped when traffic gets heavy. That is yet another reason (IMO) to prefer TCP over UDP for network monitoring.
So why does SNMP use UDP? I can't figure it out and can't find a good reason on Google either.
UDP is actually expected to work better than TCP in lossy (or congested) networks. TCP is far better at transferring large quantities of data, but when the network fails it's more likely that UDP will get through. (In fact, I recently did a study testing this, and it found that SNMP over UDP succeeded far better than SNMP over TCP in lossy networks when the UDP timeout was set properly.) Generally, TCP starts behaving poorly at about 5% packet loss and becomes completely useless at around 33%, whereas UDP will still succeed (eventually).
So the right thing to do, as always, is pick the right tool for the right job. If you're doing routine monitoring of lots of data, you might consider TCP. But be prepared to fall back to UDP for fixing problems. Most stacks these days can actually use both TCP and UDP.
As for sending TRAPs, yes, TRAPs are unreliable because they're not acknowledged. However, SNMP INFORMs are an acknowledged version of an SNMP TRAP. Thus, if you want to know that the notification receiver got the message, use INFORMs. Note that TCP does not solve this problem, as it only provides transport-level acknowledgement that the message was received; there is no assurance that the notification-receiving application actually got it. SNMP INFORMs do application-level acknowledgement and are much more trustworthy than assuming a TCP ACK means the application got it.
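For illustration only (this is not SNMP code), here is a minimal Java sketch of what application-level acknowledgement means: the receiver sends its ACK only after it has actually handled the notification, which is the guarantee an INFORM gives you and a bare TCP ACK does not. The port number and the "ACK <text>" reply format are invented for the example.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

// Sketch of application-level acknowledgement over UDP: the ACK is sent
// only after the notification has actually been processed.
public class AckingReceiver {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(16200)) {   // port is arbitrary for this example
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket request = new DatagramPacket(buf, buf.length);
                socket.receive(request);

                String notification = new String(request.getData(), 0, request.getLength(),
                                                 StandardCharsets.UTF_8);
                process(notification);                              // application-level handling happens first

                byte[] ack = ("ACK " + notification).getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(ack, ack.length,
                                               request.getAddress(), request.getPort()));
            }
        }
    }

    private static void process(String notification) {
        System.out.println("handled: " + notification);
    }
}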
If systems sent SNMP traps via TCP, they could block waiting for the packets to be ACKed if there was a problem getting the traffic to the receiver. If a lot of traps were generated, it could use up the available sockets on the system and the system would lock up. With UDP that is not an issue, because it is stateless. A similar problem took out Bitbucket in January, although it involved syslog rather than SNMP: they were inadvertently using syslog over TCP due to a configuration error, the syslog server went down, and all of their servers locked up waiting for the syslog server to ACK their packets. If SNMP traps were sent over TCP, a similar problem could occur.
http://blog.bitbucket.org/2012/01/12/follow-up-on-our-downtime-last-week/
Check out O'Reilly's Essential SNMP: https://library.oreilly.com/book/9780596008406/essential-snmp/18.xhtml
One advantage of using UDP for SNMP traps is that you can direct UDP to a broadcast address, and then field them with multiple management stations on that subnet.
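A minimal Java sketch of that idea, assuming an illustrative subnet broadcast address (192.168.1.255) and port; any management station listening on that port on the subnet would receive the same datagram:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Sketch: send one datagram to a subnet broadcast address so every listener
// on that subnet can field it. Address and port are placeholders.
public class BroadcastNotifier {
    public static void main(String[] args) throws Exception {
        byte[] payload = "link down on eth0".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);  // required before sending to a broadcast address
            InetAddress broadcast = InetAddress.getByName("192.168.1.255");
            socket.send(new DatagramPacket(payload, payload.length, broadcast, 16200));
        }
    }
}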
The use of traps with SNMP is considered unreliable. You really should not be relying on traps.
SNMP was designed to be used as a request/response protocol. The protocol details are simple (hence the name, "simple network management protocol"). And UDP is a very simple transport. Try implementing TCP on your basic agent - it's considerably more complex than a simple agent coded using UDP.
SNMP get/getnext operations have a retry mechanism: if a response is not received within the timeout, the same request is re-sent, up to a maximum number of tries.
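Here is a rough sketch of that timeout-and-retry loop in plain Java rather than an SNMP library; the agent address, port, 2-second timeout, and 3 tries are illustrative values, not anything SNMP mandates:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

// Sketch of a request/response exchange over UDP with a timeout and retries.
public class RetryingRequester {
    public static void main(String[] args) throws Exception {
        byte[] request = "get sysUpTime".getBytes(StandardCharsets.UTF_8);
        InetAddress agent = InetAddress.getByName("192.0.2.10");
        int maxTries = 3;

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(2000);          // wait up to 2 s for each response
            for (int attempt = 1; attempt <= maxTries; attempt++) {
                socket.send(new DatagramPacket(request, request.length, agent, 16200));
                try {
                    byte[] buf = new byte[1500];
                    DatagramPacket response = new DatagramPacket(buf, buf.length);
                    socket.receive(response);   // throws SocketTimeoutException if nothing arrives
                    System.out.println("got response on attempt " + attempt);
                    return;
                } catch (SocketTimeoutException timedOut) {
                    System.out.println("attempt " + attempt + " timed out, retrying");
                }
            }
            System.out.println("no response after " + maxTries + " tries");
        }
    }
}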
Usually, when you're doing SNMP, you're on a company network, you're not doing this over the long haul. UDP can be more efficient. Let's look at (a gross oversimplification of) the conversation via TCP, then via UDP...
TCP version:
client sends SYN to server
server sends SYN/ACK to client
client sends ACK to server - socket is now established
client sends DATA to server
server sends ACK to client
server sends RESPONSE to client
client sends ACK to server
client sends FIN to server
server sends FIN/ACK to client
client sends ACK to server - socket is torn down
UDP version:
client sends request to server
server sends response to client
Generally, the UDP version succeeds since it's on the same subnet, or not far away (i.e. on the company network).
However, if there is a problem with either the initial request or the response, it's up to the app to decide what to do. A. Can we get by with a missed packet? If so, who cares; just move on. B. Do we need to make sure the message is sent? Simple: just redo the whole thing (client sends request to server, server sends response to client). The application can include a message number so that, if the recipient receives both copies, it knows it's really the same message being sent again.
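A small Java sketch of the "include a number" idea, with an invented "<id> <body>" wire format: the server remembers ids it has already seen, so a retransmitted request is recognised as a duplicate, but the response is still (re)sent so the client's retry succeeds.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Set;

// Sketch: deduplicate retransmitted requests by a client-supplied message id.
public class DedupingServer {
    public static void main(String[] args) throws Exception {
        Set<String> seenIds = new HashSet<>();
        try (DatagramSocket socket = new DatagramSocket(16200)) {
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                String text = new String(packet.getData(), 0, packet.getLength(),
                                         StandardCharsets.UTF_8);
                String id = text.split(" ", 2)[0];        // first token is the message id

                if (!seenIds.add(id)) {
                    System.out.println("duplicate of message " + id + ", ignoring body");
                } else {
                    System.out.println("new message " + id + ": " + text);
                }
                // Either way, send (or resend) the response so the client's retry succeeds.
                byte[] reply = ("OK " + id).getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(reply, reply.length,
                                               packet.getAddress(), packet.getPort()));
            }
        }
    }
}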
This same technique is why DNS is done over UDP. It's much lighter weight and generally it works the first time because you are supposed to be near your DNS resolver.
Related
I have multiple applications (the producers) that produce messages to be processed by another application (the consumer). The messages will be sent through an ActiveMQ broker running on the same server. I don't have access to the applications' code, so the messages will be produced by executing a script (I currently don't know which language to use). The consumer will be a Java application that processes the received messages.
I'm looking for an efficient transport that fits my use case. The VM transport cannot be used here. Also, I would like to avoid opening a TCP connection with the broker every time the producer script is executed (i.e. I would like to avoid using the TCP transport). I thought that UDP may be a good fit unless you know another transport which is more appropriate.
Thanks,
Mickael
There are pros and cons to both the TCP and UDP protocols.
1) If ordering and reliable delivery of messages don't matter to you, then UDP might be a nice choice. Moreover, with UDP it can also happen that duplicate messages are delivered to the broker.
2) Using TCP offers reliable delivery of messages along with ordering, but if you want to avoid the delay of a stream transport like TCP, you might decide against it.
There are a couple of other transports as well which you can consider based on your requirements (a rough producer sketch follows the list):
NIO transport (used in case of high-traffic requirements)
HTTP transport (in case you want to get through firewalls)
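As a rough, non-authoritative sketch of how a producer might select one of these transports by broker URI. It assumes the broker is configured with a matching transportConnector; the host, ports, and queue name are placeholders, not defaults you can rely on.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch: the transport is chosen purely by the broker URI scheme.
public class TransportChoiceProducer {
    public static void main(String[] args) throws Exception {
        // Pick one, depending on the trade-offs discussed above:
        //   "tcp://localhost:61616"  - reliable, ordered, a connection per producer run
        //   "nio://localhost:61616"  - TCP semantics, fewer broker threads under high load
        //   "udp://localhost:61618"  - no connection setup, but unordered and lossy
        //   "http://localhost:61619" - tunnels through firewalls at extra cost
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("nio://localhost:61616");

        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("incoming.messages"));
            producer.send(session.createTextMessage("payload produced by the script"));
        } finally {
            connection.close();
        }
    }
}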
Hope this helps!
Good luck!
If I were to implement a server to handle multiple clients connecting simultaneously, would it be better to use TCP?
Not taking efficiency into account (I know UDP is quicker, but unreliable).
In UDP you can't have sockets for each client connection?
Because in UDP the socket is identified by only the destination port number (right?).
In Java, I know it is easy to create a multi-threaded server to handle multiple clients at the same time in TCP. But can it be done in UDP? I imagine that it would be very complicated.
I'm just trying to get an understanding of UDP here (I don't want to actually implement anything).
It depends on what kind of server you are developing. If you need your clients to stay connected and ready to receive data from the server (for example, a push service), you should implement it using TCP. If you want to implement a simple request/response service, then UDP is a better choice.
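To illustrate why no per-client socket is needed in UDP, here is a hedged Java sketch of a request/response server serving many clients at once: one socket receives every datagram, each datagram already carries the sender's address and port, and a small thread pool does the work. The port and the echo-style handling are made up for the example.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one UDP socket, many clients, demultiplexed by the sender address in each packet.
public class UdpRequestResponseServer {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        try (DatagramSocket socket = new DatagramSocket(16200)) {
            while (true) {
                byte[] buf = new byte[1500];                       // fresh buffer per request
                DatagramPacket request = new DatagramPacket(buf, buf.length);
                socket.receive(request);

                workers.submit(() -> {                             // handle each request off the receive thread
                    try {
                        String text = new String(request.getData(), 0, request.getLength(),
                                                 StandardCharsets.UTF_8);
                        byte[] reply = ("echo: " + text).getBytes(StandardCharsets.UTF_8);
                        socket.send(new DatagramPacket(reply, reply.length,
                                                       request.getAddress(), request.getPort()));
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
        }
    }
}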
I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can within the dynamic port range (49152-65535), and allow the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data is available, even though I know the packet got to the machine.
The server succeeds on doing a recv() on broadcasted data. But the client fails on doing a recv() of the server’s response which is addressed to its discovery socket.
The question is very vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived at the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
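The original code is Winsock/C++, but the announce pattern is easiest to show in a short Java sketch; the broadcast address, announce port, and message format here are placeholders. The server binds its TCP listener to whatever free port the OS picks, then broadcasts that port every 5 seconds (accepting connections on the listener is omitted here).

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.nio.charset.StandardCharsets;

// Sketch: periodically broadcast the server's dynamically chosen TCP listening port.
public class AnnouncingServer {
    public static void main(String[] args) throws Exception {
        ServerSocket listener = new ServerSocket(0);     // 0 = let the OS pick a free port
        int listeningPort = listener.getLocalPort();

        try (DatagramSocket announcer = new DatagramSocket()) {
            announcer.setBroadcast(true);
            InetAddress broadcast = InetAddress.getByName("255.255.255.255");
            byte[] message = ("SERVER_PORT " + listeningPort).getBytes(StandardCharsets.UTF_8);
            while (true) {
                announcer.send(new DatagramPacket(message, message.length, broadcast, 16201));
                Thread.sleep(5000);                      // re-announce every 5 seconds
            }
        }
    }
}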
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
Well, it was one of those stupid situations: Windows Firewall was active, besides the other firewall, and silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent through broadcasting. And when I was at my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.
Can anyone tell me where the UDP protocol is used, other than live streaming of music/video? What are typical use cases for UDP?
UDP is also good for broadcast, such as service discovery - finding that newly plugged in printer.
Also of note is that broadcast is anonymous: you don't need to specify target hosts, so it can form the foundation of a convenient plug-and-play or high-availability network.
UDP is stateless and is good for applications that have large numbers of clients connecting to a server, such as time servers or DNS. The fact that no connection has to be established and maintained reduces the memory required by the server. There is no handshaking involved, and so this reduces the traffic on the network. On the downside, if the information transferred requires multiple packets, there is no transmission control to ensure that all packets arrive, and arrive in the correct order - but in games, lost packets are probably better than late or disordered ones.
Anything else where you need performance but can survive if a packet gets lost along the way. Multiplayer games come to mind, for example.
A very common use case is DNS, since the overhead of creating a TCP connection would by far outweigh the actual payload.
Additional use cases are NTP (Network Time Protocol) and most video games.
I use UDP to add chat capabilities to our applications. No need to create a server. It is also useful to dispatch events to all users of our applications.
Suppose a client sends a number of datagrams to a server through my application. If my application on the server side stops working and cannot receive any datagrams, but the client still continues to send more datagrams to the server over UDP, where do those datagrams go? Will they stay in the server's OS buffer (or something)?
I ask this question because I want to know that if a client send 1000 datagrams (1K each) to a PC over the internet, will those 1000 datagrams go through the internet (consuming the bandwidth) even if no one is listening to those data?
If the answer is yes, how can I stop this from happening? I mean, if a server stops functioning, how can I use UDP to find that out and stop any further sending?
Thanks
I ask this question because I want to know that if a client send 1000 datagrams (1K each) to a PC over the internet, will those 1000 datagrams go through the internet (consuming the bandwidth) even if no one is listening to those data?
Yes
If the answer is yes, how can I stop this from happening? I mean, if a server stops functioning, how can I use UDP to find that out and stop any further sending?
You need a protocol-level control loop, i.e. you need to implement a protocol on top of UDP to take care of this situation. UDP isn't connection-oriented, so it is up to the application that uses UDP to account for this failure mode.
UDP itself does not provide facilities to determine whether a message was successfully received by the client or not. You could use TCP to establish a reliable connection first and then send the data over UDP.
The lowest-overhead solution would be a keep-alive type thing like jdupont suggested (a rough sketch follows). You can also change to use TCP, which provides this facility for you.
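As a rough sketch of that control-loop idea on top of UDP (everything here - host, port, PING/PONG format, timeout, burst sizes - is invented for illustration): the client probes the receiving application before each burst of datagrams and stops sending when the probe goes unanswered, instead of pouring traffic into the void.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

// Sketch: a keep-alive probe over UDP; stop sending once the server stops answering.
public class ProbingSender {
    public static void main(String[] args) throws Exception {
        InetAddress server = InetAddress.getByName("203.0.113.5");
        int port = 16200;

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(1000);                              // 1 s for the server to answer the probe

            for (int burst = 0; burst < 10; burst++) {
                if (!serverIsAlive(socket, server, port)) {
                    System.out.println("no PONG from server, stopping after burst " + burst);
                    return;
                }
                for (int i = 0; i < 100; i++) {                     // 100 datagrams per burst
                    byte[] data = ("datagram " + burst + "/" + i).getBytes(StandardCharsets.UTF_8);
                    socket.send(new DatagramPacket(data, data.length, server, port));
                }
            }
        }
    }

    private static boolean serverIsAlive(DatagramSocket socket, InetAddress server, int port)
            throws Exception {
        byte[] ping = "PING".getBytes(StandardCharsets.UTF_8);
        socket.send(new DatagramPacket(ping, ping.length, server, port));
        try {
            byte[] buf = new byte[64];
            socket.receive(new DatagramPacket(buf, buf.length));    // any reply counts as a PONG
            return true;
        } catch (SocketTimeoutException timedOut) {
            return false;
        }
    }
}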