What is an example UDP server timeout?

UDP:
Datagrams – Packets are sent individually and are checked for
integrity only if they arrive. Packets have definite boundaries which
are honored upon receipt, meaning a read operation at the receiver
socket will yield an entire message as it was originally sent.
If a datagram is split into multiple IP fragments, it seems there must be some timeout that the IP or UDP layer uses to discard the fragments if not all of them arrive.
Is there a timeout? What is an example timeout?
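The datagram-boundary behavior quoted above can be sketched with plain UDP sockets on the loopback interface; each receive yields exactly one message as it was sent:

```python
import socket

# Sketch: UDP preserves message boundaries. Each recvfrom() returns exactly
# one datagram as it was sent, never a partial or merged message.
# Assumes the loopback interface is available; port 0 lets the OS pick one.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # bind to an ephemeral port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"first message", addr)
sender.sendto(b"second", addr)

# Two sends yield exactly two reads; boundaries are honored.
msg1, _ = receiver.recvfrom(4096)
msg2, _ = receiver.recvfrom(4096)

sender.close()
receiver.close()
```

Note that this only demonstrates the boundary guarantee; any fragmentation and reassembly of a large datagram happens below this API, invisibly to the application.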


Can IPv6 multicasting work when one or more receivers are unable to bind to the program's well-known port?

Consider a simple IPv6 multicast application:
A "talker" program periodically sends out IPv6 UDP packets to a well-known multicast-group, sending them to a well-known port.
Zero or more "listener" programs bind themselves to that well-known port and join the well-known multicast group, and they all receive the UDP packets.
That all works pretty well, except when one or more of the listener programs is unable to bind to the well-known UDP port because a socket in some other (unrelated) program has already bound to it (without setting the SO_REUSEADDR and/or SO_REUSEPORT options that would allow the port to be shared). AFAICT, in that case the listener program is simply out of luck: there is nothing it can do to receive the multicast data, short of asking the user to terminate the interfering program in order to free up the port.
Or is there? For example, is there some technique or approach that would allow a multicast listener to receive all the incoming multicast packets for a given multicast-group, regardless of which UDP port they are being sent to?
If you want to receive all multicast traffic regardless of port, you'd need to use raw sockets to get the complete IP datagram. You could then directly inspect the IP header, check if it's using UDP, then check the UDP header before reading the application layer data. Note that methods of doing this are OS specific and typically require administrative privileges.
Regarding SO_REUSEADDR and SO_REUSEPORT: setting these options allows multiple programs to receive multicast packets sent to a given port. However, if you also need to receive unicast packets, this method has issues. Incoming unicast packets may be delivered to both sockets, may always go to one specific socket, or may be distributed between them in an alternating fashion. This also differs based on the OS.

Converting from UDP Datagram to UDS datagram

I have a couple of questions regarding Unix Domain Sockets. We currently have an application that has a receiver service receiving datagram packets from multiple client processes on the same machine using the loopback address. We want to convert it over to using Unix Domain Sockets. Here are my questions:
1. The receiver process may be down while senders are running; with UDP, the packets were simply dropped. Is the behavior of UDS the same, or do the senders receive an error? (A sender may also have started before the receiver, so the UDS path may not have been bound yet.)
2. The receiver may go down and be restarted. Since it has to unlink the path before binding, do the running senders' packets make it to the receiver, or do the senders need to reset?
3. If the receiver is down for an extended period, do the oldest packets get dropped, or will the socket buffer fill up and block the senders?

Should an IPv6 UDP socket that is set up to receive multicast packets also be able to receive unicast packets?

I've got a little client program that listens on an IPv6 multicast group (e.g. ff12::blah:blah%en0) for multicast packets that are sent out by a server. It works well.
The server would also like to sometimes send a unicast packet to my client (since if the packet is only relevant to one client there is no point in bothering all the other members of the multicast group with it). So my server just does a sendto() to my client's IP address and the port that the client's IPv6 multicast socket is listening on.
If my client is running under MacOS/X, this works fine; the unicast packet is received by the same socket that receives the multicast packets. Under Windows, OTOH, the client never receives the unicast packet (even though it does receive the multicast packets without any problems).
My question is, is it expected that a multicast-listener IPv6 UDP socket should also be able to receive unicast packets on that same port (in which case perhaps I'm doing something wrong, or have Windows misconfigured)? Or is this something that "just happens to work" under MacOS/X but isn't guaranteed, so the fact that it doesn't work for me under Windows just means I had the wrong expectations?
It should work fine. As long as you bind to IN6ADDR_ANY, then join the multicast groups, you should be able to send and receive unicast packets with no problem.
It's important to bind to IN6ADDR_ANY (or INADDR_ANY for IPv4) when using multicast. If you bind to a specific interface, this breaks multicast on Linux systems.
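A minimal sketch of the advice above, assuming the loopback interface has ::1 configured: a socket bound to the IPv6 wildcard address ("::", i.e. IN6ADDR_ANY) receives ordinary unicast datagrams on its port. A real listener would additionally join its multicast group with IPV6_JOIN_GROUP, omitted here.

```python
import socket

# Sketch: a wildcard-bound IPv6 UDP socket receives unicast traffic.
# Assumes IPv6 loopback (::1) is available.

listener = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
listener.bind(("::", 0))                 # IN6ADDR_ANY, ephemeral port
port = listener.getsockname()[1]

peer = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
peer.sendto(b"unicast hello", ("::1", port))   # plain unicast sendto()

msg, _ = listener.recvfrom(4096)

peer.close()
listener.close()
```

If this pattern fails on one platform but works on another, checking whether the listener bound to a specific address instead of the wildcard is a good first step, per the answer above.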

Lwip send udp packets larger than MTU, but my pc can not reassemble them

I use lwip-1.4.1 and stm32f407.
My lwip stack can send UDP packets to the PC, but the PC fails to reassemble them when a UDP datagram is larger than the MTU.
I used Wireshark to inspect the packets, but the fragmented UDP packets from my lwip stack look just like normal fragmented UDP packets.
The following link is the record from wireshark:
https://dl.dropboxusercontent.com/u/1321251/test.pcapng
Thanks
IP will reassemble fragmented packets, but UDP can only deliver entire datagrams, so delivery relies on all the fragments having arrived. If they don't, the datagram must be dropped. For that reason it is customary to restrict UDP datagrams to the MTU or less, and indeed most unwise to do anything else.
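The rule of thumb above can be made concrete with simple arithmetic, assuming a standard 1500-byte Ethernet MTU and an IPv4 header with no options:

```python
# Largest IPv4 UDP payload that fits in one unfragmented Ethernet packet:
# the MTU minus the IPv4 and UDP headers. Assumes a 20-byte IPv4 header
# (no options) and the usual 1500-byte Ethernet MTU.

MTU = 1500          # typical Ethernet MTU
IPV4_HEADER = 20    # minimum IPv4 header, no options
UDP_HEADER = 8      # fixed UDP header size

max_safe_payload = MTU - IPV4_HEADER - UDP_HEADER   # 1472 bytes
```

Keeping datagrams at or below this size avoids fragmentation entirely, so a single lost fragment can never cost you a whole datagram.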

icmp packets appearing with udp packets

I am sending UDP packets from one PC to another, and observing the traffic in Wireshark on the PC that receives the UDP packets. One interesting thing I see is ICMP packets appearing suddenly; then they disappear and reappear in a cyclic manner. What could be the reason for this? Am I doing something wrong? And what bad effects can it have on my UDP reception performance?
Please also find the attached wireshark figure taken from the destination pc.
The ICMP packets are created by the other host because the UDP port is not open. The ICMP Port Unreachable message includes the IP header and at least the first 8 bytes of the dropped datagram, so the sender can work out which session was affected.
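A sketch of how this surfaces to the sending application, assuming Linux: on a connect()ed UDP socket, the kernel reports the asynchronous ICMP Port Unreachable back as ECONNREFUSED on a later socket operation.

```python
import socket
import time

# Sketch, assuming Linux: sending to a closed UDP port triggers an ICMP
# Port Unreachable, which a connect()ed UDP socket reports as
# ConnectionRefusedError on a subsequent operation.

# Grab an ephemeral port and immediately close it, so it is (very likely)
# not listening when we probe it a moment later.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("127.0.0.1", closed_port))
s.send(b"probe")                      # elicits ICMP Port Unreachable
time.sleep(0.2)                       # give the ICMP reply time to arrive

got_refused = False
try:
    s.send(b"probe again")            # the queued error is delivered here
except ConnectionRefusedError:
    got_refused = True

s.close()
```

An unconnected socket discards these errors silently, which is why the ICMP traffic is often only visible in a packet capture. Beyond revealing that the receiver's port was closed at those moments, the ICMP packets themselves add negligible load.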