Cannot receive UDP when different sockets bind to the same port

Came across an interesting observation in an SO post: there are two client processes (the client is behind a NAT) which both bind locally to the same port (reusing the port); one process uses its UDP socket to send data, and the other to receive.
It turns out the receiving process could not receive the data.
Client Process (Send) --- Port 5000 ---> NAT --- Port 5333 (say) ---> Server
This works.
Server --- Port 5333 ---> NAT --- Port ?? ---> Client Process (Recv)
This doesn't work.
It seems that if a single client process uses the same socket for both send and receive, it does receive the data from the server.
Why this behavior? If both the client's send and receive processes are bound to the same port, shouldn't this have worked?
Why does using separate processes change things? Does the kernel end up using different ports for the two processes despite port reuse?
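For concreteness, here is a minimal sketch (Linux; the port and options are assumptions about the post's setup) of what the receiving process presumably does. One likely explanation for the behavior: with SO_REUSEPORT, the kernel delivers each incoming datagram to exactly one of the sockets sharing the port (on Linux, chosen by a hash of the packet's addresses and ports), so the server's reply can just as easily land on the sending process's socket, where nobody reads it.

    /* Minimal sketch of the receiving process (Linux).  Both the
     * sender and the receiver would run this same bind logic; the
     * kernel then picks ONE socket per incoming datagram, so this
     * recv() may never see the server's reply. */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int one = 1;
        /* Every process sharing the port must set SO_REUSEPORT
         * before bind(), or bind() fails with EADDRINUSE. */
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5000);               /* shared local port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        char buf[1500];
        ssize_t n = recv(fd, buf, sizeof(buf), 0); /* may block forever */
        if (n >= 0)
            printf("got %zd bytes\n", n);
        close(fd);
        return 0;
    }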

Related

Can IPv6 multicasting work when one or more receivers are unable to bind to the program's well-known port?

Consider a simple IPv6 multicast application:
A "talker" program periodically sends out IPv6 UDP packets to a well-known multicast-group, sending them to a well-known port.
Zero or more "listener" programs bind themselves to that well-known port and join the well-known multicast group, and they all receive the UDP packets.
That all works pretty well, except in the case where one or more of the listener programs is unable to bind to the well-known UDP port because a socket in some other (unrelated) program has already bound to that UDP port (and didn't set the SO_REUSEADDR and/or SO_REUSEPORT options to allow it to be shared with anyone else). AFAICT in that case the listener program is simply out of luck; there is nothing it can do to receive the multicast data, short of asking the user to terminate the interfering program in order to free up the port.
Or is there? For example, is there some technique or approach that would allow a multicast listener to receive all the incoming multicast packets for a given multicast-group, regardless of which UDP port they are being sent to?
If you want to receive all multicast traffic regardless of port, you'd need to use raw sockets to get the complete IP datagram. You could then directly inspect the IP header, check whether it's carrying UDP, and then check the UDP header before reading the application-layer data. Note that the methods for doing this are OS-specific and typically require administrative privileges.
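A hedged sketch of that approach on Linux (needs root or CAP_NET_RAW; other OSes differ). One wrinkle: on IPv6 raw sockets the kernel does not hand you the IPv6 header, so received data starts at the UDP header. The group address ff12::1234 and the interface name eth0 are placeholders.

    /* Sketch: capture inbound UDP datagrams on a raw IPv6 socket
     * and read the ports straight out of the UDP header.
     * Linux-specific; run as root or with CAP_NET_RAW. */
    #include <arpa/inet.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET6, SOCK_RAW, IPPROTO_UDP);
        if (fd < 0) { perror("socket (are you root?)"); return 1; }

        /* Join the group so the NIC accepts the multicast frames. */
        struct ipv6_mreq mreq = {0};
        inet_pton(AF_INET6, "ff12::1234", &mreq.ipv6mr_multiaddr);
        mreq.ipv6mr_interface = if_nametoindex("eth0"); /* placeholder */
        if (setsockopt(fd, IPPROTO_IPV6, IPV6_JOIN_GROUP,
                       &mreq, sizeof(mreq)) < 0)
            perror("IPV6_JOIN_GROUP");

        for (;;) {
            unsigned char buf[65536];
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n < 8)                 /* shorter than a UDP header */
                continue;
            /* UDP header: src port, dst port, length, checksum. */
            unsigned sport = (buf[0] << 8) | buf[1];
            unsigned dport = (buf[2] << 8) | buf[3];
            printf("UDP %u -> %u, %zd payload bytes\n",
                   sport, dport, n - 8);
        }
    }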
Regarding SO_REUSEADDR and SO_REUSEPORT: if all of the programs set these options, multiple programs can receive multicast packets sent to a given port. However, if you also need to receive unicast packets, this method has issues: incoming unicast packets may be delivered to both sockets, may always go to one specific socket, or may alternate between them. This, too, differs by OS.

Apache NiFi TCP Client/Server

Can I simulate a TCP client/server interaction using Apache NiFi processors alone or do I have to write code for this? The processors to be considered here are ListenTCP, PutTCP, and GetTCP. In particular, I want to simulate and show a POC for sending HL7 messages from a TCP client to a TCP server. Anyone done this before using NiFi? Any help would be appreciated. Thanks.
ListenTCP starts a server socket waiting for incoming TCP connections. Your client can make connections to the hostname where NiFi is running and the port specified in ListenTCP. If your client needs to send multiple pieces of data over a single connection, then it must send new-lines in between each message. You can simulate a client in NiFi by using PutTCP and pointing it at the same host/port where ListenTCP is running.
UPDATE - Here is an example of the flow (screenshot not reproduced here): a PutTCP processor pointed at the host/port of the ListenTCP processor.
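If you want the client side outside NiFi as well, a bare TCP client is enough to stand in for PutTCP. A minimal sketch (the host 127.0.0.1, port 9999, and the HL7 text are all made-up values); note it terminates the message with a new-line, which is what ListenTCP splits on.

    /* Minimal TCP client standing in for PutTCP: connect to the
     * host/port configured in ListenTCP and send one newline-
     * delimited HL7-style message. */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(9999);        /* ListenTCP port (assumed) */
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }
        /* Illustrative HL7 v2 segment; production HL7 feeds often
         * use MLLP framing rather than bare new-lines. */
        const char *msg =
            "MSH|^~\\&|SENDER|FAC|NIFI|FAC|20240101||ADT^A01|1|P|2.3\n";
        send(fd, msg, strlen(msg), 0);
        close(fd);
        return 0;
    }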

For how long a router keeps records in the NAT and can they be reused forwarding requests from other hosts?

There is an answer explaining in simple terms how a router translates requests from the local network to the outside and back (https://superuser.com/questions/105838/how-does-router-know-where-to-forward-packet). What is not clear is: for how long is a record in the NAT table kept?
For example, if I send a UDP request to 25.34.11.56:3874 and my local endpoint is 192.168.1.21:54389, the router rewrites the request packet and adds a record to the NAT table. Let's say the external endpoint will be 68.55.32.89:34535. Then the computer which received my request responds to 68.55.32.89:34535, and the packet is forwarded to the local 192.168.1.21:54389 in accordance with the NAT record. What happens to the record after that?
What if the 25.34.11.56:3874 decides to send a request to my external endpoint 68.55.32.89:34535 after 10 or 100 minutes? Will it still be forwarded by the router to the 192.168.1.21:54389?
Let's say there is another remote computer with the endpoint 55.43.77.98:8765. What will happen if this computer sends a request to my external endpoint 68.55.32.89:34535? Will it be forwarded to the local 192.168.1.21:54389 or will it be filtered out by the router because the remote endpoint does not match 25.34.11.56:3874 which was initially used for the first request and for the NAT record?
It depends.
According to Section 4.3 of RFC 4787, the UDP timeout of a NAT should not be smaller than 2 minutes (120 seconds), except for selected, well-known ports. In practice, however, routers tend to use smaller timeouts. For example, OpenWRT 14.07 uses a timeout of just 60 seconds.
For TCP, the timeouts can be much larger, since TCP connections are usually terminated by an explicit FIN/FIN-ACK exchange. For established TCP connections, Section 5 of RFC 5382 specifies a timeout of no less than 2 hours 4 minutes (7204 seconds), and OpenWRT uses 7440 seconds.
Concerning your second question, most NATs maintain mappings that are specific to a pair of endpoints (socket addresses). If a host A inside the NAT sends a datagram to socket address B, then the mapping will only apply to communication between A and B; a different host C outside the NAT will not be able to use that particular mapping to send data to A. (Some so-called full-cone NATs allow that, but they are fairly rare.)
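Coming back to the timeouts: because a mapping can expire in as little as 60 seconds, applications that need a long-lived UDP mapping typically refresh it with periodic keepalives. A minimal sketch, reusing the peer address from the question:

    /* Keepalive sketch: send a tiny datagram well inside the
     * shortest timeout seen in the wild (60 s on OpenWRT 14.07),
     * so the router never expires the NAT record. */
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(3874);       /* peer from the question */
        inet_pton(AF_INET, "25.34.11.56", &peer.sin_addr);

        for (;;) {
            sendto(fd, "ka", 2, 0,
                   (struct sockaddr *)&peer, sizeof(peer));
            sleep(25);                     /* comfortably under 60 s */
        }
    }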

recv() fails on UDP

I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can get within the dynamic port range (49152-65535), and the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data available, even though I know the packet got to the machine.
The server succeeds in doing a recv() on the broadcast data, but the client fails to recv() the server’s response addressed to its discovery socket.
The question is very vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived to the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
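For what it’s worth, here is a sketch of the server side of that simplified exchange, in BSD-socket style (under Winsock you’d add WSAStartup() and use closesocket()). The discovery port 48000 and the advertised TCP port 9999 are made-up values.

    /* Discovery responder: the client broadcasts an EMPTY datagram
     * from its discovery socket; recvfrom() tells us the client's
     * address/port, and we reply directly with our TCP listening
     * port. */
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #define DISCOVERY_PORT 48000      /* assumed well-known port */

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in any = {0};
        any.sin_family = AF_INET;
        any.sin_port = htons(DISCOVERY_PORT);
        any.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(fd, (struct sockaddr *)&any, sizeof(any)) < 0) {
            perror("bind");
            return 1;
        }

        for (;;) {
            char buf[16];
            struct sockaddr_in client;
            socklen_t len = sizeof(client);
            /* Payload is irrelevant; we only want the sender's
             * address. */
            recvfrom(fd, buf, sizeof(buf), 0,
                     (struct sockaddr *)&client, &len);
            uint16_t tcp_port = htons(9999);   /* our listener (assumed) */
            sendto(fd, &tcp_port, sizeof(tcp_port), 0,
                   (struct sockaddr *)&client, len);
        }
    }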
Well, it was one of those stupid situations: Windows Firewall was active, besides the other firewall, and silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent through broadcasting. And when I was at my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.

State of preexisting connections when using file descriptor passing?

I'm playing around with a webserver, using a unix socket and sendmsg / recvmsg to pass the socket file descriptor to a new server process without losing any requests. While testing it with ab I found that client connections would linger, and apachebench (ab) would show the error: "apr_poll: The timeout specified has expired (70007)".
I suspected that there was a change to the address of the file descriptor that would render open connections useless, however making sure the connections were closed at the end of every request didn't make a difference, a couple of the requests would fail.
Is there some extra oddity at the socket level or is ab just being weird? Is there anything else I should take into account?
Edit: Using PHP as a client to make requests also stalls during the cycle.
It does make sense if you have a master server which is listening on a socket (accepting incoming connections) and you have multiple worker processes.
You can select a suitable/free worker (for example, based on the number of TCP connections each worker is handling) and pass the descriptor of the incoming connection from the master to the worker. This helps avoid the "thundering herd" when multiple workers listen on a common endpoint.
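For reference, the passing itself is done with sendmsg() and an SCM_RIGHTS control message over an AF_UNIX socket; the receiving worker gets its own duplicate of the descriptor, referring to the same open connection. A minimal sketch of the master's side (unix_fd is assumed to be one end of a socketpair() shared with the worker):

    /* Pass an accepted connection (conn_fd) to a worker over a
     * UNIX-domain socket (unix_fd) using SCM_RIGHTS. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int unix_fd, int conn_fd) {
        char dummy = 'x';              /* must send at least 1 byte */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {                        /* correctly aligned buffer */
            struct cmsghdr hdr;
            char buf[CMSG_SPACE(sizeof(int))];
        } u;
        memset(&u, 0, sizeof(u));

        struct msghdr msg = {0};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;    /* we are passing an fd */
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &conn_fd, sizeof(int));

        /* The worker's recvmsg() yields a new fd in its own table,
         * referring to the same connection. */
        return sendmsg(unix_fd, &msg, 0);
    }

One thing worth checking in the handoff scenario: a descriptor stays valid in every process that holds a copy, so if the old process exits (or closes its copies) before finishing the requests already in flight on them, those clients see a dropped connection, which could account for ab's timeouts.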
That's equivalent to trying to send a telephone over a telephone line. It doesn't make any sense. A socket fd identifies the endpoint of a connection. If another host wants a connection it will have to make its own. You can't give it one of yours.