How to set local endpoint when using boost asio udp socket - boost-asio

I have 3 network interfaces on pc and want to make sure that when I do udp socket send, it sends via a specific network interface ( I have the ip address to use when sending data).
Here is the code.
udp::socket send_socket(io_service, boost::asio::ip::udp::v6());
udp::endpoint local_end_point(boost::asio::ip::address::from_string(LocalIpAddress), 1111);
// send_socket.bind(local_end_point);
send_socket.send_to(const_buffer, senderEndpoint, 0, error);
The above code works, but I don't have control over which network interface the data is sent through. If I uncomment the send_socket.bind line, I stop receiving any data on the other end.
Is there any other way to bind a socket to a particular network interface?

The bind() function ties your socket to a specific network interface for sending AND receiving. If you stop receiving data on the other end it's likely because the other end is not routable via the NIC that you've specified in the bind call.
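The effect of bind() on the source address can be shown with a self-contained sketch. This uses plain BSD sockets via Python rather than Boost.Asio, and 127.0.0.1 stands in for the address of the NIC you actually want to send through:

```python
import socket

# Hypothetical local address; replace with the address of the NIC you want
# to send through. Loopback is used here so the example is self-contained.
LOCAL_IP = "127.0.0.1"

# A receiver so we can observe the datagram's source address.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))            # OS picks a free port
recv_addr = recv_sock.getsockname()

# Binding the sending socket fixes its source address (and thus the interface
# the OS will use, provided the destination is routable via that interface).
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.bind((LOCAL_IP, 0))               # port 0: any free local port
send_sock.sendto(b"hello", recv_addr)

data, (src_ip, src_port) = recv_sock.recvfrom(1024)
print(src_ip)                               # the bound local address
```

If the peer is not reachable through the bound interface, replies (or the datagrams themselves) simply never arrive, which matches the symptom described in the question.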

Related

(WinSock), What should I fill in WSARecvFrom source ip address for server?

So, I'm implementing udp packet communication between client and server.
I've already implemented sending UDP data from the server to the client using WSASendTo.
On the client side, the sockaddr* ipFrom parameter in WSARecvFrom is the server's endpoint. Everything works fine.
But the problem is that the server uses only one UDP socket, so I don't know what I should put in the sockaddr* ipFrom parameter of WSARecvFrom. Since each client has a different IP, I can't specify the source IP address.
I tried just passing nullptr for this parameter, but it didn't work. Then I tried filling it with the server endpoint, and it worked.
But as you know, using the server endpoint for ipFrom doesn't make sense because that is not the source address, so I'm curious what address/endpoint I should put in sockaddr* ipFrom.
EDIT: By the way, I bound the UDP socket to the server IP/port before using it.
the problem is that the server uses only one UDP socket, so I don't know what I should put in the sockaddr* ipFrom parameter of WSARecvFrom.
Nothing. That is not your responsibility to do in that call.
But as you know, using the server endpoint for ipFrom doesn't make sense because that is not the source address, so I'm curious what address/endpoint I should put in sockaddr* ipFrom.
None.
You seem to have a misunderstanding of what the ipFrom parameter is meant for. It is an output parameter, not an input parameter, so you don't need to supply any data in it at all. WSARecvFrom() receives an incoming packet. If you want to know who the sender of that packet is (i.e., to send a subsequent reply back using WSASendTo()), you supply a pointer to an allocated sockaddr_... and WSARecvFrom() fills it with the sender's info. You don't fill in the sockaddr_... yourself at all.
Just like when the client receives a packet from the server and WSARecvFrom() reports the server's info, the same is true when the server receives a packet from a client and WSARecvFrom() reports the client's info.
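The same output-parameter semantics exist in the Berkeley recvfrom() call, which is what WSARecvFrom wraps. A minimal Python sketch on loopback (Python returns the sender address instead of filling a caller-supplied struct, but the direction of data flow is identical):

```python
import socket

# Server socket (stands in for the WSARecvFrom side).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()

# A client sends one datagram.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client.sendto(b"ping", server_addr)

# recvfrom() *returns* the sender's address -- you never supply it.
# This is the analog of WSARecvFrom filling in the sockaddr you pass.
data, sender = server.recvfrom(1024)
print(sender == client.getsockname())   # True: the client's own endpoint
```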

UDP server and connected sockets

[edit]
Seems my question was asked nearly 10 years ago here...
Emulating accept() for UDP (timing-issue in setting up demultiplexed UDP sockets)
...with no clean and scalable solution. I think this could be solved handily by supporting listen() and accept() for UDP, just as connect() is now.
[/edit]
In a followup to this question...
Can you bind() and connect() both ends of a UDP connection
...is there any mechanism to simultaneously bind() and connect()?
The reason I ask is that a multi-threaded UDP server may wish to move a new "session" to its own descriptor for scalability purposes. The intent is to prevent the listener descriptor from becoming a bottleneck, similar to the rationale behind SO_REUSEPORT.
However, a bind() call with a new descriptor will take over the port from the listener descriptor until the connect() call is made. That provides a window of opportunity, albeit brief, for ingress datagrams to get delivered to the new descriptor queue.
This window is also a problem for UDP servers wanting to employ DTLS. It's recoverable if the clients retry, but not having to would be preferable.
connect() on UDP does not provide connection demultiplexing.
connect() does two things:
Sets a default address for transmit functions that don't accept a destination address (send(), write(), etc)
Sets a filter on incoming datagrams.
It's important to note that the incoming filter simply discards datagrams that do not match. It does not forward them elsewhere. If there are multiple UDP sockets bound to the same address, some OSes will pick one (maybe random, maybe last created) for each datagram (demultiplexing is totally broken) and some will deliver all datagrams to all of them (demultiplexing succeeds but is incredibly inefficient). Both of these are "the wrong thing". Even an OS that lets you pick between the two behaviors via a socket option is still doing things differently from the way you wanted. The time between bind() and connect() is just the smallest piece of this puzzle of unwanted behavior.
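The discard behavior of the incoming filter is easy to observe on loopback. A Python sketch (standard sockets; the same connect() semantics the answer describes), in which the receiver is connected to one peer and a second peer's datagram silently disappears:

```python
import socket

# Receiver that will be "connected" to one peer.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

peer_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_a.bind(("127.0.0.1", 0))
peer_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_b.bind(("127.0.0.1", 0))

# connect() installs the incoming filter: only peer_a's datagrams pass.
receiver.connect(peer_a.getsockname())

peer_b.sendto(b"from B", receiver.getsockname())  # silently discarded
peer_a.sendto(b"from A", receiver.getsockname())

data = receiver.recv(1024)
print(data)   # b'from A' -- B's datagram is never delivered
```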
To handle UDP with multiple peers, use a single socket in connectionless mode. To have multiple threads processing received packets in parallel, you can either
call recvfrom on multiple threads which process the data (this works because datagram sockets preserve message boundaries, you'd never do this with a stream socket such as TCP), or
call recvfrom on a single thread, which doesn't do any processing, just queues the message to the thread responsible for processing it.
Even if you had an OS that gave you an option for dispatching incoming UDP based on designated peer addresses (connection emulation), doing that dispatching inside the OS is still not going to be any more efficient than doing it in the server application, and a user-space dispatcher tuned for your traffic patterns is probably going to perform substantially better than a one-size-fits-all dispatcher provided by the OS.
For example, a DNS (DHCP) server is going to transact with a lot of different hosts, nearly all running on port 53 (67-68) at the remote end. So hashing based on the remote port would be useless, you need to hash on the host. Conversely, a cache server supporting a web application server cluster is going to transact with a handful of hosts, and a large number of different ports. Here hashing on remote port will be better.
Do the connection association yourself, don't use socket connection emulation.
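The second option above (a single receive thread that only demultiplexes, handing work to per-peer queues) can be sketched in a few lines. This is an illustrative Python version; the dispatch key here is the full sender address, but as the DNS/cache-server example notes, you would tune the key to your traffic pattern:

```python
import queue
import socket
import threading

# One connectionless socket; a user-space dispatcher keyed on the peer
# address replaces any in-kernel "connection" emulation.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))

sessions = {}            # peer address -> that peer's datagram queue
results = queue.Queue()  # where workers report processed datagrams

def worker(peer, q):
    # Handles one peer's datagrams, in arrival order.
    while True:
        data = q.get()
        if data is None:
            return
        results.put((peer, data))

def dispatch(n):
    # Single receive loop: no processing, just demultiplex by sender.
    for _ in range(n):
        data, peer = sock.recvfrom(2048)
        if peer not in sessions:
            q = queue.Queue()
            sessions[peer] = q
            threading.Thread(target=worker, args=(peer, q), daemon=True).start()
        sessions[peer].put(data)

# Two "clients" send one datagram each.
c1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c1.sendto(b"one", sock.getsockname())
c2.sendto(b"two", sock.getsockname())
dispatch(2)
got = {results.get(timeout=1)[1] for _ in range(2)}
# got == {b'one', b'two'}, each handled by its own per-peer worker
```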
The issue you described is the one I encountered some time ago doing TCP-like listen/accept mechanism for UDP.
In my case the solution (which turned out to be bad, as I will describe later) was to create one UDP socket to receive all incoming datagrams and, when one arrived, connect that socket to the sender (via recvfrom() with MSG_PEEK followed by connect()) and hand it off to a new thread. A new, unconnected UDP socket was then created for subsequent incoming datagrams. This way the new thread (with its dedicated socket) called recv() on the socket and handled only that particular channel from then on, while the main one waited for new datagrams from other peers.
Everything worked well until the incoming datagram rate got higher. The problem was that while the main socket was transitioning to the connected state, it buffered not just one but several more datagrams (coming from many peers), so the thread created to handle a particular sender ended up reading datagrams not intended for it.
I could not find a solution (e.g. creating a new connected socket (instead of connecting the main one) and injecting the datagram received on the main socket into its receive buffer for further recv()). Eventually, I ended up with N threads, each with its own "listening" socket (using SO_REUSEPORT), with datagram scattering done at the OS level.

How to get handle on addr of client which lost connection?

I have a UDP server implemented using the template in the documentation, which can be found here: https://docs.python.org/3/library/asyncio-protocol.html#udp-echo-server-protocol
I would like to know the addr of the client which lost connection. The connection_lost callback only has a single parameter, exc for the exception.
Edit: Following the downvotes, I want to highlight that it's not a very noob-friendly part of the module to name a callback in the datagram ServerProtocol class 'connection_made'.
The Python API designers need to document this properly.
It looks like connection_made() is called when you create the socket and connect it, which in turn only happens if you specify a non-None remote_addr.
To understand all that, first you need to understand what connect() does to a UDP socket at the Berkeley Sockets API level:
1) It conditions the socket so that write() and send() can be used as well as sendto(), all of which will only send to the connected target address.
2) It conditions the socket to filter out all datagrams that did not originate at the connect target.
3) It does not create a wire connection of any kind. Nothing is received by the peer or sent on the wire in any way.
4) You can connect() a UDP socket multiple times, either to a different address or to null, which completely undoes (1) and (2).
So, I can only imagine that the connection_lost() callback is called when (4) happens, which it isn't in your code.
Whatever it does, if anything, it certainly can't be used to detect when a client disconnects, as there is no such event in UDP.
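Since UDP has no disconnect event, the practical answer is to record each client's address as datagrams arrive (and expire idle peers yourself if needed). A sketch against the same asyncio API, where addr in datagram_received is the client endpoint the question is after:

```python
import asyncio
import socket

class ServerProtocol(asyncio.DatagramProtocol):
    """Track peers yourself: addr arrives with every datagram."""
    def __init__(self):
        self.peers = set()

    def connection_made(self, transport):
        # Called once when the endpoint's socket is created, not per client.
        self.transport = transport

    def datagram_received(self, data, addr):
        self.peers.add(addr)             # this is the client's endpoint
        self.transport.sendto(data, addr)

async def main():
    loop = asyncio.get_running_loop()
    transport, proto = await loop.create_datagram_endpoint(
        ServerProtocol, local_addr=("127.0.0.1", 0))
    server_addr = transport.get_extra_info("sockname")

    # A plain-socket client sends one datagram.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"hi", server_addr)
    await asyncio.sleep(0.2)             # let the datagram arrive
    transport.close()
    return proto.peers

peers = asyncio.run(main())
# peers now holds the (ip, port) of every client that has sent a datagram
```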

Set the time to live (TTL) of IP packets for outgoing UDP datagrams on Arduino Ethernet

I'm using an Arduino Ethernet to send UDP datagrams to a remote host. The code I use to send a single datagram is:
Udp.begin(localPort);
...
Udp.beginPacket(remoteIP, remotePort);
Udp.write(data);
Udp.endPacket();
My issue is that I need to customize the TTL of the outgoing UDP/IP packet, but none of Udp.begin, Udp.beginPacket, Udp.write and Udp.endPacket provide a parameter to set such option.
I know that the TTL field belongs to the IP header but it seems you don't handle raw IP packets using Arduino's Ethernet / socket / w5100 libraries.
I looked into the definitions of the above functions, especially EthernetUDP::beginPacket, where I hoped to find something useful since it is called just before I pass the payload of the message, but I got stuck: it contains little more than a call to startUDP() (socket.cpp), and the latter deals with methods of the W5100 class that are not clear to me.
Does anyone know whether there is some high-level facility to set the TTL of a packet, or should one go deeper into the libraries to achieve that?
Finally I found a solution. The WIZnet W5100 provides registers that describe each socket's behaviour, as documented in the W5100 Datasheet Version 1.1.6. One of these registers is Socket 0 IP TTL (S0_TTL) (address 0x0416). I saw that these registers are written in the startUDP function (in socket.cpp) to set the socket's destination IP address and port:
W5100.writeSnDIPR(s, addr);
W5100.writeSnDPORT(s, port);
so I appended there a call to
W5100.writeSnTTL(s, (uint8_t) 255); // set TTL to 255
and it indeed worked, i.e. the sketch compiled. This method is undocumented; I figured it out by looking at the other register-writing methods and finding on the web that a couple of projects make use of it.
I also wrote this patch to provide the override Udp.beginPacket(remoteIP, remotePort, ttl) to the Ethernet libraries that come with Arduino 1.0.1 - 2012.05.21.
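The register-poking above is specific to the W5100's hardware TCP/IP stack. For comparison, on an OS with a full sockets API the same thing is a one-line socket option; a Python sketch (IP_TTL for unicast, IP_MULTICAST_TTL for multicast):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Set the IP TTL for outgoing unicast datagrams on this socket.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 255)
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL)
print(ttl)   # 255
```

The C equivalent is setsockopt(fd, IPPROTO_IP, IP_TTL, &ttl, sizeof ttl); the Arduino Ethernet library has no such hook, which is why the register write was needed.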

multiple UDP ports

I have situation where I have to handle multiple live UDP streams in the server.
I have two options (as I think)
Single Socket :
1) Listen on a single port on the server, receive data from all clients on that port, and create a thread for each client to process its data until the client stops sending.
Here only one port is used to receive the data, and a number of threads are used to process it.
Multiple Sockets :
2) The client requests an open port from the server, the application sends that port to the client, and a new thread listens on the port to receive and process the data. Here each client has a unique port to send data to.
I have already implemented a way to know which packet is coming from which client over UDP.
I have 1000+ clients and am receiving 60 KB of data per second.
Are there any performance issues with the above methods,
or is there a more efficient way to handle this kind of task in C?
Thanks,
Raghu
With that many clients, having one thread per client is very inefficient since lots and lots of context switches must be performed.
Also, the number of ports you can open per IP is limited (a port is a 16-bit number).
Therefore "Single Socket" will be far more efficient. But you can also use "Multiple Sockets" with just a single thread using an asynchronous API. If you can identify the client from the packet's payload, then there is no need to have a port per client.
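In fact the sender's address reported by recvfrom() already identifies the client, so even payload-based identification is optional. A minimal single-socket sketch in Python (per-client state keyed on the reported endpoint; the byte counter is just an illustrative stand-in for real processing):

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))

# Per-client state keyed by the (ip, port) recvfrom() reports --
# no extra port or socket per client is needed.
byte_counts = {}

def handle(n):
    for _ in range(n):
        data, addr = server.recvfrom(2048)
        byte_counts[addr] = byte_counts.get(addr, 0) + len(data)

c1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c1.sendto(b"aaaa", server.getsockname())
c2.sendto(b"bb", server.getsockname())
handle(2)
# byte_counts now has one entry per client endpoint
```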