P2P Networking: UDP vs TCP

I am writing a P2P system based on the Kademlia approach. My question is about which transport to use: UDP or TCP.
The Kademlia documentation specifies UDP, but my concern is payload size. As far as I know, UDP only guarantees delivery of 548 bytes. But there are messages defined by the documentation with length greater than 548 bytes (for example, the response to FIND_NODE). The question: should I use TCP instead of UDP?

with length greater than 548 bytes
That figure comes from the RFC-defined minimum IPv4 datagram size of 576 bytes that every host must accept, minus 20 bytes of IP header and 8 bytes of UDP header. In practice almost all nodes support more, at least around 1400 bytes, and some cases can be covered by IP fragmentation too. For IPv6 the guaranteed minimum MTU (1280 bytes) is higher.
The question: should I use TCP instead of UDP?
You should use UDP; see this Q&A for the reasons. If you need to transfer larger data at the end of a lookup you can still use TCP as the next-layer protocol, but that is beyond the scope of Kademlia's routing algorithm.
for example, the response to FIND_NODE
Assuming 256-bit node IDs (32 bytes) and 18-byte contacts (a 16-byte IPv6 address plus a 2-byte port), you can fit 10 (ID, address) pairs, 500 bytes in total, into 548 bytes, leaving 48 bytes for message headers. It's cramped but doable.
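To make the arithmetic concrete, here is a minimal sketch in C, assuming the 256-bit IDs, 18-byte IPv6 contacts, and 10 returned contacts mentioned above (the constant names are illustrative, not taken from any Kademlia implementation):

#include <stdio.h>

/* Illustrative wire-format sizes; not from any particular Kademlia spec. */
#define ID_BYTES      32   /* 256-bit node ID                       */
#define CONTACT_BYTES 18   /* 16-byte IPv6 address + 2-byte port    */
#define K_BUCKET      10   /* contacts returned per FIND_NODE       */
#define UDP_SAFE      548  /* 576 - 20 (IP header) - 8 (UDP header) */

int main(void)
{
    size_t per_entry = ID_BYTES + CONTACT_BYTES;   /* 50 bytes  */
    size_t payload   = K_BUCKET * per_entry;       /* 500 bytes */

    printf("payload: %zu bytes, spare for RPC framing: %zu bytes\n",
           payload, (size_t)UDP_SAFE - payload);
    return 0;
}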

UDP maximum packet size through loopback interface

I want to use UDP for interprocess communication. I know the maximum size of a UDP datagram is ~64K, but is that also true for the loopback interface?
TL;DR: 64k, if you don't care about IP fragmentation. MTU if you do.
Even though the maximum UDP packet size is 64k, the actual transfer size is governed by the interface's MTU.
Any packet larger than the MTU (including IP and UDP overhead) will be fragmented into multiple layer-2 frames.
So, while you can always send a 64k UDP packet, you may end up with more than one IP packet, which adds latency and increases the chance of packet loss (if one fragment is lost, the whole datagram is lost).
The good news is that you don't have to worry about fragmentation yourself - the kernel will take care of fragmentation and reassembly so your application will see a single datagram of 64k in size.
Since you're asking about the loopback interface, packet drop is not an issue, so you're only bound by UDP's size field.
If you would like to avoid IP fragmentation, you need to query the interface's MTU and make sure your datagrams are smaller (again, including IP and UDP overhead).
On my Mac the loopback interface default MTU is 16384:
$ ifconfig lo0
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
...
On Linux, you can get/set an interface's MTU programmatically with the SIOCGIFMTU/SIOCSIFMTU ioctls (man 7 netdevice).
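For example, a minimal sketch (Linux-specific, querying the loopback interface; the interface name is hardcoded here just for illustration):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <unistd.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket will do */
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "lo", IFNAMSIZ - 1); /* loopback interface */

    if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {
        perror("SIOCGIFMTU");
        close(fd);
        return 1;
    }
    printf("lo MTU: %d\n", ifr.ifr_mtu);
    close(fd);
    return 0;
}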

Combining UDP packets?

Is there any benefit to combining several UDP packets into one, as opposed to sending them all one right after the other? I know that if the large packet gets corrupted then I lose all of them, but is there possibly some upside to sending them all in one, such as a lower chance of the large one being lost?
That would be at the discretion of the sending application.
Note that your large packet is limited by the MTU of the underlying network. For example, the theoretical maximum size of a UDP packet is 64k, but an Ethernet frame is only ~1500 bytes. So I suspect this is not a practical feature.
Generally, networking channels are limited in the rate of packets that can be sent per second. Thus, if you want to send millions of messages per second, you generally want to combine them into a smaller number of packets in order to avoid major packet loss.
As an overgeneralisation, Windows doesn't like more than 10,000 packets per second for UDP, but you can saturate a gigabit network with large, MTU-sized packets.
Is there any benefit to combining several UDP packets into one, as opposed to sending them all one right after the other?
One can save on the UDP header, which is 8 bytes per datagram, hence reducing the amount of data sent over the wire. Just make sure you don't send more than the MTU minus the IP and UDP header sizes, to avoid fragmentation at the IP layer.
Also, the standard POSIX socket API requires one send/sendto/sendmsg() system call to send or receive one datagram, so by sending fewer datagrams one makes fewer system calls, reducing the overall latency (on the order of a few microseconds per call). Linux provides sendmmsg() (since kernel 3.0) and recvmmsg() (since 2.6.33) to send and receive multiple datagrams in one system call.
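A minimal sketch of the batching idea, assuming Linux and a placeholder destination on localhost:

#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *msgs[3] = { "one", "two", "three" };
    struct sockaddr_in dst;
    struct mmsghdr hdrs[3];
    struct iovec iov[3];

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9000);                 /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(hdrs, 0, sizeof(hdrs));
    for (int i = 0; i < 3; i++) {
        iov[i].iov_base = (void *)msgs[i];
        iov[i].iov_len  = strlen(msgs[i]);
        hdrs[i].msg_hdr.msg_name    = &dst;
        hdrs[i].msg_hdr.msg_namelen = sizeof(dst);
        hdrs[i].msg_hdr.msg_iov     = &iov[i];
        hdrs[i].msg_hdr.msg_iovlen  = 1;
    }

    /* One system call sends all three datagrams; each still gets its
     * own UDP header on the wire. */
    int sent = sendmmsg(fd, hdrs, 3, 0);
    if (sent < 0) perror("sendmmsg");
    else printf("sent %d datagrams in one call\n", sent);

    close(fd);
    return 0;
}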
I know that if the large packet gets corrupted then I lose all of them
True. However, if the protocol can't cope with UDP datagram loss at all, it may not matter that much: as soon as one datagram is lost it's broken anyway.
Combining is important in situations where the packet size is small (less than 100 bytes), since the IP/UDP headers add at least 28 bytes per packet.
Imagine you have a streaming connection to a server where each packet carries 50 bytes of payload and your software sends 1000 packets per second.
The actual payload is 1000 * 50 = 50,000 bytes. The header overhead is 1000 * 28 = 28,000 bytes. Total: 50,000 + 28,000 = 78,000 bytes ==> 78 KBps.
Now imagine you can combine every 3 UDP packets into one:
Header overhead: 1000 / 3 * 28 ≈ 9,333 bytes. Total: 50,000 + 9,333 ≈ 59,333 bytes ==> roughly 60 KBps.
In some applications this saves a good portion of the bandwidth.
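As a sketch of how that combining might look in code (the 1-byte length prefix and the 1472-byte budget, i.e. a 1500-byte Ethernet MTU minus 28 bytes of IP/UDP headers, are assumptions for illustration, not part of any standard):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BUDGET 1472   /* assumed per-datagram payload budget */

/* Pack several small records into one buffer, each prefixed by a
 * 1-byte length, stopping before the budget is exceeded.
 * Returns the number of bytes ready for a single sendto(). */
static size_t pack(uint8_t *out, const uint8_t **recs,
                   const uint8_t *lens, size_t nrecs)
{
    size_t used = 0;
    for (size_t i = 0; i < nrecs; i++) {
        if (used + 1 + lens[i] > BUDGET)
            break;                    /* next record would not fit */
        out[used++] = lens[i];        /* 1-byte length prefix      */
        memcpy(out + used, recs[i], lens[i]);
        used += lens[i];
    }
    return used;
}

int main(void)
{
    const uint8_t a[50] = { 0 }, b[50] = { 0 }, c[50] = { 0 };
    const uint8_t *recs[] = { a, b, c };
    const uint8_t  lens[] = { 50, 50, 50 };
    uint8_t buf[BUDGET];

    size_t n = pack(buf, recs, lens, 3);
    printf("one datagram carries %zu bytes instead of three 50-byte ones\n", n);
    return 0;
}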

WebRTC Overhead

I want to know how much overhead WebRTC produces when sending data over data channels.
I know that WebSockets have 2-14 bytes of overhead per frame. Does WebRTC use more overhead? I cannot find any useful information on the web. It's clear to me that DataChannels cannot be used for now. How much overhead is used with MediaStreams?
At the application layer, you can think of a DataChannel as sending and receiving over SCTP. In the PPID (Payload Protocol Identifier) field of the SCTP DATA chunk, the DataChannel sets the value 51 to indicate that it is sending UTF-8 data and 53 for binary data.
Yes, you are right. RTCDataChannel uses SCTP over DTLS and UDP, with DTLS providing security. However, SCTP has problems traversing most NAT/firewall setups, so to overcome that, SCTP is tunneled through UDP. The overall overhead to send data is therefore the overhead of:
SCTP + DTLS + UDP + IP
and that is:
28 bytes + 20-40 bytes + 8 bytes + 20-40 bytes
So the overhead would be roughly 120 bytes. The maximum size of the SCTP packet that a WebRTC client can send is 1280 bytes, so at most you can send roughly 1160 bytes of data per SCTP packet.
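A quick back-of-the-envelope check of those figures, using the worst-case header sizes from the answer above:

#include <stdio.h>

int main(void)
{
    int budget = 1280;                            /* max packet a WebRTC client sends */
    int sctp = 28, dtls = 40, udp = 8, ip = 40;   /* worst-case headers               */

    printf("overhead: %d bytes, payload: %d bytes\n",
           sctp + dtls + udp + ip, budget - (sctp + dtls + udp + ip));
    return 0;
}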
WebRTC uses RTP to send its media. RTP runs over UDP.
Besides the usual IP and UDP headers, there are two additional headers:
The RTP header itself starts at 12 bytes and can grow from there, depending on which extensions get used.
The payload header: the per-packet header of the specific codec being used. This one depends on the codec itself.
RTP is designed to have as little overhead as possible over its payload due to the basic reasoning that you want to achieve better media quality, which means dedicating as many bits as possible to the media itself.
Here's what the profiler shows for 2 peer.js instances (babylon.js front end) sending exactly 3 bytes every 16 ms (~60 per second):
The profiler reports 30,000 bits/second.
30,000 bits / 8 bits per byte / 60 messages per second = 62.5 bytes per message, so after subtracting the 3 bytes I'm sending, that's ~59.5 bytes of overhead according to the profiler.
I'm not sure whether something isn't being counted on the incoming side, because the profiler only shows half that for it, 15k bits/second.

UDP not big enough for my message, how to improve?

I am writing a program for an AR drone using UDP, but the drone sometimes hangs. I think maybe the UDP datagram is not big enough. Is there any way to improve the situation while still using UDP?
UDP datagram sizes are limited by:
The outgoing socket send buffer. Defaults for this vary wildly, from 8k in Windows to 53k or more in other systems.
The Path MTU or Maximum Transmission Unit of the entire network path between you and the receiver. This is typically no more than 1260 bytes.
The safest size for a UDP datagram is 534 bytes, as a datagram of that size will never be fragmented in transit. If you're planning on sending datagrams larger than this you need to consider a different transport, usually TCP.
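On Linux you can inspect both limits for a given socket; a minimal sketch follows (the destination address and port are placeholders, and IP_MTU only works on a connected socket):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Limit 1: the outgoing socket send buffer. */
    int sndbuf = 0;
    socklen_t len = sizeof(sndbuf);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    printf("send buffer: %d bytes\n", sndbuf);

    /* Limit 2: the kernel's current path-MTU estimate toward the peer. */
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5556);                  /* placeholder */
    inet_pton(AF_INET, "192.168.1.1", &dst.sin_addr);

    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0) {
        int mtu = 0;
        len = sizeof(mtu);
        if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
            printf("path MTU estimate: %d bytes\n", mtu);
    }

    close(fd);
    return 0;
}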

Guarantee TCP Packet Size

We use an embedded device to send packets from a serial port over a serial-to-Ethernet converter to a server. One manufacturer we use, Moxa, will always send the packets in the same manner in which they are constructed: if we construct a packet of size 255, it will always be sent with a length of 255. With the other manufacturer, Tibbo, if we send a packet of size 255, it will break the packet up if it is greater than 128 bytes. This is the answer I received from the Tibbo engineers at the time:
"From what I understand and what the
engineers said, even if the other
devices provide you with the right
packet size now does not guarantee
that when implemented in other
networks the same will happen. This
is the reason why we feel that packet
size based data transfer through TCP
is not reliable as it was not the way
TCP was designed to be used."
I understand that this may not be how TCP was designed to be used, but if I create a packet of 255 bytes and TCP allows it, then how is this outside of how TCP works? I understand that at some point the packet may get broken up, but the server is expecting a certain packet size, and Moxa's device does not have the same problem as the Tibbo device.
So, is it possible to guarantee a reasonable TCP packet size?
No. TCP is not a packet protocol, it is a stream protocol. It guarantees that the bytes you send will all arrive, and in the right order, but nothing else. In particular, TCP does not give you any kind of message or packet boundaries. If you want such things, they need to be implemented at a higher level by your protocol.
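For example, a common higher-level scheme is to prefix every message with its length. A minimal sketch in C, assuming a connected, blocking TCP socket (the helper names are illustrative):

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>

/* Read exactly len bytes, looping over partial reads. */
static int read_full(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n <= 0) return -1;          /* error or peer closed */
        got += (size_t)n;
    }
    return 0;
}

/* Write exactly len bytes, looping over partial writes. */
static int send_full(int fd, const void *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = send(fd, (const char *)buf + done, len - done, 0);
        if (n <= 0) return -1;
        done += (size_t)n;
    }
    return 0;
}

/* Send one framed message: 2-byte big-endian length, then the payload. */
int send_msg(int fd, const void *msg, uint16_t len)
{
    uint16_t be_len = htons(len);
    if (send_full(fd, &be_len, sizeof(be_len)) < 0) return -1;
    return send_full(fd, msg, len);
}

/* Receive one framed message into buf (capacity cap); returns its length. */
int recv_msg(int fd, void *buf, uint16_t cap)
{
    uint16_t be_len;
    if (read_full(fd, &be_len, sizeof(be_len)) < 0) return -1;
    uint16_t len = ntohs(be_len);
    if (len > cap) return -1;           /* frame larger than buffer */
    if (read_full(fd, buf, len) < 0) return -1;
    return len;
}

With framing like this, the 255-byte application packet from the question arrives as one logical message regardless of how TCP segments it on the wire.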