To send an image of size 10 KB every 100 ms, should I use TCP or UDP?

The application sends a single block of secret information in 10 consecutive images of size 10 KB, generated every 100 ms. In order for Hermes to operate normally, all 10 images should be received within 5 seconds.

You should use TCP:
You need to receive all 10 images, which total 100 KB. That will not fit in a single IPv4 UDP datagram (the payload limit is about 65,507 bytes), so using UDP means you will have to send and keep track of at least 2 datagrams. Most sources recommend much smaller UDP packets, so that means reassembling many smaller packets. None of this is necessary with TCP.
With UDP, you will need to take care of sending acknowledgements and retransmitting one or more datagrams if the acknowledgement is not received. TCP takes care of this for you.
If your application cannot tolerate data loss, it is almost always better to start with TCP.
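As a rough illustration of how little the sender has to do when TCP handles delivery, here is a minimal sketch in C. The server address, the port, and the placeholder image buffer are assumptions for the example, and error handling is kept to a minimum.

```c
/* Minimal sketch: sending the 10 images over one TCP connection.
 * Assumes IPv4, a blocking socket, and a hypothetical server at
 * 192.0.2.10:5000; the image bytes are faked with a static buffer. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define IMAGE_SIZE (10 * 1024)
#define IMAGE_COUNT 10

/* send() may accept fewer bytes than asked; loop until done. */
static int send_all(int fd, const unsigned char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n <= 0)
            return -1;          /* connection error */
        sent += (size_t)n;
    }
    return 0;
}

int main(void)
{
    static unsigned char image[IMAGE_SIZE]; /* placeholder image data */
    struct sockaddr_in srv = {0};
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);
    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect"); return 1;
    }

    /* TCP handles segmentation, ordering, and retransmission; the
     * application just writes each 10 KB image as a byte stream. */
    for (int i = 0; i < IMAGE_COUNT; i++) {
        if (send_all(fd, image, sizeof image) < 0) {
            perror("send"); break;
        }
        usleep(100 * 1000); /* roughly one image every 100 ms */
    }
    close(fd);
    return 0;
}
```

Because TCP is a byte stream, the receiver still has to know where one image ends and the next begins; with fixed 10 KB images it can simply read exactly 10 KB per image.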

Related

How do I detect the ideal UDP payload size?

I heard a UDP payload of 508 bytes will be safe from fragmentation. I heard the real MTU is 1500, but that people should use a payload of 1400 because headers will eat the rest of the bytes. I heard many packets will be fragmented anyway, so using around 64K is fine. But I want to forget about all of these and programmatically detect what gets me good latency and throughput from my local machine to my server.
I was thinking about implementing something like the sliding window that TCP has: I'll send a few UDP packets, then more and more, until packets are lost. I'm not exactly sure how to tell whether a packet was delayed vs. lost, and I'm not sure how to scale back down without going too far back. Is there an algorithm typically used for this? If I know the average number of hops between my machine and the server, or the average ping, is there a way to estimate the maximum delay time of a packet?
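This question is left open in the thread, but one kernel-assisted option on Linux is to let path MTU discovery do part of the probing: set IP_MTU_DISCOVER to IP_PMTUDISC_DO so oversized datagrams fail instead of being fragmented, shrink the probe until a send is accepted, and read back the kernel's path MTU estimate. The sketch below assumes Linux, a hypothetical server at 192.0.2.10:5000, and ignores the latency/throughput-tuning part of the question.

```c
/* Sketch of probing the path MTU from Linux userspace. This is an
 * illustration, not a robust discovery algorithm: it assumes a
 * connected UDP socket and the Linux-specific IP_MTU_DISCOVER /
 * IP_MTU socket options. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IP_MTU_DISCOVER          /* Linux ABI values, if the headers hide them */
#define IP_MTU_DISCOVER 10
#define IP_PMTUDISC_DO  2
#endif
#ifndef IP_MTU
#define IP_MTU 14
#endif

int main(void)
{
    char probe[9000] = {0};            /* larger than any likely MTU */
    struct sockaddr_in srv = {0};
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);
    connect(fd, (struct sockaddr *)&srv, sizeof srv);

    /* Set DF and refuse to fragment locally: oversized sends fail
     * with EMSGSIZE instead of being split by the IP layer. */
    int val = IP_PMTUDISC_DO;
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof val);

    /* Shrink the probe until a send is accepted. A smaller MTU further
     * along the path only shows up after an ICMP "fragmentation needed"
     * message comes back, so real code keeps probing (and re-reading
     * IP_MTU) over time. */
    for (size_t len = sizeof probe; len > 576; len -= 64) {
        if (send(fd, probe, len, 0) >= 0) {
            printf("accepted a %zu-byte payload\n", len);
            break;
        }
        if (errno != EMSGSIZE) { perror("send"); break; }
    }

    int mtu = 0;
    socklen_t optlen = sizeof mtu;
    if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &optlen) == 0)
        printf("kernel's current path MTU estimate: %d\n", mtu);
    close(fd);
    return 0;
}
```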

The most reliable and efficient UDP packet size?

Would sending lots of small packets by UDP take more resources (CPU, compression by zlib, etc.)? I read here that sending one big packet of ~65 KB by UDP would probably fail, so I thought that sending lots of smaller packets would succeed more often, but then comes the computational overhead of using more processing power (or at least that's what I'm assuming). The question is basically this: what is the best scenario for sending the maximum number of successful packets while keeping computation down to a minimum? Is there a specific size that works most of the time? I'm using Erlang for the server and ENet for the client (written in C++). I'm also using zlib compression, and I send the same packets to every client (broadcasting is the term, I guess).
The maximum size of a UDP payload that, most of the time, will not cause IP fragmentation is:
MTU of the host handling the PDU (most of the time it will be 1500)
- size of the IP header (20 bytes)
- size of the UDP header (8 bytes)
1500 MTU - 20 IP header - 8 UDP header = 1472 bytes
#EJP talked about 534 bytes, but I would correct that to 508. This is the number of bytes that FOR SURE will not cause fragmentation, because the minimum MTU size that a host can set is 576 and the IP header can be at most 60 bytes (508 = 576 MTU - 60 IP - 8 UDP).
By the way, I'd try to go with 1472 bytes, because 1500 is a standard-enough value.
Use 1492 instead of 1500 for calculation if you're passing through a PPPoE connection.
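If you want the first term of that calculation programmatically rather than assuming 1500, the local interface MTU can be read with the SIOCGIFMTU ioctl on Linux. This is only a sketch: the interface name "eth0" is a placeholder, and the result says nothing about smaller MTUs further along the path.

```c
/* Sketch: reading the local interface MTU on Linux with SIOCGIFMTU
 * and deriving the largest UDP payload that avoids local IP
 * fragmentation. */
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* placeholder interface */

    if (ioctl(fd, SIOCGIFMTU, &ifr) == 0) {
        int mtu = ifr.ifr_mtu;
        /* 20-byte minimal IPv4 header + 8-byte UDP header */
        printf("MTU %d -> max unfragmented UDP payload %d bytes\n",
               mtu, mtu - 20 - 8);
    } else {
        perror("SIOCGIFMTU");
    }
    close(fd);
    return 0;
}
```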
Would sending lots of small packets by UDP take more resources?
Yes, it would, definitely! I just did an experiment with a streaming app. The app sends 2000 frames of data each second, precisely timed. The data payload for each frame is 24 bytes. I used UDP with sendto() to send this data to a listener app on another node.
What I found was interesting. This level of activity took my sending CPU to its knees! I went from having about 64% free CPU time, to having about 5%! That was disastrous for my application, so I had to fix that. I decided to experiment with variations.
First, I simply commented out the sendto() call, to see what the packet assembly overhead looked like. About a 1% hit on CPU time. Not bad. OK... must be the sendto() call!
Then, I did a quick fakeout test... I called the sendto() API only once in every 10 iterations, but I padded the data record to 10 times its previous length, to simulate the effect of assembling a collection of smaller records into a larger one, sent less often. The results were quite satisfactory: 7% CPU hit, as compared to 59% previously. It would seem that, at least on my *NIX-like system, the operation of sending a packet is costly just in the overhead of making the call.
Just in case anyone doubts whether the test was working properly, I verified all the results with Wireshark observation of the actual UDP transmissions to confirm all was working as it should.
Conclusion: it uses MUCH less CPU time to send larger packets less often than to send the same amount of data as smaller packets sent more frequently. Admittedly, I do not know what happens if UDP starts fragmenting your overly-large UDP datagram; I mean, I don't know how much CPU overhead that adds. I will try to find out (I'd like to know myself) and update this answer.
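For reference, the batching variation described above looks roughly like the following sketch: accumulate a handful of small frames in a buffer and make one sendto() call per batch. The address, port, and frame contents are placeholders; the trade-off is that each frame can now wait up to one full batch period before it is sent.

```c
/* Sketch of the batching idea from the experiment above: collect
 * BATCH small 24-byte frames into one buffer and call sendto() once
 * per batch instead of once per frame. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define FRAME_SIZE 24
#define BATCH      10   /* one syscall per 10 frames */

int main(void)
{
    unsigned char frame[FRAME_SIZE];
    unsigned char batch[FRAME_SIZE * BATCH];
    size_t fill = 0;
    struct sockaddr_in dst = {0};
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    dst.sin_family = AF_INET;
    dst.sin_port = htons(6000);
    inet_pton(AF_INET, "192.0.2.20", &dst.sin_addr);

    for (int i = 0; i < 2000; i++) {           /* one second of frames */
        memset(frame, i & 0xff, sizeof frame); /* placeholder payload */
        memcpy(batch + fill, frame, sizeof frame);
        fill += sizeof frame;

        if (fill == sizeof batch) {            /* batch full: one syscall */
            if (sendto(fd, batch, fill, 0,
                       (struct sockaddr *)&dst, sizeof dst) < 0)
                perror("sendto");
            fill = 0;
        }
        usleep(500);                           /* ~2000 frames per second */
    }
    close(fd);
    return 0;
}
```

The 240-byte batched datagram is still far below a 1500-byte MTU, so this form of batching does not introduce any IP fragmentation.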
534 bytes. That is required to be transmitted without fragmentation. It can still be lost altogether of course. The overheads due to retransmission of lost packets and the network overheads themselves are several orders of magnitude more significant than any CPU cost.
You're probably using the wrong protocol. UDP is almost always a poor choice for data you care about transmitting. You wind up layering sequencing, retry, and integrity logic atop it, and then you have TCP.

Combining UDP packets?

Is there any benefit to combining several UDP packets into one, as opposed to sending them all one right after the other? I know that if the large packet gets corrupted then I lose all of them, but is there possibly some upside to sending them all in one? Such as a lower chance of the large one being lost?
That would be at the discretion of the sending application.
Note that your large packet is limited by the MTU of the underlying network if you want to avoid fragmentation: e.g. the theoretical maximum size of a UDP packet is 64 KB, but an Ethernet frame is only ~1500 bytes. So I suspect combining beyond that is not practical.
Generally, network channels are limited in the rate of packets that can be sent per second. Thus, if you want to send millions of messages per second, you generally want to combine them into a smaller number of packets to avoid major packet loss.
As an overgeneralisation, Windows doesn't like more than about 10,000 packets per second for UDP, but you can saturate a gigabit network with MTU-sized packets.
Is there any benefit to combining several UDP packets into one as opposed to sending them all one right after the other?
One can save on the UDP header, which is 8 bytes per datagram, hence reducing the amount of data sent over the wire. Just make sure you don't send more than the MTU minus the IP and UDP header sizes, to avoid fragmentation at the IP layer.
Also, the standard POSIX socket API requires one send()/sendto()/sendmsg() system call to send or receive one datagram, so by sending fewer datagrams one makes fewer system calls, reducing the overall latency (on the order of a few microseconds per call). Linux kernels starting from 3.0 provide the sendmmsg() and recvmmsg() functions to send and receive multiple datagrams in one system call.
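A sketch of the sendmmsg() path on Linux is shown below; it queues several small datagrams and hands them to the kernel in a single system call. The destination address is a placeholder, and _GNU_SOURCE is required because sendmmsg() is a Linux extension rather than standard POSIX.

```c
/* Sketch: sending several separate datagrams with one sendmmsg() call. */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NMSG 8

int main(void)
{
    char payloads[NMSG][64];
    struct iovec iov[NMSG];
    struct mmsghdr msgs[NMSG];
    struct sockaddr_in dst = {0};
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    dst.sin_family = AF_INET;
    dst.sin_port = htons(6000);
    inet_pton(AF_INET, "192.0.2.20", &dst.sin_addr);

    memset(msgs, 0, sizeof msgs);
    for (int i = 0; i < NMSG; i++) {
        snprintf(payloads[i], sizeof payloads[i], "message %d", i);
        iov[i].iov_base = payloads[i];
        iov[i].iov_len  = strlen(payloads[i]);
        msgs[i].msg_hdr.msg_name    = &dst;
        msgs[i].msg_hdr.msg_namelen = sizeof dst;
        msgs[i].msg_hdr.msg_iov     = &iov[i];
        msgs[i].msg_hdr.msg_iovlen  = 1;
    }

    /* One syscall sends up to NMSG separate datagrams; the return
     * value is how many were actually queued. */
    int sent = sendmmsg(fd, msgs, NMSG, 0);
    if (sent < 0)
        perror("sendmmsg");
    else
        printf("sent %d datagrams in one call\n", sent);

    close(fd);
    return 0;
}
```

Note that this saves system calls but not UDP headers: each message still goes out as its own datagram, unlike coalescing several records into one datagram.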
I know that if the large packet gets corrupted then I lose all of them
True. However, if the protocol can't cope with UDP datagram loss at all it may not matter that much - as soon as one datagram is lost it's broken anyway.
It is important for situations where the packet size is small (less than 100 bytes), because the IP/UDP headers add at least 28 bytes.
Imagine you have a streaming connection to a server, each packet contains 50 bytes, and your software sends packets at a rate of 1000 packets per second.
The actual payload is 1000 * 50 bytes = 50,000 bytes. The header overhead is 1000 * 28 = 28,000 bytes. Total: 50,000 + 28,000 = 78,000 bytes ==> 78 KB/s.
Now imagine you can combine every 3 UDP packets into one packet:
The header overhead becomes (1000 / 3) * 28 = 9,333 bytes. Total: 50,000 + 9,333 = 59,333 bytes ==> roughly 59 KB/s.
This, in some applications, saves a good portion of the bandwidth.
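For completeness, here is a sketch of what that combining looks like at the application layer, assuming a simple framing scheme of a 2-byte length prefix per record (the framing format is an assumption, not anything standardized): each prefix costs 2 bytes instead of the 28 bytes of IP/UDP headers a separate datagram would carry.

```c
/* Sketch of coalescing several small records into one datagram with a
 * 2-byte length prefix each, so the receiver can split them apart
 * again. The record contents are placeholders; real code must also
 * keep the total under the path MTU minus the IP/UDP headers. */
#include <arpa/inet.h>   /* htons / ntohs */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Append one record to the outgoing datagram buffer. */
static size_t pack_record(uint8_t *buf, size_t off,
                          const uint8_t *rec, uint16_t len)
{
    uint16_t n = htons(len);
    memcpy(buf + off, &n, 2);
    memcpy(buf + off + 2, rec, len);
    return off + 2 + len;
}

int main(void)
{
    uint8_t datagram[1400];      /* stays under a 1500-byte MTU */
    uint8_t record[50] = {0};    /* a placeholder 50-byte record */
    size_t off = 0;

    for (int i = 0; i < 3; i++)  /* three records, one datagram */
        off = pack_record(datagram, off, record, sizeof record);

    /* The receiver walks the buffer the same way: read 2 bytes of
     * length, then that many bytes of record, until the datagram
     * (whose total size recvfrom() reports) is consumed. */
    size_t pos = 0, count = 0;
    while (pos + 2 <= off) {
        uint16_t len;
        memcpy(&len, datagram + pos, 2);
        len = ntohs(len);
        pos += 2 + len;
        count++;
    }
    printf("packed %zu bytes carrying %zu records\n", off, count);
    return 0;
}
```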

WebRTC Overhead

I want to know, how much overhead WebRTC produces when sending data over datachannels.
I know that WebSockets have 2-14 bytes of overhead for each frame. Does WebRTC use more overhead? I cannot find any useful information on the web. It's clear to me that DataChannels cannot be used for now. How much overhead is used with MediaStreams?
Thanks
At the application layer, you can think of a DataChannel as sending and receiving over SCTP. In the PPID (Payload Protocol Identifier) field of the SCTP DATA chunk, the DataChannel sets the value 51 to indicate that it is sending UTF-8 data and 53 for binary data.
Yes, you are right. RTCDataChannel uses SCTP over DTLS and UDP; DTLS is used for security. However, SCTP has problems traversing most NAT/firewall setups, so to overcome that, SCTP is tunneled through UDP. The overall overhead to send data is therefore the overhead of:
SCTP + DTLS + UDP + IP
and that is:
28 bytes + 20-40 bytes + 8 bytes + 20-40 bytes
So the overhead is roughly 120 bytes. The maximum size of the SCTP packet that a WebRTC client can send is 1280 bytes, so at most you can send roughly 1160 bytes of data per SCTP packet.
WebRTC uses RTP to send its media. RTP runs over UDP.
Besides the usual IP and UDP headers, there are two additional headers:
The RTP header itself starts from 12 bytes and can grow from there, depending on what gets used.
The payload header - the header that is used for each data packet of the specific codec being used. This one depends on the codec itself.
RTP is designed to have as little overhead as possible over its payload due to the basic reasoning that you want to achieve better media quality, which means dedicating as many bits as possible to the media itself.
Here's a screenshot of 2 peer.js instances (babylon.js front end) sending exactly 3 bytes every 16ms (~60 per second).
The profiler shows 30,000 bits / second:
30,000 bits / 8 bits per byte / 60 per second = 62.5 bytes, so after the 3 bytes I'm sending it's ~59.5 bytes according to the profiler.
I'm not sure whether something is not being counted on the incoming side, because it is only profiling half that: 15k bits/second.

UDP packet size and fragments

Let's say I am trying to send data using a UDP socket. If the data is big, then I think the data is going to be divided into several packets and sent to the destination.
At the destination, if there is more than one incoming packet, how do I combine those separate packets back into the original data? Do I need a data structure that saves all the incoming UDP packets based on the sender? Thanks in advance.
If you are simply sending the data in one datagram, using a single send() call, then the fragmentation and reassembly will be done for you by the IP layer. All you need to do is supply a large enough buffer to recv(), and if all the fragments have arrived, they will be reassembled and presented to you as a single datagram.
Basically, this is the service that UDP provides you (where a "datagram" is a single block of data sent by a single send() call):
The datagram may not arrive at all;
The datagram may arrive out-of-order with respect to other datagrams;
The datagram may arrive more than once;
If the datagram does arrive, it will be complete and correct [1].
However, if you are performing the division of the data into several UDP datagrams yourself, at the application layer, then you will of course be responsible for reassembling it too.
[1] Correct with the probability implied by the UDP checksum, anyway.
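To make the first case concrete, here is a sketch of sending one oversized-but-legal datagram with a single sendto(); the IP layer fragments it in flight and reassembles it before the receiver's recv() returns. The address and payload size are placeholders.

```c
/* Sketch of the "one big datagram" case described above: a single
 * sendto() of a 40 KB buffer. The IP layer fragments it on the way
 * out and reassembles it before delivery, so the application never
 * sees the fragments (it either gets the whole datagram or nothing). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define PAYLOAD (40 * 1024)      /* below the ~65,507-byte UDP limit */

int main(void)
{
    static char buf[PAYLOAD];    /* placeholder data, zero-filled */
    struct sockaddr_in dst = {0};
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    dst.sin_family = AF_INET;
    dst.sin_port = htons(7000);
    inet_pton(AF_INET, "192.0.2.30", &dst.sin_addr);

    /* One call, one datagram; fragmentation happens below us. */
    if (sendto(fd, buf, sizeof buf, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```

On the receiving side, a single recv() into a buffer of at least 40 KB returns the whole datagram once every fragment has arrived; if any fragment is lost, the application sees nothing at all for that datagram, which is exactly the all-or-nothing behaviour described above.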
You should use TCP for this. TCP is for structured data that needs to arrive in a certain order without being dropped.
On the other hand, UDP is used when the packet becomes irrelevant after ~500 ms. This is used in games, telephony, and so on.
If your problem requires UDP, then you need to handle any lost, duplicate, or out-of-order packets yourself, or at least write code that is resilient to that possibility.
http://en.wikipedia.org/wiki/User_Datagram_Protocol
If you can't afford lost packets, then TCP is probably a better option than UDP, since it provides that guarantee out of the box.