WebRTC Overhead - webrtc

I want to know how much overhead WebRTC produces when sending data over DataChannels.
I know that WebSockets have 2-14 bytes of overhead per frame. Does WebRTC use more overhead? I can't find any useful information on the web. It's clear to me that DataChannels cannot be used for now. How much overhead is used with MediaStreams?
Thanks

At the application layer, you can think of a DataChannel as sending and receiving over SCTP. In the PPID (Payload Protocol Identifier) field of the SCTP DATA chunk, the DataChannel protocol sets the value 51 to indicate that it is sending a UTF-8 string and 53 for binary data.
Yes, you are right. RTCDataChannel uses SCTP over DTLS and UDP. DTLS is used for
security. However, SCTP has problems traversing most NAT/Firewall setups.
Hence, to overcome that, SCTP is tunneled through UDP. So the overall overhead to send data would be the combined overhead of:
SCTP + DTLS + UDP + IP
and that is:
28 bytes + 20-40 bytes + 8 bytes + 20-40 bytes
So the overhead would be roughly 120 bytes at most. The maximum size of the SCTP packet that a WebRTC client can send is 1280 bytes, so at most you can send roughly 1160 bytes of data per SCTP packet.
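To make those numbers easy to play with, here is a small back-of-the-envelope calculation in C. The per-layer byte counts are the estimates quoted above, not exact values for every configuration (DTLS overhead in particular depends on the cipher suite):

#include <stdio.h>

int main(void)
{
    /* Per-layer estimates from the answer above; actual sizes vary. */
    int sctp = 28;   /* 12-byte common header + 16-byte DATA chunk header */
    int dtls = 40;   /* record header + MAC/padding, cipher-dependent     */
    int udp  = 8;
    int ip   = 40;   /* 20 bytes for IPv4 without options, up to 40 here  */

    int overhead   = sctp + dtls + udp + ip;  /* ~116, i.e. "roughly 120" */
    int max_packet = 1280;                    /* limit quoted above       */

    printf("per-message overhead : ~%d bytes\n", overhead);
    printf("usable payload       : ~%d bytes\n", max_packet - overhead);
    return 0;
}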

WebRTC uses RTP to send its media. RTP runs over UDP.
Besides the usual IP and UDP headers, there are two additional headers:
The RTP header itself starts at 12 bytes and can grow from there, depending on which extensions and CSRC entries get used.
The payload header: the header used for each data packet of the specific codec being transported. This one depends on the codec itself.
RTP is designed to have as little overhead as possible over its payload due to the basic reasoning that you want to achieve better media quality, which means dedicating as many bits as possible to the media itself.
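For reference, the 12-byte fixed RTP header from RFC 3550 can be sketched as a C struct. The first two bytes pack several small fields; they are shown here as plain bytes with comments rather than bit-fields, since bit-field layout is compiler-dependent:

#include <stdint.h>

/* The 12-byte fixed RTP header (RFC 3550, section 5.1). */
struct rtp_header {
    uint8_t  vpxcc;      /* version (2 bits), padding (1), extension (1), CSRC count (4) */
    uint8_t  mpt;        /* marker (1 bit), payload type (7 bits)                        */
    uint16_t seq;        /* sequence number, network byte order                          */
    uint32_t timestamp;  /* media timestamp, network byte order                          */
    uint32_t ssrc;       /* synchronization source identifier                            */
    /* 0-15 optional 32-bit CSRC entries and an optional extension header follow,
       which is how the header grows past 12 bytes. */
};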

Here's a measurement from 2 peer.js instances (babylon.js front end) sending exactly 3 bytes every 16 ms (~60 per second).
The profiler shows 30,000 bits/second:
30,000 bits / 8 bits per byte / 60 per second = 62.5 bytes, so beyond the 3 bytes I'm sending, that's ~59.5 bytes of overhead per message according to the profiler.
I'm not sure whether something isn't being counted on the incoming side, because it profiles at only half that, 15,000 bits/second.

Related

OMNeT++ 5G video stream: send a full encoded frame or fragment it into several packets to stay within MTU?

I want to simulate a simple client/server application streaming 2-3 4k 25 fps videos within a 5G network using the OMNeT++ stack. After capturing all incoming video flows with OpenCV and encoding them with the h264 codec, I have roughly 20 kilobytes for each frame, encoded as bytes (uint8_t). Of course, capturing for each flow happens in a separate thread.
Now I want to send it to some clients over 5G using the UDP protocol. If you look at almost any open-source implementation of video streaming, the transmission process is presented very simply:
uint8_t* buffer; // encoded frame
int size; // buffer size
send(clientSocket, buffer, size, 0); // send to client
and on the client side a loop with an appropriate recv() pulls the data. The same basically happens in every OMNeT++ simulation.
Of course, a UDP packet with a payload of around 20 KB will be fragmented on its way through the good old IPv4 1500-byte MTU at the backhaul part of a standard 5G architecture. So here comes my question: would I gain anything if I reduced my maximum UDP payload to, let's say, 1280 bytes to avoid IP fragmentation and to fit the IPv6 minimum reassembly buffer size?
I'm afraid that if I just blindly send the encoded frame as in the code above, some fragments may be lost and my decoder (h264 as well) on the client side may fail to decode the frame. However, the same could happen if I send my own fragmented 1280-byte packets... So the question is pretty general, considering that it happens in an OMNeT++ simulation but with real video files: is there any advantage in controlling the packet size before sending, or can you just blindly send any UDP datagram smaller than 64 KB and not worry about it?
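To make the "control the packet size yourself" option concrete, here is a minimal, hypothetical sketch in C of splitting one encoded frame into datagrams that stay under 1280 bytes. The 4-byte frame/fragment header is purely illustrative, not part of any standard, and the receiver would have to reassemble fragments (and drop the frame if any are missing):

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define MAX_UDP_PAYLOAD 1280  /* target size from the question               */
#define FRAG_HDR_SIZE   4     /* hypothetical header: frame id + fragment nr */
#define CHUNK_SIZE      (MAX_UDP_PAYLOAD - FRAG_HDR_SIZE)

/* Send one encoded frame as a series of <= 1280-byte datagrams
   over an already connect()ed UDP socket. */
static void send_frame(int sock, uint16_t frame_id,
                       const uint8_t *buffer, size_t size)
{
    uint8_t datagram[MAX_UDP_PAYLOAD];
    uint16_t index = 0;

    for (size_t offset = 0; offset < size; offset += CHUNK_SIZE, index++) {
        size_t chunk = size - offset;
        if (chunk > CHUNK_SIZE)
            chunk = CHUNK_SIZE;

        datagram[0] = frame_id >> 8;   /* which frame this fragment belongs to */
        datagram[1] = frame_id & 0xff;
        datagram[2] = index >> 8;      /* fragment number within the frame     */
        datagram[3] = index & 0xff;
        memcpy(datagram + FRAG_HDR_SIZE, buffer + offset, chunk);

        send(sock, datagram, chunk + FRAG_HDR_SIZE, 0);
    }
}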

UDP communication in BitTorrent

So I am seeding on BitTorrent, and there appear to be two kinds of traffic showing up in Wireshark. From peers around the world I receive UDP packets with 20 bytes of data. In response, my BitTorrent client sends UDP packets with around 1438 bytes of data.
The uTorrent transport protocol (µTP) suggested here does not seem to have anything as small as these 28-byte (20 bytes of data + 8-byte header) UDP packets; likewise, this link isn't helpful.
What is the formal communication mechanism or protocol at play here? Is it possible to analyze those 1438-byte packets in more detail in order to get a snippet of the file being sent? Or the structure of the 20 bytes of data being sent by my peers?
The uTorrent transport protocol (µTP) suggested here does not seem to have anything as small as these 28-byte (20 bytes of data + 8-byte header) UDP packets
The µTP header is 20 bytes. So those most likely are ACK messages. Wireshark should support decoding those packets, at least if you captured a connection from the beginning.
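For reference, the 20-byte µTP header from BEP 29 looks roughly like this as a C struct (all multi-byte fields are big-endian on the wire). A UDP payload that is exactly this header with nothing after it is typically a bare ST_STATE, i.e. an ACK:

#include <stdint.h>

/* The 20-byte µTP header (BEP 29). */
struct utp_header {
    uint8_t  type_ver;      /* high 4 bits: type (ST_DATA=0, ST_FIN=1, ST_STATE=2,
                               ST_RESET=3, ST_SYN=4); low 4 bits: version (1)     */
    uint8_t  extension;     /* 0 = no extension                                   */
    uint16_t connection_id;
    uint32_t timestamp_microseconds;
    uint32_t timestamp_difference_microseconds;
    uint32_t wnd_size;      /* advertised receive window, in bytes                */
    uint16_t seq_nr;
    uint16_t ack_nr;
};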

RTP Packet maximum size?

I'm trying to figure out what the maximum size of an RTP packet is. I know that the minimum header size is 12 bytes, but I can't find anything about the payload.
Is it possible that the maximum size of an RTP packet is the same as the maximum UDP payload size? I mean, that I have a single RTP packet with a huge payload. Is this possible and, in that case, is there any recommended size for the RTP packet so as not to do this?
For example, I'm encapsulating MP3 frames in RTP. Do I make an RTP packet with 1 MP3 frame, 2, or how many?
I hope you understand my question :)
Is it possible that the maximum size of an RTP packet is the same as the maximum UDP payload size?
The RTP standard does not set a maximum size so you're free to do this.
(Jumbo packets often have issues of their own with transport, but that's generally to do with the lower layer protocols playing up.)
Is it possible that the maximum size of an RTP packet is the same as the maximum UDP payload size? I mean, that I have a single RTP packet with a huge payload. Is this possible and, in that case, is there any recommended size for the RTP packet so as not to do this?
Yes, you could create a 1446-byte payload and put it in an RTP packet with a 12-byte header (1458 bytes) on a network with an MTU of 1500 bytes.
By the time you include the 8-byte UDP header + 20-byte IP header + 14-byte Ethernet header, you've got 42 bytes of overhead, which takes you to 1500 bytes.
In practice, if you're transporting this over the Internet, where the traffic gets encapsulated or carried across varied transport layers, you'd probably want to keep it below 1400 bytes to be on the safe side.
For example, I'm encapsulating MP3 frames in RTP. Do I make an RTP packet with 1 MP3 frame, 2, or how many?
RTP has a one-to-one mapping from one unique source to one RTP stream (unless the streams are mixed/muxed together into one stream), so each of the MP3 sources would be put into its own RTP stream, each with its own unique Synchronization Source Identifier (SSRC) to differentiate between the streams.
For more info on RTP there's RFC 3550 itself, there's a great book by Colin Perkins called "RTP: Audio and Video for the Internet", I've written a fair bit about RTP on my blog, and I've also created a Python library for creating RTP packets.

The most reliable and efficient udp packet size?

Would sending lots of small packets over UDP take more resources (CPU, compression by zlib, etc.)? I read here that sending one big packet of ~65 kB over UDP would probably fail, so I thought that sending lots of smaller packets would succeed more often, but then comes the computational overhead of using more processing power (or at least that's what I'm assuming). The question is basically this: what is the best scenario for getting the maximum number of packets through successfully while keeping computation to a minimum? Is there a specific size that works most of the time? I'm using Erlang for the server and ENet for the client (written in C++). I'm also using zlib compression, and I send the same packets to every client (broadcasting is the term, I guess).
The maximum size of a UDP payload that, most of the time, will not cause IP fragmentation is:
MTU size of the host handling the PDU (in most cases 1500)
- size of the IP header (20 bytes)
- size of the UDP header (8 bytes)
1500 MTU - 20 IP hdr - 8 UDP hdr = 1472 bytes
@EJP talked about 534 bytes, but I would correct it to 508. This is the number of bytes that FOR SURE will not cause fragmentation, because the minimum MTU size that a host can set is 576 and the IP header's maximum size can be 60 bytes (508 = 576 MTU - 60 IP - 8 UDP).
By the way, I'd try to go with 1472 bytes because 1500 is a standard enough value.
Use 1492 instead of 1500 for calculation if you're passing through a PPPoE connection.
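Putting those numbers into a tiny helper (the MTU values are the ones mentioned above; they are assumptions, not guarantees, for any particular path):

#include <stdio.h>

/* Largest UDP payload that fits in one IP packet for a given MTU,
   assuming a 20-byte IPv4 header and an 8-byte UDP header. */
static int max_udp_payload(int mtu) { return mtu - 20 - 8; }

int main(void)
{
    printf("Ethernet (MTU 1500): %d bytes\n", max_udp_payload(1500)); /* 1472 */
    printf("PPPoE    (MTU 1492): %d bytes\n", max_udp_payload(1492)); /* 1464 */
    /* Guaranteed-safe value: minimum MTU of 576 and a worst-case 60-byte IP header. */
    printf("Worst case         : %d bytes\n", 576 - 60 - 8);          /* 508  */
    return 0;
}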
Would sending lots of small packets over UDP take more resources?
Yes, it would, definitely! I just did an experiment with a streaming app. The app sends 2000 frames of data each second, precisely timed. The data payload for each frame is 24 bytes. I used UDP with sendto() to send this data to a listener app on another node.
What I found was interesting. This level of activity brought my sending CPU to its knees! I went from having about 64% free CPU time to having about 5%! That was disastrous for my application, so I had to fix that. I decided to experiment with variations.
First, I simply commented out the sendto() call, to see what the packet assembly overhead looked like. About a 1% hit on CPU time. Not bad. OK... must be the sendto() call!
Then, I did a quick fakeout test... I called the sendto() API only once in every 10 iterations, but I padded the data record to 10 times its previous length, to simulate the effect of assembling a collection of smaller records into a larger one, sent less often. The results were quite satisfactory: 7% CPU hit, as compared to 59% previously. It would seem that, at least on my *NIX-like system, the operation of sending a packet is costly just in the overhead of making the call.
Just in case anyone doubts whether the test was working properly, I verified all the results with Wireshark observation of the actual UDP transmissions to confirm all was working as it should.
Conclusion: it uses MUCH less CPU time to send larger packets less often than to send the same amount of data in the form of smaller packets sent more frequently. Admittedly, I do not know what happens if UDP starts fragmenting your overly large UDP datagram... I mean, I don't know how much CPU overhead this adds. I will try to find out (I'd like to know myself) and update this answer.
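A minimal sketch in C of the kind of batching described above: accumulate several small records in a buffer and hand them to sendto() in one call. The record and batch sizes are the ones from the experiment, purely for illustration, and the receiver has to know how to split the batch back up:

#include <string.h>
#include <sys/socket.h>

#define RECORD_SIZE   24   /* one small record, as in the test above */
#define BATCH_RECORDS 10   /* flush after this many records          */

static char batch[RECORD_SIZE * BATCH_RECORDS];
static int  batched = 0;

/* Append one record; call sendto() only when the batch is full, so the
   per-call overhead is paid once per 10 records instead of per record. */
static void queue_record(int sock, const struct sockaddr *dest, socklen_t destlen,
                         const char record[RECORD_SIZE])
{
    memcpy(batch + batched * RECORD_SIZE, record, RECORD_SIZE);
    if (++batched == BATCH_RECORDS) {
        sendto(sock, batch, sizeof(batch), 0, dest, destlen);
        batched = 0;
    }
}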
534 bytes. That is required to be transmitted without fragmentation. It can still be lost altogether of course. The overheads due to retransmission of lost packets and the network overheads themselves are several orders of magnitude more significant than any CPU cost.
You're probably using the wrong protocol. UDP is almost always a poor choice for data you care about transmitting. You wind up layering sequencing, retry, and integrity logic atop it, and then you have TCP.

Combining UDP packets?

Is there any benefit to combining several UDP packets into one, as opposed to sending them all one right after the other? I know that if the large packet gets corrupted then I lose all of them, but is there possibly some upside to sending them all in one, such as a lower chance of the large one being lost?
That would be at the discretion of the sending application.
Note that your large packet is limited by the MTU of the underlying network, e.g. the theoretical maximum size of a UDP packet is 64 KB, but an Ethernet frame is only ~1500 bytes. So I suspect this is not a practical feature.
Generally, network channels are limited in the rate of packets that can be sent per second. Thus, if you want to send millions of messages per second, you generally want to combine them into a smaller number of packets to run without major packet loss.
As an over-generalisation, Windows doesn't like more than 10,000 packets per second for UDP, but you can saturate a gigabit network with large, MTU-sized packets.
Is there any benefit to combining several UDP packets into one, as opposed to sending them all one right after the other?
One can save on the UDP header, which is 8 bytes per datagram, hence reducing the amount of data sent over the wire. Just make sure you don't send more than the MTU minus the IP and UDP header sizes, to avoid fragmentation at the IP layer.
Also, the standard POSIX socket API requires one send/sendto/sendmsg() system call to send or receive one datagram, so by sending fewer datagrams one does fewer system calls, reducing overall latency (on the order of a few microseconds per call). Linux kernels starting from 3.0 provide the sendmmsg() and recvmmsg() functions to send and receive multiple datagrams in one system call.
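Here is a sketch of how that might look on Linux with sendmmsg(), sending several already-prepared equal-sized datagrams with a single system call (error handling kept minimal; the socket is assumed to be connect()ed):

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send 'count' datagrams (each 'len' bytes, stored back to back in 'bufs')
   with a single system call. Returns the number of messages sent, or -1. */
static int send_batch(int sock, const char *bufs, size_t len, unsigned count)
{
    enum { MAX_BATCH = 64 };
    struct mmsghdr msgs[MAX_BATCH];
    struct iovec   iov[MAX_BATCH];

    if (count > MAX_BATCH)
        count = MAX_BATCH;

    memset(msgs, 0, sizeof(msgs));
    for (unsigned i = 0; i < count; i++) {
        iov[i].iov_base = (void *)(bufs + i * len);
        iov[i].iov_len  = len;
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    return sendmmsg(sock, msgs, count, 0);
}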
I know that if the large packet gets corrupted then I lose all of them
True. However, if the protocol can't cope with UDP datagram loss at all, it may not matter that much: as soon as one datagram is lost, it's broken anyway.
It is important in situations where the packet size is small (less than 100 bytes). The IP/UDP headers take at least 28 bytes per datagram.
Imagine you have a streaming connection to a server, where each packet contains 50 bytes and your software sends packets at a rate of 1000 packets per second.
The actual payload is 1000 * 50 bytes = 50,000 bytes. Header overhead: 1000 * 28 = 28,000 bytes. Total: 50,000 + 28,000 = 78,000 bytes ==> 78 KB/s.
Now imagine you can combine every 3 UDP packets into one packet:
Header overhead: 1000 / 3 * 28 ≈ 9,333 bytes. Total: 50,000 + 9,333 ≈ 59,333 bytes ==> roughly 59 KB/s.
This, in some applications, saves a good portion of the bandwidth.
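The same arithmetic as a small C program, so you can plug in your own payload size and batching factor (the numbers below are just the example above):

#include <stdio.h>

#define IP_UDP_HEADERS 28  /* minimal IPv4 + UDP header bytes per datagram */

/* Bytes per second on the wire for 'rate' payloads of 'payload' bytes each,
   when 'combine' payloads are packed into a single datagram. */
static double wire_rate(double rate, double payload, double combine)
{
    return rate * payload + (rate / combine) * IP_UDP_HEADERS;
}

int main(void)
{
    printf("uncombined : %.0f bytes/s\n", wire_rate(1000, 50, 1)); /* 78000  */
    printf("3-in-1     : %.0f bytes/s\n", wire_rate(1000, 50, 3)); /* ~59333 */
    return 0;
}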