WebRTC voice packetization size

I was wondering how I can change the voice packet size in a WebRTC application. I am using Opus and I am trying to change the packet size from 20 ms to 40 ms. I thought I could achieve this by changing ptime in the SDP. However, when I captured packets with Wireshark there was no difference between packets with ptime=20 and ptime=40, and the difference between consecutive RTP timestamps was always 960. I would expect the difference to be 1920 for a 40 ms ptime. I imagine I am completely wrong in my assumptions; is there any way to actually change the packet size?
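For reference, the SDP-munging approach described above usually looks something like the sketch below (TypeScript, assuming a browser RTCPeerConnection; setPtime is a hypothetical helper). Whether the sender actually honors a ptime/maxptime of 40 ms for Opus is implementation-dependent, which would be consistent with the unchanged timestamp increments seen in Wireshark: Opus uses a 48 kHz RTP clock, so 20 ms corresponds to 960 ticks and 40 ms to 1920.

```typescript
// Sketch: add/replace ptime and maxptime in the audio m= section of an SDP.
// Honoring these attributes for Opus is up to the sending implementation.
function setPtime(sdp: string, ptimeMs: number): string {
  // Drop empty trailing lines and any existing ptime/maxptime attributes.
  const lines = sdp
    .split("\r\n")
    .filter(l => l.length > 0 && !l.startsWith("a=ptime:") && !l.startsWith("a=maxptime:"));
  const out: string[] = [];
  let inAudio = false;
  for (const line of lines) {
    // Append the attributes at the end of the audio section, i.e. just before
    // the next m= line (or at the very end of the SDP).
    if (inAudio && line.startsWith("m=")) {
      out.push(`a=ptime:${ptimeMs}`, `a=maxptime:${ptimeMs}`);
      inAudio = false;
    }
    out.push(line);
    if (line.startsWith("m=audio")) inAudio = true;
  }
  if (inAudio) out.push(`a=ptime:${ptimeMs}`, `a=maxptime:${ptimeMs}`);
  return out.join("\r\n") + "\r\n";
}

// Hypothetical usage while creating the answer:
//   const answer = await pc.createAnswer();
//   await pc.setLocalDescription({ type: "answer", sdp: setPtime(answer.sdp!, 40) });
```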

Related

How do I detect the ideal UDP payload size?

I heard a UDP payload of 508 bytes will be safe from fragmentation. I heard the real MTU is 1500, but people should use a payload of 1400 because headers will eat the rest of the bytes. I heard many packets will be fragmented anyway, so using around 64K is fine. But I want to forget about all of these and programmatically detect what gets me good latency and throughput from my local machine to my server.
I was thinking about implementing something like the sliding window TCP has. I'll send a few UDP packets, then more and more, until packets are lost. I'm not exactly sure how to tell whether a packet was delayed vs. lost, and I'm not sure how to scale back down without going too far. Is there an algorithm typically used for this? If I know the average number of hops between my machine and the server, or the average ping, is there a way to estimate the maximum delay time of a packet?
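One way to turn the idea above into code is to probe with increasing payload sizes against an echo service and keep the largest size that still makes the round trip within a timeout. The sketch below (TypeScript on Node, using the dgram module) is a minimal version of that; the candidate sizes, the timeout, and the assumption of a cooperating echo server on the far end are all placeholders, and real code would repeat each probe several times to separate loss from delay.

```typescript
import { createSocket } from "node:dgram";

// Send one probe of the given size and wait for it to come back. A single
// probe cannot distinguish "delayed" from "lost"; this keeps the idea minimal.
function probe(host: string, port: number, size: number, timeoutMs = 500): Promise<boolean> {
  return new Promise<boolean>(resolve => {
    const sock = createSocket("udp4");
    const timer = setTimeout(() => { sock.close(); resolve(false); }, timeoutMs);
    sock.on("message", () => { clearTimeout(timer); sock.close(); resolve(true); });
    sock.send(Buffer.alloc(size, 0xab), port, host);
  });
}

// Walk up through candidate payload sizes and keep the largest one that
// still makes the round trip.
async function findLargestWorkingPayload(host: string, port: number): Promise<number> {
  const candidates = [508, 1200, 1400, 1472, 2000, 4000, 8000];
  let best = 0;
  for (const size of candidates) {
    if (await probe(host, port, size)) best = size;
  }
  return best;
}
```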

WebRTC lowest possible latency

I have a simple UDP streaming protocol that takes raw H.264 video frames and sends them immediately from the server side to the client side.
Using this protocol I can get close to network RTT latency (no packet resending, and I don't care about packet loss), so if I have 20 ms latency from the server to the client, a video frame can go from encoder output to ready-to-decode on the client side in, let's say, 30 ms.
My question is:
Is WebRTC (over UDP) capable of going down to this kind of latency?
Not taking into account encoding and decoding times, what is the lowest latency I can get with WebRTC at the protocol layer?
I don't know whether this kind of latency will require me to develop my own protocol further, or whether I should go with something more generic like WebRTC for my video server development so that it is supported by every web browser out of the box.
WebRTC can have the same low latency as regular SIP/RTP stacks.
WebRTC stack vendors do their best to reduce delay.
On the recording and sending side there is no added delay: the stack will send the packets immediately once they are received from the recording device and compressed with the selected codec. Some codecs (and some codec settings) might introduce some delay here to enable features such as FEC.
Regarding the receiver side:
In optimal circumstances the stack should not delay playback of the packets, so they can be played as soon as they arrive.
However, in sub-optimal circumstances (network delays or packet loss) the stack will introduce a jitter buffer. The lower the network quality, the longer the jitter buffer.
So, to achieve the lowest delay, you might have to do the following:
choose a codec with the smallest processing time
remove FEC and disable any other settings that might cause additional delay (see the SDP sketch after this list)
remove the jitter buffer (most WebRTC stacks don't have a setting for this, so you might have to modify the code yourself, but it is an easy modification, because you just need to deactivate a part of the code)
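As a concrete illustration of the "remove FEC" item above: Opus in-band FEC is negotiated through the useinbandfec fmtp parameter (RFC 7587), so in a browser it can be turned off by munging the SDP before it is applied. This is a minimal sketch assuming Opus; the jitter-buffer change is a native-stack modification and is not shown.

```typescript
// Sketch: rewrite useinbandfec=1 to useinbandfec=0 on the Opus fmtp line.
function disableOpusFec(sdp: string): string {
  return sdp
    .split("\r\n")
    .map(line => {
      // Only fmtp lines that already carry the Opus FEC parameter are touched.
      if (line.startsWith("a=fmtp:") && line.includes("useinbandfec=1")) {
        return line.replace("useinbandfec=1", "useinbandfec=0");
      }
      return line;
    })
    .join("\r\n");
}
```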
WebRTC uses RTP as the underlying media transport, which adds only a small header at the beginning of the payload compared to plain UDP. This means it should be on par with what you achieve with plain UDP. RTP is heavily used in latency-critical environments like real-time audio and video (it's the media transport in SIP, H.323, and XMPP), so you can expect the latency to be sufficient for this purpose.

Can WebRTC Stream the data I need?

My app is attempting to stream real time proprietary data between two users.
The requirement for the data to be considered real time is that the delay between sending and receiving is less than 200 ms.
The data is also packetized: I need to send a packet every 20 ms.
Each packet is 300 bytes in size.
Can I stream real-time data at that rate (300 bytes every 20 ms, about 15 kB/s) with a latency of less than 200 ms?
Many thanks in advance
The ping depends on how far away your other endpoint is and what your bandwidth is.
I experienced a maximum ping of 70 ms to a friend of mine between Germany and Austria, so 200 ms seems realistic.
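If the transport is a browser RTCDataChannel, the usual way to keep latency down for this kind of stream is to make the channel unordered with no retransmits, so a lost packet never holds up the ones behind it. A minimal sketch (signaling omitted; the label and timer are arbitrary):

```typescript
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel("telemetry", {
  ordered: false,     // don't hold newer packets back behind a lost one
  maxRetransmits: 0,  // never retransmit; stale real-time data is useless
});

channel.onopen = () => {
  setInterval(() => {
    const packet = new Uint8Array(300); // 300-byte proprietary payload
    channel.send(packet);
  }, 20);                               // one packet every 20 ms
};
```

At 300 bytes every 20 ms the application payload is about 15 kB/s (roughly 120 kbps before SCTP/DTLS/UDP/IP overhead), which is modest for most connections, so the 200 ms budget is dominated by network RTT rather than by the channel itself.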

The most reliable and efficient UDP packet size?

Would sending lots of small packets by UDP take more resources (CPU, compression by zlib, etc.)? I read here that sending one big packet of ~65 kB by UDP would probably fail, so I thought that sending lots of smaller packets would succeed more often, but then comes the computational overhead of using more processing power (or at least that's what I'm assuming). The question is basically this: what is the best scenario for sending the maximum number of packets successfully while keeping computation to a minimum? Is there a specific size that works most of the time? I'm using Erlang for the server and ENet for the client (written in C++). I'm also using zlib compression, and I send the same packets to every client (broadcasting is the term, I guess).
The maximum size of UDP payload that, most of the time, will not cause IP fragmentation is:
MTU size of the host handling the PDU (in most cases it will be 1500) -
size of the IP header (20 bytes) -
size of the UDP header (8 bytes)
1500 MTU - 20 IP hdr - 8 UDP hdr = 1472 bytes
@EJP talked about 534 bytes, but I would correct that to 508. This is the number of bytes that for sure will not cause fragmentation, because the minimum MTU size a host can set is 576 and the IP header's maximum size is 60 bytes (508 = 576 MTU - 60 IP - 8 UDP).
By the way, I'd try to go with 1472 bytes, because 1500 is a standard enough value.
Use 1492 instead of 1500 in the calculation if you're passing through a PPPoE connection.
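The arithmetic above is easy to keep around as a small helper; this sketch just restates the numbers from the answer (1472 for a 1500-byte MTU, 1464 for PPPoE's 1492, and 508 for the worst case of a 576-byte MTU with maximal IP options):

```typescript
// All values in bytes; adjust the MTU for your link, as noted above.
const IP_HEADER_MIN = 20;   // IPv4 header with no options
const IP_HEADER_MAX = 60;   // IPv4 header with maximum options
const UDP_HEADER = 8;

function maxUnfragmentedPayload(mtu: number, ipHeader = IP_HEADER_MIN): number {
  return mtu - ipHeader - UDP_HEADER;
}

console.log(maxUnfragmentedPayload(1500));               // 1472: typical Ethernet MTU
console.log(maxUnfragmentedPayload(1492));               // 1464: PPPoE
console.log(maxUnfragmentedPayload(576, IP_HEADER_MAX)); // 508: worst-case minimum MTU
```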
Would sending lots of small packets by UDP take more resources?
Yes, it would, definitely! I just did an experiment with a streaming app. The app sends 2000 frames of data each second, precisely timed. The data payload for each frame is 24 bytes. I used UDP with sendto() to send this data to a listener app on another node.
What I found was interesting. This level of activity brought my sending CPU to its knees! I went from having about 64% free CPU time to having about 5%! That was disastrous for my application, so I had to fix that. I decided to experiment with variations.
First, I simply commented out the sendto() call, to see what the packet assembly overhead looked like. About a 1% hit on CPU time. Not bad. OK... must be the sendto() call!
Then, I did a quick fakeout test... I called the sendto() API only once in every 10 iterations, but I padded the data record to 10 times its previous length, to simulate the effect of assembling a collection of smaller records into a larger one, sent less often. The results were quite satisfactory: 7% CPU hit, as compared to 59% previously. It would seem that, at least on my *NIX-like system, the operation of sending a packet is costly just in the overhead of making the call.
Just in case anyone doubts whether the test was working properly, I verified all the results with Wireshark observation of the actual UDP transmissions to confirm all was working as it should.
Conclusion: it uses MUCH less CPU time to send larger packets less often than to send the same amount of data in the form of smaller packets more frequently. Admittedly, I do not know what happens if UDP starts fragmenting your overly large UDP datagram... I mean, I don't know how much CPU overhead that adds. I will try to find out (I'd like to know myself) and update this answer.
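The batching idea from the experiment can be sketched roughly as below (TypeScript on Node with the dgram module; the host, port, and batch size of 10 mirror the experiment but are otherwise arbitrary). The receiver must of course know how to split a batch back into records, which is easy when records are fixed-size as in the experiment.

```typescript
import { createSocket } from "node:dgram";

const sock = createSocket("udp4");
const BATCH = 10;            // records per datagram, as in the experiment
let pending: Buffer[] = [];

function sendRecord(record: Buffer, host: string, port: number): void {
  pending.push(record);
  if (pending.length >= BATCH) {
    // One send() call (one syscall) carries ten records instead of ten calls.
    // A real sender would also flush a partial batch on a short timer.
    sock.send(Buffer.concat(pending), port, host);
    pending = [];
  }
}
```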
534 bytes. That is required to be transmitted without fragmentation. It can still be lost altogether of course. The overheads due to retransmission of lost packets and the network overheads themselves are several orders of magnitude more significant than any CPU cost.
You're probably using the wrong protocol. UDP is almost always a poor choice for data you care about transmitting. You wind up layering sequencing, retry, and integrity logic atop it, and then you have TCP.

Do headers on mobile requests and responses count as part of the bandwidth?

I am building an Arduino-based device that needs to send data over the internet to a remote server. It needs to do this as frequently as possible but also use as little bandwidth as possible. It will probably work over GSM/EDGE (cellular networking).
The data I'd like to send is about 40 bytes in size - really minimal. I'd like to send this packet to the server about once a minute, but also receive a packet of around that size in response once in a while.
The bandwidth on my server is no problem - the bandwidth on the device's internet connection is, i.e. the cellular data.
Do headers on mobile requests and responses count as part of the bandwidth?
Yes, the total packet size is all data that is sent. Assuming a TCP packet, you lose 20 bytes right from the start. If you get intimate with Wireshark, you can see exactly what's happening.
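As a rough worked example for the 40-byte payload in the question, assuming plain TCP over IPv4 with no header options and ignoring link-layer framing, ACKs, and connection setup/teardown (all of which add more on a cellular link):

```typescript
// All values in bytes.
const PAYLOAD = 40;     // application data per message
const TCP_HEADER = 20;  // minimum TCP header, no options
const IP_HEADER = 20;   // minimum IPv4 header, no options

const bytesOnWire = PAYLOAD + TCP_HEADER + IP_HEADER;  // 80 bytes per data segment
const perDayOneWay = bytesOnWire * 60 * 24;            // one message per minute
console.log(`${bytesOnWire} bytes per message, about ${perDayOneWay} bytes/day one-way`);
```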