Why do the audio and video RTT values differ?

Why do the audio and video RTT values differ? Are they just pings through the RTCP channel?
I would assume they should be the same or roughly the same.

The round-trip times are based on the RTCP sender and receiver reports and are calculated as defined in https://datatracker.ietf.org/doc/html/rfc3550#section-6.4.1
Why they would differ is a good question. The timestamps will differ and video sender reports are a lot more frequent, but the resulting values should still be roughly the same. What differences are you observing?
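For reference, here is a minimal sketch of that calculation, with illustrative names (the RFC defines the fields; nothing below is browser API). All three inputs are in the "middle 32 bits of an NTP timestamp" format the RFC uses, i.e. units of 1/65536 of a second: the arrival time of the receiver report, the LSR field (timestamp of the last sender report) and the DLSR field (delay since that report).

```typescript
// Minimal sketch of the RFC 3550 §6.4.1 round-trip computation.
function rttSeconds(arrivalNtp: number, lsr: number, dlsr: number): number {
  // >>> 0 keeps the subtraction in unsigned 32-bit arithmetic; wrap-around
  // is expected with these truncated timestamps.
  const rtt = (arrivalNtp - lsr - dlsr) >>> 0;
  return rtt / 65536; // convert 1/65536-second units to seconds
}

// Example: the RR arrives 0.125 s after the SR it references, and the remote
// side held it for 0.045 s before replying, so RTT ≈ 0.080 s.
const lsr = Math.round(1.0 * 65536);     // SR sent at t = 1.0 s
const dlsr = Math.round(0.045 * 65536);  // delay since last SR
const arrival = Math.round(1.125 * 65536);
console.log(rttSeconds(arrival, lsr, dlsr)); // ≈ 0.08
```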

Related

What is the relation between Serialization and streaming?

Whenever I find articles or videos talking about streams, are they necessarily talking about serialization?
What is the relation between the two? Or, to be specific:
Could we say that a data stream always needs serialization, or could we find some data stream without serialization?
Firstly, it is useful to have a reminder of serial vs parallel communication: if we take the simple example of transmitting a byte, in the parallel case all 8 bits are sent at the same time, and in the serial case the 8 bits are sent one by one and the byte is built again on the receiving side.
For your video domain example, if you imagine a frame of video as a large collection of bytes, let's say 720 by 1280 pixels with each pixel represented by a byte, then we need 921,600 bytes to represent the frame.
If you are streaming the video you need to send each frame (plus overhead, which we'll ignore here for simplicity) from the server to the client device, hence you need to send the 921,600 bytes for each frame.
If you had a very (very!) wide parallel connection that could transmit 921,600 bytes between the server and the client in a single communication, then this would be easy to understand.
However, this is almost never the case, even for much smaller data structures, so serialisation is the name generally given to the process of taking the 921,600 bytes and breaking them down into a size you can transmit - and that size is often one bit at a time.
Generally a video will be broken down into packets and the packets transmitted to the client. The packets themselves are just collections of bytes too, and if the connection allows only a single bit of information to be transmitted at a time, then each packet needs to be broken down and sent 'serially', one bit at a time.
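As a toy illustration of that packetisation step (the 1400-byte packet size below is just an assumed typical payload size, not anything mandated):

```typescript
// Chop the hypothetical frame (921,600 bytes for 720x1280 at one byte per
// pixel) into fixed-size packets; the link then transmits each serially.
function packetise(frame: Uint8Array, packetSize: number): Uint8Array[] {
  const packets: Uint8Array[] = [];
  for (let offset = 0; offset < frame.length; offset += packetSize) {
    packets.push(frame.subarray(offset, offset + packetSize));
  }
  return packets;
}

const frame = new Uint8Array(720 * 1280);  // one byte per pixel
const packets = packetise(frame, 1400);    // assumed payload size per packet
console.log(packets.length);               // 659 packets for one frame
```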
To complicate things, as is commonly the case in computer science and communications, the terms can mean different things in different contexts.
For example, you may see it mentioned that you can either stream or 'serialise an object' in some client-server communication. What this generally means is that you can either send the raw data 'stream' and let the client be responsible for interpreting it, or you can use a framework or underlying mechanism which will take an object, convert it into a format that can be transmitted serially, then reconstruct it on the other end and hand it to the client. In fact, the actual communication is serial in both cases (if it is using a serial communication channel), so the terms are being used in a different way here.
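A minimal sketch of those two usages, with an entirely hypothetical Frame object, might look like this - both results travel as bytes on the wire, but only the first carries its own structure:

```typescript
interface Frame {        // hypothetical application object
  width: number;
  height: number;
  pixels: number[];
}

// Usage 1: 'serialise an object' - convert it to a self-describing byte
// format that a framework can reconstruct on the other end.
function serialise(frame: Frame): Uint8Array {
  return new TextEncoder().encode(JSON.stringify(frame));
}

// Usage 2: send a raw byte 'stream' - the receiver must already know the
// layout (here: width and height) to interpret it.
function rawStream(frame: Frame): Uint8Array {
  return Uint8Array.from(frame.pixels);
}

const frame: Frame = { width: 2, height: 2, pixels: [0, 255, 128, 64] };
console.log(serialise(frame).length, rawStream(frame).length);
```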

Can WebRTC Stream the data I need?

My app is attempting to stream real time proprietary data between two users.
The requirement for the data to be considered real time is that the delay between sending and receiving is less than 200 ms.
The data is also packetized, I need to send a packet every 20ms.
Each packet is 300 bytes in size.
Can I stream real-time data at 15 kB/s (120 kbit/s) with a latency of less than 200 ms?
Many thanks in advance
The ping depends on how far away your other endpoint is and what your bandwidth is.
I experienced a maximum ping of 70 ms to a friend of mine between Germany and Austria, so 200 ms seems realistic.
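For what it's worth, the numbers in the question are easy to sanity-check (payload only, ignoring RTP/UDP/IP overhead):

```typescript
// Back-of-the-envelope data-rate check for the stated requirements.
const packetBytes = 300;
const packetIntervalMs = 20;

const packetsPerSecond = 1000 / packetIntervalMs;      // 50 packets/s
const bytesPerSecond = packetsPerSecond * packetBytes; // 15,000 B/s
const kbps = (bytesPerSecond * 8) / 1000;              // 120 kbit/s

console.log({ packetsPerSecond, bytesPerSecond, kbps });
```

So the stream is 15 kB/s, i.e. 120 kbit/s - a very modest rate, well within what a WebRTC data channel handles.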

webrtc voice packetization size

I was wondering how I can change the voice packet size in a WebRTC application. I am using Opus and I am trying to change the packet size from 20 ms to 40 ms. I thought I could achieve this by changing ptime in the SDP. However, when I captured packets with Wireshark there was no difference between packets with ptime=20 and ptime=40, and the difference between consecutive RTP timestamps was always 960 (20 ms at Opus's 48 kHz RTP clock). I would expect the difference to be 1920 for a 40 ms ptime. I imagine I am completely wrong in my assumptions; is there any way to actually change the packet size?
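For reference, the munging attempt described in the question usually looks something like the sketch below. Note that ptime expresses the receiver's preference: to change what the browser sends, the attribute would go into the description received from the peer before calling setRemoteDescription - and even then the sender is free to ignore the preference, which would explain unchanged captures.

```typescript
// A sketch, not a guaranteed fix: append a=ptime/a=maxptime to the audio
// media section of the peer's SDP before applying it, asking our sender to
// use 40 ms packets.
function withPtime(sdp: string, ptimeMs: number): string {
  // Split into session + media sections, extend only the audio section.
  return sdp
    .split(/(?=^m=)/m)
    .map((section) =>
      section.startsWith("m=audio")
        ? section + `a=ptime:${ptimeMs}\r\na=maxptime:${ptimeMs}\r\n`
        : section
    )
    .join("");
}

// Hypothetical usage during signalling:
// await pc.setRemoteDescription({ type: "offer", sdp: withPtime(remoteSdp, 40) });
```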

How do Chrome/Firefox handle SRTCP reports coming from a WebRTC connection?

SRTCP tracks the number of sent and lost bytes and packets, the last received sequence number, the inter-arrival jitter for each SRTP packet, and other SRTP statistics.
Do the mentioned browsers do something with SRTCP reports when dealing with an audio stream, for example adjust the bitrate on the fly if network conditions change?
Given that Chrome does adjust the bitrate and resolution of VP8 on the fly in a connection, I would assume that Opus configurations are changed in the connection as well.
You can see the feedback on the sending audio in this image. The bitrate obviously drops slightly when using Opus. However, I would imagine that the video bitrate would be the first thing changed in a video call, as changing it would have the greater effect.
Obviously, one cannot change the bitrate of a codec that only supports constant bitrates.
All the other stats are a combination of what the RTCP reports give (packetsLost, RTT, bits sent, etc.) and Google's monitoring of the inputs/outputs (audio level, echo cancellation, etc.).
NOTE: this is taken from a session created by AppRTC in Chrome.
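If you want to look at those numbers yourself, the RTCP-derived values surface through the standard getStats() API (the AppRTC-era tooling showed older goog-prefixed names; in the current stats API they arrive as remote-inbound-rtp entries). A minimal sketch, assuming an existing RTCPeerConnection named pc:

```typescript
// Log the stats filled in from the peer's RTCP receiver reports.
async function logRtcpStats(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stat: any) => {
    if (stat.type === "remote-inbound-rtp") {
      console.log(stat.kind, {
        packetsLost: stat.packetsLost,       // from the RTCP RR
        jitter: stat.jitter,                 // inter-arrival jitter, seconds
        roundTripTime: stat.roundTripTime,   // derived from LSR/DLSR
      });
    }
  });
}

// Poll once a second, e.g.:
// setInterval(() => logRtcpStats(pc), 1000);
```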

Dealing with duplicated data in MPEG2-TS

I'm using a setup in which I receive duplicated datagrams over UDP-based video streaming with VLC. I wanted to know if there's some field in MPEG-TS (ISO/IEC 13818-1) which I can use to detect duplicated data and discard it before it reaches the application layer.
The problem is that the duplicated frames reach the top of the TCP/IP stack (the application layer) and consequently conflict with the streaming. The continuity counters (CC) of the duplicated data are the same, so the receiver thinks there's a gap and skips.
I found the answer: I can rely on the RTP protocol, which uses sequence numbers and timestamps.
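A minimal sketch of that approach - reading the 16-bit RTP sequence number (header bytes 2-3) from each incoming datagram and dropping repeats before handing the TS payload on (the 1024-entry window below is an arbitrary choice):

```typescript
// Track recently seen RTP sequence numbers and flag duplicates.
const seen = new Set<number>();

function isDuplicate(datagram: Uint8Array): boolean {
  // RTP header: byte 0 = V/P/X/CC, byte 1 = M/PT, bytes 2-3 = sequence number.
  const seq = (datagram[2] << 8) | datagram[3];
  if (seen.has(seq)) return true;
  seen.add(seq);
  if (seen.size > 1024) {
    // Keep the window bounded; sequence numbers wrap at 65536. Set iteration
    // is insertion-ordered, so this evicts the oldest entry.
    seen.delete(seen.values().next().value!);
  }
  return false;
}
```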