What is the difference between inbound-rtp & remote-inbound-rtp in the results we get from webrtc getstats? - webrtc

I have been trying to figure out a way to calculate the following:
Bandwidth, Latency, Current Upload, and Download speed.
And I have been confused by the values I am getting for INBOUND-RTP, OUTBOUND-RTP, & REMOTE-INBOUND-RTP.
In my head, I was thinking of inbound-rtp as a collection of stats for all incoming data, which is apparently wrong, since the different stats of that type always stay at zero.
The current setup uses Chrome for the two connecting clients and a media server, with the client instances running on "localhost".

The terminology used on MDN is a bit terse, so here's a rephrasing that I hope helps solve your problem! Block quotes are taken from MDN and clarified below. For an even terser description, also see the W3C definitions.
outbound-rtp
An RTCOutboundRtpStreamStats object giving statistics about an outbound RTP stream being sent from the RTCPeerConnection.
This stats report is based on your outgoing data stream to your peers. It is the measurement taken from the perspective of just that outbound RTP stream, which is why information that involves your peers (round-trip time, jitter, etc.) is missing: those can only be measured with an understanding of the peer's processing of your stream.
inbound-rtp
Statistics about an inbound RTP stream that's currently in use by this RTCPeerConnection, in an RTCInboundRtpStreamStats object.
In contrast to the outbound RTP statistics, this stats report contains data about the inbound data stream you are receiving from your peer(s). Notice that if you do not have any connected peers, your call to getStats does not include this report type at all.
remote-inbound-rtp
Contains statistics about the remote endpoint's inbound RTP stream; that stream corresponds to the local endpoint's outbound RTP stream. Using this RTCRemoteInboundRtpStreamStats object, you can learn how well the remote peer is receiving data.
This stats report provides details about your outbound RTP stream from the perspective of the remote connection. That is to say, it is an analysis of your outbound-rtp stream as seen by the remote endpoint (peer or media server) that is handling your stream on the other side.
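If it helps, here's a minimal sketch (mine, not from MDN) that pulls the three report types apart, assuming pc is an already-connected RTCPeerConnection; the field names follow the W3C stats spec and some may be absent in a given browser:

// Log the three RTP report types separately.
pc.getStats().then(report => {
  report.forEach(stat => {
    if (stat.type === "outbound-rtp") {
      // Your sending side: local counters only, no peer-dependent metrics here.
      console.log("outbound-rtp:", stat.kind, stat.packetsSent, "packets sent");
    } else if (stat.type === "inbound-rtp") {
      // What you are receiving; absent entirely if no peer is sending to you.
      console.log("inbound-rtp:", stat.kind, stat.packetsReceived, "packets received");
    } else if (stat.type === "remote-inbound-rtp") {
      // The remote end's view of your outbound stream (from RTCP receiver reports):
      // round-trip time and remote packet loss live here.
      console.log("remote-inbound-rtp: RTT", stat.roundTripTime, "s, lost", stat.packetsLost);
    }
  });
});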

I'm on the MDN writing team at Mozilla and happened upon this just now. I've taken some of the information from this conversation and applied it back to the article about RTCStatsType. There's more to improve there still, but I wanted to thank you for that insight!
Always feel free to sign up for an MDN account and update any content you see that's inaccurate or incomplete! Or you can file an issue and we'll see what we can do.

Related

ApiRTC - Media always sent to the cloud, even with meshOnlyEnabled

As a follow-up to my previous post (ApiRTC - Behaviour with meshModeEnabled and meshOnlyEnabled)
Hello,
You say that an SFU is necessary for any activity that requires centralizing all the streams (recording, bandwidth optimization, ...). However, in MESH mode, the files/media exchanged still end up being recorded on the Apizee media server even though I don't go through the SFU. How is this possible?
Can this behaviour be disabled so that the exchanged documents never leave the MESH stream?
I have not found anything about this in the documentation.
By the way, the documentation often mentions the term "MCU"; does this mean that ApiRTC also uses an MCU server in addition to the SFU?
Thanks in advance.
apirtc
Can this behaviour be disabled so that the exchanged documents never leave the MESH stream?
Concerning a recording of all the streams in the conversation (via the startRecording method of the Conversation object, see https://apirtc.github.io/references/apirtc-js/Conversation.html#startRecording__anchor):
--> The composition of multiple streams into one video file is done server-side by the SFU (v4.4.8).
Concerning the files (through conversation.pushData method):
--> We manage the file transfer by uploading the file to storage and sharing the URI with all parties of the conversation. P2P transfer is not available (v4.4.8).
To exchange data in P2P mode, you can use the Conversation.sendData method to send raw data to all participants.
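For example (the exact call shape here is from memory; please check the Conversation reference linked above for the precise signature):

// Signature shown from memory, check the apirtc-js reference before relying on it.
conversation.sendData("any small payload, e.g. a JSON string");

Unlike pushData, nothing is uploaded to storage; the payload is delivered directly to the other participants of the conversation.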
Regarding your question about the MCU: no, ApiRTC doesn't use any MCU server to date (v4.4.8). The documentation refers to an MCU only for very specific on-premise deployments, which are not supported for ApiRTC users.
Cheers,
Romain

How to play video from MediaLive through UDP?

On AWS, how do you play video from MediaLive through the UDP output group?
For my use case, I'm building a live-stream pipeline that takes an MPEG-2 transport stream from MediaLive, processes it through a UDP server (configured as an output group), and has it consumed by a web client that plays it in an HTML5 video element.
The problem is: the data is flowing, but the video isn't rendering.
Previously, my output group was set to AWS MediaPackage, but because I need the ability to read and update frames on the live stream, I'm trying to feed it through UDP.
Is setting the output group to UDP the right approach?
The documentation is a bit sparse here. I'm wondering if there are resources or examples where others were able to play video this way as opposed to HLS/DASH.
Thanks for your post. Yes, the UDP or RTP output would be the right choice of output from MediaLive. Appropriate routing rules will need to be used on any intermediary routers or firewalls to ensure that the UDP traffic can reach the client.
You wrote that 'the data is flowing, but the video isn't rendering.' This suggests an issue with the web client.
I suggest adding another identical UDP output to your UDP server and sending its output to a computer (or AWS Workspace) running a copy of VLC player. Decoding that new stream will give you a confidence monitor on the output of the entire workflow up to that point. This will help isolate the problem.
You could achieve the same result with a packet capture or TS stream analyzer if you want to go that route instead. If you go this route, I recommend trying to play back one of the packet captures locally with the web client.

Clear WebRTC Data Channel queue

I have been trying to use WebRTC Data Channel for a game, however, I am unable to consistently send live player data without hitting the queue size limit (8KB) after 50-70 secs of playing.
Since the data is required to be real-time, I have no use for data that arrives out of order. I have initialized the data channel with the following attributes:
negotiated: true,
id: id,
ordered: true,
maxRetransmits: 0,
maxPacketLifetime: 66
The MDN docs say that the buffer cannot be altered in any way.
Is there any way I can consistently send data without exceeding the buffer space? I don't mind purging the buffer, as it only contains data that has clogged up over time.
NOTE: The data transmits fine until the buffer size exceeds the 8 KB space.
EDIT: I forgot to add that this issue only occurs when the two sides are on different networks. When both are within the same LAN, there is no buffering (because of the higher bandwidth, I presume). I tried adding multiple data channels (8 in parallel), but this only increased the time before the failure occurred again: all 8 buffers were full. I also tried creating a new channel each time the buffer was close to being full, switching to the new one while closing the previous, full one, but I found out the hard way (by reading the note in the MDN docs) that the buffer space is not released immediately; instead the channel keeps trying to transmit everything in the buffer, taking away precious bandwidth.
Thanks in advance.
The maxRetransmits value is ignored if the maxPacketLifetime value is set; thus, you've configured your channel to resend packets for up to 66ms. For your application, it is probably better to use a pure unreliable channel by setting maxPacketLifetime to 0.
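Something along these lines, for instance (a sketch, not your exact setup; note that the spec spells the field maxPacketLifeTime, and only one of maxPacketLifeTime / maxRetransmits may be set on a channel):

// Sketch: an unreliable channel for latest-state-wins game data.
// pc and the channel id are placeholders for your own setup.
const dc = pc.createDataChannel("game", {
  negotiated: true,
  id: 0,
  ordered: false,        // optional extra tweak: don't hold newer messages behind lost ones
  maxPacketLifeTime: 0   // don't spend any time retransmitting
});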
As Sean said, there is no way to flush the queue. What you can do is to drop packets before sending them if the channel is congested:
if (dc.bufferedAmount > 0)
    return; // drop this update instead of queueing it behind stale data
dc.send(data);
Finally, you should realise that buffering may happen in the network as well as at the sender: any router can buffer packets when it is congested, and many routers have very large buffers (this is called bufferbloat). The WebRTC stack should prevent you from buffering too much data in the network, but if WebRTC's behaviour is not aggressive enough for your needs, you will need to add explicit feedback from the receiver to the sender in order to avoid having too many packets in flight.
I don't believe you can flush the outbound buffer; you will probably need to watch the bufferedAmount and adjust what you are sending if it grows.
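One way to do that without polling is the bufferedAmountLowThreshold / bufferedamountlow pair; a rough sketch (your own send logic will differ):

// Keep only the newest game state and send it once the queue has drained.
dc.bufferedAmountLowThreshold = 1024;   // bytes; tune to taste

let pendingState = null;

function sendState(state) {
  pendingState = state;
  flush();
}

function flush() {
  if (pendingState === null) return;
  if (dc.bufferedAmount > dc.bufferedAmountLowThreshold) return; // still congested, wait for the event
  dc.send(pendingState);
  pendingState = null;
}

dc.addEventListener("bufferedamountlow", flush);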
Maybe handle the retransmissions yourself and discard old data if needed? WebRTC doesn't surface the SACKs from SCTP, so I think you will need to implement something yourself.
It's an interesting problem. I would love to hear the W3C WebRTC Working Group's take on whether exposing more info would make things easier for you.

ReceiveBufferSize - Download from Server with High Latency

I have a very basic file download that is connecting from the UK to US. The file is about 10MB and the connection is fast, but the latency is 80ms.
Since we have high latency, is there any way to reduce the acknowledgement window at the TCP layer to reduce the chattiness that occurs?
' Download a large file and log how long it took
' (logWriter is assumed to be an already-open StreamWriter)
Using client As New System.Net.WebClient()
    Dim url As String = "doc url"
    Dim beginTime As DateTime = DateTime.Now
    client.Credentials = System.Net.CredentialCache.DefaultCredentials
    client.DownloadFile(url, "TMP.ZIP")
    logWriter.WriteLine("6 MB ZIP File" & "," & (DateTime.Now - beginTime).TotalMilliseconds)
End Using
is there any way to reduce the acknowledgement window at the TCP layer to reduce the chattiness that occurs?
At the application level you don't have much control over what happens in the network layers of the communication; that is all handled by lower-level APIs. The .NET Framework only provides you with some types that are built on top of these APIs to ease the implementation process.
That being said, acknowledgements are what make TCP reliable: they guarantee that all the data was sent to the other side of the connection. You could switch to UDP, which doesn't use acknowledgements, meaning you could never verify whether the data was received successfully. That is great for real-time communication (balancing speed against quality), but not for sending files, because we want the file to be fully transferred and not just 96% of it. The only way to verify that we received the full file is to have the receiver notify us that it actually received the packets.
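To put a rough number on the latency cost (illustrative figures only): TCP can have at most one receive window of data in flight per round trip, so with an 80 ms RTT and a 64 KB window the ceiling is about 64 KB / 0.08 s ≈ 800 KB/s (roughly 6.5 Mbit/s), no matter how fast the link is. Modern Windows/.NET stacks auto-tune the window well beyond 64 KB, so for a 10 MB file most of the visible penalty usually comes from the slow-start ramp-up rather than from per-acknowledgement chattiness.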
To me this sounds like an infrastructure/network-architecture issue. Therefore I personally would not attempt to fix this at the application level. Using the WebClient class, and thus TCP, feels like the right protocol for your use case. So in my eyes, you've done your job and implemented the file download correctly.
If you have the possibility, I would try to put a content delivery network (CDN) in between the server and client so that the CDN can offload the down- and upload to an edge server that is close to your physical location.
If this isn't possible, you could get creative and, for example, have a second server in the UK that synchronizes itself with the files located on the server in the US. For time/money reasons I wouldn't do this and would just take the latency as-is; 80 ms is acceptable to me.

WebRTC lowest possible latency

I have a simple UDP streaming protocol that takes RAW H264 video frames and sends them instantly from server side to the client side.
Using this protocol I can get close to network RTT latency (no packet resending, and I don't care about packet loss), so if I have 20 ms latency from server to client I can get a video frame from encoder output to the client side (ready to be decoded) in, let's say, 30 ms.
My question is:
Is WebRTC (over UDP) capable of going down to these kinds of latencies?
Not taking into account encoding and decoding times, what is the lowest latency I can get with WebRTC at the protocol layer?
I don't know whether these kinds of latencies will require me to develop my own protocol further, or whether I should go with something more generic like WebRTC for my video server development, so that it is instantly supported by every web browser.
WebRTC can have the same low latency as regular SIP/RTP stacks.
WebRTC stack vendors do their best to reduce delay.
On the recording and sending side there is no inherent delay: the stack will send the packets immediately once they are received from the recording device and compressed with the selected codec. Some codecs (and some codec settings) might introduce some delay here to enable features such as FEC.
Regarding the receiver side:
In optimal circumstances the stack should not delay the playback of the packets, so they can be displayed as soon as they arrive.
However, in sub-optimal circumstances (with network delays or packet loss) the stack will introduce a jitter buffer. The lower the network quality, the longer the jitter buffer.
So, to achieve the lowest delay, you might have to do the following (a measurement sketch follows the list):
choose a codec with the smallest processing time
remove FEC and disable any other settings which might cause additional delays
remove the jitter buffer (most WebRTC stacks don't have a setting for this, so you might have to modify the code yourself, but it is an easy modification because you just need to deactivate a part of the code)
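To see whether such tweaks actually help, you can watch the receiver-side stats. A minimal sketch, assuming pc is your RTCPeerConnection (field names follow the W3C stats spec; not every browser exposes all of them):

// Log the average jitter-buffer delay and the RTCP round-trip time once per second.
setInterval(async () => {
  const report = await pc.getStats();
  report.forEach(stat => {
    if (stat.type === "inbound-rtp" && stat.kind === "video" && stat.jitterBufferEmittedCount > 0) {
      console.log("avg jitter buffer delay (s):", stat.jitterBufferDelay / stat.jitterBufferEmittedCount);
    }
    if (stat.type === "remote-inbound-rtp") {
      console.log("round-trip time (s):", stat.roundTripTime);
    }
  });
}, 1000);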
WebRTC uses RTP as the underlying media transport, which adds only a small header at the beginning of the payload compared to plain UDP. This means it should be on par with what you achieve with plain UDP. RTP is heavily used in latency-critical environments such as real-time audio and video (it's the media transport in SIP, H.323, and XMPP), so you can expect the latency to be low enough for this purpose.
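As a rough size check: the fixed RTP header is 12 bytes, so on a typical ~1200-byte video packet it adds about 1% on top of what plain UDP already costs, and the header itself introduces no extra waiting time.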