WebRTC has an experimental statistic googCaptureStartNtpTimeMs in the ssrc recv report of WebRTC stats. I believe it is defined as the NTP time when the stream started on the sender's side in milliseconds.
I want to sync my clients with the same NTP clock WebRTC uses to generate that googCaptureStartNtpTimeMs.
I've been googling around and searching/reading the WebRTC source, and I cannot find which NTP server(s) are used to generate this stat. I'm assuming all NTP servers cannot possibly be in sync, so I need to figure out exactly which NTP server WebRTC is using so I can sync my clients with its googCaptureStartNtpTimeMs timestamp.
I see a few options:
pool.ntp.org
time.google.com
time1.google.com
ntp.google.com
Searching for these in the WebRTC source yields no results. pool.ntp.org seems promising because it will help find an NTP server close (low latency) to where the request was made from. But if the NTP pool is not in sync with whatever NTP source WebRTC uses, it will be useless to me unless I can get the offset between the two.
My question boils down to:
Are all NTP servers standardized and in sync with each other?
If not, which NTP server(s) does WebRTC use to generate googCaptureStartNtpTimeMs?
googCaptureStartNtpTimeMs has nothing to do with NTP servers. It's based on the local clock. "Ntp" in the name comes from the format in which time is passed in RTCP SenderReport packets, and doesn't imply that any NTP server synchronization is going on.
Your understanding of the metric definition is a little off.
It's the time when the remote track started sending, expressed in the receiver's local clock.
The clock synchronization is done internally by WebRTC using an RTT estimate and RTCP SenderReport packets. Note that to have an RTT estimate, the peer has to be sending something, and it takes some time (up to 20 seconds for audio tracks, based on the RTCP SR sending period). Before all the estimates are available, the metric reports -1.
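If you want to detect the moment the estimate becomes available, you can poll the stats until the value is no longer -1. A minimal sketch, assuming Chrome's legacy callback-based getStats() (the goog*-prefixed stats are not part of the spec, so field names and availability may differ); the polling helper itself is hypothetical:

    // Poll the legacy ssrc receive reports until googCaptureStartNtpTimeMs is set.
    // The legacy getStats() overload is not in the standard TypeScript DOM typings,
    // hence the cast to any.
    function pollCaptureStartNtpTime(
      pc: RTCPeerConnection,
      onValue: (ntpMs: number) => void,
    ): void {
      const timer = setInterval(() => {
        (pc as any).getStats((response: any) => {
          for (const report of response.result()) {
            if (report.type !== 'ssrc') continue;
            const raw = report.stat('googCaptureStartNtpTimeMs');
            const value = Number(raw);
            // Stays at -1 (or empty) until RTT and RTCP SR estimates exist.
            if (raw !== '' && value > 0) {
              clearInterval(timer);
              onValue(value);
            }
          }
        });
      }, 1000);
    }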
From the WebRTC spec (www.w3.org/TR/webrtc/#dom-rtcbundlepolicy), 4.2.5 RTCBundlePolicy Enum:
"If the remote endpoint is bundle-aware, all media tracks and data channels are bundled onto the same transport."
When is an endpoint bundle-aware and when not? And what does bundle-aware mean?
To establish a p2p connection, WebRTC will allocate and do STUN network checks on up to 3 ports (multiplied by ways they can be reached) on either end, and as they're discovered (which takes time), ask JS to trickle-exchange info on each of these "ICE candidates" across a signaling channel, once for video, once for audio, and once for data (if you have it).
WebRTC does this mostly to support connecting to non-browser legacy devices, because all modern browsers support BUNDLE, which is when all but one candidate end up being thrown away, and all media gets bundled over that single port.
WebRTC also has a "max-compat" mode that goes even further, allocating a port for each piece of media, just in case the other endpoint is really old.
WebRTC doesn't know the other endpoint is a browser until it receives an "answer" from it, but if you know, you can specify "max-bundle" and save a couple of milliseconds.
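For completeness, a minimal sketch of requesting a single bundled transport up front (the STUN server URL here is just an example):

    // Ask for one transport for all audio, video and data, assuming the remote
    // endpoint is bundle-aware (e.g. any modern browser).
    const pc = new RTCPeerConnection({
      bundlePolicy: 'max-bundle',   // throw away all but one candidate/transport
      rtcpMuxPolicy: 'require',     // also multiplex RTP and RTCP on that port
      iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],  // example server
    });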
I am trying to find the best way to broadcast a camera and send the stream to 200 connections.
If I use WebRTC, I am limited by CPU power. I've tried to use a server as a gateway, but the maximum number of connections I can handle is 60, and 120 with 2 servers.
I can't use WebSockets to send the stream because the TCP protocol creates latency.
Last solution: use the RTMP protocol, but there is 5-10 s of latency.
My question: is there a solution to stream a camera to many clients (200/300) in real time?
Just using WebRTC would not work, as I assume the device with the camera would need huge bandwidth. The best way is to use an SFU. This sends the video to the server, which then broadcasts it to every peer. An SFU is normally able to handle 200 connections if only video is used.
I've implemented such a server using mediasoup. It also allows you to balance the load over several CPUs and multiple servers.
Here is a simple project where this library is used.
There are also other solutions like Janus Gateway or Kurento, although I haven't used them.
SECOND SOLUTION
I found this GitHub repository, which allows video forwarding peer to peer even for large audiences: it basically forwards the stream to other peers, which in turn forward their received stream. I assume there will be a little more latency, as the video could be relayed through many peers.
WebRTC DataChannels use SCTP. Looking at the graph of bits received from chrome://webrtc-internals, there is a regular sending of a small amount of data. Is this the SCTP heartbeat?
From what I understand, this is the ICE heartbeat.
I am just elaborating on Sam's answer.
WebRTC DataChannels use the Stream Control Transmission Protocol (SCTP) for sending and receiving arbitrary data. Since WebRTC requires that all WebRTC traffic be encrypted, DTLS is used. However, most routers and NAT devices don't handle this protocol well, so SCTP is tunneled over DTLS and UDP. So even when two peers are exchanging arbitrary data, it is happening over UDP. Hence, I too believe that it is not an SCTP heartbeat.
As you might know, RTCPeerConnection uses ICE for resolving connectivity issues between peers. ICE uses STUN keep-alives to check the connectivity status between the peers. Currently, I believe Chrome sends out a STUN Binding Request every 450 ms to perform connectivity checks, but there is an ongoing discussion about extending that interval.
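You can see this traffic reflected in the standard stats as well. A rough sketch using the spec's promise-based getStats(): the candidate-pair report counts STUN Binding requests and responses, so sampling it shows the ongoing checks/keep-alives.

    // Log STUN request/response counters on the nominated ICE candidate pair.
    async function logIceKeepalives(pc: RTCPeerConnection): Promise<void> {
      const stats = await pc.getStats();
      stats.forEach(report => {
        if (report.type === 'candidate-pair' && report.nominated) {
          console.log(
            `STUN requests sent: ${report.requestsSent}, ` +
            `responses received: ${report.responsesReceived}`,
          );
        }
      });
    }

    // Sample once a second to watch the counters grow.
    // setInterval(() => logIceKeepalives(pc), 1000);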
Can anyone tell me where to use the UDP protocol other than live streaming of music/video? What are typical use cases for UDP?
UDP is also good for broadcast, such as service discovery: finding that newly plugged-in printer.
Also of note is that broadcast is anonymous: you don't need to specify target hosts, so it can form the foundation of a convenient plug-and-play or high-availability network.
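A minimal sketch of that kind of broadcast discovery with Node's dgram module; the port and payload here are made up, since a real protocol (mDNS, SSDP, ...) defines its own:

    import dgram from 'node:dgram';

    const DISCOVERY_PORT = 50000;              // hypothetical discovery port
    const socket = dgram.createSocket('udp4');

    socket.on('listening', () => {
      socket.setBroadcast(true);
      // No target host needed: everyone on the subnet listening on this port hears it.
      socket.send(Buffer.from('WHO_HAS_PRINTER'), DISCOVERY_PORT, '255.255.255.255');
    });

    socket.on('message', (msg, rinfo) => {
      console.log(`reply from ${rinfo.address}:${rinfo.port}: ${msg.toString()}`);
    });

    socket.bind();  // bind to an ephemeral port, then announce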
UDP is stateless and is good for applications that have large numbers of clients connecting to a server, such as time servers or DNS. The fact that no connection has to be established and maintained reduces the memory required by the server. There is no handshaking involved, which reduces the traffic on the network. On the downside, if the information transferred requires multiple packets, there is no transmission control to ensure that all packets arrive, and in the correct order; but in games, lost packets are probably better than late or out-of-order ones.
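To illustrate the stateless part, a minimal sketch of a toy UDP "time server" with Node's dgram module (the port is arbitrary): each datagram is answered on its own, with no connection or per-client state.

    import dgram from 'node:dgram';

    const server = dgram.createSocket('udp4');

    server.on('message', (_msg, rinfo) => {
      // Answer whoever asked; nothing about this client is remembered afterwards.
      server.send(Buffer.from(new Date().toISOString()), rinfo.port, rinfo.address);
    });

    server.bind(37000);  // arbitrary example port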
Anything else where you need performance but can survive if a packet gets lost along the way. Multiplayer games come to mind, for example.
A very common use case is DNS, since the overhead of creating a TCP connection would by far outweigh the actual payload.
Additional use cases are NTP (the Network Time Protocol) and most video games.
I use UDP to add chat capabilities to our applications. No need to create a server. It is also useful to dispatch events to all users of our applications.
This morning, there were big problems at work because an SNMP trap didn't "go through" because SNMP is run over UDP. I remember from the networking class in college that UDP isn't guaranteed delivery like TCP/IP. And Wikipedia says that SNMP can be run over TCP/IP, but UDP is more common.
I get that some of the advantages of UDP over TCP/IP are speed, broadcasting, and multicasting. But it seems to me that guaranteed delivery is more important for network monitoring than broadcasting ability. Particularly when there are serious high-security needs. One of my coworkers told me that UDP packets are the first to be dropped when traffic gets heavy. That is yet another reason to prefer TCP/IP over UDP for network monitoring (IMO).
So why does SNMP use UDP? I can't figure it out and can't find a good reason on Google either.
UDP is actually expected to work better than TCP in lossy networks (or congested networks). TCP is far better at transferring large quantities of data, but when the network fails it's more likely that UDP will get through. (in fact, I recently did a study testing this and it found that SNMP over UDP succeeded far better than SNMP over TCP in lossy networks when the UDP timeout was set properly). Generally, TCP starts behaving poorly at about 5% packet loss and becomes completely useless at 33% (ish) and UDP will still succeed (eventually).
So the right thing to do, as always, is pick the right tool for the right job. If you're doing routine monitoring of lots of data, you might consider TCP. But be prepared to fall back to UDP for fixing problems. Most stacks these days can actually use both TCP and UDP.
As for sending TRAPs, yes, TRAPs are unreliable because they're not acknowledged. However, SNMP INFORMs are an acknowledged version of an SNMP TRAP. Thus, if you want to know that the notification receiver got the message, use INFORMs. Note that TCP does not solve this problem, as it only provides a transport-level notification that the message was received. There is no assurance that the notification receiver's application actually got it. SNMP INFORMs do application-level acknowledgement and are much more trustworthy than assuming a TCP ACK indicates they got it.
If systems sent SNMP traps via TCP, they could block waiting for the packets to be ACKed if there was a problem getting the traffic to the receiver. If a lot of traps were generated, it could use up the available sockets on the system and the system would lock up. With UDP that is not an issue, because it is stateless. A similar problem took out Bitbucket in January, although it was the syslog protocol rather than SNMP: they were inadvertently using syslog over TCP due to a configuration error, the syslog server went down, and all of the servers locked up waiting for the syslog server to ACK their packets. If SNMP traps were sent over TCP, a similar problem could occur.
http://blog.bitbucket.org/2012/01/12/follow-up-on-our-downtime-last-week/
Check out O'Reilly's writings on SNMP: https://library.oreilly.com/book/9780596008406/essential-snmp/18.xhtml
One advantage of using UDP for SNMP traps is that you can direct UDP to a broadcast address, and then field them with multiple management stations on that subnet.
The use of traps with SNMP is considered unreliable. You really should not be relying on traps.
SNMP was designed to be used as a request/response protocol. The protocol details are simple (hence the name, "simple network management protocol"). And UDP is a very simple transport. Try implementing TCP on your basic agent - it's considerably more complex than a simple agent coded using UDP.
SNMP get/getnext operations have a retry mechanism: if a response is not received within the timeout, the same request is resent, up to a maximum number of tries.
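That timeout-and-retry pattern is easy to sketch over a plain UDP socket (this is not a real SNMP encoder; the host, port and payload are placeholders supplied by the caller):

    import dgram from 'node:dgram';

    function udpRequestWithRetry(
      host: string,
      port: number,
      payload: Buffer,
      timeoutMs = 1000,
      maxTries = 3,
    ): Promise<Buffer> {
      return new Promise((resolve, reject) => {
        const socket = dgram.createSocket('udp4');
        let tries = 0;
        let timer: NodeJS.Timeout | undefined;

        const attempt = () => {
          if (tries++ >= maxTries) {
            socket.close();
            return reject(new Error('no response after retries'));
          }
          socket.send(payload, port, host);          // resend the same request
          timer = setTimeout(attempt, timeoutMs);    // retry if nothing arrives
        };

        socket.on('message', msg => {
          if (timer) clearTimeout(timer);
          socket.close();
          resolve(msg);
        });

        attempt();
      });
    }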
Usually, when you're doing SNMP, you're on a company network, you're not doing this over the long haul. UDP can be more efficient. Let's look at (a gross oversimplification of) the conversation via TCP, then via UDP...
TCP version:
client sends SYN to server
server sends SYN/ACK to client
client sends ACK to server - socket is now established
client sends DATA to server
server sends ACK to client
server sends RESPONSE to client
client sends ACK to server
client sends FIN to server
server sends FIN/ACK to client
client sends ACK to server - socket is torn down
UDP version:
client sends request to server
server sends response to client
Generally, the UDP version succeeds since it's on the same subnet, or not far away (i.e. on the company network).
However, if there is a problem with either the initial request or the response, it's up to the app to decide. A. Can we get by with a missed packet? If so, who cares, just move on. B. Do we need to make sure the message is sent? Simple, just redo the whole thing: client sends request to server, server sends response to client. The application can include a number with each request so that, if the recipient ends up receiving both copies, it knows it's really the same message being sent again.
This same technique is why DNS is done over UDP. It's much lighter weight and generally it works the first time because you are supposed to be near your DNS resolver.