Is it possible to implement one-to-many video broadcast with PeerJS, or is there another, better JavaScript library I can use if PeerJS can't do it?
PeerJS is just a wrapper around the browser's native RTCPeerConnection, and there are others like simple-peer, for example.
You can implement one-to-many broadcasting with any of them.
Note: you can't expect to broadcast audio+video to a large number of people at the same time this way (say 50), because the broadcaster has to upload a separate copy of the stream to every viewer.
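For illustration, here is a minimal sketch of the broadcaster/viewer split with PeerJS. The WebSocket used to announce viewer IDs is a made-up placeholder; any signaling channel of your own will do.

```javascript
// Broadcaster: call every viewer that announces itself, reusing one local stream.
// Each call is its own RTCPeerConnection, so upload bandwidth grows per viewer.
const peer = new Peer('broadcaster');

navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
  const announcements = new WebSocket('wss://example.com/viewers'); // placeholder channel
  announcements.onmessage = (event) => {
    const viewerId = event.data;   // each message carries one viewer's peer ID
    peer.call(viewerId, stream);   // one outgoing connection per viewer
  };
});

// Viewer: answer the incoming call without sending anything back.
const viewer = new Peer();
viewer.on('call', (call) => {
  call.answer(); // receive-only
  call.on('stream', (remoteStream) => {
    document.querySelector('video').srcObject = remoteStream;
  });
});
```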
I have an assignment to display, in a HoloLens 2 (Unity project), two video feeds (stereo camera) coming from a LattéPanda. So far I have successfully managed to run the demo from the Mixed-Reality WebRTC project locally, but I am having some difficulty with the remote streaming.
The problem is how to make my application, based on the Mixed-Reality WebRTC C# library on .NET Core 3.1, connect to my NodeDSS signaler, since the demo uses a NamedPipeSignaler class that can't reach out to localhost. So I looked through the classes they provided in the hope of finding the methods I would need to implement, along with how they need to interact with the PeerConnection object. It started to get a little complicated, so we looked for other solutions.
One of the solutions we found was OWT-Server (Open WebRTC Toolkit), which seems to provide an already dockerized application for video casting on its own. However, the documentation doesn't specify much beyond the fact that we need to link the Docker image to an "application", and it's not clear what that application is supposed to do. We don't have any way to specify the STUN/TURN server, nor the signaler's IP address for that matter.
So my goal at this point is very simple: just make one feed appear in my Unity project. The LattéPanda's only objective right now is to cast the video without caring much about any interaction (for now): it won't receive, or even need to listen for, any feed coming back, and for now there is no need to interact with other tools. I've been searching for about two weeks now and my Google-fu is apparently not that good. Is there any tool that could achieve this?
A little disclaimer: I do believe I still lack an understanding of the signaling process. It seems that WebRTC does not enforce any standard in that regard. What I understand is that the communication protocol (WebSocket, HTTP/2, ...) is not standardized, only the messaging is (which messages need to be sent and handled).
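(For illustration, this is roughly all a signaler has to carry; the WebSocket endpoint and JSON message shape below are made up, and NodeDSS or anything else could play the same role.)

```javascript
// Hypothetical browser-side answering peer: the transport (a plain WebSocket here)
// is arbitrary; only the offer/answer SDPs and ICE candidates have to get across.
const signaler = new WebSocket('ws://localhost:3000'); // made-up endpoint
const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

pc.onicecandidate = ({ candidate }) => {
  if (candidate) signaler.send(JSON.stringify({ type: 'ice', candidate }));
};

signaler.onmessage = async ({ data }) => {
  const msg = JSON.parse(data); // ad-hoc message format
  if (msg.type === 'offer') {
    await pc.setRemoteDescription(msg.sdp);           // remote peer's offer
    await pc.setLocalDescription(await pc.createAnswer());
    signaler.send(JSON.stringify({ type: 'answer', sdp: pc.localDescription }));
  } else if (msg.type === 'ice') {
    await pc.addIceCandidate(msg.candidate);
  }
};
```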
EDIT
To be clear, the LattéPanda currently runs a console application written in C# on .NET Core 3.1. The reason is, as I said, that the LattéPanda should not display any of its feed on a monitor connected to it, nor receive/handle any feed from outside. Think of it like a surveillance camera that outputs its feed through WebRTC and never needs to receive one.
Does the Google WebRTC native implementation have support for an SFU?
Does the Google WebRTC native implementation support integrating custom/hardware encoders/decoders?
Not without alteration.
Internally, WebRTC's audio/video pipelines are directly tied to the encoders/decoders.
PeerConnectionFactory allows you to provide a video encoder/decoder factory, so you can short-circuit the logic there, grab the encoded frames, mock up a stream, and feed them directly into a new PeerConnection as a relay, setting those streams onto it.
The audio end is more difficult. There isn't a codec factory, so you will probably have to short-circuit the logic there by altering libwebrtc.
The final question is RTCP termination, and how to override the quality/bandwidth control mechanisms so as not to create a "one goes out, they all go out" situation.
Since libwebrtc will be the SFU, it will receive RTCP feedback from its remote peer for the content it is proxying, and vice versa.
For a 1-1 situation, it needs to be able to forward the RTCP feedback to the remote peer.
For multipoint, it needs to perform some logic to determine whether one of the peers is problematic, and then stop sending it video, switch off its video feed, or attempt to switch to a lower-bitrate video stream. Basically it needs to act as a conduit that attempts to predict why/how packet loss is occurring, and keep as many audio/video feeds as possible operating normally at the highest possible quality for each peer.
Exactly how to hijack the RTCP feedback mechanisms in libwebrtc will, I think, again require some customization of, or hooks into, libwebrtc.
I think it will be easier to try the GStreamer implementation of WebRTC. Although it is still in the "Bad Plugins" set, it is far easier to get or provide encoded audio and video. It was actually implemented with that in mind: to make implementing MCUs and SFUs easier.
I am new to WebRTC technology.
I want to create a video chat / video conferencing application with one transmitter and many followers (more than 1000).
I have read a lot of documentation:
https://medium.com/linagora-engineering/scalability-in-video-conferencing-part-1-276f52b4acac
https://webrtcglossary.com/sfu/
But I still don't know which is the better solution in my case: a Selective Forwarding Unit (SFU) or a Multipoint Control Unit (MCU).
Can you help me understand?
I think the best approach is an MCU, but I am not sure.
Second question:
Can you suggest some sources and links that can help me set up such an architecture? Currently my project works perfectly peer-to-peer (mesh), but that is not the right solution. I have absolutely no idea how to set this up.
Thank you so much
It is possible to implement this using an SFU. The more peers are connected, the more processing power you need to handle them. This can be done by using more threads and/or by forwarding streams to other machines.
With mediasoup you have control over this. With this tool you have routers that peers connect to in order to get the stream. A router runs on a worker, which can only handle a limited number of receiving peers (depending on CPU capacity). To allow more peers, you can pipe the stream to other routers, which expands the total capacity.
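As a rough sketch (mediasoup v3; the mediaCodecs, transports and the producer itself are assumed to be set up elsewhere), fanning a broadcaster's producer out across several routers looks something like this:

```javascript
// Sketch only: one worker (and one router) per CPU core, then pipe the
// broadcaster's producer from the first router to the others so each router
// can serve its own pool of consumers.
const mediasoup = require('mediasoup');

async function createRouters(numWorkers, mediaCodecs) {
  const routers = [];
  for (let i = 0; i < numWorkers; i++) {
    const worker = await mediasoup.createWorker();
    routers.push(await worker.createRouter({ mediaCodecs }));
  }
  return routers;
}

// `producer` is assumed to have been created on routers[0] by the broadcaster.
async function fanOut(routers, producer) {
  for (const router of routers.slice(1)) {
    await routers[0].pipeToRouter({ producerId: producer.id, router });
  }
}
```

Consumers are then spread across the routers (round-robin, or by picking the least loaded worker), which is essentially what the scalability page linked below describes.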
useful links:
https://mediasoup.org/documentation/v3/scalability/#one-to-many-broadcasting
https://mediasoup.org/documentation/v3/mediasoup/design/#architecture
https://mediasoup.discourse.group/t/scalability-in-mediasoup-example/793/2?u=dirvann
I am planning to use Puppeteer for a WebRTC call. I hope it should be easy, but I am not sure how to collect statistics such as whether the WebRTC call passed or failed, how many media packets (UDP packets) were exchanged, STUN/TURN pass/fail, and media parameters like jitter, delay, etc.
Can somebody please help me understand how to collect WebRTC-related statistics using Puppeteer?
There is a well-known WebRTC test engine based on Selenium and Selenium Grid called KITE. For reference and a quick start, you can check the simple KITE-AppRTC-Test implementation to see how they collect the stats and display them. You might want to run the demos as well, because they seem to produce the results you are looking for.
Among many other approaches, one might be:
Collect WebRTC connection metrics by calling the getStats API. What you see in chrome://webrtc-internals is a visual representation of this getStats API: it collects getStats snapshots at regular intervals and shows them after some post-processing.
Collect the getStats data from Puppeteer via page.evaluate, send it to a server, and then analyse the data in real time or at the end of the call, depending on your use case.
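A rough sketch of that approach (the page URL is a placeholder, and the RTCPeerConnection hook is just one common way to reach the page's connections; adapt it to your app):

```javascript
// Sketch: hook RTCPeerConnection before the page loads so the connections are
// reachable later, then poll getStats() from Node via page.evaluate.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    args: ['--use-fake-ui-for-media-stream', '--use-fake-device-for-media-stream'],
  });
  const page = await browser.newPage();

  // Keep a reference to every peer connection the page creates.
  await page.evaluateOnNewDocument(() => {
    window.__pcs = [];
    const NativePC = window.RTCPeerConnection;
    window.RTCPeerConnection = function (...args) {
      const pc = new NativePC(...args);
      window.__pcs.push(pc);
      return pc;
    };
    window.RTCPeerConnection.prototype = NativePC.prototype;
  });

  await page.goto('https://example.com/my-webrtc-call'); // placeholder URL

  // Every 5 seconds, pull a getStats() snapshot out of the page.
  setInterval(async () => {
    const reports = await page.evaluate(async () => {
      const all = [];
      for (const pc of window.__pcs) {
        const stats = await pc.getStats();
        stats.forEach((report) => all.push(report));
      }
      return all;
    });
    // Send `reports` to your own backend, or inspect jitter/packet counts here,
    // e.g. in the 'inbound-rtp' and 'candidate-pair' report types.
    console.log(reports.filter((r) => r.type === 'inbound-rtp'));
  }, 5000);
})();
```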
There is quite a good amount of open-source work done by WebRTC experts on how you can collect WebRTC data, send it to a server, and represent it:
https://github.com/fippo/webrtc-externals
https://github.com/fippo/webrtc-dump-importer
https://github.com/fippo/dump-webrtc-event-log
I have a client/server audio synthesizer where the server (Java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively, so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything I can find on WebRTC and the evolving Web Audio API, and it seems like all of the pieces I need are there, but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.), and currently it seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input, then I could create a MediaStreamAudioSourceNode and send its output to context.destination. This is not very different from what AudioContext.decodeAudioData already does, except that that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage that I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this question? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming audio data, and on the server side I'm using Node.js with WebSockets as the middleware to send the browser streaming PCM buffers.
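For what it's worth, a minimal sketch of the browser end of that WebSocket + raw PCM idea (the endpoint is a placeholder, and it assumes the server sends mono Float32 samples at the AudioContext's sample rate):

```javascript
// Schedule each incoming PCM chunk back to back on the Web Audio clock,
// so playback stays gap-free with only a small fixed latency.
const ctx = new AudioContext();          // may need a user gesture / ctx.resume()
const ws = new WebSocket('ws://localhost:8080/audio'); // placeholder endpoint
ws.binaryType = 'arraybuffer';

let playhead = 0; // AudioContext time at which the next chunk should start

ws.onmessage = (event) => {
  const samples = new Float32Array(event.data);
  const buffer = ctx.createBuffer(1, samples.length, ctx.sampleRate);
  buffer.copyToChannel(samples, 0);

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);

  // Keep a ~50 ms safety margin, then append chunks seamlessly.
  playhead = Math.max(playhead, ctx.currentTime + 0.05);
  source.start(playhead);
  playhead += buffer.duration;
};
```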