I've created a WebRTC p2p app using Socket.IO for signalling.
This works perfectly with one connection pair. When there is more than one connection pair, the video works well but the audio isn't heard on the other side.
I've deployed a TURN server, and it works as intended.
I'm not able to isolate where the issue lies.
I've been searching on Google with no luck. It's been 3 days now.
It would be great if anyone could point me in the right direction.
Related
I am trying out the WebRTC examples by Muaz Khan. It works perfectly fine when the broadcaster is on AWS/Azure (or any other network) and the receiver is on my phone's 4G network. But as soon as I switch from 4G to my broadband, the video stream is unable to connect and the video player keeps on trying. I therefore assumed that the problem is the router's NATting and would be resolved if I used a tried-and-tested TURN service such as Xirsys.
Sadly, even after using their TURN servers, I am still blocked on my broadband connection as mentioned above. Here are a few queries that I wanted to discuss:
This issue seems to be due to NATting through my broadband router. Shouldn't using the TURN server solve it?
How can I verify that the TURN servers are actually being used and not just the STUN servers?
Can this issue be due to the signalling server?
Do I need to enable some specific protocols in my router to make this work?
What else could be responsible for the issue that I should debug?
Suppose I have 2 peers exchanging video with WebRTC. Now I need both of the streams to be saved as video files on a central server. Is it possible to do this in real time? (Storing/uploading the video from the peers is not an option.)
I thought of making a 3-node WebRTC connection, with the 3rd node being the server. This way, I can screen-record the 3rd node's stream or save it some other way. But I am not sure about the reliability/feasibility of that implementation.
This is for a mobile application, and I want to avoid any method that involves uploading/saving from the devices.
PS: I'm using Agora.io for the video conferencing.
In my opinion you can do it like the record demo: https://webrtc.github.io/samples/src/content/getusermedia/record/.
Record each stream to blobs and push them to your server over a WebSocket.
Then convert the blobs to a WebM file, or just add them to a video element.
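A minimal sketch of that approach, assuming a WebSocket endpoint of your own (the URL below is a placeholder) and the standard MediaRecorder API:

    // Record a MediaStream (local or remote) in chunks and push each chunk
    // to your server over a WebSocket; the server appends them to one file.
    const ws = new WebSocket('wss://example.com/upload'); // placeholder endpoint

    function recordAndUpload(stream) {
      const recorder = new MediaRecorder(stream, {
        mimeType: 'video/webm;codecs=vp8,opus'
      });

      // Fires roughly once per second (see start(1000)) with an encoded Blob.
      recorder.ondataavailable = (event) => {
        if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
          ws.send(event.data); // send the chunk as-is
        }
      };

      recorder.start(1000); // timeslice in milliseconds
      return recorder;      // call recorder.stop() when the call ends
    }

On the server, appending the received chunks in order gives you a WebM file; if it doesn't seek properly, you can remux it with ffmpeg afterwards.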
Agora doesn't offer on-premise recording out of the box, but they do provide the code for you to launch your own on-premise recording using your own server. Agora has the code and instructions to deploy on GitHub: https://github.com/AgoraIO/Basic-Recording
The way it works: once you have set up the Agora Recording SDK, the client triggers the recording to start, either via user interaction (a button tap) or some other event (e.g. peer-joined or stream-subscribed). This causes the recording service to join the channel and record the streams. The service outputs the video file once recording has stopped.
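As a rough illustration of that flow: the snippet below assumes an Agora Web SDK client object and a hypothetical /recording/start endpoint on your own server that wraps the Recording SDK; the endpoint is not part of Agora's out-of-the-box API.

    // When the first remote stream is subscribed, ask our backend (which
    // wraps the Agora on-premise Recording SDK) to join the channel and
    // start recording. 'client' and 'channelName' come from your app code.
    client.on('stream-subscribed', () => {
      fetch('https://api.example.com/recording/start', { // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ channel: channelName })
      });
    });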
You need a WebRTC media server.
WebRTC media servers make it possible to support more complex scenarios. WebRTC media servers are servers that act as WebRTC clients but run on the server side. They are termination points for the media where we'd like to take action. Popular tasks done on WebRTC media servers include:
Group calling
Recording
Broadcast and live streaming
Gateway to other networks/protocols
Server-side machine learning
Cloud rendering (gaming or 3D)
The adventurous and strong-hearted will go and develop their own WebRTC media server. Most would pick a commercial service or an open source one. For the latter, check out these tips for choosing a WebRTC open source media server framework.
In many cases, the thing developers are looking for is support for group calling, something that almost always requires a media server. In that case, you need to decide if you'd go with the classic (and now somewhat old) MCU mixing model or with the more accepted and modern SFU routing model. You will also need to think a lot about the sizing of your WebRTC media server.
For recording WebRTC sessions, you can either do that on the client side or the server side. In both cases you'll be needing a server, but what that server is and how it works will be very different in each case.
If it is broadcasting you're after, then you need to think about the broadcast size of your WebRTC session.
Link: https://bloggeek.me/webrtc-server/
Is there a simple guide from which I can start creating a STUN/TURN and signalling server?
I've spent over a week searching for those things and couldn't find any guide where I can say:
okay, I am on the right track now - this is clear.
So far, everything is so abstract, without any examples.
This is what I'm trying to achieve: a simple video stream on my local network. I'll have a server with a USB camera installed on it, and an application on my IIS which will connect to the USB camera and stream it to the clients, so that every time a client opens the application, they will see the video stream from the server's camera.
Note: since I want to use it on my local network, do I really need a STUN/TURN server, or is there a guide that shows how to avoid it?
Media streamed over dedicated HTTP/HTTPS servers rarely needs a NAT traversal solution. Instead, just have your web server, with the camera attached, on the public Internet or behind your NAT with port-forwarding enabled.
There are LOTS of streaming media solutions available as open source, free downloads, or commercially sold. A good list is here:
https://en.wikipedia.org/wiki/List_of_streaming_media_systems
I was going through this PubNub WebRTC demo. https://kevingleason.me/SimpleRTC/minivid.html
It works fine within the same network (same browser, or different devices on the same network). But when I tried using it over the internet, I was able to connect a call but could not see anything but a black screen. This is the source for the same tutorial:
https://github.com/pubnub/SimpleRTC
I have gone through many such applications, such as AndroidRTC,
and I face the same problem (a black screen after connecting over the internet). I am unable to figure out why; any help is appreciated.
You need some sort of signalling mechanism (PubNub, Firebase, or your own software [Node.js seems the preferred choice these days]) to get the WebRTC API communicating P2P on your local network. To get WebRTC to work from one network to another you need a STUN server/service. Google provides free STUN servers (stun:stun.l.google.com:19302). To get WebRTC to traverse strict firewall settings and complicated networks you need a TURN server/service like xirsys.com.
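In code this boils down to the iceServers list you hand to the RTCPeerConnection; the TURN URL and credentials below are placeholders you would get from your provider (e.g. Xirsys):

    // STUN lets peers discover their public address; TURN relays the media
    // when no direct path can be established. Credentials are placeholders.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.l.google.com:19302' },
        {
          urls: 'turn:turn.example.com:3478', // from your TURN provider
          username: 'user',
          credential: 'secret'
        }
      ]
      // Adding iceTransportPolicy: 'relay' here forces all media through
      // TURN, which is a quick way to verify the TURN server actually works.
    });

Checking chrome://webrtc-internals (or pc.getStats()) for a selected candidate pair whose local candidate has type 'relay' is another way to confirm that TURN, and not just STUN, is being used.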
This article covers it all ...
http://www.html5rocks.com/en/tutorials/webrtc/infrastructure/
I've got WebRTC peer-to-peer working, but when I want to broadcast a single camera to multiple clients, peer-to-peer obviously isn't suitable.
I've found solutions like
http://lynckia.com
and
http://www.medooze.com/products/mcu/webrtc-support.aspx
But I can't get the first one set up (and it seems to have cross-browser issues),
and the second just feels like hitting a nail with a nuclear missile.
All I need is a relay; I don't need to decode/re-encode streams.
I just need:
The Broadcaster to connect to the server (peer to peer)
The clients to connect to the server (peer to peer)
The server to relay the stream from the broadcaster to the clients.
Is there any software out there that offers this that I've missed? Is there a working and scalable alternative?
Thanks
Jitsi Videobridge works pretty much exactly how you describe.
On your server you can run Janus, to which your broadcaster can provide a stream via RTP.
Have a look at an example configuration file.
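For instance, a streaming-plugin mountpoint that receives the broadcaster's RTP looks roughly like this; the key names are taken from the sample janus.plugin.streaming config and may differ between Janus versions, so treat it as a sketch:

    # Hypothetical mountpoint in janus.plugin.streaming.jcfg: Janus listens
    # for Opus/VP8 RTP from the broadcaster and fans it out to WebRTC viewers.
    rtp-broadcast: {
        type = "rtp"
        id = 1
        description = "Broadcaster feed (Opus audio / VP8 video)"
        audio = true
        audioport = 5002
        audiopt = 111
        audiortpmap = "opus/48000/2"
        video = true
        videoport = 5004
        videopt = 100
        videortpmap = "VP8/90000"
    }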
After writing a configuration file that defines how the server receives the stream from the broadcaster, you should be able to launch Janus in the background via its command-line interface:
$ janus --daemon --config=config_file.conf
Also, see the streaming test demo.
Note: I have not tested this thoroughly.
Have a look at this GitHub repo inspired by Muaz Khan's WebRTC p2p scalable broadcast. This can work great on a LAN. Over the internet, I am not sure how well it works as of now, though we are improving it as we go.
If you just want to broadcast from one peer to a set of peers, and they don't care about latency, the best solution is to convert WebRTC to live streaming, without transcoding, just remuxing:
Peer(Publisher) ---WebRTC--> Server --RTMP/HLS/DASH--> Peers/Players
If this works well for you, SRS is able to convert WebRTC to live streaming.
Live streaming allows you to use a CDN or TCP to deliver the streams, and the latency is about 3~5s, so this solution is only viable when the Peers/Players never need to communicate back to the Peer (Publisher).
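If you go that route, the SRS side is mostly configuration; a rough sketch (assuming SRS 4.0+, with key names quoted from memory, so check the sample configs shipped with SRS):

    # Hypothetical srs.conf fragment: ingest WebRTC from the publisher and
    # remux it to RTMP/HLS for the viewers, without transcoding.
    listen              1935;
    rtc_server {
        enabled         on;
        listen          8000;        # WebRTC over UDP
        candidate       $CANDIDATE;  # your server's public IP
    }
    vhost __defaultVhost__ {
        rtc {
            enabled     on;
            rtc_to_rtmp on;          # convert the WebRTC stream to RTMP
        }
        hls {
            enabled     on;          # serve the same stream as HLS
        }
    }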
If you want all those peers to talk to each other, it's very complex and needs a WebRTC SFU cluster, because there will be a huge number of streams. For example, if you allow 100 peers to talk to each other, there will be about 100x100 = 10k streams.
It's too complicated, so I don't think there is a good open-source solution right now (as of 2022.02).