How to set up a STUN server in a video chat app built with simple-peer?

I was recently working on a project which required video chatting. I used simple-peer to set up a video call and socket.io for signalling. I then deployed my application. I realised that when two peers on the same network join the call, the app works fine, but if two peers on different networks join, I get an error stating "process not defined" and the call does not connect.
I read about this online and figured out that I also have to configure a STUN and/or TURN server so that ICE candidates containing the peers' public IPs can be gathered.
Can anyone please tell me how to set up a STUN server in my simple-peer application? I have also read somewhere that Google provides some free STUN servers, but I don't know how to actually integrate them into my simple-peer application.

When you create the RTCPeerConnection in your application, provide a configuration that includes iceServers.
This is the reference.
Example:
myPeerConnection = new RTCPeerConnection({
  iceServers: [
    {
      urls: "stun:stunserver.example.org"
    }
  ]
});
You can find a list of free STUN servers here.
You may also want to configure TURN servers to cover more complex NAT scenarios.
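In simple-peer specifically, the library exposes a config option that it passes straight through to the underlying RTCPeerConnection, so you can hand it the iceServers there. A minimal sketch, assuming a recent simple-peer version; the Google URL is one of the free public STUN servers mentioned above:

// Sketch: simple-peer forwards `config` to the RTCPeerConnection it creates.
const Peer = require('simple-peer');

const peer = new Peer({
  initiator: true, // whichever side starts the call
  config: {
    iceServers: [
      { urls: 'stun:stun.l.google.com:19302' } // Google's free public STUN server
    ]
  }
});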

Related

WebRTC make a connection between two different devices

Well, I am studying WebRTC from its official documentation. I need to integrate live streaming into my website, but after reading all of the documentation I have only learned how to stream locally, i.e. on the same browser and the same page. That is not what I want. I want to start a stream from my admin panel (this part is complete) and broadcast it to whoever has access to my website, on whatever device. If any of you have worked on live streaming, any pointers would be a great help. All I have done so far is make a connection between two peers on the same page. Now I want to make a global peer-to-peer connection.
I have done this implementation using simple-peer, which is basically a wrapper for WebRTC.
As soon as a new user connects, a new WebRTC connection should be made between the receiver and the caller. The receiver is initialized first and then sends a message to the caller to start the connection. This initial exchange is all done through a server you write yourself, as sketched below.
Here is a working example, and here is the demo. Any connected devices will automatically join the call, and multiple users are supported. You'll find all the WebRTC code in /public/js/main.js.
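As a rough sketch of that receiver-first handshake with simple-peer and socket.io (the 'ready' and 'signal' event names, localStream, and showVideo are made up for illustration, not part of either library):

const io = require('socket.io-client');
const Peer = require('simple-peer');

const socket = io('https://your-signalling-server.example'); // hypothetical URL

socket.on('ready', () => {
  // The receiver announced itself, so this side initiates the call.
  const peer = new Peer({ initiator: true, stream: localStream });
  peer.on('signal', data => socket.emit('signal', data)); // offer + ICE out
  socket.on('signal', data => peer.signal(data));         // answer + ICE in
  peer.on('stream', remoteStream => showVideo(remoteStream));
});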
You have to do signalling, which means that you have to exchange the peer connection details (the offer and answer) over a server. This requires you to build a server page and a client page so that the two sides can exchange them.
Here is the complete procedure for exchanging the peer connection details over the server; find the heading "RTCPeerConnection plus servers":
https://www.html5rocks.com/en/tutorials/webrtc/basics/
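In outline the exchange looks roughly like this; each side creates its own RTCPeerConnection, and sendToPeer() is a hypothetical stand-in for whatever transport your server provides:

// Caller side: create the offer and send it over the server.
const pc = new RTCPeerConnection();

async function caller() {
  await pc.setLocalDescription(await pc.createOffer());
  sendToPeer({ type: 'offer', sdp: pc.localDescription });
}

// Callee side: apply the offer, create an answer, and send it back.
async function callee(offer) {
  await pc.setRemoteDescription(offer);
  await pc.setLocalDescription(await pc.createAnswer());
  sendToPeer({ type: 'answer', sdp: pc.localDescription });
}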

How can I implement my own WebRTC server in my project?

I want to implement a WebRTC server in my project. I want to build my own WebRTC server and deploy it on an Amazon server. How can I achieve this?
WebRTC is a peer-to-peer protocol, so you don't need a server for the media itself.
You will need a signaling server for session negotiation. How you implement this depends on the technology you'll use on the client side (polling, AJAX, WebSockets, STOMP, etc.) and on the server side.
For STUN/TURN you can deploy an existing server or follow the RFCs and develop your own from scratch.
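For a sense of scale, the signalling part can be as small as a broadcast relay. A minimal sketch using the Node.js ws package (the port and the relay-everything policy are arbitrary choices for illustration):

const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 1337 });

wss.on('connection', socket => {
  socket.on('message', message => {
    // Relay every signalling message to all other connected clients.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(message.toString());
      }
    }
  });
});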
@Adrian Ber is correct: you need a signalling server such as this one:
https://github.com/peers/peerjs-server
You can set one of these up on AWS.
You'll also need some code on the client side. There is a matching JavaScript client library (which does most of the work) here: http://peerjs.com/
There are some examples on the PeerJS website; they either need to be run on your local machine or on HTTPS servers (browsers no longer allow camera access over plain HTTP).
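A sketch of the matching client side with PeerJS; the peer ids, host, port, and path are placeholders that must match your own peerjs-server deployment:

// Connect to your peerjs-server instance and call another peer by id.
const peer = new Peer('alice', {
  host: 'your-server.example.com', // hypothetical AWS host
  port: 9000,
  path: '/myapp'
});

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(stream => {
    const call = peer.call('bob', stream); // callee's peer id
    call.on('stream', remoteStream => {
      document.querySelector('video').srcObject = remoteStream;
    });
  });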
Ignore the people saying that WebRTC is peer-to-peer only. There is no reason why you can't implement an application, run it on a server, and treat it as a 'peer' for the purposes of WebRTC when it is actually a server.
That said, we've looked into pulling the WebRTC implementation out of Chrome, and it is a huge task. Depending on what you want to do, you will likely only need to support a subset of WebRTC functionality (the data channel in unreliable mode, for example, if you're building a multiplayer web game).
There might be a few implementations out there that have cropped up now, but last I checked there wasn't anything of note.

Multipoint peer with WebRTC

I'm developing a web application that wants to share the posts a user publishes in a topic via WebRTC.
The problem is that WebRTC allows only one-to-one peer connections.
What can I do to solve this problem while still using WebRTC?
I need my information broadcast to the other peers connected in the same room.
For this you need a signalling server like EasyRTC, which packages some really awesome functionality. One of those features is multi-party video calls.
If you want to read more about it and see some cool demos, check https://easyrtc.com/ and https://demo.easyrtc.com/demos/
After you establish the connections via the signalling server, you can open a DataChannel to each peer and make the transfers you described.
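One common way to do this is a full mesh: keep one RTCPeerConnection (and one data channel) per remote peer, and "broadcast" by writing to every open channel. A minimal sketch, with the offer/answer and ICE exchange left to your own signalling server:

const channels = new Map(); // peerId -> RTCDataChannel

function connectTo(peerId) {
  const pc = new RTCPeerConnection();
  const channel = pc.createDataChannel('posts');
  channel.onopen = () => channels.set(peerId, channel);
  // ...exchange offer/answer and ICE candidates via your signalling server...
  return pc;
}

function broadcast(post) {
  // Send the post to every peer whose channel is ready.
  for (const channel of channels.values()) {
    if (channel.readyState === 'open') channel.send(JSON.stringify(post));
  }
}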

Can I simplify WebRTC signalling for computers on the same private network?

WebRTC signalling is driving me crazy. My use-case is quite simple: a bidirectional audio intercom between a kiosk and a control room webapp. Both computers are on the same network. Neither has internet access; all machines have known static IPs.
Everything I read wants me to use STUN/TURN/ICE servers. The acronyms are endless, which contributes to my migraine, but if this were a standard application I'd just open a port, tell the other client about it (I can do this via the webapp if I need to), and have the other side connect.
Can I do this with WebRTC? Without running a dozen signalling servers?
For the sake of examples, how would you connect a browser running on 192.168.0.101 to one running on 192.168.0.102?
STUN/TURN is different from signaling.
STUN/TURN in WebRTC is used to gather ICE candidates. Signalling is used to transmit the session description (the offer and answer) between the two PCs.
You can use a free STUN server (like stun.l.google.com or stun.services.mozilla.org). There are also free TURN servers, but not many (they are resource-expensive); one is numb.viagenie.ca.
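For example, a configuration listing both a free STUN server and a TURN fallback might look like this; the TURN credentials are placeholders you would get by registering with the service:

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:numb.viagenie.ca',
      username: 'your-email@example.com', // placeholder credentials
      credential: 'your-password'
    }
  ]
});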
Now, there's no standard signalling server, because these are custom and can be implemented in many ways. Here's an article that I wrote. I ended up using STOMP on the client side and Spring on the server side.
I guess you could tamper with the SDP and inject the ICE candidates statically, but you'd still need to exchange the SDP (which is dynamically generated each session) between these two PCs somehow. Even so, given that the configuration will not change, I guess you could exchange it once (by means of copy-paste :) ), store it somewhere, and use it every time.
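A sketch of that one-shot, copy-paste approach: wait until ICE gathering finishes so the SDP already contains every candidate, then move the resulting blob to the other machine by hand and feed it to setRemoteDescription() there:

const pc = new RTCPeerConnection(); // no STUN/TURN needed with static LAN IPs

pc.onicegatheringstatechange = () => {
  if (pc.iceGatheringState === 'complete') {
    // Copy this string to the other browser by hand (or store it somewhere).
    console.log(JSON.stringify(pc.localDescription));
  }
};

const channel = pc.createDataChannel('intercom'); // gives the SDP something to negotiate
pc.createOffer().then(offer => pc.setLocalDescription(offer));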
If your end-points have static IPs then you can ignore STUN, TURN and ICE, which are just power-tools to drill holes in firewalls. Most people aren't that lucky.
Due to how WebRTC is structured, end-points do need a way to exchange call setup information (SDP) like media ports and key information ahead of time. How you get that information from A to B and back to A is entirely up to you ("signaling server" is just a fancy word for this), but most people use something like a web socket server, the tic-tac-toe of client-initiated communication.
I think the simplest way to make this work on a private network without an internet connection is to install a basic web socket server on one of the machines.
As an example I recommend the very simple https://github.com/emannion/webrtc-web-socket which worked on my private network without an internet connection.
Follow the instructions to install the web socket server on e.g. 192.168.0.101, then have both end-points connect to 192.168.0.101:1337 with Chrome or Firefox. Share the camera on both ends in the basic demo web UI, hit Connect, and you should be good to go.
If you need to do this entirely without any server, then this answer to a related question at least highlights the information you'd need to send across (in a cut'n'paste demo).

WebRTC HowTo PeerConnection via LAN with 2 Browsers

For a few days I have been trying to build a basic WebRTC video chat. I've got some demos running locally, even via LAN. But now I want to build one of my own, at the real basics, without all the overhead that some demos come with.
But I still don't get a complete peer connection.
E.g. this example seems to be broken, because I can't call createSignalingChannel(); w3.org/TR/webrtc/#simple-example
Some other examples (https://webrtc-experiment.appspot.com/) want me to link their scripts, but I won't do this, because I want to understand the magic of the peer connection and how to get a handshake between 2 browsers.
I also explored examples with the Google App Engine, but that's not what I want.
I want to run it in really simple JS and HTML, with just the minimum of what is necessary.
Here is my code:
https://github.com/mexx91/basicVideoRTC EDIT: Should work now
So what will I have to add to get a handshake and peer connection, so that we can send e.g. a mediaStream to each other?
Thanks a lot!
createSignalingChannel() is only pseudo-code to illustrate the existence of a separate channel. For the initial connection handling, you need a separate message channel.
You can achieve that with hosted services like Pusher, Brightcontext or PubNub, or you can host your own backend with open-source projects like socket.io or SignalR.
Then you just need to send the offers, answers and iceCandidates through your separate channel.
List of Realtime Services: http://www.leggetter.co.uk/real-time-web-technologies-guide
Imagine a video conferencing web app, which users A and B originally access from some web server. Suppose that web app supports presence, so the web server knows who's currently online. Imagine the UI allows A to try to place a video call to B. Via, say, XMLHttpRequest(), A's browser informs the server that this is wanted, and B's JavaScript pops up something saying that A wants to call B. No WebRTC has happened at all yet. But at this stage, A can indirectly communicate with B by sending messages through the server, e.g. using XMLHttpRequest. In WebRTC parlance, this is the "signalling channel".

So A and B can both interact with their ICE agents to discover candidate addresses and SDP descriptions, and send these to each other, via the server, over this signalling channel. E.g. the web app on A calls a WebRTC API to get its ICE candidates and packages these up as it sees fit to send to B. B's browser receives this message from the server (e.g. over a WebSocket or long poll) and hence can unpack it and format it as needed to hand to the ICE agent on B, via the RTCPeerConnection object. Similarly, the SDP offer/answer can be sent between the two apps and passed through into the ICE agents in the browsers, to agree on media formats etc.

At that stage, media connections can get set up by the browsers: media streams are added to the RTCPeerConnection initially (they aren't communicating yet, but they have attributes that can be queried to describe the codec etc.), and when the API is asked to create an SDP description, it does so using these attributes, adjusting the IP address and port based on how the ICE agent on each local browser has figured out which addresses can reach that browser/port (NAT traversal).
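Condensed into code, A's side of that flow might look like the sketch below, where sendToB() is a hypothetical stand-in for the XMLHttpRequest/WebSocket messaging described above; B does the mirror image with createAnswer():

const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// Ship each ICE candidate to B as the local ICE agent discovers it.
pc.onicecandidate = event => {
  if (event.candidate) sendToB({ candidate: event.candidate });
};

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(async stream => {
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
    const offer = await pc.createOffer(); // SDP built from the added tracks
    await pc.setLocalDescription(offer);
    sendToB({ sdp: pc.localDescription });
  });

// Candidates arriving from B are applied with:
//   pc.addIceCandidate(message.candidate);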