Why is a signaling server needed for WebRTC? - webrtc

WebRTC is a protocol that defines how media data is transported between peers. Understood. It also runs on top of RTP/UDP. This is also understood.
Whenever the discussion turns to the signalling server, it is said to be required for compatibility checks, channel initiation and so on.
My question is: given the above,
1) Does it mean that a signaling server is mandatory?
2) Does WebRTC not have the intelligence to talk directly to the other peer without a signaling server?
3) Every article on WebRTC starts with the statement that it is "browser-to-browser communication". Does that mean WebRTC cannot be used between a) an embedded device with a camera [without a browser] and b) a browser somewhere else?
4) Also, what is the gain of using WebRTC compared to the legacy way of streaming into the browser? [I honestly don't know the legacy way.]
I know this is a theoretical question, but I see questions like this floating around the internet in different contexts. I hope it gets some architecture-level answers. Thanks.

WebRTC doesn't solve discovery (nor should it).
WebRTC knows how to talk directly to another peer without a signaling server, but it doesn't know how to discover another peer. Discovery is an inherent problem, so I'm a bit baffled that people expect WebRTC to solve it for them.
Think about it: How are you going to call me? How are you going to direct your computer to initiate contact with me and not a billion other people? By GPS coordinates? email address? static IP? irc? instant message? facebook? telephone number?
Also, how will I know when you call? Will my computer "ring"? There are hundreds of ways to solve this with regular web technology, so WebRTC would be doing you a disservice if it dictated a specific way. The context of your application will likely inform the best means of contact. Maybe I encounter you in some online forum or virtual room in an online game?
Technically speaking, you don't strictly need a signaling server with WebRTC, as long as you have other means to get an SDP offer (a piece of text) to your peer, and receive the reciprocal SDP answer in return, be it by phone text, IM, irc, email, or carrier pigeon. Try this in Chrome or Firefox: https://jsfiddle.net/nnc13tw2 - click "Offer" (wait up to 20 seconds), send the output to your friend who pastes it into the same field on their end and hits Enter, and have them send back the answer, which you paste in the answer field and hit Enter. You should now be connected, and no connecting server was ever involved.
Why the jsfiddle works: It packages all ICE candidates in the SDP, which can take a few seconds, to give you everything you need in one go.
Some advanced features, like altering the number of video sources mid-call etc. also require signaling, but once a call has been established, an app could use its own data channels for any further signaling needs between the peers.
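As a rough sketch of that last idea (not part of the fiddle; it assumes the pc and the open data channel dc from the snippet further down, and invents a tiny JSON message format for illustration), subsequent offer/answer exchanges could ride on the data channel itself:
pc.onnegotiationneeded = async () => {
  // e.g. fired after pc.addTrack() adds another video source mid-call
  await pc.setLocalDescription(await pc.createOffer());
  dc.send(JSON.stringify({ sdp: pc.localDescription }));
};

dc.onmessage = async e => {
  const msg = JSON.parse(e.data);
  if (!msg.sdp) return; // not a signaling message (e.g. ordinary chat)
  await pc.setRemoteDescription(msg.sdp);
  if (msg.sdp.type == "offer") {
    await pc.setLocalDescription(await pc.createAnswer());
    dc.send(JSON.stringify({ sdp: pc.localDescription }));
  }
};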
Stack Overflow now demands that I include code when linking to jsfiddle, so
I might as well include it here as well (though if you're on Chrome, use the fiddle above, as camera access doesn't seem to work in snippets):
var config = { iceServers: [{ urls: "stun:stun.l.google.com:19302" }] };
var dc, pc = new RTCPeerConnection(config);

// Show the remote stream and wire up the incoming data channel.
pc.onaddstream = e => v2.srcObject = e.stream;
pc.ondatachannel = e => dcInit(dc = e.channel);
v2.onloadedmetadata = e => log("Connected!");

// Grab the local camera/mic and add it to the connection.
var haveGum = navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(stream => pc.addStream(v1.srcObject = stream))
  .catch(failed);

function dcInit() {
  dc.onopen = () => log("Chat!");
  dc.onmessage = e => log(e.data);
}

function createOffer() {
  button.disabled = true;
  dcInit(dc = pc.createDataChannel("chat"));
  haveGum.then(() => pc.createOffer()).then(d => pc.setLocalDescription(d)).catch(failed);
  // Wait until ICE gathering finishes (candidate == null), then show the complete offer SDP.
  pc.onicecandidate = e => {
    if (e.candidate) return;
    offer.value = pc.localDescription.sdp;
    offer.select();
    answer.placeholder = "Paste answer here";
  };
}

offer.onkeypress = e => {
  if (!enterPressed(e) || pc.signalingState != "stable") return;
  button.disabled = offer.disabled = true;
  var desc = new RTCSessionDescription({ type: "offer", sdp: offer.value });
  pc.setRemoteDescription(desc)
    .then(() => pc.createAnswer()).then(d => pc.setLocalDescription(d))
    .catch(failed);
  // Same trick on the answering side: show the complete answer SDP once gathering is done.
  pc.onicecandidate = e => {
    if (e.candidate) return;
    answer.focus();
    answer.value = pc.localDescription.sdp;
    answer.select();
  };
};

answer.onkeypress = e => {
  if (!enterPressed(e) || pc.signalingState != "have-local-offer") return;
  answer.disabled = true;
  var desc = new RTCSessionDescription({ type: "answer", sdp: answer.value });
  pc.setRemoteDescription(desc).catch(failed);
};

chat.onkeypress = e => {
  if (!enterPressed(e)) return;
  dc.send(chat.value);
  log(chat.value);
  chat.value = "";
};

var enterPressed = e => e.keyCode == 13;
var log = msg => div.innerHTML += "<p>" + msg + "</p>";
var failed = e => log(e);
<video id="v1" height="120" width="160" autoplay muted></video>
<video id="v2" height="120" width="160" autoplay></video><br>
<button id="button" onclick="createOffer()">Offer:</button>
<textarea id="offer" placeholder="Paste offer here"></textarea><br>
Answer: <textarea id="answer"></textarea><br><div id="div"></div>
Chat: <input id="chat"><br>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

1) Yes, signalling is mandatory, so that ICE candidates and the like are exchanged and the peer connection knows who its peer is.
2) No; how would it know its peer without some sort of exchange?
3) No, it does not mean that. I have done numerous experiments with Raspberry Pis and other native devices streaming video to a browser page through a WebRTC peer connection.
4) You mean the gain of using WebRTC vs. Flash and a central server? WebRTC is peer-to-peer, and if you couple that with getUserMedia and HTML5 you get rid of the need for Flash and a central media server to handle all the media exchanges.
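For illustration (a minimal sketch, not taken from this answer): with getUserMedia the camera goes straight into an HTML5 <video> element, with no plugin or media server involved.
// Attach the local camera and microphone to the first <video> element on the page.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(stream => { document.querySelector("video").srcObject = stream; })
  .catch(err => console.error("getUserMedia failed:", err));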

You need a signalling server in order to be able to establish a connection between two arbitrary peers; it is a simple reality of the internet architecture in use today.
In order to contact another peer on the web, you first need to know its IP address. That is the first problem already: you need to know your peer's IP address. How are you going to get this information from peer A to peer B without the people sitting at these computers calling each other on the phone and dictating IP addresses? To do this, each peer discovers its own address first, then sends it to the other peer. This opens up two more problems: how does a peer discover its outward-facing IP address (which may be significantly different from its local IP), and how does it communicate this to the other peer, whose address is still unknown?
This is where a signalling server comes in. Both peers have a connection to the signalling server before they have a connection to each other, so they use the signalling server to relay messages on their behalf until they have negotiated a direct way to talk. It would be possible to negotiate a connection without third-party help on local subnets, but this scenario is probably rare enough that I'm not even sure the spec addresses it.
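To make that relay role concrete, here is a minimal sketch of such a server, assuming Node.js and the ws package (neither appears in the question or answers; the two-client setup is an assumption). It simply forwards every message a client sends to the other connected clients:
const { WebSocketServer } = require("ws");
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", socket => {
  socket.on("message", message => {
    // Relay offers, answers and ICE candidates verbatim to everyone else.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === 1 /* OPEN */) {
        client.send(message.toString());
      }
    }
  });
});
The server never inspects the SDP; it only needs to get the text from one peer to the other so they can negotiate the direct connection themselves.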
As for 3): WebRTC can be implemented on any device, it's just a protocol; it's not tied exclusively to browsers.
As for 4): the "legacy" way of streaming anything from one browser to another always involved a relay server in the middle. This server has big CPU and bandwidth requirements and is an expensive bottleneck. WebRTC enables direct P2P connections with no middleman except a lightweight signalling server. Also, there wasn't really an open standard before; most of the time you'd be paying some money to Adobe in one way or another.

Actually it is possible, just not very usable in practice.
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <title>webrtc</title>
  </head>
  <body>
    <script>
      let channel = null
      // ICE servers (STUN and TURN) are optional here.
      const connection = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] })

      connection.ondatachannel = (event) => {
        console.log('ondatachannel')
        channel = event.channel
        // channel.onopen = event => console.log('onopen', event);
        // channel.onmessage = event => console.log('onmessage', event);
        channel.onmessage = (event) => alert(event.data)
      }

      // console.log('onconnectionstatechange', connection.connectionState)
      connection.onconnectionstatechange = (event) =>
        (document.getElementById('connectionState').innerText = connection.connectionState)
      // console.log('oniceconnectionstatechange', connection.iceConnectionState)
      connection.oniceconnectionstatechange = (event) =>
        (document.getElementById('iceConnectionState').innerText = connection.iceConnectionState)

      async function step_1_initiator_create_offer() {
        channel = connection.createDataChannel('data')
        // channel.onopen = event => console.log('onopen', event)
        // channel.onmessage = event => console.log('onmessage', event)
        channel.onmessage = (event) => alert(event.data)

        connection.onicecandidate = (event) => {
          // console.log('onicecandidate', event)
          // Once ICE gathering finishes, show the full offer so it can be copy-pasted to the peer.
          if (!event.candidate) {
            document.getElementById('createdOffer').value = JSON.stringify(connection.localDescription)
            document.getElementById('createdOffer').hidden = false
          }
        }

        const offer = await connection.createOffer()
        await connection.setLocalDescription(offer)
      }

      async function step_2_accept_remote_offer() {
        const offer = JSON.parse(document.getElementById('remoteOffer').value)
        await connection.setRemoteDescription(offer)
      }

      async function step_3_create_answer() {
        connection.onicecandidate = (event) => {
          // console.log('onicecandidate', event)
          // Once ICE gathering finishes, show the full answer so it can be copy-pasted back.
          if (!event.candidate) {
            document.getElementById('createdAnswer').value = JSON.stringify(connection.localDescription)
            document.getElementById('createdAnswer').hidden = false
          }
        }

        const answer = await connection.createAnswer()
        await connection.setLocalDescription(answer)
      }

      async function step_4_accept_answer() {
        const answer = JSON.parse(document.getElementById('remoteAnswer').value)
        await connection.setRemoteDescription(answer)
      }

      async function send_text() {
        const text = document.getElementById('text').value
        channel.send(text)
      }
    </script>
    <table width="100%" border="1">
      <tr>
        <th>#</th>
        <th>initiator</th>
        <th>peer</th>
      </tr>
      <tr>
        <td>step 1</td>
        <td>
          <input type="button" value="create offer" onclick="step_1_initiator_create_offer()" />
          <input id="createdOffer" type="text" hidden />
        </td>
        <td></td>
      </tr>
      <tr>
        <td>step 2</td>
        <td></td>
        <td>
          <input id="remoteOffer" type="text" placeholder="offer from initiator" />
          <input type="button" value="accept offer" onclick="step_2_accept_remote_offer()" />
        </td>
      </tr>
      <tr>
        <td>step 3</td>
        <td></td>
        <td>
          <input type="button" value="create answer" onclick="step_3_create_answer()" />
          <input id="createdAnswer" type="text" hidden />
        </td>
      </tr>
      <tr>
        <td>step 4</td>
        <td>
          <input id="remoteAnswer" type="text" placeholder="answer from peer" />
          <input type="button" value="accept answer" onclick="step_4_accept_answer()" />
        </td>
        <td></td>
      </tr>
    </table>
    <hr />
    <input id="text" type="text" />
    <input type="button" value="send" onclick="send_text()" />
    <hr />
    <table border="1">
      <tr>
        <th colspan="2">connection</th>
      </tr>
      <tr>
        <th>connectionState</th>
        <td id="connectionState">unknown</td>
      </tr>
      <tr>
        <th>iceConnectionState</th>
        <td id="iceConnectionState">unknown</td>
      </tr>
    </table>
  </body>
</html>
Source: https://mac-blog.org.ua/webrtc-one-to-one-without-signaling-server

Most of the answer has been covered; just thought I'd add a little something. When Google first created WebRTC and open sourced it four years ago, it did so strictly on its own, without any signaling capabilities.
However, Google recently purchased Firebase, so I would wager that they will soon open source a complete end-to-end solution for WebRTC so all of us can have an even easier time implementing it.
Speaking of Firebase, I tried it out and it's not bad; it got the basic job done: http://antonvolt.com/prototype2/

Related

How do I Securely store an Instagram Access Token and Automatically Refresh that Token When it Expires?

Firstly, I am still a fairly new web developer, so what I'm asking may seem stupid, but I am hoping that I can learn something from this since it's my first time doing a project like this.
I am writing some code for a dynamic photo gallery for a Shopify site that renders the 8 most recent Instagram posts from the store's Instagram account. I have written the code to where the app works perfectly, but there are two issues. Firstly, the Instagram access token expires fairly quickly, forcing me to update it manually. Secondly, I would like to store the access token in a secure way, but I'm not sure how to do this.
For the access token expiring, is it possible to have it automatically regenerate and insert itself into the code, or does it have to be updated manually every time? I would like it to be as hands-off as possible, because it would be quite annoying to have to update it by hand.
For storing the access token, I have done some research and have found that storing sensitive data in an .env file is a good way to do this; however, I don't have much knowledge about this and don't know where to start. I have also found that storing access tokens in a private Shopify app is a good solution, but I do not know anything about Shopify app development. Any help would be much appreciated.
Attached below is the JavaScript code that I have. The API token is not displayed:
var token = 'API_TOKEN';
var fields = 'id,media_type,media_url,thumbnail_url,timestamp,permalink,caption';
var limit = 8;

$.ajax({
  type: "GET",
  url: 'https://graph.instagram.com/me/media?fields=' + fields + '&access_token=' + token + '&limit=' + limit,
  crossDomain: true,
  success: function(response) {
    console.log(response);
    $.each(response.data, function(index, obj) {
      console.log(obj);
      if (obj.media_type == "VIDEO") {
        $(".instagram-gallery")
          .append(
            `<div class='custom-block'>
              <p class="caption">${obj.caption}</p>
              <div class="overlay">
                <a href="${obj.permalink}" target="_blank">
                  <img class='instagram-image' src="${obj.thumbnail_url}">
                </a>
              </div>
            </div>`
          );
      }
      else if (obj.media_type == "IMAGE" || obj.media_type == "CAROUSEL_ALBUM") {
        $(".instagram-gallery")
          .append(
            `<div class='custom-block'>
              <p class="caption">${obj.caption}</p>
              <div class="overlay">
                <a href="${obj.permalink}" target="_blank">
                  <img class='instagram-image' src="${obj.media_url}">
                </a>
              </div>
            </div>`
          );
      }
    });
  },
  dataType: "jsonp" // set to JSONP, which is callback-based
});

Browser keeps on accessing the camera (red dot) even after stopping the stream, after establishing the peer connection using WebRTC

let localStream;
let peerConnection;

navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true
}).then(function(stream) {
  createPeerConnection();
  localStream = stream;
  peerConnection.addStream(localStream);
});
So stopping the stream stops the video:
localStream.getTracks().forEach(track => track.stop());
But the browser tab still says that it is accessing the camera or microphone, with a red dot beside it. I just do not want to have to reload the page to make that stop.
Note: this happens after establishing a peer connection using WebRTC; after the peers disconnect, the camera light stays on.
Is there any way to do that? Thanks in advance for your help.
You can use a boolean value or condition that controls whether the tab accesses the camera: after track.stop() you set the value to false, and the camera will not be accessed anymore. (P.S. you can try whether this works.)
<!DOCTYPE html>
<html>
  <head>
    <title>Web Client</title>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    <div id="callerIDContainer">
      <button onclick="call_user();">Call User</button>
    </div>
    <div class="video-container">
      <video autoplay muted class="local-video" id="local-video"></video>
    </div>
    <div>
      <button onclick="hangup();">Hangup</button>
    </div>
    <script>
      var localStream;
      var accessRequired = true;

      function call_user() { // your function
        if (accessRequired) {
          navigator.mediaDevices.getUserMedia({
            audio: true,
            video: true
          }).then(function(stream) {
            localStream = stream;
            const localVideo = document.getElementById("local-video");
            if (localVideo) {
              localVideo.srcObject = localStream;
            }
          });
        }
      }

      function hangup() {
        // forEach returns nothing, so the flag is simply cleared afterwards
        localStream.getTracks().forEach(track => track.stop());
        accessRequired = false;
      }
    </script>
  </body>
</html>
Try this: call the user, then hang up; it is working.
The sample code in your question looks like it uses gUM() to create an audio-only stream ({video: false, audio:true}).
It would be strange if using .stop() on all the tracks on your audio-only stream also stopped the video track on some other stream. If you want to turn off your camera's on-the-air light you'll need to stop the video track you used in peerConnection.addTrack(videoTrack). You probably also need to tear down the call using peerConnection.close().
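A rough sketch of what a full teardown could look like (the names pc, localStream and endCall are illustrative, not taken from the question):
function endCall(pc, localStream) {
  localStream.getTracks().forEach(track => track.stop());                  // turns off the camera light
  pc.getSenders().forEach(sender => sender.track && sender.track.stop());  // stop anything still being sent
  pc.close();                                                              // tear down the peer connection
}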
I had the same issue with WebRTC and React. I had stopped the tracks of the remote stream, but I forgot to stop the local stream:
window.localStream.getTracks().forEach((track) => {
  track.stop();
});

Cannot get JSON response from API

I have a RESTful API web service. I'm writing a simple pure-JavaScript client app to interact with the APIs. I also use the Knockout framework. My JavaScript code:
self.movies = ko.observableArray();

$.get(self.moviessURI, function(data) {
  var obj = JSON.parse(data);
  for (var i = 0; i < obj.movies.length; i++) {
    self.movies.push(obj.movies[i]);
  }
}, "json");
My HTML code:
<table class="table table-striped">
  <tr><td><b>Title</b></td><td><b>VideoID</b></td></tr>
  <!-- ko foreach: movies -->
  <tr>
    <td><p><b data-bind="text: title"></b></p></td>
    <td><p data-bind="text: videoId"></p></td>
  </tr>
  <!-- /ko -->
</table>
After running, I see a request with status 200 on the web service, but my client app does not display anything. I also used Postman to test the API and it's working.
Where did I go wrong? Thanks for any help!

Google SignIn State

I'm trying to build a Google signin button into my website. I'm trying to avoid using their built-in button. The code below works to sign in a user, but I can't figure out how to make my webpage remember that they're signed in when the user refreshes the page, or leaves the site and comes back.
Using Chrome's developer tools, I can see that there's an entry for https://accounts.google.com under both Local Storage and Session Storage. They seem to more or less contain the same information, including the user's validated token.
What I don't understand is how to get the gapi.auth2.init() function to recognize and use this token. The documentation doesn't seem to cover it.
<html>
  <head>
    <title>Login Test</title>
    <script src="http://code.jquery.com/jquery-2.1.4.js"></script>
    <script src="https://apis.google.com/js/platform.js?onload=renderButton" async defer></script>
    <script>
      var googleUser = {};
      function renderButton() {
        gapi.load('auth2', function() {
          auth2 = gapi.auth2.init({
            client_id: 'MY_CREDENTIALS.apps.googleusercontent.com',
          });
          attachSignin(document.getElementById('customBtn'));
        });
      }
      function attachSignin(element) {
        auth2.attachClickHandler(element, {},
          function(googleUser) {
            document.getElementById('name').innerText = "Signed in: " +
              googleUser.getBasicProfile().getName();
          }, function(error) {
            alert(JSON.stringify(error, undefined, 2));
          }
        );
      }
    </script>
  </head>
  <body>
    <div id="gSignInWrapper">
      <span class="label">Sign in with:</span>
      <input type="button" id="customBtn" value="Google">
    </div>
    <p id="name"></p>
  </body>
</html>
You can use listeners. This is the relevant part:
// Listen for sign-in state changes.
auth2.isSignedIn.listen(signinChanged);
// Listen for changes to current user.
auth2.currentUser.listen(userChanged);
You can also get up-to-date values with:
var isSignedIn = auth2.isSignedIn.get();
var currentUser = auth2.currentUser.get();
To strictly detect returning users only you can do:
var auth2 = gapi.auth2.init(CONFIG);
auth2.then(function() {
  // At this point initial authentication is done.
  var currentUser = auth2.currentUser.get();
});
When it comes to your code I would do:
auth2 = gapi.auth2.init(CONFIG);
auth2.currentUser.listen(onUserChange);
auth2.attachClickHandler(element, {});
This way all changes in sign-in state are passed to onUserChange (this includes returning users, new sign-ins from attachClickHandler, new sign-ins from different tab).
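For illustration only (onUserChange is the name used above, but the body below is one possible handler I am assuming, not part of the original answer):
function onUserChange(googleUser) {
  // Fires for returning users, new sign-ins, and sign-ins from other tabs alike.
  if (googleUser.isSignedIn()) {
    document.getElementById('name').innerText =
      "Signed in: " + googleUser.getBasicProfile().getName();
  } else {
    document.getElementById('name').innerText = "Signed out";
  }
}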

Play Framework 2.0.4: browser request hangs once the multipartFormData maxLength has been reached. How to solve it?

My server side code is the following (just for the sake of testing it):
def upload = Action(parse.maxLength(maxLength = 10 * 1024, parser.multipartFormData)) { implicit request =>
  Logger.info("data: " + request.body.dataParts)
  Logger.info("file: " + request.body.file("picture"))
  Logger.info("req: " + request.contentType)
  Logger.info("req body: " + request.body)
  Ok("File has been uploaded")
}
My client side code is a simple form that has an input of type file.
@helper.form(action = routes.Application.upload, 'enctype -> "multipart/form-data") {
  <p>
    <input type="text" name="name" />
  </p>
  <p>
    <input id="imageFile" type="file" name="picture" accept="image/*" />
  </p>
  <p>
    <input type="submit" value="Save" />
  </p>
}
The problem is that if you try to upload files that are bigger than 10KB the browser will hang waiting for the server to finish even though it appears that the server has finished consuming the request. How to solve it?
Unfortunately, there seems to be a problem related to this in Play 2.0.4 and the browser will hang waiting for the file to finish uploading even though the request body has been consumed in the server side. A discussion regarding the issue can be found here and it was reported here (play doesn't finish consuming request if maxlength is reached).
Fortunately, this has been resolved in Play 2.1 and the first release candidate is already available. So, the best thing to do would be to migrate your app to Play 2.1.