I'm using PubNub's WebRTC library to start a WebRTC session and show the local video stream. How can I end the session, unsubscribe and remove the stream again?
var phone = window.phone = PHONE({
    number        : 12345,
    publish_key   : '...',
    subscribe_key : '...',
    ssl           : true
});
var ctrl = window.ctrl = CONTROLLER(phone);
phone.ready(function(){
    ctrl.addLocalStream(document.getElementById("vid-thumb"));
});
WebRTC Session Hangup
session.hangup()
End the session right now.
The 'ended' callback will fire for both connected parties on either side of the call.
$("#hangup").click(function(){
// End the call
session.hangup();
});
WebRTC Phone Hangup
phone.hangup()
There are two ways to hang up WebRTC calls: the phone-level method phone.hangup() hangs up all calls at once, while the session-level method session.hangup() hangs up only that call session.
// hangup all calls
phone.hangup();
// hangup single session
session.hangup();
🔗 Source: https://github.com/stephenlb/webrtc-sdk/#webrtc-session-hangup
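Tying that back to the question: a hedged sketch of a teardown handler. phone.hangup() comes from the docs above; stopping the camera tracks and clearing the thumbnail container are assumptions about how addLocalStream() attaches the preview, not documented SDK API.

// Hypothetical teardown: hang up, then remove the local preview
$("#end").click(function(){
    phone.hangup(); // documented above: ends all call sessions
    var thumb = document.getElementById("vid-thumb");
    var video = thumb.querySelector("video");
    if (video && video.srcObject) {
        // assumption: stop the camera tracks the SDK attached to the preview
        video.srcObject.getTracks().forEach(function(t){ t.stop(); });
    }
    thumb.innerHTML = ""; // remove the <video> element itself
});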
Related
The user starts with a DataChannel only; an AudioChannel is added through renegotiation later.
var mLocalAudio;
navigator.getUserMedia({ video: false, audio: true },
    function (myStream) {
        mLocalAudio = myStream;
        mConn.addStream(myStream); // triggers renegotiation on the established connection
    },
    function (e) { });
On the remote peer, ontrack will be triggered and we attach the stream to an <audio> element. But since this is a many-to-many connection, there will be multiple peers switching their audio channels on and off from time to time.
My problem is: how can I identify which audio track belongs to which user?
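One common approach (a sketch, not from the post above): announce each stream's id over the already-open DataChannel before renegotiating, then match it in ontrack on the remote side. MediaStream.id is signaled in the SDP, so spec-compliant browsers see the same id on both ends; mDataChannel and myUserId are hypothetical names.

// Sender: tell peers which user the upcoming audio stream belongs to
mDataChannel.send(JSON.stringify({
    type: 'stream-info',
    userId: myUserId,
    streamId: mLocalAudio.id
}));

// Receiver: remember the mapping, then look it up when the track arrives
var streamOwners = {};
mDataChannel.onmessage = function (e) {
    var msg = JSON.parse(e.data);
    if (msg.type === 'stream-info') streamOwners[msg.streamId] = msg.userId;
};
mConn.ontrack = function (e) {
    var owner = streamOwners[e.streams[0].id]; // which user this audio belongs to
    console.log('audio track from user', owner);
};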
I am trying to create a one-to-one video streaming Ionic app. In the app I have a "Connect" button; on click of "Connect" the publisher is initialized, and on click of a "Disconnect" button I disconnect the session. What I want is for it to work like this:
Click Connect - Publisher Initialized
Click Disconnect - Session Disconnect
Click Connect - Publisher Initialized (But here I get the error)
Tokbox Republishing issues
OpenTok:Publisher:error _connectivityAttemptPinger should have been cleaned up +0ms
OpenTok:Publisher:error OT.Publisher State Change Failed: +209ms 'PublishingToSession' cannot transition to 'PublishingToSession'
OpenTok:Publisher:error OT.Publisher.onPublishingTimeout +15s
OpenTok:GlobalExceptionHandler:error OT.exception :: title: Unable to Publish (1500) msg: ICEWorkflow +0ms
OpenTok:Session:error 1500 +0ms Session.publish :: Could not publish in a reasonable amount of time
OpenTok:Session:error 1500 +9ms Session.publish :: Session.publish :: Could not publish in a reasonable amount of time
The code is below:
tokBoxInit() {
  if (OT.checkSystemRequirements() == 1) {
    this.session = OT.initSession(this.apiKey, this.sessionId);
    console.log(this.session);
    this.session.connect(this.token, function(error) {
      if (error) {
        console.log("Error connecting: ", error.name, error.message);
      } else {
        console.log("Connected to the session.");
      }
    });
    this.publisher = OT.initPublisher('publisher', {insertMode: 'append', width: '100%', height: '100%'});
    this.publisher.on({
      streamCreated: function (event) {},
      streamDestroyed: function (event) {}
    });
    this.session.on({
      sessionConnected: (event: any) => {
        console.log("Session Connected Listener");
        this.connected = true; // Status to show Connect/Disconnect button
        this.session.publish(this.publisher);
      }
    });
  }
}
How do you handle the disconnect functionality? Do you clear the session after the connection is destroyed and unpublish the destroyed session?
Also check your token's capabilities: it could be that the token does not have the publisher or moderator role, or that those properties are missing when the session is regenerated.
The 1500 error message, as documented by OpenTok, has the following description:
Unable to Publish. The client's token does not have the role set to publish or moderator. Once the client has connected to the session, the capabilities property of the Session object lists the client's capabilities.
But the code shown is too minimal to tell: I can't see how you handle session disconnection, so check that you handle it and clear the session when disconnected.
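For illustration, a minimal teardown sketch built from documented OpenTok calls (session.unpublish, publisher.destroy, session.disconnect, and the sessionDisconnected event); the method name tokBoxDisconnect and the reuse of this.connected mirror the question's code and are otherwise assumptions:

tokBoxDisconnect() {
  if (this.publisher) {
    this.session.unpublish(this.publisher); // stop sending media to the session
    this.publisher.destroy();               // release camera/mic and remove the element
    this.publisher = null;
  }
  this.session.on('sessionDisconnected', (event: any) => {
    this.connected = false; // flip the button state only after teardown completes
  });
  this.session.disconnect();
}

With the publisher destroyed rather than reused, the next Connect click can run tokBoxInit() again with a fresh publisher, which avoids the 'PublishingToSession' cannot transition to 'PublishingToSession' state error.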
In this document, URL.createObjectURL is used to set the video source. (This is the code to answer a call.)
var offer = getOfferFromFriend();
navigator.getUserMedia({video: true}, function(stream) {
  pc.onaddstream = e => video.src = URL.createObjectURL(e.stream);
  pc.addStream(stream);
  pc.setRemoteDescription(new RTCSessionDescription(offer), function() {
    pc.createAnswer(function(answer) {
      pc.setLocalDescription(answer, function() {
        // send the answer to a server to be forwarded back to the caller (you)
      }, error);
    }, error);
  }, error);
});
I expected video.src to be the address from which to retrieve the remote video, so it should be fixed and given by the other side of the connection (whoever initiated the call). But the value of URL.createObjectURL is generated on the answerer's side, and it even depends on when the function is called. How can it be used to get the remote video stream?
Edit:
The result of URL.createObjectURL looks like blob:http://some.site.com/xxxx-the-token-xxxx. With this string, how does the video component know where to load the remote stream? Is there a hashmap of {url:stream} stored somewhere? If so, how does the video component access the hashmap?
A stream object does store a token string, which you can get with stream.toURL. But it is different from the result of URL.createObjectURL. The value of URL.createObjectURL depends on time. If you call it twice in a row, you get different values.
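That two-different-values behavior is easy to verify; each call registers a fresh entry in the browser's internal blob URL store for the same object:

var a = URL.createObjectURL(stream);
var b = URL.createObjectURL(stream);
console.log(a === b);   // false: two distinct registrations for one stream
URL.revokeObjectURL(a); // entries live until revoked or the page unloads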
URL.createObjectURL(stream) is a hack. Stop using it. Efforts are underway to remove it.
Use video.srcObject = stream directly instead. It is standard and well-implemented.
This assignment of a local resource should never have been a URL in the first place, and is a red herring to understanding how WebRTC works.
WebRTC is a transmission API, sending data directly from one peer to another. No content URLs are involved. The remote stream you get from onaddstream is a local object receiver side, and is the live streaming result of the transmission, ready to be played.
The documentation you read is old and outdated. Thanks for pointing it out, I'll fix it. It has other problems: you should call setRemoteDescription immediately, not wait for the receiver to share their camera, otherwise incoming candidates are missed. Instead of the code you show, do this:
pc.onaddstream = e => video.srcObject = e.stream;

function getOfferFromFriend(offer) {
  return pc.setRemoteDescription(new RTCSessionDescription(offer))
    .then(() => navigator.mediaDevices.getUserMedia({video: true}))
    .then(stream => {
      pc.addStream(stream);
      return pc.createAnswer();
    })
    .then(answer => pc.setLocalDescription(answer))
    .then(() => {
      // send the answer to a server to be forwarded back to the caller (you)
    })
    .catch(error);
}
It uses srcObject, avoids the deprecated callback API, and won't cause intermittent ICE failures.
A WebRTC connection involves several steps, and what you get from such a connection is a stream. But the src property of the video tag does not accept a stream, only a URL, and this is the way to "convert" a stream to a URL.
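In other words (a sketch, not from either answer): modern browsers accept the stream object directly, and the blob URL route survives only as a legacy fallback.

pc.onaddstream = function (e) {
  if ('srcObject' in video) {
    video.srcObject = e.stream;                // standard, preferred
  } else {
    video.src = URL.createObjectURL(e.stream); // deprecated fallback
  }
};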
I am looking into WebRTC but I feel like I'm not understanding the full picture. I'm looking at this demo project in particular: https://github.com/oney/RCTWebRTCDemo/blob/master/main.js
I'm having trouble understanding how I can match 2 clients so that Client A can see Client B's video stream and vice versa.
This is in the demo:
function getLocalStream(isFront, callback) {
  MediaStreamTrack.getSources(sourceInfos => {
    console.log(sourceInfos);
    let videoSourceId;
    for (let i = 0; i < sourceInfos.length; i++) {
      const sourceInfo = sourceInfos[i];
      if (sourceInfo.kind == "video" && sourceInfo.facing == (isFront ? "front" : "back")) {
        videoSourceId = sourceInfo.id;
      }
    }
    getUserMedia({
      audio: true,
      video: {
        mandatory: {
          minWidth: 500, // Provide your own width, height and frame rate here
          minHeight: 300,
          minFrameRate: 30
        },
        facingMode: (isFront ? "user" : "environment"),
        optional: [{ sourceId: videoSourceId }] // select the camera found above
      }
    }, function (stream) {
      console.log('dddd', stream);
      callback(stream);
    }, logError);
  });
}
and then it's used like this:
socket.on('connect', function(data) {
  console.log('connect');
  getLocalStream(true, function(stream) {
    localStream = stream;
    container.setState({selfViewSrc: stream.toURL()});
    container.setState({status: 'ready', info: 'Please enter or create room ID'});
  });
});
Questions:
What exactly is MediaStreamTrack.getSources doing? Is this because devices can have multiple video sources (e.g. 3 webcams)?
Doesn't getUserMedia just turn on the client's camera? In the code above isn't the client just viewing a video of himself?
I'd like to know how I can pass client A's URL of some sort to client B so that client B streams the video coming from client A. How do I do this? I'm imagining something like this:
Client A enters, joins room "abc123". Waits for another client to join
Client B enters, also joins room "abc123".
Client A is signaled that Client B has entered the room, so he makes a connection with Client B
Client A and Client B start streaming from their webcam. Client A can see Client B, and Client B can see Client A.
How would I do it using the WebRTC library? (You can just assume that the backend server for room matching is created.)
The process you are looking for is called JSEP (JavaScript Session Establishment Protocol), and it can be divided into the 3 steps I describe below. These steps start once both clients are in the room and can communicate through WebSockets; I will use ws as an imaginary WebSocket API for communication with the server and the other clients:
1. Invite
During this step, one designated caller creates an offer and sends it through the server to the other client (the callee):
// This is only in Chrome
var pc = new webkitRTCPeerConnection(
  {iceServers: [{url: "stun:stun.l.google.com:19302"}]},
  {optional: [{RtpDataChannels: true}]});

// Someone must be chosen to be the caller
// (it can be either the latest person who joins the room or the people already in it)
ws.on('joined', function() {
  pc.createOffer(function (offer) {
    pc.setLocalDescription(offer);
    ws.send('offer', offer);
  });
});

// The callee receives the offer and returns an answer
ws.on('offer', function (offer) {
  pc.setRemoteDescription(new RTCSessionDescription(offer));
  pc.createAnswer(function (answer) {
    pc.setLocalDescription(answer);
    ws.send('answer', answer);
  }, err => console.log('error'), {});
});

// The caller receives the answer
ws.on('answer', function (answer) {
  pc.setRemoteDescription(new RTCSessionDescription(answer));
});
Now both sides have exchanged SDP descriptions and are ready to connect to each other.
2. Negotiation (ICE)
ICE candidates are created by each side to find a way to connect to the other; they are essentially IP addresses where the peer can be reached: localhost, the local area network address (192.168.x.x), and the external public IP address (from the ISP). They are generated automatically by the PeerConnection object.
// Both sides process candidates as they arrive:
ws.on('ICE', candidate => pc.addIceCandidate(new RTCIceCandidate(candidate)));

// Both sides send their own as they are generated:
pc.onicecandidate = e => e.candidate && ws.send('ICE', e.candidate);
After the ICE negotiation the connection gets established, unless you try to connect clients sitting behind firewalls on both sides: p2p communication relies on NAT traversal and won't work in some scenarios.
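When both peers sit behind restrictive firewalls, the usual fix is to add a TURN relay to the PeerConnection configuration from step 1. A sketch with a placeholder TURN server and credentials (turn.example.com, user, pass are not real values):

// Sketch: a TURN server relays the media when direct NAT traversal fails
var pc = new webkitRTCPeerConnection({
  iceServers: [
    {url: "stun:stun.l.google.com:19302"},
    {url: "turn:turn.example.com:3478", username: "user", credential: "pass"} // placeholder
  ]
}, {optional: [{RtpDataChannels: true}]});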
3. Data streaming
// Once the connection is established we can start to transfer video,
// audio or data
navigator.getUserMedia({video: true, audio: true}, function (stream) {
  pc.addStream(stream);
}, err => console.log('Error getting User Media'));
It is a good option to have the stream before making the call and to add it at an earlier step: before creating the offer for the caller, and right after receiving the call for the callee. That way you don't have to deal with renegotiation; this was a pain a few years ago, but it may be better implemented now in WebRTC.
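A sketch of that ordering for the caller, reusing the ws and pc objects from step 1: the stream is added before createOffer, so the offer already includes the audio/video lines and no renegotiation is needed.

// Caller: acquire media first, then create the offer that includes it
navigator.getUserMedia({video: true, audio: true}, function (stream) {
  pc.addStream(stream); // before createOffer, so it is part of the initial SDP
  pc.createOffer(function (offer) {
    pc.setLocalDescription(offer);
    ws.send('offer', offer);
  });
}, err => console.log('Error getting User Media'));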
Feel free to check my WebRTC project on GitHub, where I create p2p connections in rooms for many participants; it has a live demo.
MediaStreamTrack.getSources is used to get the video devices connected (a device can indeed have multiple video sources). It seems to be deprecated now (see the sketch after these answers); see this Stack Overflow question and the documentation, and also refer to the MediaStreamTrack.getSources demo and code.
Yes, getUserMedia just turns on the client's camera. You can see the demo and code here.
Please refer to this peer connection sample & code here to stream audio and video between users.
Also look at this on Real time communication with WebRTC.
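On the first point, a sketch of the modern replacement, since MediaStreamTrack.getSources was removed in favor of navigator.mediaDevices.enumerateDevices():

// List the attached cameras with the current API
navigator.mediaDevices.enumerateDevices().then(function (devices) {
  var cameras = devices.filter(function (d) { return d.kind === 'videoinput'; });
  cameras.forEach(function (cam) { console.log(cam.label, cam.deviceId); });
});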
I've set up a text chat service using the PeerJS implementation of WebRTC's data channel. PeerJS provides a basic signalling server for this purpose, but I have tried to replace that with STUN and TURN servers set up through XirSys (recommended by SimpleWebRTC, another WebRTC library). I haven't deployed to the web yet.
Using Node to serve my static files locally, it will work on a local network (when I am sitting next to the person and they navigate to my ip/port in the browser), but will not work when connecting through different access points on the same network (i.e. at work, on opposite ends of the building).
My hypothesis is that it's hitting a firewall, but still directing traffic to PeerJS' signalling server without falling back to the XirSys STUN and TURN servers I've tried to set up. Here's the code I'm working with:
var stun = {};
var turn1 = {};
var turn2 = {};

$.ajax({
    type: "POST",
    dataType: "json",
    url: "https://api.xirsys.com/getIceServers",
    data: {
        ident: "myusername",
        secret: "long-alphanumeric-secret-key",
        domain: "www.adomain.com",
        application: "anapp",
        room: "aroom",
        secure: 1
    },
    success: function (data, status) {
        console.log(data);
        stun = data.d.iceServers[0];
        turn1 = data.d.iceServers[1];
        turn2 = data.d.iceServers[2];
    },
    async: false
});
var conn;
// Connect to PeerJS, have server assign an ID instead of providing one
var peerID = prompt('What would you like your screen name to be?');
// Peer takes (id, options): the ICE config must live in the same options object
var peer = new Peer(peerID, {
    key: 'mypeerjsserverkey',
    debug: true,
    config: {'iceServers': [
        {url: stun.url},
        {url: turn1.url, credential: turn1.credential, username: turn1.username},
        {url: turn2.url, credential: turn2.credential, username: turn2.username}
    ]}
});
NOTE: My ident, secret, domain, etc. obviously aren't accurately represented here. I don't think that's where my problem is.
Any thoughts?
If you email us a Wireshark capture of your STUN/TURN traffic, we should be able to outline where your problem is. Messages sent over signalling are separate from, but parallel to, WebRTC messages. Therefore, if the app is working but the messages are being sent over signalling, it's possible the configuration of the application isn't correct.
XirSys provides TURN via UDP, and over TCP through ports 80/443, so if the signalling is connecting and flowing, so should the TURN.
Also, looking at your code, if you pass data.d from your getIceServers success handler to the PeerJS config, that should reduce your code quite a bit :-) The ICE string you're reconstructing doesn't need to be broken down.
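For instance, a sketch of that suggestion, assuming the same response shape as the AJAX call above:

success: function (data, status) {
    // hand XirSys' iceServers array to PeerJS untouched
    var peer = new Peer(peerID, {
        key: 'mypeerjsserverkey',
        debug: true,
        config: {iceServers: data.d.iceServers}
    });
}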
Regards,
Lee
XirSys CTO