WebRTC: Switch from video sharing to screen sharing during call

Initially, I had two different webpages:
One was to do a video call and
the other was to do screen sharing.
Now I want to do both of them on one page.
Here is the scenario:
During a live call, a user wants to stop sharing his/her video and start sharing the screen.
Afterwards, he/she wishes to turn off screen sharing and resume video sharing.
For clarity, here are the questions I want to ask:
On the caller side:
1) How can I change my local stream from video to screen and vice versa?
2) Once that is done, how can I assign it to the local video element?
On the callee side:
1) How do I handle it if the stream I am receiving changes from video to screen?
2) How do I handle it if the stream I am receiving has stopped? I mean, I now receive neither video nor screen (just audio).
Kindly help me with this. If any open-source code is available, please share the links too.
Just for your reference, I was trying to handle it with the following code (I know this is naive and won't work):
function handleUserMedia(newStream) {
    var localvideo = document.getElementById("localvideo");
    localvideo.src = URL.createObjectURL(newStream);
    localStream = newStream;
    sendMessage('got user media');
    if (isInitiator) {
        maybeStart();
    }
}

function handleUserMediaError(error) {
    console.log(error);
}
var video_constraints = {video: true, audio: true};
var screen_constraints = {video: { mandatory: { chromeMediaSource: 'screen' } }};

getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
//getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);

$scope.btnLabel = 'Share Screen';
$scope.toggleSelected = function () {
    $scope.selected = !$scope.selected;
    if ($scope.selected) {
        getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Video';
    }
    else {
        getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Screen';
    }
};

Check this demo:
https://www.webrtc-experiment.com/demos/switch-streams.html
and the relevant tutorial:
https://www.webrtc-experiment.com/docs/how-to-switch-streams.html
Simply renegotiate the peer connections on both users' sides!
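A renegotiation-free alternative is to swap the outgoing track with RTCRtpSender.replaceTrack(). Below is a minimal sketch, assuming `pc` (the RTCPeerConnection), `localVideo` (the local video element), and `localStream` already exist in your app, as in the question's code; `findVideoSender` and `switchTo` are illustrative names, not standard APIs:

```javascript
// Pure helper: pick the sender currently carrying a video track.
function findVideoSender(senders) {
    return senders.find(s => s.track && s.track.kind === 'video') || null;
}

// Switch the outgoing video between camera and screen without renegotiating.
async function switchTo(screen) {
    const stream = screen
        ? await navigator.mediaDevices.getDisplayMedia({ video: true })
        : await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

    const newTrack = stream.getVideoTracks()[0];
    const sender = findVideoSender(pc.getSenders());
    if (sender) {
        await sender.replaceTrack(newTrack); // callee sees the new source automatically
    }

    // Update the local preview; stop the old track to release the camera/screen.
    localStream.getVideoTracks().forEach(t => t.stop());
    localVideo.srcObject = stream;
    localStream = stream;
}
```

On the callee side this needs no special handling: the incoming track simply changes content. If the caller stops sharing entirely, the callee can listen for the track's mute/ended events to fall back to audio-only UI.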


Agora Web SDK Screen share not returning video track

I have integrated a screen-share function into my web conference. Screen-share content shows for users who were in the session before screen sharing started, but it does not work for users who joined the session after screen sharing started.
Below is the logic for getting video tracks when a new user joins the session.
// Add current users
this.meetingSession.remoteUsers.forEach(async ru => {
    if (ru.uid.search('screen_') > -1) {
        this.getScreenShare(ru);
        return;
    }
    let remoteVideo = await this.meetingSession.subscribe(ru, 'video');
    this.setVideoAudioElement(ru, 'video');
    let remoteAudio = await this.meetingSession.subscribe(ru, 'audio');
    this.setVideoAudioElement(ru, 'audio');
})
async getScreenShare (user) {
    ...
    this.currentScreenTrack = user.videoTrack;
    // Here user.videoTrack is undefined
    console.log(user)
    ...
},
After the new user's session is created, I get the current users' video tracks from the "remoteUsers" object inside the session object. There is no problem with a regular user's video track, but for the screen-share object, "hasVideo" is true while "videoTrack" is undefined.
(Screenshot: Agora Web SDK meetingSession.remoteUsers screen-share object)
Is it by design that videoTrack is not included in meetingSession.remoteUsers for a screen share?
I'm wondering what method people use to show screen-share content to users who joined the session during screen sharing.
It would be great if someone could give me a suggestion about this.
"agora-rtc-sdk-ng": "^4.6.2",
I figured it out.
I just needed to subscribe to the remote user:
this.meetingSession.remoteUsers.forEach(async ru => {
    if (ru.uid.search('screen_') > -1) {
        // Just needed to subscribe to the user...
        await this.meetingSession.subscribe(ru, 'video');
        this.getScreenShare(ru);
        return;
    }
    let remoteVideo = await this.meetingSession.subscribe(ru, 'video');
    this.setVideoAudioElement(ru, 'video');
    let remoteAudio = await this.meetingSession.subscribe(ru, 'audio');
    this.setVideoAudioElement(ru, 'audio');
})

WebRTC ontrack how to tell if it is a screen sharing session?

function OnTrack(e) {
    if (e.track.kind === "audio") {
    }
    else if (e.track.kind === "video") {
    }
    // A screen share also arrives with e.track.kind === 'video'
};
In the code above we can distinguish between audio and video. But how can I tell if the video stream is actually coming from a screen-sharing session?
Modify the SDP
I found a workaround by modifying the SDP. I hope someone can come up with a better solution than this.
The getSettings() method returns the properties of the track, including cursor, displaySurface, and logicalSurface, which are only present on a screen-sharing track.
With displaySurface as an example:
function OnTrack(e) {
    let settings = e.track.getSettings()
    if (e.track.kind === "audio") {
    }
    else if (e.track.kind === "video") {
        if (settings.displaySurface && (
                settings.displaySurface === "application" ||
                settings.displaySurface === "browser" ||
                settings.displaySurface === "monitor" ||
                settings.displaySurface === "window")) {
            // Screen sharing
        }
    }
};
The possible values of displaySurface are application, browser, monitor, and window.
You can also get displaySurface from the track's getConstraints() method.
Update
It turns out you need to add these constraints/settings when calling getDisplayMedia(constraints):
These constraints apply to MediaTrackConstraints objects specified as part of the DisplayMediaStreamConstraints object's video property when using getDisplayMedia() to obtain a stream for screen sharing.
Note from the MDN page:
Not all user agents support all of these surface types.
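Putting both sides together, here is a sketch. The capturer passes the displaySurface hint when starting the capture (user agents may ignore it), and the receiver classifies incoming tracks from their settings; isScreenShareSettings and startScreenCapture are illustrative names, not part of any API:

```javascript
// Surface values defined by the Screen Capture spec.
const SCREEN_SURFACES = ['application', 'browser', 'monitor', 'window'];

// Pure helper: does this track's settings object look like a screen share?
function isScreenShareSettings(settings) {
    return SCREEN_SURFACES.includes(settings.displaySurface);
}

// Capturer side: ask for a surface hint when starting the capture.
async function startScreenCapture() {
    return navigator.mediaDevices.getDisplayMedia({
        video: { displaySurface: 'monitor' } // a hint; the user may pick another surface
    });
}

// Receiver side: classify the track inside the ontrack handler.
function onTrack(e) {
    if (e.track.kind === 'video' && isScreenShareSettings(e.track.getSettings())) {
        // treat as a screen-sharing track
    }
}
```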
More information
MediaStreamTrack getSettings()
MediaTrackSettings
MediaTrackSettings DisplaySurface
MediaStreamTrack getConstraints()
MediaDevices getDisplayMedia()

MediaRecorder has a delay of multiple seconds

I'm trying to use a MediaRecorder to record a MediaStream and display it in a video element using a MediaSource. So the setup looks like:
Request a MediaStream from the browser
Add it to the MediaRecorder
Add the recorded blobs to the MediaSource Buffer
The result looks very good, but there is one problem: there is a delay in the playback.
When displaying the MediaStream directly there is no delay, so I ruled out the first bullet point as the problem.
Nevertheless, it seems either the MediaRecorder or the MediaSource adds a delay of about 3 seconds to the stream.
this.screenRecording = await mediaDevices.getDisplayMedia({ video: { frameRate: 60, resizeMode: 'none' } });
const mediaRecorder = new MediaRecorder(this.screenRecording);
mediaRecorder.ondataavailable = async (event: any) => {
    if (this.screenReceiving.readyState === 'open') {
        if (this.screenReceivingBuffer == null) {
            this.screenReceivingBuffer = this.screenReceiving.addSourceBuffer('video/webm;codecs=vp8');
        }
        if (!this.screenReceivingBuffer.updating) {
            this.screenReceivingBuffer.appendBuffer(await new Response(event.data).arrayBuffer());
        }
    }
};
mediaRecorder.start(16);
The above code is only copied & pasted from the actual project, so please don't expect it to work as-is ;)
Does anyone have an idea why this delay exists?
Any ideas on how to tweak the browser to not add this delay?
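One common mitigation is a sketch under the assumption that most of the lag accumulates in the MediaSource buffer rather than the recorder: keep the video element's currentTime pinned near the live edge of the buffered range. `videoEl` and the 0.5-second threshold are illustrative choices, not values from the question:

```javascript
// Pure helper: how far playback lags behind the newest buffered data.
function liveEdgeLag(bufferedEnd, currentTime) {
    return bufferedEnd - currentTime;
}

// Whenever new data is buffered, jump playback close to the live edge
// if it has fallen too far behind.
function keepNearLiveEdge(videoEl, maxLagSeconds = 0.5) {
    videoEl.addEventListener('progress', () => {
        if (videoEl.buffered.length === 0) return;
        const end = videoEl.buffered.end(videoEl.buffered.length - 1);
        if (liveEdgeLag(end, videoEl.currentTime) > maxLagSeconds) {
            videoEl.currentTime = end - 0.1; // stay just behind the edge
        }
    });
}
```

This hides accumulated delay rather than removing the encoder's inherent buffering, so a small residual latency remains.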

WebRTC - Change device/camera in realtime

I'm having a problem trying to change my camera in real time. It works for the local video, but the remote person cannot see the new camera and still sees the old one. I tried stopping the stream and initializing it again, but it still doesn't work. This is just part of my code.
I have searched everywhere and can't find a solution. Can someone help me out?
function init() {
    getUserMedia(constraints, connect, fail);
}

$(".webcam-devices").on('change', function() {
    var deviceID = this.value;
    constraints.video = {
        optional: [{
            sourceId: deviceID
        }]
    };
    stream.getTracks().forEach(function (track) { track.stop(); });
    init();
});
You need to actually change the track you're sending in the PeerConnection. In Firefox, you can use RTCRtpSender.replaceTrack(newTrack) to change it without renegotiation (this is being added to the spec now). Otherwise, you need to add the new stream/track to the RTCPeerConnection, remove the old one, and then handle the onnegotiationneeded event and renegotiate.
See one of @jib's fiddles, the replaceTrack() fiddle:
function flip() {
    flipped = 1 - flipped;
    return pc1.getSenders()[0].replaceTrack(streams[flipped].getVideoTracks()[0])
        .then(() => log("Flip! (notice change in dimensions & framerate!)"))
        .catch(failed);
}
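Applied to the original camera-switching question, the same idea looks roughly like the sketch below, using the modern deviceId constraint rather than the legacy optional: [{ sourceId }] form; `pc`, `buildVideoConstraints`, and `switchCamera` are illustrative names:

```javascript
// Pure helper: build getUserMedia constraints that pin a specific camera.
function buildVideoConstraints(deviceId) {
    return { video: { deviceId: { exact: deviceId } } };
}

// Open the chosen camera and swap it into the existing connection.
async function switchCamera(pc, deviceId) {
    const stream = await navigator.mediaDevices.getUserMedia(buildVideoConstraints(deviceId));
    const newTrack = stream.getVideoTracks()[0];
    const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
    if (sender) {
        await sender.replaceTrack(newTrack); // remote side sees the new camera, no renegotiation
    }
    return stream;
}
```

Stop the old video track after the switch succeeds, so the previous camera is released; stopping it first can briefly freeze the remote view.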

Using multiple USB cameras with Web RTC

I want to use multiple USB cameras with WebRTC.
For example:
https://apprtc.appspot.com/?r=93443359
This application is a WebRTC sample.
I can connect to another machine, but I have to disconnect once to change the camera.
What I want is:
1. Use two cameras at the same time on the same screen.
2. (If 1 is not possible) switch the camera without disconnecting the current connection.
Does anyone have information about how to use two cameras with WebRTC?
Call getUserMedia twice and change the camera input in between.
You can use constraints to specify which camera to use, and you can have both of them displayed on one page as well. To specify which camera to use, take a look at the following snippet (it only works in Chrome 30+):
getUserMedia({
    video: {
        mandatory: {
            sourceId: webcamId,
            ...
        }
    }
},
successCallback,
failCallback);
You can get the webcamId like this:
MediaStreamTrack.getSources(function(sources){
    var cams = _.filter(sources, function(e){ // only return video elements
        return e.kind === 'video';
    });
    var camIds = _.map(cams, function (e) { // return only the ids
        return e.id;
    });
});
In the snippet above I've used underscore's filter and map methods.
More information on:
WebRTC video sources
constraints
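Note that MediaStreamTrack.getSources() has since been removed from browsers; the modern replacement is navigator.mediaDevices.enumerateDevices(). A sketch of showing two cameras at once, where the element ids "cam0"/"cam1" and the helper names are assumptions:

```javascript
// Pure helper: extract the deviceIds of all video inputs.
function videoInputIds(devices) {
    return devices.filter(d => d.kind === 'videoinput').map(d => d.deviceId);
}

// Open up to two cameras and attach each to its own <video> element.
async function showTwoCameras() {
    const ids = videoInputIds(await navigator.mediaDevices.enumerateDevices());
    for (let i = 0; i < Math.min(2, ids.length); i++) {
        const stream = await navigator.mediaDevices.getUserMedia({
            video: { deviceId: { exact: ids[i] } }
        });
        document.getElementById('cam' + i).srcObject = stream;
    }
}
```

Each stream is independent, so either one can be added to (or replaced on) a peer connection without touching the other.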