How to add a video track and remove it using simple-peer - webrtc

I am using simple-peer in my video chat web application. If both users are in an audio-only call, how can I add a video track, and how can I disable it again? When I use replaceTrack, I keep getting this error:
error Error: [object RTCErrorEvent]
    at makeError (index.js:17)
    at RTCDataChannel._channel.onerror (index.js:490)
I show a profile picture when a user's video is not enabled. When a user enables video, I want to replace that picture with the video for everyone in the call.

If both users enabled audio only, the stream contains only an audio track. You can add a placeholder video track up front (a black, disabled/ended video track), so enabling the camera later is just a track replacement, which avoids this issue.
For more info, visit https://blog.mozilla.org/webrtc/warm-up-with-replacetrack/
Code from the above link:
let silence = () => {
  let ctx = new AudioContext(), oscillator = ctx.createOscillator();
  let dst = oscillator.connect(ctx.createMediaStreamDestination());
  oscillator.start();
  return Object.assign(dst.stream.getAudioTracks()[0], {enabled: false});
};

let black = ({width = 640, height = 480} = {}) => {
  let canvas = Object.assign(document.createElement("canvas"), {width, height});
  canvas.getContext('2d').fillRect(0, 0, width, height);
  let stream = canvas.captureStream();
  return Object.assign(stream.getVideoTracks()[0], {enabled: false});
};

let blackSilence = (...args) => new MediaStream([black(...args), silence()]);
video.srcObject = blackSilence();
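With that placeholder in the outgoing stream, turning the camera on later becomes a plain track swap. A minimal sketch of toggling video with simple-peer's replaceTrack(oldTrack, newTrack, stream), assuming peer is your simple-peer instance and stream is the MediaStream you originally passed to it (currentVideoTrack is an illustrative variable, not part of either API):

let currentVideoTrack = black(); // placeholder sent while video is off

async function enableVideo(peer, stream) {
  const cam = await navigator.mediaDevices.getUserMedia({ video: true });
  const camTrack = cam.getVideoTracks()[0];
  // replaceTrack swaps the sender's track without a full renegotiation
  peer.replaceTrack(currentVideoTrack, camTrack, stream);
  currentVideoTrack = camTrack;
}

function disableVideo(peer, stream) {
  const placeholder = black();
  peer.replaceTrack(currentVideoTrack, placeholder, stream);
  currentVideoTrack.stop(); // release the camera
  currentVideoTrack = placeholder;
}

On the remote side, you can keep showing the profile picture while only placeholder (black) frames are arriving and switch to the video element once the user signals that their camera is on, for example over the data channel.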

Related

Agora Web SDK Screen share not returning video track

I have integrated a screen-share function into my web conference. Screen-share content shows for users who are in the session before screen sharing starts, but it does not work for users who join the session after screen sharing has started.
Below is the logic for getting video tracks when a new user joins the session.
// Add current users
this.meetingSession.remoteUsers.forEach(async ru => {
  if (ru.uid.search('screen_') > -1) {
    this.getScreenShare(ru);
    return;
  }
  let remoteVideo = await this.meetingSession.subscribe(ru, 'video');
  this.setVideoAudioElement(ru, 'video');
  let remoteAudio = await this.meetingSession.subscribe(ru, 'audio');
  this.setVideoAudioElement(ru, 'audio');
})
async getScreenShare (user) {
  ...
  this.currentScreenTrack = user.videoTrack;
  // Here user.videoTrack is undefined
  console.log(user)
  ...
},
After the new user's session is created, I get the current users' video tracks from the remoteUsers object inside the session object. There is no problem with a regular user's video track, but for the screen-share object, hasVideo is true while videoTrack is undefined.
(Screenshot: Agora Web SDK meetingSession.remoteUsers screen-share object)
Is it by design that videoTrack is not included in meetingSession.remoteUsers for screen share?
I'm wondering what method people use to show screen-share content to users who joined the session while screen sharing was already in progress.
It would be great if someone could give me a suggestion about this.
"agora-rtc-sdk-ng": "^4.6.2",
I figured it out.
I just needed to subscribe to the remote user.
this.meetingSession.remoteUsers.forEach(async ru => {
  if (ru.uid.search('screen_') > -1) {
    // Just needed to subscribe to the user...
    await this.meetingSession.subscribe(ru, 'video');
    this.getScreenShare(ru);
    return;
  }
  let remoteVideo = await this.meetingSession.subscribe(ru, 'video');
  this.setVideoAudioElement(ru, 'video');
  let remoteAudio = await this.meetingSession.subscribe(ru, 'audio');
  this.setVideoAudioElement(ru, 'audio');
})
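This matches how agora-rtc-sdk-ng works: hasVideo only indicates that the remote user is publishing; user.videoTrack is populated once subscribe() resolves. A sketch of what getScreenShare can then do (the 'screen-container' element id is an assumption):

async getScreenShare (user) {
  // Safe now: subscribe(ru, 'video') has already resolved, so videoTrack exists
  this.currentScreenTrack = user.videoTrack;
  // play() renders the remote track into the given container element or id
  this.currentScreenTrack.play('screen-container');
},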

MediaRecorder has a delay of multiple seconds

I'm trying to use a MediaRecorder to record a MediaStream and display it in a video element using a MediaSource. So the setup looks like:
Request a MediaStream from the browser
Add it to the MediaRecorder
Add the recorded blobs to the MediaSource Buffer
The result looks very good but there is one problem: There is a delay in the playback.
When displaying the MediaStream directly there is no delay, so I ruled out the first bullet point as the problem.
Nevertheless, it seems like either the MediaRecorder or the MediaSource is adding a delay of about 3 seconds to the stream.
this.screenRecording = await navigator.mediaDevices.getDisplayMedia({ video: { frameRate: 60, resizeMode: 'none' } });
const mediaRecorder = new MediaRecorder(this.screenRecording);
mediaRecorder.ondataavailable = async (event: any) => {
  if (this.screenReceiving.readyState === 'open') {
    if (this.screenReceivingBuffer == null) {
      this.screenReceivingBuffer = this.screenReceiving.addSourceBuffer('video/webm;codecs=vp8');
    }
    if (!this.screenReceivingBuffer.updating) {
      this.screenReceivingBuffer.appendBuffer(await new Response(event.data).arrayBuffer());
    }
  }
};
mediaRecorder.start(16);
The above code is only copy & paste from the actual project so please don't expect it to work by copy & paste ;)
Does anyone have an idea why this delay exists?
Any ideas on how to tweak the browser to not add this delay?

setSinkId changes multiple audio outputs

Here is the problem,
First, I enumerate all the available devices into select elements:
navigator.mediaDevices.enumerateDevices()
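For reference, a minimal sketch of filling one of those select elements with the audio outputs (the outputSelect element is an assumption):

async function listAudioOutputs(outputSelect) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  devices
    .filter(d => d.kind === 'audiooutput')
    .forEach(d => outputSelect.add(new Option(d.label || d.deviceId, d.deviceId)));
}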
When I change one output, it sounds in the device that I choose.
HTMLMediaElement.setSinkId(deviceId)
Then, if I play another audio element and change its output device (setSinkId), the first element also switches to the new deviceId, so both sounds play on the same device.
Do I need the latest adapter.js version to handle this properly?
********* EDITED **********
Following the above comment, I tried Web Audio, but without success. With getUserMedia everything is fine:
navigator.getUserMedia({ audio: true, video: false },
  function (mediaStream) {
    // Create an audio context for the audio
    var ac = new (window.AudioContext || window.webkitAudioContext)();
    // Create a clone of the stream; otherwise the id of all the streams is 'default'
    //var streamClone = stream.clone();
    var ss = ac.createMediaStreamSource(mediaStream);
    // Create a destination
    var sd = ac.createMediaStreamDestination();
    ss.connect(sd);
    element.srcObject = sd.stream;
    // Play the sound
    element.play();
    element.setSinkId(deviceId).then(function () {
      console.log('Set deviceId(' + deviceId + ') in the selected audio element');
    });
  },
  function (error) {
    console.log(error);
  }
);
But using my remote stream, I cannot get any sound:
var ac = new (window.AudioContext || window.webkitAudioContext)();
// Create a clone of the stream; otherwise the id of all the streams is 'default'
var streamClone = stream.clone();
var ss = ac.createMediaStreamSource(stream);
// Create a destination
var sd = ac.createMediaStreamDestination();
ss.connect(sd);
// Element is my HTMLMediaElement
element.srcObject = sd.stream;
// Play the sound
element.play();
element.setSinkId(deviceId).then(function () {
  console.log('Set deviceId(' + deviceId + ') in the selected audio element');
});
This is most likely caused by how Chrome renders audio. See here for a description, which also suggests using Web Audio to work around the problem.
adapter.js cannot fix this.
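A minimal sketch of that Web Audio workaround, giving each stream its own graph and its own audio element so setSinkId applies independently (function and variable names are illustrative):

function playOnDevice(stream, deviceId) {
  const ac = new AudioContext();
  const source = ac.createMediaStreamSource(stream);
  const dest = ac.createMediaStreamDestination();
  source.connect(dest);

  // A dedicated element per stream keeps sink choices independent
  const el = new Audio();
  el.srcObject = dest.stream;
  return el.setSinkId(deviceId).then(() => el.play());
}

Note that older Chrome versions would not feed a remote WebRTC stream into Web Audio unless the stream was also attached to a (possibly muted) media element, which may explain why the remote stream above stays silent.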

WebRTC mix local and remote audio streams and record

So far I've only found a way to record either the local or the remote stream using the MediaRecorder API, but is it possible to mix and record both streams and get a blob?
Please note it's audio streams only, and I don't want to mix/record on the server side.
I have an RTCPeerConnection as pc.
var local_stream = pc.getLocalStreams()[0];
var remote_stream = pc.getRemoteStreams()[0];
var audioChunks = [];
var rec = new MediaRecorder(local_stream);
rec.ondataavailable = e => {
  audioChunks.push(e.data);
  if (rec.state == "inactive") {
    // Play audio using new blob
  }
};
rec.start();
I even tried adding multiple tracks via the MediaStream API, but it still records only the first track's audio. Any help or insight would be appreciated!
The Web Audio API can do the mixing for you. Consider this code if you want to record all the audio tracks in the array audioTracks:
const ac = new AudioContext();
// WebAudio MediaStream sources only use the first track.
const sources = audioTracks.map(t => ac.createMediaStreamSource(new MediaStream([t])));
// The destination will output one track of mixed audio.
const dest = ac.createMediaStreamDestination();
// Mixing
sources.forEach(s => s.connect(dest));
// Record 10s of mixed audio as an example
const recorder = new MediaRecorder(dest.stream);
recorder.start();
recorder.ondataavailable = e => console.log("Got data", e.data);
recorder.onstop = () => console.log("stopped");
setTimeout(() => recorder.stop(), 10000);
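To apply this to the asker's setup, audioTracks can be collected from both sides of the peer connection. A sketch using the (now-deprecated) stream-based API from the question, with the receiver-based call as the modern alternative:

// From the question's (deprecated) stream-based API:
const audioTracks = [
  ...pc.getLocalStreams()[0].getAudioTracks(),
  ...pc.getRemoteStreams()[0].getAudioTracks(),
];

// Modern alternative for the remote side:
// const remoteAudioTracks = pc.getReceivers()
//   .map(r => r.track)
//   .filter(t => t && t.kind === 'audio');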

WebRTC: Switch from Video Sharing to Screen sharing during call

Initially, I had two different webpages:
One was to do Video Call and
Other was to do Screen Sharing
Now, I want to do both of them in one page.
Here is the scenario:
During Live call, a user wants to stop sharing his/her video and start sharing screen.
Afterwards, again he/she wishes to turn off screen sharing and start video sharing.
For clarity, here are some questions I want to ask:
On Caller Side:
1) How can I change my local stream from video to screen and vice versa?
2) Once it is done, how can I assign it to the local video element?
On Callee Side:
1) How do I handle if the current stream I am receiving is changed from video to screen?
2) How do I handle if the stream I am receiving has stopped? I mean, now I can receive neither video nor screen (just audio)
Kindly help me in this regard. If there is any open-source code available, kindly share the links too.
Just for your reference, I was trying to handle it using the following code. (I know this is naive and won't work.)
function handleUserMedia(newStream) {
  var localvideo = document.getElementById("localvideo");
  localvideo.src = URL.createObjectURL(newStream);
  localStream = newStream;
  sendMessage('got user media');
  if (isInitiator) {
    maybeStart();
  }
}

function handleUserMediaError(error) {
  console.log(error);
}

var video_constraints = {video: true, audio: true};
var screen_constraints = {video: { mandatory: { chromeMediaSource: 'screen' } }};
getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
//getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
$scope.btnLabel = 'Share Screen';
$scope.toggleSelected = function () {
  $scope.selected = !$scope.selected;
  if ($scope.selected) {
    getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
    $scope.btnLabel = 'Share Video';
  } else {
    getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
    $scope.btnLabel = 'Share Screen';
  }
};
Check this demo:
https://www.webrtc-experiment.com/demos/switch-streams.html
and the relevant tutorial:
https://www.webrtc-experiment.com/docs/how-to-switch-streams.html
Simply renegotiate the peer connection on both users' sides!
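Alternatively, in browsers that support RTCRtpSender.replaceTrack, the same switch can often be done without a full renegotiation. A minimal sketch, assuming pc is the RTCPeerConnection and localVideo is the local preview element (both names are assumptions):

async function switchToScreen(pc, localVideo) {
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const screenTrack = screenStream.getVideoTracks()[0];

  // Swap the track on the sender that currently carries video
  const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  await sender.replaceTrack(screenTrack);
  localVideo.srcObject = screenStream;

  // When the user stops sharing (e.g. via the browser UI), switch back to the camera
  screenTrack.onended = async () => {
    const camStream = await navigator.mediaDevices.getUserMedia({ video: true });
    await sender.replaceTrack(camStream.getVideoTracks()[0]);
    localVideo.srcObject = camStream;
  };
}

On the callee side, no special handling is needed for the swap itself, since the track is replaced in place; only a full stop of the stream (the "just audio" case) requires renegotiation as the answer describes.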