I have created a WebRTC Channel for Text Chat only.
When a user wishes to have a Video / Audio chat, they can turn it on by pressing a button.
In WebRTC, we need to call navigator.getUserMedia and add the video stream before we call createOffer.
But there is a problem: as soon as we call getUserMedia, the browser asks the user for permission to access the camera, even though we have not initiated a Video / Audio chat yet.
My question is: is there a way for us to call navigator.getUserMedia at a later stage and add the stream to the connection?
Yes! You need to do a re-negotiation. It is just another round of Offer/Answer; it can be done at any time, and by either side.
Check out play-from-disk-renegotiation, which shows a peer adding and removing video after it has connected.
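On the browser side, the flow looks roughly like this. A minimal sketch, assuming an existing RTCPeerConnection (pc) and whatever signaling channel (signaling) you already use for the text chat; both names are placeholders:

async function startVideo() {
  // The permission prompt only appears now, when the user presses the button.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  // Adding tracks to an established connection triggers negotiationneeded.
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
}

pc.onnegotiationneeded = async () => {
  // Just another round of Offer/Answer over the existing signaling channel.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ description: pc.localDescription });
};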
I created a WebRTC connection with simple-peer. I need to capture the stream from a video element and send it to peers. I use vidEl.captureStream() to get a MediaStream from the video element and send it to the other peer. It works, but only for the audio; the video is a black screen (nothing is displayed).
[Testing] I created another video element on the same origin as the first one, used captureStream(), and tested setting srcObject with the captured MediaStream; that works.
How do I send the MediaStream from video.captureStream() to a new peer through WebRTC and display the video from the remote peer?
https://webrtc.github.io/samples/src/content/capture/video-pc/
This sample doesn't work on Chrome or Edge, but it does work on Firefox. I don't know why.
There is currently an open bug in Chrome which breaks this:
https://bugs.chromium.org/p/chromium/issues/detail?id=1156408
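For reference, the shape of the approach on browsers where it works; the element ids are placeholders, and note that Firefox exposes this method on media elements as mozCaptureStream():

const vidEl = document.getElementById('source-video');
const stream = vidEl.captureStream ? vidEl.captureStream() : vidEl.mozCaptureStream();

const pc = new RTCPeerConnection();
stream.getTracks().forEach(track => pc.addTrack(track, stream));

// Remote side: attach the incoming stream to a video element.
pc.ontrack = (event) => {
  document.getElementById('remote-video').srcObject = event.streams[0];
};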
With reference to this tutorial: WebRTC - Voice Demo.
The audio stream is attached to an <audio /> element for both the local and the remote audio.
I am a bit confused now. Aren't we supposed to connect the local audio to a mic, instead of playing it?
Besides, for the remote audio, is it possible for me to play the audio without attaching it to an <audio /> element?
No, you need to create an audio element to play the local audio as well.
The moment you call navigator.webkitGetUserMedia with the constraints { video: false, audio: true }, you get your mic stream as a parameter of the callback. Then, in order to play it, you need to attach it to an audio element, just as you would for remote audio; the remote audio stream, however, will come from the ontrack event instead.
But honestly, there is no reason for you to play your own local stream, since the sound of your voice will overlap with the sound from the audio player. I guess they did it in the tutorial for demonstration purposes.
By the way, the tutorial you referred to looks quite old, so if things don't work, that might be the reason why.
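In modern code, the unprefixed promise-based API does the same job. A minimal sketch, where pc is an assumed RTCPeerConnection and the element names are placeholders:

const localAudio = document.querySelector('audio#local');
const remoteAudio = document.querySelector('audio#remote');

navigator.mediaDevices.getUserMedia({ video: false, audio: true })
  .then(stream => {
    // Only needed if you actually want to hear yourself, as in the tutorial.
    localAudio.srcObject = stream;
    // Send the mic to the other peer.
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
  });

// The remote stream arrives via ontrack and still needs an element to play through.
pc.ontrack = (event) => {
  remoteAudio.srcObject = event.streams[0];
};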
I have a button for disabling the local video track:
button.onclick = function () {
  var videoTrack = mediaStream.getVideoTracks()[0];
  videoTrack.enabled = !videoTrack.enabled;
};
I want to be able to detect this on the remote side, so that I can swap the view out for a user-friendly image instead of showing a black screen.
Are there any events fired, or any properties of the stream, that the remote user can check on its local stream object to tell that the other user shut off their video?
No, there is no direct way to identify the remote video's muted state.
You need to pass the video-disabled event to the remote user through signalling (over WebSocket), or you can use a data channel to relay the video disabled/enabled events.
You could try to infer the remote video state from the peerConnection stats, but those depend on bandwidth/network fluctuations.
Moreover, the browser will still send some video data (empty/black frames) when we disable the video track with mediaStream.getVideoTracks()[0].enabled = false.
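A sketch of relaying the state over a data channel; the channel label, message format, and placeholderImage element are all illustrative:

const channel = pc.createDataChannel('mute-events');

function setVideoEnabled(enabled) {
  mediaStream.getVideoTracks()[0].enabled = enabled;
  // Tell the remote side explicitly, since black frames keep flowing.
  if (channel.readyState === 'open') {
    channel.send(JSON.stringify({ type: 'video', enabled: enabled }));
  }
}

// Remote side: swap in a placeholder image while video is disabled.
pc.ondatachannel = (event) => {
  event.channel.onmessage = (msg) => {
    const { type, enabled } = JSON.parse(msg.data);
    if (type === 'video') {
      placeholderImage.style.display = enabled ? 'none' : 'block';
    }
  };
};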
Currently, the video mute functionality in WebRTC is achieved by setting the enabled property of a video track to false:

stream.getVideoTracks().forEach(function (track) {
  track.enabled = false;
});

But the above code does not only mute the outgoing video; the local self-view, which is rendered from that same local stream, also gets black frames.
Is there a way to ONLY mute the outgoing video frames, but still be able to show a local self-view?
There's no easy way yet. Once MediaStreamTrack.clone() is supported by browsers, you can clone the video track to get a second instance of it with a separately controllable enabled property, and send one track to your self-view and the other to the peerConnection. This lets you turn off video locally and remotely independently.
Today, the only workarounds I know of are: call getUserMedia twice on Chrome (this should work on https at least, where permissions are persisted so the user won't be prompted twice), which gets you two tracks you can video-mute independently; or, on Firefox, use RTCRtpSender.replaceTrack() with a second "fake" video stream from getUserMedia using the non-standard { video: true, fake: true } constraint, like this.
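Update: MediaStreamTrack.clone() is now implemented across the major browsers, so the clone approach works today. A minimal sketch, assuming pc is your RTCPeerConnection and selfView is your local video element:

const camTrack = mediaStream.getVideoTracks()[0];
const sendTrack = camTrack.clone(); // a separately controllable copy

selfView.srcObject = new MediaStream([camTrack]); // local self-view
pc.addTrack(sendTrack, mediaStream);              // what the peer receives

// Mutes only the outgoing video; the self-view keeps rendering.
sendTrack.enabled = false;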
I have created a WebRTC-based video chat using PeerJS.
The local and remote video elements are created using the following markup:

local:

<video id="[local_peer_id]" autoplay="true" controls="true"></video>

remote:

<video id="[remote_peer_id]" autoplay="true" controls="true"></video>
Now, during the video chat, if the local user mutes the audio, the remote user cannot hear anything, and that works perfectly.
The problem is with the video. If the local user pauses his own video, he can see that the video is paused, but the remote user can still see his video live.
On the other hand, if the remote user pauses his video, the local user can still see his video live.
Can anyone tell me what I need to do to implement a "pause" and "resume" video feature that works in real time for both peers?
You need to know the difference between the HTML tags and the WebRTC streams...
You can have streams running without having them attached to any HTML tag, and the media can still be sent and received by each peer. Each peer can then attach a stream to an audio/video tag, and the tag only acts as a player for a stream that is already running.
So, if you mute the HTML tag, you are only muting the player, not the stream. If you want anything to have an effect on the other peer, you need to act on the stream or on the peer connection.
In particular, to pause and resume audio or video, you need to toggle the media tracks in the media stream:
// create a button to toggle video
var button = document.createElement("button");
button.appendChild(document.createTextNode("Toggle Hold"));
button.onclick = function () {
  var videoTrack = mediaStream.getVideoTracks()[0];
  videoTrack.enabled = !videoTrack.enabled;
};
To pause/resume audio, use getAudioTracks() instead.
Calling mediaStream.stop() will stop the camera, where mediaStream is the stream that you got when calling getUserMedia. However, MediaStream.stop() has been removed from the newer standards, so stop the individual tracks instead. Here is a working example:

mediaStream.getAudioTracks()[0].stop();
mediaStream.getVideoTracks()[0].stop();

Hope this will work with the new standards. It's working fine in my app.