How to add screen sharing using PeerJS? - webrtc

Currently, I am working on a WebRTC project where you can make and receive calls. I also want to add screen-share functionality to it.
Can anyone provide a link to good documentation?
I am currently following the official PeerJS documentation.
I was able to get audio-video calling working, but I am stuck on the screen-sharing part.

You need to get a stream, just like you do with getUserMedia, and then hand that stream to PeerJS.
It should look something like this:
var displayMediaOptions = {
  video: {
    cursor: "always"
  },
  audio: false
};
navigator.mediaDevices.getDisplayMedia(displayMediaOptions)
  .then(function (stream) {
    // pass this stream to your peer, e.g. myPeer.call(remotePeerId, stream)
  })
  .catch(function (err) {
    console.error("getDisplayMedia failed:", err);
  });

I'm working with and learning about WebRTC. From what I've read, I think the solution here hinges on getDisplayMedia. That's also what this React, Node and PeerJS tutorial suggests (though I haven't tried it myself yet).

let screenShare = document.getElementById('shareScreen');
screenShare.addEventListener('click', async () => {
  const captureStream = await navigator.mediaDevices.getDisplayMedia({
    audio: true,
    video: { mediaSource: "screen" }
  });
  // Instead of adminId, pass the peerId of the peer that will receive captureStream
  myPeer.call(adminId, captureStream);
});
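On the receiving side, the other peer answers the call and attaches the incoming stream to a video element. A minimal sketch, assuming an already-connected Peer instance (`peer`) and a `<video>` element, neither of which is shown in the original answer:

```javascript
// Sketch: answering an incoming PeerJS call that carries a screen-share stream.
// `peer` and `videoElement` are assumptions for illustration.
function handleIncomingScreenShare(peer, videoElement) {
  peer.on('call', (call) => {
    call.answer(); // answer without sending a stream back
    call.on('stream', (remoteStream) => {
      videoElement.srcObject = remoteStream; // render the shared screen
    });
  });
}
```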

Related

Issue with WebRTC/getUserMedia in iOS 14 Safari and phone sleep/unlock

I seem to have noticed a regression with getUserMedia in iOS 14 Safari. Here are the steps to reproduce:
1. Go to https://webrtc.github.io/samples/src/content/getusermedia/gum/ on iOS 14 Safari
2. Click "Open camera" and accept camera permissions; you should see local camera video.
3. Click the power button and lock the phone; let the phone go to sleep.
4. Unlock/wake the phone; the local camera video is gone.
This does not happen on devices running iOS 13.
My questions are:
1. Can anyone else confirm this on their devices? I have only tested on an iPhone 11 so far.
2. Has anyone found a solution yet?
Yes, I am having a similar strange issue with iOS 14.2 and getUserMedia. I can only get
navigator.mediaDevices.getUserMedia({ video: true })
to work. If I change it to:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
it will fail.
It's not an issue with my code, as I tested my project on Safari and Chrome on macOS and Firefox on Linux.
As a temporary fix, so I could move on with my life for the moment, I did this:
const constraints = navigator.userAgent.includes("iPhone")
  ? { video: true }
  : {
      audio: true,
      video: {
        width: { ideal: 640 },
        height: { ideal: 400 }
      }
    };
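The same check can be factored into a small helper, sketched below. Note that userAgent sniffing is brittle (for instance, iPad Safari has reported itself as a Mac since iPadOS 13), so treat this as a stopgap rather than a robust platform check:

```javascript
// Sketch: choose getUserMedia constraints based on the user agent string.
// The iPhone branch mirrors the workaround above.
function constraintsFor(userAgent) {
  if (userAgent.includes("iPhone")) {
    return { video: true }; // audio+video together fails on iOS 14
  }
  return {
    audio: true,
    video: {
      width: { ideal: 640 },
      height: { ideal: 400 }
    }
  };
}
```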
Yes, same here!
I checked this behavior in BrowserStack on iOS:
12.x: ✓
13.x: ✓
14.x: ✗
Try this:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(stream => {
    const videoTracks = stream.getVideoTracks();
    console.log(videoTracks[0].enabled);
    document.querySelector('video').srcObject = stream;
  });
// Output
true <-- ?
Then, if you try to get the camera again, replacing the video track on the previous MediaStream works.
Sometimes using video constraints with facingMode: 'user' also works; why, I don't know.
I still can't find a consistent solution.
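The track-replacement workaround mentioned above could look roughly like this. A sketch: the MediaStream methods used are standard, but the surrounding retry flow is an assumption, not something the answer spells out:

```javascript
// Sketch: swap the (dead) video track on an existing MediaStream for a fresh one,
// keeping the same MediaStream object alive for any consumers holding it.
function replaceVideoTrack(stream, newTrack) {
  for (const track of stream.getVideoTracks()) {
    track.stop();            // release the old camera track
    stream.removeTrack(track);
  }
  stream.addTrack(newTrack);
  return stream;
}
```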
Having the same issue on an iPad Pro 2nd generation with iOS 14.7.1 and an iPhone 7 on iOS 14.6.x. The only solution I found that seems to work consistently is to call getUserMedia separately for the audio and video constraints. For example:
async function getMedia(constraints) {
  let videoStream = null;
  let audioStream = null;
  try {
    videoStream = await navigator.mediaDevices.getUserMedia({ video: true });
    audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
    /* use the streams */
  } catch (err) {
    /* handle the error */
  }
}
You can replace {video: true} or {audio: true} with your desired constraints. Then you can either work with the separate MediaStream objects or construct your own MediaStream object from the audio and video tracks of the two streams.
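Constructing a single stream from the two separately-acquired ones could be sketched like this (assumes the browser's MediaStream constructor, which accepts an array of tracks):

```javascript
// Sketch: combine the video track(s) of one stream with the audio track(s)
// of another into a single MediaStream.
function mergeStreams(videoStream, audioStream) {
  return new MediaStream([
    ...videoStream.getVideoTracks(),
    ...audioStream.getAudioTracks()
  ]);
}
```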

TokBox/Vonage allowing audio capture support when screensharing

The Screen Capture API, specifically getDisplayMedia(), currently supports screensharing and sharing the audio playing on your device (e.g. YouTube) at the same time. Docs. Is this currently supported by the TokBox/Vonage Video API? Has someone been able to achieve this?
I guess there could be some workaround using getDisplayMedia and passing the audio source when publishing, e.g. OT.initPublisher({ audioSource: newDisplayMediaAudioTrack }), but that doesn't seem like a clean solution.
Thanks,
Manik here from the Vonage Client SDK team.
Although this feature does not exist in the Video Client SDK just yet, you can accomplish sharing audio along with the screen by creating a publisher like so:
let publisher;
try {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  const audioTrack = stream.getAudioTracks()[0];
  const videoTrack = stream.getVideoTracks()[0];
  publisher = OT.initPublisher({ audioSource: audioTrack, videoSource: videoTrack });
} catch (e) {
  // handle error
}
If you share a tab but the tab doesn't play audio (a static PDF or PPT, for example), the screen flickers. To avoid this, specify a frameRate constraint for the video stream. See https://gist.github.com/rktalusani/ca854ca8621c20488bea6e62ad04e341
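Such a frameRate constraint might look like the following sketch; the specific values are illustrative assumptions, not taken from the linked gist:

```javascript
// Sketch: cap the capture frame rate so a static tab doesn't flicker.
const displayMediaConstraints = {
  audio: true,
  video: {
    frameRate: { ideal: 15, max: 30 } // illustrative values
  }
};
// Usage (browser only):
// const stream = await navigator.mediaDevices.getDisplayMedia(displayMediaConstraints);
```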

React native custom internet phone call

I have a 3-part question, from most to least important:
Does anyone know if there is a package for making phone calls through the internet, the way WhatsApp and Facebook do?
Would it even be possible to do it without a phone number? For example, only knowing someone's device ID.
And can you make your "ring page" custom, i.e. add functionality while calling?
Thank you in advance!
Yes, this is possible. There are plenty of ways to attack this, but I would recommend using a React Native wrapper for Twilio (https://github.com/hoxfon/react-native-twilio-programmable-voice).
import TwilioVoice from 'react-native-twilio-programmable-voice'

// ...

// Initialize the Programmable Voice SDK, passing an access token obtained from the server.
// Listen to the deviceReady and deviceNotReady events to see whether initialization succeeded.
async function initTelephony() {
  try {
    const accessToken = await getAccessTokenFromServer()
    const success = await TwilioVoice.initWithToken(accessToken)
  } catch (err) {
    console.error(err)
  }
}

// iOS only
function initTelephonyWithUrl(url) {
  TwilioVoice.initWithTokenUrl(url)
  try {
    TwilioVoice.configureCallKit({
      appName: 'TwilioVoiceExample', // Required param
      imageName: 'my_image_name_in_bundle', // OPTIONAL
      ringtoneSound: 'my_ringtone_sound_filename_in_bundle' // OPTIONAL
    })
  } catch (err) {
    console.error(err)
  }
}
For that approach I believe you have to have a phone number, but you can build out the UI however you like.
If you are not into the Twilio approach, you can use pure JS libraries to do the trick, such as SIP.js.
There are also tutorials on YouTube that can lead you through the process, like this one.
I recommend Voximplant (https://voximplant.com/docs/references/articles/quickstart); it's easy to use and has clear documentation.

How to select audioOutput with OpenTok

I am building a simple WebRTC app with OpenTok.
I need to be able to select camera, audio input and audio output.
Currently that doesn't seem easily possible.
See opentok-hardware-setup
https://github.com/opentok/opentok-hardware-setup.js/issues/18
I am loading OpenTok in my index.html file, along with opentok-hardware-setup.js.
All looks great, and I can select the microphone and camera, but not the speaker output (i.e. audioOutput).
<script src="https://static.opentok.com/v2/js/opentok.min.js"></script>
From the console, I tried
OT.getDevices((err, devices) => { console.debug(devices); });
and observed that you can't get the audioOutput devices:
(4) [{…}, {…}, {…}, {…}]
  0: {deviceId: "default", label: "Default - Built-in Microphone", kind: "audioInput"}
  1: {deviceId: "b183634b059298f3692aa7e5871e6a463127701e21e320742c48bda99acdf925", label: "Built-in Microphone", kind: "audioInput"}
  2: {deviceId: "4b441035a4db3c858c65c30eabe043ae1967407b3cc934ccfb332f0f6e33a029", label: "Built-in Output", kind: "audioInput"}
  3: {deviceId: "05415e116b36584f848faeef039cd06e5290dde2e55db6895c19c8be3b880d91", label: "FaceTime HD Camera", kind: "videoInput"}
  length: 4
  __proto__: Array(0)
whereas you can get them using navigator.mediaDevices.enumerateDevices()
Any pointers?
Disclosure: I'm an employee at TokBox :). OpenTok does not currently provide a way to specify the audio output device. This is still an experimental API that only works in Chrome. When the API is standardised and has wider browser support, we will make it easier.
In the meantime, it's pretty easy to do this using native WebRTC. There is a good sample for this at https://webrtc.github.io/samples/src/content/devices/multi/ the source code can be found at https://github.com/webrtc/samples/blob/gh-pages/src/content/devices/multi/js/main.js
In summary you use the enumerateDevices method as you found. Then you use the setSinkId() method on the video element https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/setSinkId
You can get access to the videoElement by listening to the videoElementCreated event on the subscriber like so:
subscriber.on('videoElementCreated', (event) => {
  if (typeof event.element.sinkId !== 'undefined') {
    event.element.setSinkId(deviceId)
      .then(() => {
        console.log('Successfully set the audio output device');
      })
      .catch((err) => {
        console.error('Failed to set the audio output device ', err);
      });
  } else {
    console.warn('Device does not support setting the audio output');
  }
});
So,
The answer given by Adam Ullman is no longer valid, since a separate audio element is now created alongside the video element, which prevents us from using the setSinkId method of the video element.
I found a solution: locate the audio element next to the video element and use its own setSinkId.
Code:
const subscriber_videoElementCreated = async (event) => {
  const videoElem = event.element;
  const audioElem = Array.from(videoElem.parentNode.childNodes)
    .find(child => child.tagName === 'AUDIO');
  if (audioElem && typeof audioElem.sinkId !== 'undefined') {
    try {
      await audioElem.setSinkId(deviceId);
    } catch (err) {
      console.log('Could not update speaker ', err);
    }
  }
};
OpenTok (now Vonage) provides an API for doing exactly this as of version 2.22.
It is not supported in all browsers (notably Safari), but for browsers that support setSinkId there is now a uniform API that wraps the functionality handily:
https://tokbox.com/developer/guides/audio-video/js/#setAudioOutput
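Per the linked guide, usage is reportedly a single SDK call; the sketch below wraps it in a helper for illustration (hedged: the exact call shape isn't shown in the original answer, and `OT` is the global OpenTok client SDK object):

```javascript
// Sketch: route audio output to a chosen device with the 2.22+ API.
// `ot` stands in for the OpenTok `OT` global; `deviceId` is an assumption.
async function setSpeaker(ot, deviceId) {
  // Expected to reject on browsers without setSinkId support (e.g. Safari).
  await ot.setAudioOutput(deviceId);
}
```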

React native webview HTML5 video sound not working in IOS silent mode

I am using a WebView to load a webpage with an embedded video player. It works fine when the app is in ringer mode, but there is no sound when the app is in silent mode. I am not very familiar with iOS. Any help would be appreciated.
<WebView
  startInLoadingState={true}
  mediaPlaybackRequiresUserAction={false}
  javaScriptEnabled={true}
  source={{ uri: 'http://ab24.live/player' }} />
Since I can't comment yet, I'm just adding that this has been fixed (see the last comment of the GitHub issue).
So, instead of calling that hacky workaround function, you now just need to add useWebKit={true} to the WebView component.
The fix was implemented last month and should work with Expo v32+ versions.
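Applied to the snippet from the question, that would look something like this (a sketch; the other props are the asker's own):

```jsx
<WebView
  useWebKit={true}
  startInLoadingState={true}
  mediaPlaybackRequiresUserAction={false}
  javaScriptEnabled={true}
  source={{ uri: 'http://ab24.live/player' }} />
```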
Assuming you're using Expo and have come up against this bug, you can get around the problem with the following:
import { Audio } from "expo";

...

async playInSilentMode() {
  // To get around the fact that audio in a `WebView` is muted in silent mode.
  // See: https://github.com/expo/expo/issues/211
  //
  // Based on a hack to get sound working on iOS in silent mode (ringer muted / on vibrate):
  // https://github.com/expo/expo/issues/211#issuecomment-454319601
  await Audio.setAudioModeAsync({
    playsInSilentModeIOS: true,
    allowsRecordingIOS: false,
    interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_MIX_WITH_OTHERS,
    shouldDuckAndroid: false,
    interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    playThroughEarpieceAndroid: true
  });
  await Audio.setIsEnabledAsync(true);

  // Loop a muted clip of silence to keep the audio session active
  const sound = new Audio.Sound();
  await sound.loadAsync(
    require("./500-milliseconds-of-silence.mp3") // from https://github.com/anars/blank-audio
  );
  await sound.playAsync();
  await sound.setIsMutedAsync(true);
  await sound.setIsLoopingAsync(true);
}
Then call it from componentDidMount():
async componentDidMount() {
  await this.playInSilentMode();
}