Issue with WebRTC/getUserMedia in iOS 14 Safari and phone sleep/unlock

I seem to have noticed a regression with getUserMedia in iOS 14 Safari. Here are the steps to reproduce:
1. Go to https://webrtc.github.io/samples/src/content/getusermedia/gum/ in iOS 14 Safari.
2. Click "Open camera" and accept the camera permission; you should see local camera video.
3. Press the power button to lock the phone and let it go to sleep.
4. Unlock/wake the phone; the local camera video is gone.
This does not happen on devices running iOS 13.
My questions are:
Can anyone else confirm this on their devices? I have only tested on iPhone 11 so far.
Has anyone found a solution yet?
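For anyone else reproducing this, here is a minimal sketch (nothing official, just one way to observe the failure) that logs the video track's state when the page becomes visible again after unlock:
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
document.querySelector('video').srcObject = stream;
const [videoTrack] = stream.getVideoTracks();

// iOS mutes the track while the phone is locked; on iOS 14 it often stays muted.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    console.log('after unlock: muted =', videoTrack.muted,
                'readyState =', videoTrack.readyState);
  }
});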

Yes, I am having a similar strange issue with iOS 14.2 and getUserMedia. I can only get
navigator.mediaDevices.getUserMedia({ video: true })
to work. If I change it to:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
it fails.
It's not an issue with my code, as I tested my project with Safari on macOS, Chrome on macOS, and Firefox on Linux.
As a temporary fix, so I could move on with my life for the moment, I did this:
const constraints = navigator.userAgent.includes("iPhone")
  ? { video: true }
  : {
      audio: true,
      video: {
        width: { ideal: 640 },
        height: { ideal: 400 }
      }
    };
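For completeness, a minimal sketch of how those branched constraints get consumed (the video element selector is illustrative):
navigator.mediaDevices.getUserMedia(constraints)
  .then(stream => {
    document.querySelector('video').srcObject = stream;
  })
  .catch(err => console.error('getUserMedia failed:', err));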

Yes, same here!
I checked this behavior on BrowserStack with iOS:
12.x: ✓
13.x: ✓
14.x: ✗
Try this:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(stream => {
    const videoTracks = stream.getVideoTracks();
    console.log(videoTracks[0].enabled);
    document.querySelector('video').srcObject = stream;
  });
// Output
true <-- the track claims to be enabled, yet no video renders?
However, if you then request the camera again and replace the video track on the previous MediaStream, it works (see the sketch below).
Sometimes using video constraints with facingMode: 'user' also works; why, I don't know.
I still can't find a consistent solution.
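A minimal sketch of that replace-the-track workaround (assuming stream is the original MediaStream whose video went dark):
const freshStream = await navigator.mediaDevices.getUserMedia({ video: true });
const [deadTrack] = stream.getVideoTracks();
const [freshTrack] = freshStream.getVideoTracks();

// Swap the dead track for the freshly captured one on the same stream.
deadTrack.stop();
stream.removeTrack(deadTrack);
stream.addTrack(freshTrack);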

Having the same issue on an iPad Pro 2nd generation with iOS 14.7.1 and an iPhone 7 with iOS 14.6.x. The only solution I found that seems to work consistently is to call getUserMedia twice, with separate audio and video constraints. As an example:
async function getMedia() {
  let videoStream = null;
  let audioStream = null;
  try {
    // Request video and audio in two separate getUserMedia calls.
    videoStream = await navigator.mediaDevices.getUserMedia({ video: true });
    audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
    /* use the streams */
  } catch (err) {
    /* handle the error */
  }
}
You can replace { video: true } or { audio: true } with your desired constraints. Then you can either work with the two separate MediaStream objects or construct your own MediaStream from the audio and video tracks of those streams.
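For the second option, a minimal sketch (continuing from the getMedia example above) of merging both sets of tracks into one MediaStream:
const combined = new MediaStream([
  ...videoStream.getVideoTracks(),
  ...audioStream.getAudioTracks()
]);
// The combined stream can now be attached to a video element or a peer connection.
document.querySelector('video').srcObject = combined;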

Related

react-native-share wouldn't open anything on a real iOS device, but the app doesn't freeze

I'm trying to implement a share-a-PDF feature using react-native-share. When I click the 'share' button, there should be a pop-up with different sharing options, but nothing pops up when I connect my real iOS device and test it. The app doesn't freeze or anything. The weird thing is that everything works as expected when I test it on the iOS simulator.
Here's my code snippet
const sharePdf = async () => {
  const shareOptions = {
    url: `file://${reportUri}`,
    type: 'application/pdf',
    failOnCancel: true,
  };
  await Share.open(shareOptions);
};
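Not a confirmed fix, but one debugging note about this snippet: with failOnCancel: true, Share.open rejects its promise when the user cancels, so wrapping the call in try/catch makes failures on a real device visible:
const sharePdf = async () => {
  try {
    await Share.open({
      url: `file://${reportUri}`,
      type: 'application/pdf',
      failOnCancel: true, // makes Share.open reject when the sheet is dismissed
    });
  } catch (err) {
    // Logs both genuine failures and user cancellation.
    console.warn('Share failed or was cancelled:', err);
  }
};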

how to add screen share function using PeerJS?

Currently, I am working on a WebRTC project where you can place and receive calls. I also want to add screen-share functionality to it.
Can anyone provide a link to good documentation?
I am currently following the official PeerJS documentation.
I was able to do audio-video calling but am stuck on the screen-sharing part.
Help me!
You need to get a stream, just like you do with getUserMedia, and then hand that stream to PeerJS.
It should be something like this:
const displayMediaOptions = {
  video: {
    cursor: "always"
  },
  audio: false
};
navigator.mediaDevices.getDisplayMedia(displayMediaOptions)
  .then(function (stream) {
    // add this stream to your peer
  });
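A minimal sketch of the "add this stream to your peer" step, assuming myPeer is an existing Peer instance and remotePeerId is the ID you want to call:
navigator.mediaDevices.getDisplayMedia(displayMediaOptions)
  .then(function (stream) {
    // Outgoing: call the remote peer with the screen stream.
    const call = myPeer.call(remotePeerId, stream);
    // Incoming: render whatever the remote side sends back.
    call.on('stream', function (remoteStream) {
      document.querySelector('video').srcObject = remoteStream;
    });
  });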
I'm working with and learning about WebRTC. From what I've read, I think the solution here probably hinges on getDisplayMedia. That's also what this React, Node and PeerJS tutorial suggests (though I haven't tried it myself yet).
let screenShare = document.getElementById('shareScreen');
screenShare.addEventListener('click', async () => {
  const captureStream = await navigator.mediaDevices.getDisplayMedia({
    audio: true,
    video: { mediaSource: "screen" }
  });
  // Instead of adminId, pass the peerId of whoever will be receiving captureStream in the call
  myPeer.call(adminId, captureStream);
});
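If a camera call is already in progress, another approach (a sketch, relying on PeerJS exposing the underlying RTCPeerConnection on the MediaConnection as call.peerConnection) is to swap the outgoing video track instead of placing a second call:
const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
const [screenTrack] = screenStream.getVideoTracks();

// Find the sender currently transmitting video and replace its track.
const sender = call.peerConnection
  .getSenders()
  .find(s => s.track && s.track.kind === 'video');
await sender.replaceTrack(screenTrack);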

Share screen in Safari 13 with Agora SDK

I am using agora-rtc-sdk 3.3.0. When I click "share screen" and grant permission (the problem is not with getDisplayMedia), Safari reloads the page. The line that causes it is client.publish(stream); if I comment out this line, the stream is created successfully (according to the console), and so is the client, but I can't publish my screen-sharing stream. This bug appears only in Safari 13; other browsers work fine. Adding this part of the code.
const beginShare = async () => {
  agoraShareStream = AgoraRTC.createStream({
    streamID: SHARE_ID,
    audio: false,
    video: false,
    screen: true
  });
  await agoraShareStream.init(() => {
    console.log('init local stream success');
  });
  setShareStream(agoraShareStream);
  enqueueSnackbar("Started screen sharing", { variant: "info" });
};

const createShareClient = async () => {
  const agoraShareClient = AgoraRTC.createClient({
    mode: "rtc",
    codec: "vp8"
  });
  await agoraShareClient.init(appId);
  await agoraShareClient.join(getAgoraToken({ uid: SHARE_ID, channel }), channel, SHARE_ID);
  agoraShareClient.publish(agoraShareStream);
  setShareClient(agoraShareClient);
};
Unfortunately, Agora doesn't support screen sharing on Safari as noted in the documentation here: https://docs.agora.io/en/Video/screensharing_web?platform=Web#introduction
The Agora RTC Web SDK supports screen sharing in the following desktop browsers:
Chrome 58 or later
Firefox 56 or later
Edge 80 or later on Windows 10+
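Given that limitation, one pragmatic option (my own sketch, not from the Agora docs; the user-agent test is a common heuristic and the button id is hypothetical) is to hide the share button on Safari:
// Safari's UA contains "Safari" but not "Chrome"; this heuristic is imperfect.
const isSafari = /^((?!chrome|android).)*safari/i.test(navigator.userAgent);
if (isSafari) {
  // Screen sharing is unsupported here, so don't offer the button.
  document.getElementById('share-screen-btn').style.display = 'none';
}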

React Native WebView HTML5 video sound not working in iOS silent mode

I am using WebView to load a webpage that has an embedded video player. It works fine when the app is in ringer mode, but it does not have any sound when the app is in silent mode. I am not well versed in iOS. Any help would be appreciated.
<WebView
  startInLoadingState={true}
  mediaPlaybackRequiresUserAction={false}
  javaScriptEnabled={true}
  source={{ uri: 'http://ab24.live/player' }}
/>
Since I can't comment yet, just adding that this has been fixed (see the last comment of the GitHub issue).
So instead of calling the hacky workaround function below, you now just need to add useWebKit={true} to the WebView component.
The fix was implemented last month and should work with Expo SDK 32+ versions.
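Applied to the WebView from the question above, that looks like this (unchanged apart from the added prop):
<WebView
  useWebKit={true}
  startInLoadingState={true}
  mediaPlaybackRequiresUserAction={false}
  javaScriptEnabled={true}
  source={{ uri: 'http://ab24.live/player' }}
/>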
Assuming that you're using Expo and you have come up against this bug, you can get around the problem with the following:
import { Audio } from "expo";
...
async playInSilentMode() {
  // To get around the fact that audio in a `WebView` will be muted in silent mode
  // See: https://github.com/expo/expo/issues/211
  //
  // Based off a crazy hack to get sound working on iOS in silent mode (ringer muted/on vibrate)
  // https://github.com/expo/expo/issues/211#issuecomment-454319601
  await Audio.setAudioModeAsync({
    playsInSilentModeIOS: true,
    allowsRecordingIOS: false,
    interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_MIX_WITH_OTHERS,
    shouldDuckAndroid: false,
    interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    playThroughEarpieceAndroid: true
  });
  await Audio.setIsEnabledAsync(true);

  // Play a muted, looping silent clip to keep the audio session active.
  const sound = new Audio.Sound();
  await sound.loadAsync(
    require("./500-milliseconds-of-silence.mp3") // from https://github.com/anars/blank-audio
  );
  await sound.playAsync();
  sound.setIsMutedAsync(true);
  sound.setIsLoopingAsync(true);
}
Then call it from componentDidMount():
async componentDidMount() {
  await this.playInSilentMode();
}

Is it possible to broadcast audio with screen sharing in WebRTC

Is it possible to broadcast audio along with screen sharing in WebRTC?
Simply calling getUserMedia with audio: true fails with a permission-denied error.
Is there any workaround that could be used to broadcast audio as well?
Will audio ever be implemented alongside screen sharing?
Thanks.
Refer to this demo: Share screen and audio/video from a single peer connection!
Multiple streams are captured and attached to a single peer connection. AFAIK, audio along with chromeMediaSource: 'screen' is "still" not permitted.
Updated April 21, 2016
Now you can capture audio+screen using a single getUserMedia request on both Firefox and Chrome.
However, Chrome merely supports audio+tab, i.e. you can NOT capture the full screen along with audio.
Audio+tab means any Chrome tab along with microphone audio.
Updated Jan 09, 2017
You can capture both audio and screen streams by making two parallel (unique) getUserMedia requests.
Now you can use the addTrack method to add the audio track into the screen stream:
// captureUsingGetUserMedia() is a placeholder for your own getUserMedia wrapper
var audioStream = captureUsingGetUserMedia();
var screenStream = captureUsingGetUserMedia();
var audioTrack = audioStream.getAudioTracks()[0];

// add the audio track into the screen stream
screenStream.addTrack(audioTrack);
Now screenStream has both audio and video tracks.
nativeRTCPeerConnection.addStream(screenStream);
nativeRTCPeerConnection.createOffer(success, failure, options);
As of May 2020
To share the audio track of a screen share, you can use getDisplayMedia instead of getUserMedia. Docs.
navigator.mediaDevices.getDisplayMedia({ audio: true, video: true })
This is currently only supported in Chrome/Edge, and only when using the "Chrome Tab" sharing option. You'll see a "Share audio" checkbox in the dialog box.
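Since the user may not tick "Share audio" (or may pick a source that can't provide it), it's worth verifying what the returned stream actually contains; a minimal sketch:
const stream = await navigator.mediaDevices.getDisplayMedia({ audio: true, video: true });

// Audio only arrives if the user shared a Chrome tab and ticked "Share audio".
if (stream.getAudioTracks().length === 0) {
  console.warn('No audio track in the screen share; continuing with video only.');
}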
In Firefox, you can use getUserMedia to grab a screen share and mic audio in the same request, and can attach it to a PeerConnection. You can combine it with other streams; multiple audio or video tracks in a single PeerConnection require Firefox 38 or later. Currently, 38 is Developer Edition (formerly called Aurora); 38 should go to release in around 9 weeks or so.
Yes, you can record audio and a screen recording on Chrome with two requests.
getScreenId(function (error, sourceId, screen_constraints) {
  // capture the screen
  navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
  navigator.getUserMedia(screen_constraints, function (stream) {
    // capture the microphone in a second request, then merge its audio track in
    navigator.getUserMedia({ audio: true }, function (audioStream) {
      stream.addTrack(audioStream.getAudioTracks()[0]);

      var mediaRecorder = new MediaStreamRecorder(stream);
      mediaRecorder.mimeType = 'video/mp4';
      mediaRecorder.stream = stream;

      document.querySelector('video').src = URL.createObjectURL(stream);

      var video = document.getElementById('screen-video');
      if (video) {
        video.src = URL.createObjectURL(stream);
        video.width = 360;
        video.height = 300;
      }
    }, function (error) {
      alert(error);
    });
  }, function (error) {
    alert(error);
  });
});
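One caveat with the snippet above: the recorder is constructed but never started. With the MediaStreamRecorder library, recording is kicked off explicitly; a minimal sketch:
// Receive recorded chunks and start recording in 5-second slices.
mediaRecorder.ondataavailable = function (blob) {
  console.log('recorded chunk of', blob.size, 'bytes');
};
mediaRecorder.start(5000);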