Is it possible to broadcast audio with screen sharing in WebRTC? - webrtc

Is it possible to broadcast audio along with screen sharing in WebRTC?
Simply calling getUserMedia with audio: true fails with a permission-denied error.
Is there any workaround that could be used to broadcast audio as well?
Will audio be implemented alongside screen sharing?
Thanks.

Refer to this demo: Share screen and audio/video from a single peer connection!
Multiple streams are captured and attached to a single peer connection. AFAIK, audio along with chromeMediaSource:screen is "still" not permitted.
Updated April 21, 2016
Now you can capture audio+screen with a single getUserMedia request on both Firefox and Chrome.
However, Chrome only supports audio+tab, i.e. you can NOT capture the full screen along with audio.
Audio+Tab means any Chrome tab along with the microphone.
Updated January 9, 2017
You can capture both audio and screen streams by making two parallel (UNIQUE) getUserMedia requests.
Now you can use the addTrack method to add the audio track into the screen stream:
// Two separate (parallel) getUserMedia requests: one for the microphone, one for the screen.
// captureUsingGetUserMedia() is a placeholder for your own wrapper around getUserMedia.
var audioStream = captureUsingGetUserMedia();  // e.g. getUserMedia({ audio: true })
var screenStream = captureUsingGetUserMedia(); // e.g. getUserMedia with screen-capture constraints
var audioTrack = audioStream.getAudioTracks()[0];

// add the audio track into the screen stream
screenStream.addTrack(audioTrack);
Now screenStream has both audio and video tracks.
nativeRTCPeerConnection.addStream(screenStream);
nativeRTCPeerConnection.createOffer(success, failure, options);

As of May 2020
To share the audio track of the screen share, you can use getDisplayMedia instead of getUserMedia. Docs.
navigator.mediaDevices.getDisplayMedia({audio: true, video: true})
This is currently only supported in Chrome/Edge, and only when using the "Chrome Tab" sharing option. You'll see a Share audio checkbox in the dialog box.
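As a minimal sketch (assuming an RTCPeerConnection named peerConnection already exists and signaling is handled elsewhere), the captured tracks can be added to the connection like this:

// Sketch: capture a tab (with its audio, if the user ticks "Share audio")
// and add the tracks to an existing RTCPeerConnection.
async function shareTabWithAudio(peerConnection) {
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });

    if (stream.getAudioTracks().length === 0) {
        console.warn('No audio track captured; a tab with "Share audio" may not have been selected.');
    }

    for (const track of stream.getTracks()) {
        peerConnection.addTrack(track, stream);
    }
    return stream;
}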

In Firefox, you can use getUserMedia to grab a screen share (or window/application share) and mic audio in the same request, and you can attach the result to a PeerConnection. You can combine it with other streams; multiple audio or video tracks in a single PeerConnection require Firefox 38 or later. Currently 38 is Developer Edition (formerly termed Aurora). 38 should go to release in around 9 weeks or so.
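A rough sketch of what that single request looked like at the time (mediaSource: 'screen' is the Firefox-specific constraint; peerConnection is assumed to be an RTCPeerConnection created elsewhere):

// Firefox-only sketch: screen video plus microphone audio in one getUserMedia call.
navigator.mediaDevices.getUserMedia({
    audio: true,
    video: { mediaSource: 'screen' } // Firefox-specific; 'window' and 'application' also existed
}).then(function (stream) {
    // the combined stream can be attached to a peer connection created elsewhere
    peerConnection.addStream(stream);
}).catch(function (err) {
    console.error('getUserMedia failed:', err);
});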

Yes, you can record audio and the screen in Chrome with two getUserMedia requests:
getScreenId(function (error, sourceId, screen_constraints) {
    // capture the screen
    navigator.getUserMedia = navigator.getUserMedia || navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
    navigator.getUserMedia(screen_constraints, function (stream) {
        // capture the microphone and merge its audio track into the screen stream
        navigator.getUserMedia({ audio: true }, function (audioStream) {
            stream.addTrack(audioStream.getAudioTracks()[0]);

            var mediaRecorder = new MediaStreamRecorder(stream);
            mediaRecorder.mimeType = 'video/mp4';
            mediaRecorder.stream = stream;

            document.querySelector('video').src = URL.createObjectURL(stream);

            var video = document.getElementById('screen-video');
            if (video) {
                video.src = URL.createObjectURL(stream);
                video.width = 360;
                video.height = 300;
            }
        }, function (error) {
            alert(error);
        });
    }, function (error) {
        alert(error);
    });
});

Related

Issue with WebRTC/getUserMedia in iOS 14 Safari and phone sleep/unlock

I seem to have noticed a regression with getUserMedia in iOS 14 Safari. Here are steps to reproduce:
Go to https://webrtc.github.io/samples/src/content/getusermedia/gum/ on iOS 14 Safari
Click "Open camera" and accept camera permissions; you should see local camera video.
Click the power button and lock the phone; let the phone go to sleep
Unlock/wake the phone; the local camera video is gone.
This does not happen on devices running iOS 13.
My questions are:
Can anyone else confirm this on their devices? I have only tested on iPhone 11 so far.
Has anyone found a solution yet?
Yes, I am having a similar strange issue with iOS 14.2 and getUserMedia. I can only get
navigator.mediaDevices.getUserMedia({ video: true })
to work. If I change it to:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
it fails.
It's not an issue with the code, as I tested my project on Safari for macOS, Chrome for macOS, and Firefox on Linux.
As a temp fix, so I could move on with my life for the moment, I did this:
const constraints = navigator.userAgent.includes("iPhone")
    ? { video: true }
    : {
        audio: true,
        video: {
            width: { ideal: 640 },
            height: { ideal: 400 }
        }
    };
Yes, same here!
I checked this behavior in BrowserStack with iOS:
12.x: ✓
13.x: ✓
14.x: ✗
Try this:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(stream => {
        const videoTracks = stream.getVideoTracks();
        console.log(videoTracks[0].enabled);
        document.querySelector('video').srcObject = stream;
    });
// Output
true <-- ?
Then, if you try to get the camera again but replace the video track on the previous MediaStream, it works (see the sketch after this answer).
Sometimes using video constraints with facingMode: 'user' also works; why, I don't know.
I still can't find a consistent solution.
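A hedged sketch of that track-replacement workaround; it assumes the original stream is still held in a variable and swaps its stalled video track for a freshly captured one:

// Sketch: re-acquire the camera and swap the video track on the existing stream.
// `existingStream` is the MediaStream obtained before the phone was locked.
async function refreshVideoTrack(existingStream) {
    const freshStream = await navigator.mediaDevices.getUserMedia({ video: true });
    const freshTrack = freshStream.getVideoTracks()[0];

    const oldTrack = existingStream.getVideoTracks()[0];
    if (oldTrack) {
        existingStream.removeTrack(oldTrack);
        oldTrack.stop();
    }
    existingStream.addTrack(freshTrack);

    // re-assign the stream so the <video> element resumes playback
    const video = document.querySelector('video');
    if (video) video.srcObject = existingStream;
}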
Having the same issue on an iPad Pro 2nd generation with iOS 14.7.1 and an iPhone 7 with iOS 14.6.x. The only solution I found that seems to work consistently is to call getUserMedia separately for the audio and video constraints. As an example:
async function getMedia(constraints) {
    let videoStream = null;
    let audioStream = null;
    try {
        videoStream = await navigator.mediaDevices.getUserMedia({ video: true });
        audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
        /* use the streams */
    } catch (err) {
        /* handle the error */
    }
}
You can replace {video: true} or {audio: true} with your desired constraints. Then you can either work with the separate MediaStream objects or construct your own MediaStream object from the audio and video tracks of your streams (see the sketch below).
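A minimal sketch of the second option, grouping the tracks of the two separately acquired streams into one MediaStream (variable names follow the snippet above):

// Sketch: build a single MediaStream from separately captured audio and video streams.
async function getCombinedStream() {
    const videoStream = await navigator.mediaDevices.getUserMedia({ video: true });
    const audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });

    // the MediaStream constructor just groups the existing tracks; nothing is cloned
    return new MediaStream([
        ...videoStream.getVideoTracks(),
        ...audioStream.getAudioTracks()
    ]);
}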

TokBox/Vonage allowing audio capture support when screensharing

The Screen Capture API, specifically getDisplayMedia(), currently supports screen sharing and sharing the audio playing on your device (e.g. YouTube) at the same time. Docs. Is this currently supported in the TokBox/Vonage Video API? Has anyone been able to achieve this?
I guess there could be some workaround using getDisplayMedia and passing the audio source when publishing, e.g. OT.initPublisher({ audioSource: newDisplayMediaAudioTrack }), but it doesn't seem like a clean solution.
Thanks,
Manik here from the Vonage Client SDK team.
Although this feature does not exist in the Video Client SDK just yet, you can accomplish sharing audio with the screen by creating a publisher like so:
let publisher;
try {
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
    const audioTrack = stream.getAudioTracks()[0];
    const videoTrack = stream.getVideoTracks()[0];
    publisher = OT.initPublisher({ audioSource: audioTrack, videoSource: videoTrack });
} catch (e) {
    // handle error
}
If you share a tab but the tab doesn't play audio (a static PDF or PPT), the screen flickers. To avoid this, specify a frameRate constraint for the video stream (see the sketch below). See https://gist.github.com/rktalusani/ca854ca8621c20488bea6e62ad04e341
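A hedged sketch of adding that frameRate constraint to the getDisplayMedia call from the answer above (the numbers are illustrative, and the code is assumed to run inside an async function):

// Sketch: constrain the screen-share frame rate to avoid flicker on silent tabs.
const stream = await navigator.mediaDevices.getDisplayMedia({
    audio: true,
    video: {
        frameRate: { ideal: 15, max: 30 } // illustrative values; tune for your use case
    }
});
const publisher = OT.initPublisher({
    audioSource: stream.getAudioTracks()[0],
    videoSource: stream.getVideoTracks()[0]
});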

Implement Live Broadcast by Agora.io

I am trying to implement Live Broadcast by Agora.io in a React Native mobile application. I have previously implemented video calling successfully. I have gone through the documentation and compared video call with live broadcast (both web SDK). The only difference I could find is the mode of the client, which corresponds to channelProfile in the react-native SDK. The documentation says there are three different modes: Communication, Live Broadcast and Gaming. When I implemented video calling I assigned 1 as the value of channelProfile and it worked fine; the quality was good enough. However, when I assign 2 to channelProfile to indicate it is a Live Broadcast, the quality drops heavily. Am I doing anything wrong in the implementation of Live Broadcast? How can I improve the quality of Live Broadcast?
For consideration, I put my code below:
const config = {
    appid: 'MY APP ID',
    channelProfile: this.props.navigation.getParam('channelProfile', 2),
    clientRole: this.props.navigation.getParam('clientRole', 1),
    videoEncoderConfig: {
        width: 360,
        height: 480,
        bitrate: 1,
        frameRate: FPS30,
        orientationMode: Adaptative,
    },
    audioProfile: AudioProfileDefault,
    audioScenario: AudioScenarioDefault
}
RtcEngine.on('userJoined', (data) => {
    console.warn("user joined", data);
    const { peerIds } = this.state;
    if (peerIds.indexOf(data.uid) === -1) {
        this.setState({
            peerIds: [...this.state.peerIds, data.uid]
        })
    }
})
RtcEngine.on('error', (error) => {
    console.warn("error", error);
})
RtcEngine.init(config);
In Agora's SDK there used to be three channel modes, but recently the gaming SDK has been combined with the native SDKs, so there are only two channel modes: communication and broadcast.
Each mode optimizes for different qualities within the channel and within the streams. For broadcasting, the documentation mentions that, when using the default bitrate, broadcast mode uses twice the bitrate of communication mode.
If you are having quality issues, you should consider changing your bitrate; your code currently sets the bitrate to 1, which is very low. Agora provides a list of suggested resolution profiles, frame rates, and bitrates (see the sketch below the documentation link).
Agora Video Bitrate documentation: https://docs.agora.io/en/Interactive%20Broadcast/API%20Reference/oc/Classes/AgoraVideoEncoderConfiguration.html#//api/name/bitrate
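A hedged sketch of the same config with a more realistic bitrate; the value below is illustrative only, so pick the one Agora's table recommends for your resolution and frame rate:

// Sketch: same config as in the question, with an illustrative (non-authoritative) bitrate.
const config = {
    appid: 'MY APP ID',
    channelProfile: 2, // Live Broadcast
    clientRole: 1,     // Broadcaster
    videoEncoderConfig: {
        width: 360,
        height: 480,
        bitrate: 600, // illustrative; use the value Agora recommends for 360x480 @ 30 fps
        frameRate: FPS30,
        orientationMode: Adaptative,
    },
    audioProfile: AudioProfileDefault,
    audioScenario: AudioScenarioDefault
};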

Video.js is not autoplaying in the mobile browser

I am trying a Video.js based player in my web page. It works perfectly in desktop browsers, but the mobile browser does not autoplay the video. I tried all the options, like:
var options = {};
var player = videojs('video-js', options, function onPlayerReady() {
    videojs.log('Your player is ready!');
    // In this context, `this` is the player that was created by Video.js.
    this.play();
    // How about an event listener?
    this.on('ended', function() {
        videojs.log('Awww...over so soon?!');
    });
});
Still, it is not autoplaying the video.
Do you have any solution?
Autoplay on iOS 10 and recent versions of Chrome for Android only works if the video is muted. On older versions of each platform, it doesn't work at all.
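A minimal sketch of a muted-autoplay setup, assuming a Video.js version that accepts the autoplay and muted options; the playsinline attribute should also be set on the <video> element so iOS plays the video inline:

// Sketch: start playback muted so mobile browsers allow autoplay.
var player = videojs('video-js', {
    autoplay: true,
    muted: true // mobile browsers generally allow autoplay only when muted
}, function onPlayerReady() {
    // in newer browsers play() returns a promise that rejects if autoplay is still blocked
    var promise = this.play();
    if (promise !== undefined) {
        promise.catch(function (err) {
            videojs.log('Autoplay was prevented: ' + err);
        });
    }
});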

MediaStream reference not removed / why does my webcam stay busy?

Background / Issue
Using navigator.mediaDevices.getUserMedia(constraints) I can obtain a MediaStream object for various devices, amongst them webcam and microphone, allowing you to do whatever you want with the data that comes through.
The method getUserMedia returns a Promise which resolves to a media stream, or rejects if there is no stream available for the given constraints (video, audio, etc.). If I do obtain a stream object BUT don't save any reference to the MediaStream, I understand that the garbage collector should remove it.
What I've observed is that the stream is not removed: if I obtain a stream for the webcam, for example, it stays busy even though I have no reference left to the stream.
Questions
Where is the MediaStream object stored if I don't save a reference to it?
Why is it not removed by the garbage collector?
Why does my webcam stay busy?
The MediaStream API requires you to stop each track contained in the MediaStream instance that you obtained. Until you do so, the media capture will keep going.
navigator.mediaDevices
    .getUserMedia({
        audio: true,
        video: true
    })
    .then(function (stream) {
        console.log('got stream with id ' + stream.id)
        stream.getTracks().forEach(function (track) { track.stop() })
        // WebCam will not be busy anymore
    })
    .catch(function (reason) {
        console.error('capture failed ' + reason)
    })