MediaStream reference not removed / why does my webcam stay busy? - getusermedia

Background / Issue
Using navigator.mediaDevices.getUserMedia(constraints) I can obtain a MediaStream object for various devices, among them the webcam and microphone, and do whatever I want with the data that comes through.
The method getUserMedia returns a Promise that resolves to a media stream, or rejects if no stream is available for the given constraints (video, audio, etc.). If I do obtain a stream object BUT don't save any reference to the MediaStream, I understand that the garbage collector should remove it.
What I've observed is that the stream is not removed: if I obtain a stream for the webcam, for example, the webcam stays busy even though I have no reference left to the stream.
Questions
Where is the MediaStream object stored if I don't save a reference to it?
Why is it not removed by the garbage collector?
Why does my webcam stay busy?

The MediaStream API requires you to stop each track contained in the MediaStream instance that you obtained; until you do so, the media capture keeps going. The browser holds live capture tracks internally, so a stream with active tracks is not eligible for garbage collection even when your code keeps no reference to it, which is why the webcam stays busy.
navigator.mediaDevices
  .getUserMedia({
    audio: true,
    video: true
  })
  .then(function (stream) {
    console.log('got stream with id ' + stream.id)
    stream.getTracks().forEach(function (track) { track.stop() })
    // webcam will not be busy anymore
  })
  .catch(function (reason) {
    console.error('capture failed ' + reason)
  })

Related

getUserMedia - differentiating which hardware is erroring

I am running getUserMedia for camera and microphone:
navigator.mediaDevices
  .getUserMedia({ audio: true, video: true })
  .then((stream) => {})
  .catch((error) => {})
Is there a way to differentiate which device is causing the promise to fail? I.e. if it's the camera that is unreadable rather than the mic, can you tell that from the error object? I can't find anything other than error.name and error.message.
No, unfortunately when you capture from both at the same time, they either both succeed or both fail together.
Many applications will capture the audio and video separately, and then create a new MediaStream from the tracks of the two per-device MediaStreams, as sketched below. I have a hunch that this can lead to synchronization problems in cases where the audio/video are sent as a single stream from the device internally, but I haven't proven this. It must not be a significant problem, at least for video conferencing, as this is what Google does for Hangouts/Meet.
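A minimal sketch of that separate-capture approach (the function name is illustrative; requesting each device in its own getUserMedia call also tells you which device failed):

async function captureSeparately () {
  let audioStream, videoStream
  try {
    audioStream = await navigator.mediaDevices.getUserMedia({ audio: true })
  } catch (e) {
    console.error('microphone capture failed: ' + e.name)
    throw e
  }
  try {
    videoStream = await navigator.mediaDevices.getUserMedia({ video: true })
  } catch (e) {
    console.error('camera capture failed: ' + e.name)
    throw e
  }
  // combine the per-device tracks into a single stream
  return new MediaStream([
    ...audioStream.getTracks(),
    ...videoStream.getTracks()
  ])
}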
What sorts of errors do you want to detect?
If your machine doesn't have the hardware (camera, mic) it needs to make your MediaStream, you can find this out by using .enumerateDevices() before you try to use .getUserMedia(). A function like this might give you the information you need for an {audio: true, video:true} MediaStream.
async function canIDoIt () {
  const devices = await navigator.mediaDevices.enumerateDevices()
  let hasAudio = false
  let hasVideo = false
  devices.forEach(function (device) {
    if (device.kind === 'videoinput') hasVideo = true
    if (device.kind === 'audioinput') hasAudio = true
  })
  return hasAudio && hasVideo
}
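For example (showHardwareWarning is a hypothetical UI helper):

canIDoIt().then(function (ok) {
  if (!ok) showHardwareWarning('No camera or microphone found')
})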
Using that is a good start for a robust media app: you can tell your users they don't have the correct hardware right away, before diving into the arcana of the errors thrown by .getUserMedia().
If your user denies .getUserMedia() permission to access their media devices, it will throw an error with error.message containing the string "permission denied". Keep in mind that when the user denies permission, your program doesn't get back much descriptive information about the devices. Because cybercreeps.
If your user's devices cannot handle the constraints you pass to .getUserMedia(), you'll get "constraint violation" in the error.message string. The kinds of device constraints you can violate are things like
{video: {width: {exact: 1920},
         height: {exact: 1080}}}
Avoiding exact in your constraints reduces the chance of constraint violations. Instead you can give something like this.
{video: {width: {min: 480, ideal: 1280, max: 1920},
         height: {min: 270, ideal: 720, max: 1080}}}
Other errors are probably more specific to a particular machine and browser. In practically all cases the thrown error object contains an explanatory error.message.
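Putting that together, here is a sketch of a catch handler that branches on the standard DOMException names (error.name is more reliable than the browser-specific error.message strings):

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function (stream) { /* use the stream */ })
  .catch(function (error) {
    switch (error.name) {
      case 'NotAllowedError':      // user or policy denied permission
        break
      case 'NotFoundError':        // no matching camera/mic present
        break
      case 'NotReadableError':     // hardware is busy or failing
        break
      case 'OverconstrainedError': // constraints cannot be satisfied
        break
      default:
        console.error(error.name + ': ' + error.message)
    }
  })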

TokBox/Vonage allowing audio capture support when screensharing

The Screen Capture API, specifically getDisplayMedia(), currently supports screensharing and sharing the audio playing on your device (e.g. YouTube) at the same time. Docs. Is this currently supported by the TokBox/Vonage Video API? Has someone been able to achieve this?
I guess there could be some workaround using getDisplayMedia and passing the audio source when publishing, e.g. OT.initPublisher({ audioSource: newDisplayMediaAudioTrack }), but that doesn't seem like a clean solution.
Thanks,
Manik here from the Vonage Client SDK team.
Although this feature does not exist in the Video Client SDK just yet, you can accomplish the sharing of audio with screen by creating a publisher like so:
let publisher;
try {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  const audioTrack = stream.getAudioTracks()[0];
  const videoTrack = stream.getVideoTracks()[0];
  publisher = OT.initPublisher({ audioSource: audioTrack, videoSource: videoTrack });
} catch (e) {
  // handle error
}
If you share a tab but the tab doesn't play audio (a static PDF or PPT, say), the screen flickers. To avoid this, specify a frameRate constraint for the video stream, as in the sketch below; see https://gist.github.com/rktalusani/ca854ca8621c20488bea6e62ad04e341
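A sketch of that constraint (the exact frame rate values are up to you; this assumes you are inside an async function):

const stream = await navigator.mediaDevices.getDisplayMedia({
  video: { frameRate: { ideal: 15, max: 30 } }, // a capped, steady frame rate avoids the flicker
  audio: true
});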

GSM SIM800C text to speech audio stream

I have this USB-to-GSM Serial-GPRS-SIM800C module, and I have successfully been able to send AT commands to it and do things with it. What I really wanted was text-to-speech capability. I was able to generate an AMR audio file, upload it onto the module's internal memory, and play it whenever someone calls.
But the message the caller hears is going to be dynamic and the TTS will run in real time, so uploading the audio file onto the module would cause undesirable delay. Is there any way I could stream audio through the module?
Thanks.
Here's what I have had to do.
1. Start call (ATDxxxxxxxxxxx;)
2. Set mode (AT+DTAM=2)
3. Start recording (AT+CREC=1,1,0)
4. Speak what I want to play back into the microphone
5. Stop recording (AT+CREC=2)
6. Hang up (ATH)
Now I can play back what I recorded using the following:
1. Start call (ATDxxxxxxxxxxx;)
2. Set mode (AT+DTAM=2)
3. Start playback (AT+CREC=4,1,0,80)
4. Hang up (ATH)
No idea how to do this dynamically or even upload an *.amr file.
Would be grateful if you could share what commands you used to see if there's any way to improve.
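For illustration, the record-and-playback sequences above could be scripted from Node with the serialport-gsm wrapper used in the follow-up below (a sketch; the phone number and the fixed delay are placeholders, and production code should wait on each command's response instead):

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function recordGreeting (modem) {
  const run = (cmd) => new Promise((resolve) => modem.executeCommand(cmd, resolve))
  await run('ATD00000000000;') // start the call (placeholder number)
  await run('AT+DTAM=2')       // set mode (step 2 above)
  await run('AT+CREC=1,1,0')   // start recording
  await delay(5000)            // speak for ~5 seconds
  await run('AT+CREC=2')       // stop recording
  await run('ATH')             // hang up
}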
To answer #anothersanj
I'm using serialport-gsm to make things easier.
This is how I go about it:
const fs = require('fs'); // assumes an initialized serialport-gsm modem and a logger

// create the directory on the module's filesystem
modem.executeCommand('AT+FSMKDIR=C:\\stats\\', (result) => { log.debug(result); });
// read the audio file from your computer with the Node.js fs module
fs.readFile('tts2.amr', function (err, amr_data) {
  if (!err) {
    let fsize = fs.statSync('tts2.amr').size;
    log.debug(fsize);
    // create the file in the GSM module's memory
    modem.executeCommand('AT+FSCREATE=C:\\stats\\tts2.amr', (result) => { log.debug(result); });
    // write the file into the GSM module's memory
    modem.executeCommand('AT+FSWRITE=C:\\stats\\tts2.amr,0,' + fsize + ',100', (result) => {
      modem.port.write(amr_data);
    });
    // display the file list on the specified path (like the ls command)
    modem.executeCommand('AT+FSLS=C:\\stats', (result) => { log.debug(result); });
  }
});
And for playing the file whenever someone calls, you do:
// play the file on an incoming call
modem.on('onNewIncomingCall', (result) => {
  log.debug(result);
  // answer the call
  modem.executeCommand('ATA', (result) => { log.debug(result); });
  // play the uploaded AMR file to the caller
  modem.executeCommand('AT+CMEDPLAY=1,"C:\\stats\\tts2.amr",0,100', (result) => { log.debug(result); });
  // enable DTMF tone detection
  modem.executeCommand('AT+DDET=1', (result) => { log.debug(result); });
});

Audio Player in NativeScript-Vue

I have an mp3 playlist and I want to play these audio tracks in an audio player in NativeScript-Vue. However, there is no plugin for it.
There is, however, a NativeScript plugin, nativescript-audio, which can be used for playing audio.
In the following Playground example, you will notice that it has been adapted to play in a NativeScript-Vue application.
https://play.nativescript.org/?template=play-vue&id=83Hs3D&v=19
This can work, but the problem is that the player is created in the mounted() hook, and even the mp3 file path is supplied there. In my case, however, the mp3 file is loaded asynchronously, added to a Vuex store, and then made available as a computed property in the component.
How can I adapt this code to take the mp3 file from a computed property rather than hard-coding it in mounted()?
Here is the documentation for this plugin - https://github.com/bradmartin/nativescript-audio
I was able to find a solution.
Watch your computed property. Let's say it's called media.
On change, update the audio track using the following code:
watch: {
  // "media" is the computed property holding the mp3 path/url
  media () {
    const playerOptions = {
      audioFile: this.media,
      loop: false,
      autoplay: false
    }
    this._player
      .playFromUrl(playerOptions)
      .then(function (res) {
        console.log(res);
      })
      .catch(function (err) {
        console.log('something went wrong..', err);
      });
  }
}

Is it possible to broadcast audio with screensharing with WebRTC

Is it possible to broadcast audio along with screensharing in WebRTC?
Simply calling getUserMedia with audio: true fails with a permission-denied error.
Is there any workaround that could be used to broadcast audio as well?
Will audio be implemented alongside screensharing?
Thanks.
Refer to this demo: Share screen and audio/video from a single peer connection!
Multiple streams are captured and attached to a single peer connection. AFAIK, audio along with chromeMediaSource:screen is "still" not permitted.
Updated at April 21, 2016
Now you can capture audio+screen using single getUserMedia request both on Firefox and Chrome.
However, Chrome merely supports audio+tab, i.e. you can NOT capture the full screen along with audio.
Audio+tab means any Chrome tab along with microphone audio.
Updated at Jan 09, 2017
You can capture both audio and screen streams by making two parallel (UNIQUE) getUserMedia requests.
Now you can use the addTrack method to add the audio track into the screen stream:

var audioStream = captureUsingGetUserMedia(); // placeholder for your microphone capture
var screenStream = captureUsingGetUserMedia(); // placeholder for your screen capture

var audioTrack = audioStream.getAudioTracks()[0];

// add the audio track into the screen stream;
// screenStream then carries both audio and video tracks
screenStream.addTrack(audioTrack);

nativeRTCPeerConnection.addStream(screenStream);
nativeRTCPeerConnection.createOffer(success, failure, options);
As of May 2020
To share the audio track of the screen share you can use getDisplayMedia instead of getUserMedia. Docs.
navigator.mediaDevices.getDisplayMedia({audio: true, video: true})
This is currently only supported in Chrome/Edge, and only when using the "Chrome Tab" sharing option. You'll see a Share audio checkbox in the dialog box.
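The user can leave that checkbox unticked, so it's worth checking whether an audio track actually arrived (a sketch, assuming you are inside an async function):

const stream = await navigator.mediaDevices.getDisplayMedia({ audio: true, video: true });
if (stream.getAudioTracks().length === 0) {
  console.warn('Screen share started without audio: "Share audio" was not ticked');
}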
In Firefox, you can use getUserMedia to grab a screenshare/etc and mic audio in the same request, and can attach it to a PeerConnection. You can combine it with other streams -- multiple audio or video tracks in a single PeerConnection in Firefox requires Firefox 38 or later. Currently 38 is Developer Edition (formerly termed Aurora). 38 should go to release in around 9 weeks or so.
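The Firefox-specific request looked like this at the time (a sketch; modern code should prefer getDisplayMedia):

navigator.mediaDevices.getUserMedia({
  audio: true,
  video: { mediaSource: 'screen' } // Firefox-only constraint for screen capture
}).then(function (stream) {
  // attach the combined mic + screen stream to a PeerConnection, record it, etc.
});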
Yes, you can record audio and the screen on Chrome with two requests:
getScreenId(function (error, sourceId, screen_constraints) {
  // capture the screen
  navigator.getUserMedia = navigator.mozGetUserMedia || navigator.webkitGetUserMedia;
  navigator.getUserMedia(screen_constraints, function (stream) {
    // second request: capture the microphone, then merge its audio track in
    navigator.getUserMedia({ audio: true }, function (audioStream) {
      stream.addTrack(audioStream.getAudioTracks()[0]);
      var mediaRecorder = new MediaStreamRecorder(stream);
      mediaRecorder.mimeType = 'video/mp4';
      mediaRecorder.stream = stream;
      document.querySelector('video').src = URL.createObjectURL(stream);
      var video = document.getElementById('screen-video');
      if (video) {
        video.src = URL.createObjectURL(stream);
        video.width = 360;
        video.height = 300;
      }
    }, function (error) {
      alert(error);
    });
  }, function (error) {
    alert(error);
  });
});