stream is undefined when using navigator.getUserMedia - webrtc

I am using WebRTC and trying to show the video after obtaining permission via getUserMedia().
Here is what I am trying to do:
var mediaConstraints = { audio: true, video: true };
const stream = await navigator.getUserMedia(mediaConstraints, function() {
  console.log("obtained successfully");
}, function() {
  console.error("access was denied OR hardware issue");
});
However, stream is undefined; I expected it to hold the media stream.

navigator.getUserMedia is deprecated.
Try this instead
navigator.mediaDevices.getUserMedia()

navigator.getUserMedia is the legacy variant of getUserMedia.
It uses callbacks and does not return a promise, so there is nothing for await to unwrap.
You're mixing the two styles: either use callbacks with the legacy API, or use navigator.mediaDevices.getUserMedia without callbacks.
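A minimal sketch of the promise style (assuming this runs inside an async function, on a secure origin, which mediaDevices requires):

const mediaConstraints = { audio: true, video: true };
try {
  // Resolves with a MediaStream once the user grants permission.
  const stream = await navigator.mediaDevices.getUserMedia(mediaConstraints);
  console.log("obtained successfully", stream);
} catch (err) {
  console.error("access was denied OR hardware issue", err);
}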

Related

webrtc getUserMedia : how to get a stream from getUserMedia and publish it to SRS?

How do I get a stream using HTML5 getUserMedia and publish it to SRS?
I want to get a stream directly from the browser, not from OBS or ffmpeg.
Is there any sample available?
Well, it depends on your use scenario.
If you want to do live streaming, please see this post, the media flow:
Browser --WebRTC--> SRS --HLS/HTTP-FLV--> Viewer
If you want to do video meeting, please see this post, the media flow:
Browser <--WebRTC--> SRS <--WebRTC--> Viewer
Note that for video meeting, there should be NxN streams in a room.
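Whichever flow you pick, the browser side of publishing has the same shape: capture, add tracks to an RTCPeerConnection, then exchange SDP with the server's publish endpoint. A rough sketch; the /publish URL and plain-text SDP exchange below are placeholders, not SRS's actual API, so check the SRS docs for the real endpoint:

const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const pc = new RTCPeerConnection();
stream.getTracks().forEach((track) => pc.addTrack(track, stream));
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
// Placeholder endpoint: send the offer SDP, receive the answer SDP.
const res = await fetch('/publish', { method: 'POST', body: offer.sdp });
await pc.setRemoteDescription({ type: 'answer', sdp: await res.text() });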
I have a solution. Check the code below.
HTML: here you need only a video tag.
Index.html
<video id="remoteScreen" autoplay="true"></video>
Screenshare.js file
const getLocalScreenCaptureStream = async () => {
  try {
    const constraints = { video: { cursor: 'always' }, audio: false };
    const screenCaptureStream = await navigator.mediaDevices.getDisplayMedia(constraints);
    return screenCaptureStream;
  } catch (error) {
    console.error('failed to get local screen', error);
  }
};
main.js
var localStreamScreen = null;
async function shareScreen() {
  localStreamScreen = await getLocalScreenCaptureStream();
  console.log("localStreamScreen: ", localStreamScreen);
}
screenshare.js
function handleRemoteStreamAddedScreen(event) {
  console.log('Remote stream added.');
  alert('Remote stream added.');
  if ('srcObject' in remoteScreen) {
    remoteScreen.srcObject = event.streams[0];
  } else {
    // deprecated fallback for old browsers
    remoteScreen.src = window.URL.createObjectURL(event.stream);
  }
  remoteScreenStream = event.streams[0];
}
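If you also want a local preview of the captured screen, a minimal sketch (this assumes an extra video element with id localScreen, which is not part of the markup above):

async function previewScreen() {
  const stream = await getLocalScreenCaptureStream();
  // Hypothetical element: <video id="localScreen" autoplay muted></video>
  document.getElementById('localScreen').srcObject = stream;
}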
Hope it works for you.

Closing WebRTC track will not close camera device or tab camera indicator

Banging my head against the wall with this one. I can't figure out what is holding on to the camera stream and keeping the device open after MediaStreamTrack.stop() is called.
I have a TypeScript class where I obtain the WebRTC stream track and pass it via an observable event to a functional React component. The code below is the component subscribing to the event and keeping the stream track in state.
const [videoStreamTrack, setVideoStreamTrack] = useState<MediaStreamTrack>(null)
useEffect(() => {
  return () => {
    videoStreamTrack?.stop()
    videoElement.current.srcObject.getVideoTracks().forEach((track) => {
      track.stop()
      videoElement.current.srcObject.removeTrack(track)
    })
    videoElement.current.srcObject = null
  }
}, [])
case RoomEvents.WebcamProducerAdded:
case RoomEvents.VideoStreamReplaced: {
  if (result.data?.track) {
    if (result.data.track.kind === 'video') {
      previewVideoStreamTrack?.stop()
      setPreviewVideoStreamTrack(null)
      setVideoStreamTrack(result.data.track)
    }
  }
  break
}
In the "Room" class I use the below code to grab the stream.
const videoDevice = this.webcam.device
if (!videoDevice) {
  throw new Error('no webcam devices')
}
const userMedia = await navigator.mediaDevices.getUserMedia({
  video: this.environmentPlatformService.isMobile
    ? true
    : {
        deviceId: { exact: this.webcam.device.deviceId },
        ...VIDEO_CONSTRAINS[this.webcam.resolution],
      },
})
const videoTrack = userMedia.getVideoTracks()[0]
this.eventSubject.next({
  eventName: RoomEvents.WebcamProducerAdded,
  data: {
    track: videoTrack,
  },
})
I hold on to the this.webcam.device details using the code below.
async updateInputOutputMediaDevices(): Promise<MediaDeviceInfo[]> {
  await navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  const devices = await navigator.mediaDevices.enumerateDevices()
  await this.updateWebcams(devices)
  await this.updateAudioInputs(devices)
  await this.updateAudioOutputs(devices)
  return devices
}
private async updateWebcams(devices: MediaDeviceInfo[]) {
  this.webcams = new Map<string, MediaDeviceInfo>()
  for (const device of devices.filter((d) => d.kind === 'videoinput')) {
    this.webcams.set(device.deviceId, device)
  }
  const array = Array.from(this.webcams.values())
  this.eventSubject.next({
    eventName: RoomEvents.CanChangeWebcam,
    data: {
      canChangeWebcam: array.length > 1,
      mediaDevices: array,
    },
  })
}
Refreshing the page will close the camera and tab indicator.
useEffect(() => {
  return () => {
    videoStreamTrack?.stop()
    videoElement.current.srcObject.getVideoTracks().forEach((track) => {
      track.stop()
      videoElement.current.srcObject.removeTrack(track)
    })
    videoElement.current.srcObject = null
  }
}, [])
So here you are seeking out and destroying video tracks. Seems right-ish; we'll see.
async updateInputOutputMediaDevices(): Promise<MediaDeviceInfo[]> {
  await navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  const devices = await navigator.mediaDevices.enumerateDevices()
  await this.updateWebcams(devices)
  await this.updateAudioInputs(devices)
  await this.updateAudioOutputs(devices)
  return devices
}
Above, I see there's a call for audio too, which might be where the hiccup is. I can't dig in too deeply, but maybe you're opening both audio and video and closing just the video? Try looping through all tracks, not just the video ones, and see what's there.
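A minimal sketch of that suggestion, applied to the snippet above: keep a reference to the probe stream and stop every track on it once enumeration is done.

const probeStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true })
const devices = await navigator.mediaDevices.enumerateDevices()
// Stop every track on the probe stream: it was opened only to unlock
// device labels for enumerateDevices() and is otherwise never closed.
probeStream.getTracks().forEach((track) => track.stop())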
#blanknamefornow's answer helped me nail the issue.
We are calling getUserMedia in multiple places, not only in the "room" class handling mediasoup actions but also for preview/device-selection/etc., and we never really closed the tracks we retrieved.
Sometimes those tracks are held in useState variables, and when the component unmounts, React has already nulled the variables by the time you try to access them. The workaround: since the HTML elements are still referenced, stop the tracks through them when needed. I believe this was the missing ingredient when trying to figure it out.
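As a follow-up, one way to keep this from regressing is a hypothetical helper (not from our codebase) that hands out streams from a single place and remembers them, so every call site can be closed later:

const openStreams: MediaStream[] = []

async function getTrackedUserMedia(constraints: MediaStreamConstraints): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia(constraints)
  openStreams.push(stream)
  return stream
}

function stopAllTrackedStreams() {
  // Stop every track on every stream ever handed out, audio and video alike.
  openStreams.forEach((s) => s.getTracks().forEach((t) => t.stop()))
  openStreams.length = 0
}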

I get a stream.getTracks is not a function error in PeerJS. How do I fix this?

So I have been using PeerJS to make a p2p chat application, and chatting works, but when I try to call someone with the function below:
function callEm(id) {
  call = peer.call(id,
    navigator.mediaDevices.getUserMedia({ video: false, audio: true })
  );
  call.on('stream', function(stream) { // B
    window.remoteAudio.srcObject = stream; // C
    window.remoteAudio.autoplay = true; // D
    window.peerStream = stream; // E
    showConnectedContent(); // F
  });
}
I get an error from PeerJS saying that e.getTracks is not a function.
e.getTracks is code in the PeerJS library: https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js
I have been trying everything I can find on the internet and I still get this error. I hope someone will be able to help me out.
Edit: I tried the unminified version of the library (https://unpkg.com/peerjs@1.3.1/dist/peerjs.js) and I get the error
stream.getTracks() is not a function
So I found the solution; the thing is, my code was wrong :P
I created a function that calls a person by their ID.
Earlier it was:
function callEm(id) {
  console.log(id);
  console.log(navigator.mediaDevices.getUserMedia({ video: false, audio: true }));
  call = peer.call(id,
    // window.localStream
    navigator.mediaDevices.getUserMedia({ video: false, audio: true })
  );
  call.on('stream', function(stream) { // B
    window.remoteAudio.srcObject = stream; // C
    window.remoteAudio.autoplay = true; // D
    window.peerStream = stream; // E
    showConnectedContent(); // F
  });
}
The issue was that I provided a promise as a MediaStream instead of the actual value. Dumb mistake, I know.
After a friend on Discord pointed that out and helped me change it, the function became:
function callEm(id) {
  navigator.mediaDevices.getUserMedia({ video: false, audio: true })
    .then(function(stream) {
      call = peer.call(id, stream);
      call.on('stream', function(stream) { // B
        window.remoteAudio.srcObject = stream; // C
        window.remoteAudio.autoplay = true; // D
        window.peerStream = stream; // E
        showConnectedContent(); // F
      });
    })
    .catch(function(err) {
      console.log("error: " + err);
    });
}
and that fixed it. I haven't worked out answering the call yet, but this sent a call without any errors!
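For reference, the usual PeerJS pattern for the answering side (untested by me) is the same idea in reverse: get your own stream first, then pass the real MediaStream to call.answer:

peer.on('call', function(call) {
  navigator.mediaDevices.getUserMedia({ video: false, audio: true })
    .then(function(stream) {
      call.answer(stream); // answer with the actual stream, not a promise
      call.on('stream', function(remoteStream) {
        window.remoteAudio.srcObject = remoteStream;
        window.remoteAudio.autoplay = true;
      });
    })
    .catch(function(err) {
      console.log("error: " + err);
    });
});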
I hope someone finds this useful.

RTCPeerConnection.createAnswer callback returns undefined object in mozilla for WebRTC chat

Following is my code to answer the incoming call:
var pc = connection.pc;
pc.setRemoteDescription(sdp, function() {
  pc.createAnswer(function(answer) {
    pc.setLocalDescription(answer, function() {
      // code for sending the answer
    })
  })
})
The above code works fine in Chrome, but when I run the same in Firefox, the answer obtained from the pc.createAnswer callback is undefined. As a result, it gives me the following error:
TypeError: Argument 1 of RTCPeerConnection.setLocalDescription is not
an object.
The problem is you're not checking errors, specifically: not passing in the required error callbacks.
setRemoteDescription and setLocalDescription require either three arguments (legacy callback style) or one (promises), but you're passing in two. The same goes for createAnswer, minus one argument.
The browser's JS bindings end up picking the wrong overload, returning you a promise which you're not checking either, effectively swallowing errors.
Either add the necessary error callbacks:
var pc = connection.pc;
pc.setRemoteDescription(sdp, function() {
pc.createAnswer(function(answer) {
pc.setLocalDescription(answer, function() {
// code for sending the answer
}, function(e) {
console.error(e);
});
}, function(e) {
console.error(e);
});
}, function(e) {
console.error(e);
});
Or use the modern promise API:
var pc = connection.pc;
pc.setRemoteDescription(sdp)
.then(() => pc.createAnswer())
.then(answer => pc.setLocalDescription(answer))
.then(() => {
// code for sending the answer
})
.catch(e => console.error(e));
The promise API is available natively in Firefox, or through adapter.js in Chrome. See fiddle.
And always check for errors. ;)

HotTowel SPA with Facebook SDK

I've started a new Visual Studio 2012 Express Web project using the HotTowel SPA template. I'm not sure where I should place the code to load the Facebook SDK within the HotTowel structure.
I've tried main.js and shell.js, but I can't seem to get the SDK to load. Facebook says to use the code below to load the SDK asynchronously:
window.fbAsyncInit = function () {
// init the FB JS SDK
FB.init({
appId: '577148235642429', // App ID from the app dashboard
channelUrl: '//localhost:58585/channel.html', // Channel file for x-domain comms
status: true, // Check Facebook Login status
xfbml: true // Look for social plugins on the page
});
// Additional initialization code such as adding Event Listeners goes here
};
// Load the SDK asynchronously
(function (d, s, id) {
var js, fjs = d.getElementsByTagName(s)[0];
if (d.getElementById(id)) { return; }
js = d.createElement(s); js.id = id;
js.src = "//connect.facebook.net/en_US/all/debug.js";
fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));
Create a module in a file called facebooksdk.js that contains this code, then "require" that module in the boot sequence if you want it to load right away.
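A minimal sketch of that module; the wiring is one possible approach (HotTowel loads modules through RequireJS), reusing the snippet from the question:

// facebooksdk.js
define(function () {
  window.fbAsyncInit = function () {
    FB.init({
      appId: '577148235642429',
      channelUrl: '//localhost:58585/channel.html',
      status: true,
      xfbml: true
    });
  };
  // Load the SDK asynchronously (same bootstrap as in the question).
  (function (d, s, id) {
    var js, fjs = d.getElementsByTagName(s)[0];
    if (d.getElementById(id)) { return; }
    js = d.createElement(s); js.id = id;
    js.src = "//connect.facebook.net/en_US/all/debug.js";
    fjs.parentNode.insertBefore(js, fjs);
  }(document, 'script', 'facebook-jssdk'));
});

Then add 'facebooksdk' to the dependency array of the define/require call in main.js so it loads at boot.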
It is not easy to use the Facebook SDK with Durandal.
Facebook offers instructions for setting it up with require, but it sounds like that method is not supported in Durandal.
I made my dirty version by wrapping the global FB object in a view model with Knockout support. You can easily reuse it and bind to any of its properties, such as the user's name.
Include that facebook.js from shell.js to make sure it is loaded when the app starts.
Here is my facebook.js:
define(['services/logger'], function (logger) {
var vm = {
facebook: null,
initialized: ko.observable(false),
name: ko.observable(""),
picturesrc: ko.observable(""),
login: login,
logout: logout
};
$.ajaxSetup({ cache: true });
$.getScript('//connect.facebook.net/en_UK/all.js', function () {
window.fbAsyncInit = function () {
vm.facebook = FB;
vm.facebook.init({
appId: '160000000000499',
status: true,
cookie: true,
xfbml: true,
oauth: true
});
vm.initialized(true);
vm.facebook.getLoginStatus(updateLoginStatus);
vm.facebook.Event.subscribe('auth.statusChange', updateLoginStatus);
};
});
return vm;
function login() {
vm.facebook.login( function(response) {
//Handled in auth.statusChange
} , { scope: 'email' });
}
function logout() {
vm.facebook.logout( function (response) {
vm.picturesrc("");
vm.name("");
});
}
function updateLoginStatus(response) {
if (response.authResponse) {
logger.log("Authenticated to Facebook succesfully");
vm.facebook.api('/me', function (response2) {
vm.picturesrc('src="https://graph.facebook.com/' + response2.id + '/picture"');
vm.name(response2.name);
});
} else {
logger.log("Not authenticated to Facebook");
vm.picturesrc("");
vm.name("");
}
}
});
I have not tested my code thoroughly, but at least logging in and fetching the name worked fine in Chrome.
Remember to change the appId and set the proper domain at developers.facebook.com.