Webcam not working with JavaScript on some machines - vue.js

I'm building an Electron application with Vue.js that uses a webcam. The webcam works within the Electron application on one computer, but it just shows a black screen on another. The only notable difference (I think) is that the machine where it works (Machine A) uses Node v14.15.0, while the machine where it doesn't (Machine B) uses v12.18.4.
I have tested the webcam on Machine B separately. It works via the native camera app on Windows and via this online tool. The interesting thing is that both the integrated and external webcams fail to work in the app. As soon as I start the stream the light comes on, but that's it. It seems that the promise from .getUserMedia() isn't resolving (see code snippet), but I can't identify why.
How can I get the webcam to stream?
let mediaInputs = [];
let devices = [];

if (!navigator.mediaDevices || !navigator.mediaDevices.enumerateDevices) {
  console.log("enumerateDevices() not supported.");
  return;
}

mediaInputs = await navigator.mediaDevices.enumerateDevices();
devices = mediaInputs.filter((device) => device.kind === "videoinput");

// Stop any existing streams
if (this.video.srcObject !== undefined) {
  const l = this.video.srcObject;
  l.getTracks().forEach((track) => track.stop());
}

const sourceInfo = this.videoSources.find((o) => o.label === this.source);
const constraints = {
  video: {
    deviceId: sourceInfo.deviceId,
    width: 1280,
    height: 720,
  },
  audio: false,
};

try {
  console.log('This line is logged');
  // This is where I start the stream.
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  console.log('This line is never reached');
  this.video = this.$refs.video;
  this.video.srcObject = stream;
  this.video.play();
} catch (error) {
  this.showSnackbar(error);
  console.error(error);
}

I just had to update to the latest version of Electron (11.1.1 at the time of writing) for it to work.
However, if you're still having trouble, there's a GitHub thread on the topic that's still active.

Related

Linking.canOpenURL isn't accurate for checking which Maps app is installed / choosing the correct maps app

I have a button with an address, and when it is tapped, I want to use the "default" maps app which is installed. The reason is that, for example, many iOS users uninstall the Apple Maps app, so they only have Google Maps. Checking iOS ? 'maps' : 'google' isn't safe because the choice can't simply be platform dependent.
This is using Expo SDK 46.
I then read a suggestion to try something like:
const openUrl = () => {
  const mapNames = ['comgooglemaps', 'maps'];
  const hasApp = mapNames.find(async (name) => {
    try {
      return await Linking.canOpenURL(
        `${name}://?center=${vehicle.coordinates.latitude}, ${vehicle.coordinates.longitude}`,
      );
    } catch (_e) {
      return false;
    }
  });
  openMap({
    provider: hasApp,
    end: vehicle.streetAddress,
  });
};
but this isn't working because Array.prototype.find doesn't await an async callback: the callback returns a Promise, which is always truthy, so find always returns the first item regardless of what Linking.canOpenURL would resolve to.
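To make the pitfall concrete, here is a minimal sketch of what an awaited version would have to look like, with a plain loop instead of find (findInstalledMapApp is a made-up name, and the coordinate query is omitted for brevity):

// Sketch: await each canOpenURL check in turn; find() can't do this.
// Note: on iOS, canOpenURL can only query schemes listed under
// LSApplicationQueriesSchemes in Info.plist.
const findInstalledMapApp = async () => {
  const mapNames = ['comgooglemaps', 'maps'];
  for (const name of mapNames) {
    try {
      if (await Linking.canOpenURL(`${name}://`)) {
        return name;
      }
    } catch (_e) {
      // scheme not queryable; try the next one
    }
  }
  return null; // neither app responded
};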
So I tried an alternate option, based on other suggestions I researched:
const openUrl = async () => {
  let hasGoogleMaps = false;
  await Linking.canOpenURL('comgooglemaps').then((canOpen) => {
    if (canOpen) {
      hasGoogleMaps = true;
    }
  });
  openMap({
    provider: hasGoogleMaps ? 'google' : 'apple',
    end: vehicle.streetAddress,
  });
};
This too fails to open Google Maps on iOS.
My question is: how can I know for sure whether Google Maps is installed, and not base it on Platform.OS itself?
Bonus question: is it true that I cannot install Google Maps on a simulator?

WebRTC getUserMedia: how to get a stream from getUserMedia and publish it to SRS?

How do I get a stream using HTML5 getUserMedia and publish it to SRS?
I want to get the stream directly from the browser, not by using OBS or ffmpeg.
Is any sample available?
Well, it depends on your use scenario.
If you want to do live streaming, please see this post, the media flow:
Browser --WebRTC--> SRS --HLS/HTTP-FLV--> Viewer
If you want to do video meeting, please see this post, the media flow:
Browser <--WebRTC--> SRS <--WebRTC--> Viewer
Note that for video meeting, there should be NxN streams in a room.
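For the live-streaming flow, a rough sketch of the browser side is below. It assumes SRS's HTTP API for SDP exchange at /rtc/v1/publish/ (the endpoint the bundled srs.sdk.js talks to); verify the exact URL and payload against your SRS version:

// Rough sketch: capture a local stream and publish it to SRS over WebRTC.
// Assumes SRS's HTTP API on port 1985 at /rtc/v1/publish/ -- check your version.
async function publishToSRS(host, app, streamName) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const pc = new RTCPeerConnection();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // POST the SDP offer to SRS and apply the SDP answer it returns.
  const res = await fetch(`http://${host}:1985/rtc/v1/publish/`, {
    method: 'POST',
    body: JSON.stringify({
      streamurl: `webrtc://${host}/${app}/${streamName}`,
      sdp: offer.sdp,
    }),
  });
  const { sdp } = await res.json();
  await pc.setRemoteDescription({ type: 'answer', sdp });
  return pc;
}

// e.g. publishToSRS('localhost', 'live', 'livestream');
Viewers can then play the stream over HLS/HTTP-FLV as described above.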
I have a solution; check the code below.
HTML code: here you only need a video tag.
Index.html
<video id="remoteScreen" autoplay="true"></video>
Screenshare.js file
const getLocalScreenCaptureStream = async () => {
  try {
    const constraints = { video: { cursor: 'always' }, audio: false };
    const screenCaptureStream = await navigator.mediaDevices.getDisplayMedia(constraints);
    return screenCaptureStream;
  } catch (error) {
    console.error('failed to get local screen', error);
  }
};
main.js
var localStreamScreen = null;

async function shareScreen() {
  localStreamScreen = await getLocalScreenCaptureStream();
  console.log("localStreamScreen: ", localStreamScreen);
}
screenshare.js
function handleRemoteStreamAddedScreen(event) {
  console.log('Remote stream added.');
  alert('Remote stream added.');
  if ('srcObject' in remoteScreen) {
    remoteScreen.srcObject = event.streams[0];
  } else {
    // deprecated
    remoteScreen.src = window.URL.createObjectURL(event.stream);
  }
  remoteScreenStream = event.stream;
}
Hope it works for you.

React-Native-Image-Picker Auto video recording possible?

I'm a beginner at React Native.
I am trying to access the native (built-in) camera app on an Android device.
I used React-Native-Image-Picker to open the camera app, but I would like to record a video automatically, i.e. without tapping the record button with my finger.
I need code that starts and stops the recording.
(I don't mean for you to give me code; rather, please advise whether it is even possible.)
Any help would be very much appreciated.
Thank you!
It is possible.
Package: https://github.com/mrousavy/react-native-vision-camera
Review the API and Guide section to see how to start and stop recording programmatically.
They also show an example app that demonstrates different types of capture, including video recording; ref: https://github.com/mrousavy/react-native-vision-camera/blob/28fc6a68a5744efc85b532a338e2ab1bc8fa45fe/example/src/views/CaptureButton.tsx
...
const onStoppedRecording = useCallback(() => {
  isRecording.current = false;
  cancelAnimation(recordingProgress);
  console.log('stopped recording video!');
}, [recordingProgress]);

const stopRecording = useCallback(async () => {
  try {
    if (camera.current == null) throw new Error('Camera ref is null!');
    console.log('calling stopRecording()...');
    await camera.current.stopRecording();
    console.log('called stopRecording()!');
  } catch (e) {
    console.error('failed to stop recording!', e);
  }
}, [camera]);

const startRecording = useCallback(() => {
  try {
    if (camera.current == null) throw new Error('Camera ref is null!');
    console.log('calling startRecording()...');
    camera.current.startRecording({
      flash: flash,
      onRecordingError: (error) => {
        console.error('Recording failed!', error);
        onStoppedRecording();
      },
      onRecordingFinished: (video) => {
        console.log(`Recording successfully finished! ${video.path}`);
        onMediaCaptured(video, 'video');
        onStoppedRecording();
      },
    });
    // TODO: wait until startRecording returns to actually find out if the recording has successfully started
    console.log('called startRecording()!');
    isRecording.current = true;
  } catch (e) {
    console.error('failed to start recording!', e, 'camera');
  }
}, [camera, flash, onMediaCaptured, onStoppedRecording]);
//#endregion
...
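If you just want the minimal shape, here is a sketch of a component that records without any tap on the camera UI (v2-style API; AutoRecorder is a made-up name, and hook/prop names should be verified against the version you install):

// Minimal sketch of programmatic recording with react-native-vision-camera.
// Camera/microphone permissions must already have been granted.
import React, { useRef } from 'react';
import { Button, View } from 'react-native';
import { Camera, useCameraDevices } from 'react-native-vision-camera';

export function AutoRecorder() {
  const camera = useRef(null);
  const devices = useCameraDevices();
  const device = devices.back;

  if (device == null) return null;

  const start = () =>
    camera.current?.startRecording({
      onRecordingFinished: (video) => console.log('saved to', video.path),
      onRecordingError: (error) => console.error(error),
    });
  const stop = () => camera.current?.stopRecording();

  return (
    <View style={{ flex: 1 }}>
      <Camera ref={camera} style={{ flex: 1 }} device={device} isActive video />
      <Button title="Start recording" onPress={start} />
      <Button title="Stop recording" onPress={stop} />
    </View>
  );
}

Calling start() from a useEffect would begin recording without any user interaction, which seems to be what you're after.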

Closing WebRTC track will not close camera device or tab camera indicator

Banging my head against the wall with this one; I can't seem to understand what is holding on to the camera video stream and not releasing it when MediaStreamTrack.stop() is called.
I have a TypeScript class where I handle getting the WebRTC stream track and passing it, via an observable event, to a functional React component. The code below is the component registering to the event and using state for the stream track.
const [videoStreamTrack, setVideoStreamTrack] = useState<MediaStreamTrack>(null)

useEffect(() => {
  return () => {
    videoStreamTrack?.stop()
    videoElement.current.srcObject.getVideoTracks().forEach((track) => {
      track.stop()
      videoElement.current.srcObject.removeTrack(track)
    })
    videoElement.current.srcObject = null
  }
}, [])
case RoomEvents.WebcamProducerAdded:
case RoomEvents.VideoStreamReplaced: {
  if (result.data?.track) {
    if (result.data.track.kind === 'video') {
      previewVideoStreamTrack?.stop()
      setPreviewVideoStreamTrack(null)
      setVideoStreamTrack(result.data.track)
    }
  }
  break
}
In the "Room" class I use the below code to grab the stream.
const videoDevice = this.webcam.device
if (!videoDevice) {
  throw new Error('no webcam devices')
}

const userMedia = await navigator.mediaDevices.getUserMedia({
  video: this.environmentPlatformService.isMobile
    ? true
    : {
        deviceId: {
          exact: this.webcam.device.deviceId,
        },
        ...VIDEO_CONSTRAINS[this.webcam.resolution],
      },
})

const videoTrack = userMedia.getVideoTracks()[0]
this.eventSubject.next({
  eventName: RoomEvents.WebcamProducerAdded,
  data: {
    track: videoTrack,
  },
})
I am holding on to the this.webcam.device details using the code below.
async updateInputOutputMediaDevices(): Promise<MediaDeviceInfo[]> {
  await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  })
  const devices = await navigator.mediaDevices.enumerateDevices()
  await this.updateWebcams(devices)
  await this.updateAudioInputs(devices)
  await this.updateAudioOutputs(devices)
  return devices
}

private async updateWebcams(devices: MediaDeviceInfo[]) {
  this.webcams = new Map<string, MediaDeviceInfo>()
  for (const device of devices.filter((d) => d.kind === 'videoinput')) {
    this.webcams.set(device.deviceId, device)
  }
  const array = Array.from(this.webcams.values())
  this.eventSubject.next({
    eventName: RoomEvents.CanChangeWebcam,
    data: {
      canChangeWebcam: array.length > 1,
      mediaDevices: array,
    },
  })
}
Refreshing the page will close the camera and tab indicator.
useEffect(() => {
  return () => {
    videoStreamTrack?.stop()
    videoElement.current.srcObject.getVideoTracks().forEach((track) => {
      track.stop()
      videoElement.current.srcObject.removeTrack(track)
    })
    videoElement.current.srcObject = null
  }
}, [])
So here you are searching for and destroying video tracks. Seems right-ish; we'll see.
async updateInputOutputMediaDevices(): Promise<MediaDeviceInfo[]> {
  await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  })
  const devices = await navigator.mediaDevices.enumerateDevices()
  await this.updateWebcams(devices)
  await this.updateAudioInputs(devices)
  await this.updateAudioOutputs(devices)
  return devices
}
Above I see there's a call for audio too, which might be where the hiccup is. I can't examine it fully, but maybe you're opening both audio and video and closing just the video? Try looping through all the tracks, not just the video ones, and see what's there.
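For illustration, a minimal sketch of releasing everything a stream holds (releaseStream is a made-up helper name):

// Sketch: stop ALL tracks (audio and video), not just getVideoTracks(),
// then detach the stream so nothing keeps the device open.
function releaseStream(videoElement) {
  const stream = videoElement.srcObject
  if (!stream) return
  stream.getTracks().forEach((track) => track.stop())
  videoElement.srcObject = null
}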
@blanknamefornow's answer helped me nail the issue.
We are calling getUserMedia in multiple places, not only in the "room" class handling mediasoup actions but also for preview/device-selection/etc., and we never really closed the tracks we retrieved.
Sometimes those tracks are held in useState variables, and when the component unmounts, if you try to access those variables they have already been nulled by React. The workaround: since the HTML elements are still referenced, stop the tracks through the element when needed. I believe this was the missing ingredient when trying to figure it out.
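A minimal sketch of that workaround, capturing the element itself rather than relying on state (names assumed from the snippets above):

// Sketch: grab the element in the effect body so the cleanup doesn't
// depend on state that React may have nulled by unmount time.
useEffect(() => {
  const element = videoElement.current
  return () => {
    const stream = element?.srcObject
    stream?.getTracks().forEach((track) => track.stop())
    if (element) element.srcObject = null
  }
}, [])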

Trying to capture audio but navigator.mediaDevices.enumerateDevices() is NULL on Safari 12 even with microphone permissions granted

See related question: Navigator.mediaDevices.getUserMedia not working on iOS 12 Safari
We are trying to capture audio from user input using MediaDevices.getUserMedia and AudioContext.
When the user clicks a button, we check for available devices and then capture their audio stream:
let enumDevicePromise = navigator.mediaDevices.enumerateDevices()
  .then((devices) =>
    devices.find((d) => d.kind === "audioinput" && d.label !== "" && d.deviceId === "default")
  )
  .catch((error) => {
    // handle error
  });

this.handleCheckEnumeratedDevices(enumDevicePromise); // capture device in backend

.....

navigator.mediaDevices
  .getUserMedia({
    audio: true,
    video: false,
  })
  .then((stream) => {
    let AudioContext = window.AudioContext || window.webkitAudioContext;
    if (AudioContext) {
      let context = new AudioContext();
      let source = context.createMediaStreamSource(stream);
      let processor = context.createScriptProcessor(4096, 1, 1);
      source.connect(processor);
      processor.connect(context.destination);
      processor.onaudioprocess = (event) => {
        let audioIn = event.inputBuffer.getChannelData(0);
        this.sendMessage(this.toInt16(audioIn));
      };
    } else {
      // handle error, ie, Audio Context not supported
    }
  })
  .catch((error) => {
    // handle error
  });
});
This works fine on Chrome and Firefox, but on Safari 12 we are getting a null response from the enumerateDevices promise, despite allowing microphone permissions, and because of that we aren't able to capture the audio stream.
It happens because Mobile Safari doesn't expose "audioinput" kind of media devices. This is a known limitation.
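If it helps, one defensive pattern is to request the stream first and treat the enumerated "default" device as optional rather than required (a sketch, assuming you can work with whatever device getUserMedia picks):

// Sketch: request the stream first, then enumerate; don't gate capture
// on finding a "default" audioinput, which Safari may never report.
async function getAudioStream() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
  const devices = await navigator.mediaDevices.enumerateDevices();
  const audioInput = devices.find((d) => d.kind === "audioinput");
  return { stream, audioInput }; // audioInput may be undefined on Safari 12
}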