Safari 13.1 navigator.mediaDevices.enumerateDevices() returns only audio devices

I'm facing an issue with desktop Safari 13.1. If I open the console in the Web Inspector (on a regular MacBook, which has a webcam and mic) and execute this command on any kind of website:
navigator.mediaDevices.enumerateDevices()
The first time, the Promise resolves with a videoinput and an audioinput.
The second time, it returns only two audioinput entries; the videoinput has disappeared.
Unfortunately, I call this method several times while checking the available devices in my solution.
Any idea why this happens, and how I could get accurate information about the devices even if I call it more than once?

I've found the same issue, also on my iPad running iOS 13.
It seems you need to request camera access first in order to see the correct device list.
navigator.mediaDevices.getUserMedia({ video: true })
This will prompt you for access to the camera (you need to be on HTTPS or localhost).
Grant permission, then run this again and you should see the videoinput device(s) listed in the returned promise:
navigator.mediaDevices.enumerateDevices()
I guess this makes sense as a privacy feature: a website cannot check whether a camera exists without first asking your permission.
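As a rough illustration (not from the original answer), a minimal sketch of that sequence looks like this:
navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => {
    // If you only needed the permission, stop the tracks right away to release the camera.
    stream.getTracks().forEach((track) => track.stop());
    // After permission is granted, Safari reports the full device list, including videoinput.
    return navigator.mediaDevices.enumerateDevices();
  })
  .then((devices) => {
    devices.forEach((device) => console.log(device.kind, device.label, device.deviceId));
  })
  .catch((error) => console.error('getUserMedia/enumerateDevices failed:', error));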

Related

How to determine which cameras are front and back facing when using HTML5 getUserMedia and enumerateDevices APIs?

When accessing the camera using HTML5 getUserMedia APIs, you can either:
Request an unspecified "user" facing camera
Request an unspecified "environment" facing camera (optionally left or right)
Request a list of cameras available
Originally we used the "facing" constraint to choose the camera. If the camera faces the "user", we show it as a mirror image, as is the convention.
We run into problems, however, when a user does not have exactly 1 user-facing and 1 environment-facing camera. They might be missing one of these, or have multiple. This can result in the wrong camera being used, or the camera not being mirrored appropriately.
So we are looking at enumerating the devices. However, I have not found a way to determine whether a video device is "user facing" and should be mirrored.
Is there any API available to determine whether a video input is "user" facing in these APIs?
When you enumerate devices, devices that are an input may have a method called getCapabilities(). If this method is available you may call it to get a MediaTrackCapabilities object. This object has a field called facingMode which lists the valid facingMode options for that device.
For me this was empty on the PC but on my Android device it populated correctly.
Here's a jsfiddle you can use to check this on your own devices: https://jsfiddle.net/juk61c07/4/
Thanks to the comment from O. Jones for setting me in the right direction.
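For reference, a minimal sketch of that check (not the exact fiddle code); the mirroring decision is my own illustration:
navigator.mediaDevices.enumerateDevices().then((devices) => {
  devices
    .filter((device) => device.kind === 'videoinput')
    .forEach((device) => {
      // getCapabilities() may be missing, or facingMode may come back empty on desktops.
      const capabilities = device.getCapabilities ? device.getCapabilities() : {};
      const facing = capabilities.facingMode || [];
      // Mirror the preview only for user-facing cameras, per the usual convention.
      const mirror = facing.includes('user');
      console.log(device.label, facing, mirror ? '(mirror)' : '');
    });
});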

Does Agora support screen share on Safari?

I don't have a straight answer on whether Agora supports screen sharing on Safari. This 4.x API page does not seem to list Safari at all, and there is some chatter on Stack Overflow to that effect (at least for the 3.x API): https://docs.agora.io/en/Interactive%20Broadcast/screensharing_web_ng?platform=Web
This is a showstopper for me, so I'd appreciate a straight YES or NO answer on whether Agora supports screen sharing on Safari.
When I tried it, I got a getDisplayMedia error on Safari 13+:
"getDisplayMedia must be called from a user gesture handler". I do indeed create the new client, join, and publish the local video upon an actual user click on a button, so I'm not sure why we get this error. It only happens with screen share; camera and mic work fine.
Looks like you answered your own question on the Agora RTE Dev Slack; I'll relay it here for anyone looking for a solution.
How Sri did it was essentially:
AgoraRTC.createScreenVideoTrack(..).then(() => client.join(..))
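A hedged sketch of that pattern with the 4.x (NG) Web SDK; the app ID, channel, token and button id below are placeholders, and the key point is that createScreenVideoTrack runs directly inside the click handler so Safari's user-gesture requirement for getDisplayMedia is satisfied:
// Sketch only: assumes the Agora Web SDK NG (4.x) is loaded as AgoraRTC.
const client = AgoraRTC.createClient({ mode: 'rtc', codec: 'vp8' });

document.getElementById('share-button').addEventListener('click', async () => {
  // Create the screen track first, before any other awaits, so the gesture is still "live".
  const screenTrack = await AgoraRTC.createScreenVideoTrack({});
  // Placeholder credentials: replace with your own project's values.
  await client.join('<APP_ID>', '<CHANNEL>', '<TOKEN>', null);
  await client.publish(screenTrack);
});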

Liveview on Android/QX1 Sony Camera API 2.01 fails

Using the supplied Android demo from
https://developer.sony.com/downloads/all/sony-camera-remote-api-beta-sdk/
I connected to the Wi-Fi of a Sony QX1. The sample application finds the camera device and is able to connect to it.
The liveview is not displaying correctly. At most, one frame is shown and the code hits an exception in SimpleLiveViewSlicer.java
if (commonHeader[0] != (byte) 0xFF) {
    throw new IOException("Unexpected data format. (Start byte)");
}
Shooting a photo does not seem to work. Zooming does work - the lens moves. The camera works fine when using the PlayMemories app directly, so it is not a hardware issue.
Hoping for advice from Sony on this one - standard hardware and the demo application should work.
Can you provide some details of your setup?
What version of Android SDK are you compiling with?
What IDE and OS are you using?
Have you installed the latest firmware? (http://www.sony.co.uk/support/en/product/ILCE-QX1#SoftwareAndDownloads)
Edit:
We tested the sample code using a QX1 lens and the same setup as you and were able to run the sample code just fine.
One thing to check is whether the liveview is ready to transfer images. To confirm whether the camera is ready to transfer liveview images, the client can check the “liveviewStatus” status of the “getEvent” API (see the API specification for details). Perhaps there is a timing issue due to connection speed that is causing the crash.
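The Camera Remote API is plain JSON-RPC over HTTP, so a rough sketch of that readiness check could look like the following; the endpoint URL is only the common default for Sony cameras (it should really come from the device-description/SSDP discovery step), and the exact shape of the getEvent result should be confirmed against the API specification:
// Assumption: the camera's JSON-RPC endpoint; verify against your device's discovery response.
const CAMERA_ENDPOINT = 'http://192.168.122.1:8080/sony/camera';

async function isLiveviewReady() {
  const response = await fetch(CAMERA_ENDPOINT, {
    method: 'POST',
    body: JSON.stringify({ method: 'getEvent', params: [false], id: 1, version: '1.0' }),
  });
  const { result } = await response.json();
  // getEvent returns an array of event objects; look for the one describing the liveview status.
  const liveviewEvent = (result || []).find((item) => item && item.type === 'liveviewStatus');
  return Boolean(liveviewEvent && liveviewEvent.liveviewStatus);
}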

How to connect/disconnect the camera device using getUserMedia and WebRTC

I am creating an audio/video and chat application using WebRTC and Node.js. I need to mute and unmute the camera device.
Presently, I am able to disconnect, and the other party can no longer see me, but the problem is that it doesn't disconnect the camera: it remains active and connected, as the camera light is still on.
I need help with how to disconnect the camera when muted and connect it back when unmuted. I want the same feature as in a Skype video call.
It varies a bit between Firefox and Chrome. These steps, in this order, work for me.
1) Set the src property on your video element to empty string ''.
2) Make sure the stop method exists before calling it as a function. Firefox doesn't have it, and if you try to run it, your code will throw an error.
if (localStream && localStream.stop) {
localStream.stop();
}
3) After you call localStream.stop() (or not), set localStream = null. (Maybe not actually necessary, but it can't hurt to let the object get garbage-collected. And when the user asks to start the camera up again, you can check the variable to see whether you need to clean up after the previous stream before starting a new one.)
When you are getting your media, in your success callback you have to keep your localstream in a variable. Then, when you want to stop your stream, you can call localstream.stop();
To start again, you can simply call your getUserMedia() method again.
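Note that MediaStream.stop() was later removed from the spec; in current browsers you stop the individual tracks instead. A minimal sketch of that approach (the variable and element names are mine):
let localStream = null;

async function startCamera(videoElement) {
  localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  videoElement.srcObject = localStream; // modern replacement for setting video.src
}

function stopCamera(videoElement) {
  if (localStream) {
    // Stopping every track releases the camera and microphone, so the camera light turns off.
    localStream.getTracks().forEach((track) => track.stop());
    localStream = null;
  }
  videoElement.srcObject = null;
}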

Screen sharing with WebRTC?

We're exploring WebRTC but have seen conflicting information on what is possible and supported today.
With WebRTC, is it possible to recreate a screen sharing service similar to join.me or WebEx where:
You can share a portion of the screen
You can give control to the other party
No downloads are necessary
Is this possible today with any of the WebRTC browsers? How about Chrome on iOS?
The chrome.tabCapture API is available for Chrome apps and extensions.
This makes it possible to capture the visible area of the tab as a stream which can be used locally or shared via RTCPeerConnection's addStream().
For more information see the WebRTC Tab Content Capture proposal.
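As a rough illustration (not from the original answer): inside an extension that declares the "tabCapture" permission in its manifest, the capture itself looks something like the sketch below; the peer-connection wiring is only indicative.
// Call from extension code (e.g. a background page); requires the "tabCapture" permission.
chrome.tabCapture.capture({ video: true, audio: false }, (stream) => {
  if (!stream) {
    console.error('tabCapture failed:', chrome.runtime.lastError);
    return;
  }
  // Hand the captured tab to a peer connection, as with any other local stream.
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
});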
Screensharing was initially supported for 'normal' web pages using getUserMedia with the chromeMediaSource constraint – but this has been disallowed.
EDIT 1 April 2015: Edited now that screen sharing is only supported by Chrome in Chrome apps and extensions.
You probably know that screen capture (not tabCapture) is available in Chrome Canary (26+). We just recently published a demo at https://screensharing.azurewebsites.net
Note that you need to run it under https://, with the following constraints:
video: {
  mandatory: {
    chromeMediaSource: 'screen'
  }
}
You can also find an example here; https://html5-demos.appspot.com/static/getusermedia/screenshare.html
I know I am answering a bit late, but I hope it helps those who stumble upon this page, if not the OP.
At this moment, both Firefox and Chrome support sharing the entire screen, or part of it (some application window which you can select), with peers through WebRTC as a MediaStream, just like your camera/microphone feed - so there is no option to let the other party take control of your desktop yet. Other than that, there is another catch: your website has to be running over HTTPS, and in both Firefox and Chrome the users are going to have to install extensions.
You can give it a try in this Muaz Khan's Screen-sharing Demo, the page contains the required extensions too.
P.S.: If you do not want to install an extension to run the demo in Firefox (there is no way to escape extensions in Chrome), you just need to modify two flags:
go to about:config
set media.getusermedia.screensharing.enabled to true.
add *.webrtc-experiment.com to the media.getusermedia.screensharing.allowed_domains flag.
refresh the demo page and click the share screen button.
To the best of my knowledge, it's not possible right now with any of the browsers, though the Google Chrome team has said that they're eventually intending to support this scenario (see the "Screensharing" bullet point on their roadmap); and I suspect that this means that eventually other browsers will follow, presumably with IE and Safari bringing up the rear. But all of that is probably out somewhere past February, which is when they're supposed to finalize the current WebRTC standard and ship production bits. (Hopefully Microsoft's last-minute spanner in the works doesn't screw that up.) It's possible that I've missed something recent, but I've been following the project pretty carefully, and I don't think screensharing has even made it into Chrome Canary yet, let alone dev/beta/prod. Opera is the only browser that has been keeping pace with Chrome on its WebRTC implementation (Firefox seems to be about six months behind), and I haven't seen anything from that team either about screensharing.
I've been told that there is one way to do it right now, which is to write your own webcam driver, so that your local screen appears to the WebRTC getUserMedia() API as just another video source. I don't know that anybody has done this - and of course, it would require installing the driver on the machine in question. By the time all is said and done, it would probably just be easier to use VNC or something along those lines.
navigator.mediaDevices.getDisplayMedia({ video: true }).then((stream) => {
  // todo...
});
Now you can do that natively; note that Safari differs from Chrome in how audio capture is handled.
It is possible - I have worked on this and built a demo for screen sharing. During the session the watcher can access your mouse and keyboard: if they move their mouse, your mouse also moves, and if they type on their keyboard, it is typed into your PC.
View this code; this code is for screen sharing...
These days you can share the screen with this; you don't need any extensions.
const getLocalScreenCaptureStream = async () => {
  try {
    const constraints = { video: { cursor: 'always' }, audio: false };
    const screenCaptureStream = await navigator.mediaDevices.getDisplayMedia(constraints);
    return screenCaptureStream;
  } catch (error) {
    console.error('failed to get local screen', error);
  }
};
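For completeness, a small usage sketch (the video element id is a placeholder of mine): attach the captured stream to a local preview, or add its tracks to a peer connection as with any other stream.
(async () => {
  const screenStream = await getLocalScreenCaptureStream();
  if (screenStream) {
    // Placeholder element id: a <video> element used as a local preview.
    const videoElement = document.getElementById('screen-preview');
    videoElement.srcObject = screenStream;
    await videoElement.play();
  }
})();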