I have a 4k Logitech Brio webcam and I can pull live video from it using WebRTC/getUserMedia. Sadly only in HD (1920x1080) … is there any way to use the 4k capabilities of the camera in the browser/Electron app?
I'm working on a single-instance media installation, so cross-browser support is not an issue. I'm targeting whatever webkit build electron-builder will package.
Thanks!
getUserMedia can be very... peculiar currently in most browsers, electron included.
First, make sure you are using your constraints correctly. To get 4k you should be trying something similar to this:
{
  audio: false,
  video: {
    width: { exact: 3840 },
    height: { exact: 2160 }
  }
}
Then, if that works, tone the constraints down from there so that other, non-UHD webcams work too. Make sure you read up on the constraints and what is possible here, and always include the WebRTC adapter.js shim; even in the latest version of Electron it is still needed (mainly for conversion of error names to the proper "standard" ones).
Most likely you will end up with a constraints setup similar to this:
{
  audio: false,
  video: {
    width: {
      min: 1280,
      ideal: 3840,
      max: 3840
    },
    height: {
      min: 720,
      ideal: 2160,
      max: 2160
    }
  }
}
That will make the browser attempt to get a 4k resolution, but it will step down to a minimum of 720p if needed.
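For reference, here is a minimal sketch of how those constraints might be passed to getUserMedia and the resulting stream attached to a video element (the preview element id is just an assumption for illustration):

const constraints = {
  audio: false,
  video: {
    width: { min: 1280, ideal: 3840, max: 3840 },
    height: { min: 720, ideal: 2160, max: 2160 }
  }
};

// Hypothetical <video id="preview" autoplay> element, used only for illustration.
navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) {
    document.getElementById('preview').srcObject = stream;
  })
  .catch(function(err) {
    // With adapter.js loaded, the error names should be the standard ones (e.g. OverconstrainedError).
    console.error('getUserMedia failed:', err.name, err.message);
  });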
Also, if you want to check whether your browser/camera supports UHD correctly, you can always try this website, which will run a test to determine which resolutions getUserMedia supports on your system.
And finally, make sure you are choosing the right camera. Many new devices include multiple environment-facing cameras, and if you don't specify the deviceId you want to use, the user agent will pick one for you, and it often chooses poorly (for example, a Kyocera phone I recently worked with used a wide-angle lens by default unless told otherwise, and that lens didn't support any "normal" resolutions, making it fall back to a very low resolution and a very strange aspect ratio).
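As a rough sketch, you could enumerate the video inputs and pass the chosen deviceId into the constraints; the label matching below is just an assumption for illustration and will differ per camera:

// Note: device labels are usually empty until the page has been granted camera permission.
navigator.mediaDevices.enumerateDevices().then(function(devices) {
  // Pick the camera whose label mentions "BRIO"; adjust the match for your hardware.
  const brio = devices.find(function(d) {
    return d.kind === 'videoinput' && /brio/i.test(d.label);
  });

  return navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
      deviceId: brio ? { exact: brio.deviceId } : undefined,
      width: { ideal: 3840 },
      height: { ideal: 2160 }
    }
  });
});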
Related
I'm sending video from OBS Studio to Ant Media Server at 1280x720, but the WebRTC embed iframe is serving it at 560x315. How can I make the latter match the former?
You can change the WebRTC stream resolution by editing the media constraints in the /usr/local/antmedia/webapps/YOUR_APP/index.html file. For example, to make it 360x240 you can set the media constraints as:
var mediaConstraints = {
  video: { width: 360, height: 240 },
  audio: true
};
You may also want to change the video bitrate in proportion to the resolution settings. You can pass the bandwidth parameter of WebRTCAdaptor as a numeric kbps value or as "unlimited" for maximum bandwidth; the default is 900 kbps.
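As a sketch only, assuming the stock index.html constructs a WebRTCAdaptor with an options object (the websocket URL below is a placeholder and the remaining callbacks/options are omitted), the constraints and bandwidth might be wired up like this:

var mediaConstraints = {
  video: { width: 360, height: 240 },
  audio: true
};

// Placeholder websocket URL; keep the other options/callbacks from your existing index.html.
var webRTCAdaptor = new WebRTCAdaptor({
  websocket_url: 'wss://your-server:5443/YOUR_APP/websocket',
  mediaConstraints: mediaConstraints,
  bandwidth: 600 // kbps; use "unlimited" to remove the cap
});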
I understand that I can resize the iframe, but when I do that it doesn't change the size of the video stream coming from Ant Media Server. How do I change that resolution?
I am building an app for video streaming using HLS from s3.
I want to add the functionality to select the video quality.
I'm still unable to find out how to select the desired quality.
Can anyone please help me with this issue?
If someone knows a react-native API or some other solution, please help.
Thanks
It is possible to select a video quality manually. According to the react-native-video documentation, use the selectedVideoTrack prop.
see https://github.com/react-native-community/react-native-video#selectedvideotrack
example
selectedVideoTrack={{
  type: "resolution",
  value: 1080
}}
Add these lines to react-native.config.js:
module.exports = {
  dependencies: {
    'react-native-video': {
      platforms: {
        android: {
          sourceDir: '../node_modules/react-native-video/android-exoplayer',
        },
      },
    },
  },
  assets: ['./src/assets/fonts/'],
};
Remove assets: ['./src/assets/fonts/'] if you don't have a fonts directory.
Then, in the Video component, select the video track like this:
selectedVideoTrack={{
  type: "resolution",
  value: 1080
}}
This solution was tested only on Android devices.
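For context, a minimal sketch of how the prop might sit inside the Video component (the HLS URL and style are placeholders):

import React from 'react';
import Video from 'react-native-video';

// Placeholder stream URL, just to show where selectedVideoTrack goes.
const Player = () => (
  <Video
    source={{ uri: 'https://example.com/stream/master.m3u8' }}
    style={{ width: '100%', height: 240 }}
    controls
    selectedVideoTrack={{ type: 'resolution', value: 1080 }}
  />
);

export default Player;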
Original answer: https://stackoverflow.com/a/71148981/4291272
If you have multiple quality renditions in your HLS video, you could use hls.js for the playback component. Normally this will just switch playback for you, but you can control this manually with the Quality Switch API.
I.e. you would get the available qualities using hls.levels and iterate through them, then set hls.currentLevel to the desired quality.
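A minimal sketch of that approach (the video element id and manifest URL are placeholders):

// Assumes hls.js is loaded and a <video id="player"> element exists on the page.
const video = document.getElementById('player');
const hls = new Hls();
hls.loadSource('https://example.com/video/master.m3u8');
hls.attachMedia(video);

hls.on(Hls.Events.MANIFEST_PARSED, () => {
  // Each entry in hls.levels describes one rendition (height, bitrate, etc.).
  hls.levels.forEach((level, index) => {
    console.log(index, level.height, level.bitrate);
  });

  // Force a specific rendition instead of automatic switching; set -1 to restore auto.
  const wanted = hls.levels.findIndex((level) => level.height === 720);
  if (wanted !== -1) {
    hls.currentLevel = wanted;
  }
});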
I'd like to use two geolocation watchers in my app: one with useSignificantChanges and one with high accuracy.
The "lowres" watcher would provide my Redux store with approximate locations all the time, whereas the "highres" watcher would be enabled when the user is working in a live view.
Here are the options for the low res watcher
const options = {
  enableHighAccuracy: false,
  useSignificantChanges: true,
  distanceFilter: 500,
};
And the high res watcher:
const options = {
  enableHighAccuracy: true,
  timeout: 60e3,
  maximumAge: 10e3,
};
I have played around with the settings, but I can't see any difference in the output. Both watchers emit the exact same positions at the same time. I'm using the iOS simulator for the moment.
Questions:
I should be able to have several watchers, shouldn't I? What would be the point of the returned watchId otherwise?
Is this a problem only in the simulator?
Have I misunderstood or goofed?
Edit, the actual question is:
Why do I get highly frequent, accurate GPS positions even in "significant changes" mode? This mode is supposed to save battery, if I have understood correctly.
Thanks!
The useSignificantChanges option is fairly new and only recently implemented, so you need to make sure that:
You are using a version of React Native that supports it. Based on merge dates, looks to be v0.47+
You are testing on an iOS version that can use it. The Github issue states that this is a new feature that impacts iOS 11, so I presume that you would need at least that version to see any differences.
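For what it's worth, a minimal sketch of registering both watchers side by side, assuming React Native's navigator.geolocation API (the logging callbacks are placeholders):

// Low-res watcher: significant changes only, intended to save battery.
const lowResId = navigator.geolocation.watchPosition(
  (position) => console.log('low-res', position.coords),
  (error) => console.warn('low-res error', error),
  { enableHighAccuracy: false, useSignificantChanges: true, distanceFilter: 500 }
);

// High-res watcher: enable only while the user is in the live view.
const highResId = navigator.geolocation.watchPosition(
  (position) => console.log('high-res', position.coords),
  (error) => console.warn('high-res error', error),
  { enableHighAccuracy: true, timeout: 60e3, maximumAge: 10e3 }
);

// Each call returns its own watchId, so either watcher can be stopped independently,
// e.g. when the live view unmounts:
navigator.geolocation.clearWatch(highResId);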
I am trying to test WebRTC and want to display my own stream as well as the peer's stream. I currently have a simple shim to obtain the camera's stream and pipe that into a video element; however, the frame rate is extremely low. The strange thing is that I can try examples from the WebRTC site and they work flawlessly: the video is smooth and there are no problems. I go to the console and my code resembles theirs. What could be happening? I tried to create a fiddle and to run the code in Brackets, but it still performs horribly.
video = document.getElementById('usr-cam');

navigator.mediaDevices.getUserMedia({
  video: {
    width: { exact: 320 },
    height: { exact: 240 }
  }
})
.then(function(stream) {
  if (navigator.mozGetUserMedia) {
    video.mozSrcObject = stream;
  } else {
    video.srcObject = stream;
  }
})
.catch(function(e) {
  alert(e);
});
Pretty much everything I do. Take into account that I am using the new navigator.mediaDevices API instead of navigator.getUserMedia(), but I don't see how that would matter, since 1. I am using the shim provided by the WebRTC group, named adapter.js, which they themselves use, and 2. I don't think how you obtain the video stream would affect performance.
Alright, I feel very stupid for this one... I was deceived by the fact that the video element will update the displayed image without you having to do anything but pipe the output stream in, which means the image does update, just at really long intervals, making it seem as if the video is lagging. What I forgot to do was actually play() the video or add autoplay as an attribute... it works well now.
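In other words, the fix boils down to something like this (same code as in the question, with the missing play() call added; alternatively, put the autoplay attribute on the <video> element):

var video = document.getElementById('usr-cam');

navigator.mediaDevices.getUserMedia({
  video: { width: { exact: 320 }, height: { exact: 240 } }
})
.then(function(stream) {
  video.srcObject = stream;
  // The missing piece: actually start playback.
  return video.play();
})
.catch(function(e) {
  alert(e);
});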
We're exploring WebRTC but have seen conflicting information on what is possible and supported today.
With WebRTC, is it possible to recreate a screen sharing service similar to join.me or WebEx where:
You can share a portion of the screen
You can give control to the other party
No downloads are necessary
Is this possible today with any of the WebRTC browsers? How about Chrome on iOS?
The chrome.tabCapture API is available for Chrome apps and extensions.
This makes it possible to capture the visible area of the tab as a stream which can be used locally or shared via RTCPeerConnection's addStream().
For more information see the WebRTC Tab Content Capture proposal.
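A rough sketch of what that might look like in an extension's background page (it requires the "tabCapture" permission; the signaling and the rest of the RTCPeerConnection setup are omitted):

// Capture the currently active tab as a MediaStream.
chrome.tabCapture.capture({ audio: false, video: true }, function(stream) {
  if (!stream) {
    console.error(chrome.runtime.lastError);
    return;
  }

  // Hand the stream to a peer connection; addStream() as mentioned above
  // (modern code would use addTrack() instead).
  var pc = new RTCPeerConnection();
  pc.addStream(stream);
});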
Screensharing was initially supported for 'normal' web pages using getUserMedia with the chromeMediaSource constraint – but this has been disallowed.
EDIT 1 April 2015: Edited now that screen sharing is only supported by Chrome in Chrome apps and extensions.
You guys probably know that screen capture (not tabCapture) is available in Chrome Canary (26+). We just recently published a demo at https://screensharing.azurewebsites.net
Note that you need to run it under https://:
video: {
  mandatory: {
    chromeMediaSource: 'screen'
  }
}
You can also find an example here: https://html5-demos.appspot.com/static/getusermedia/screenshare.html
I know I am answering a bit late, but I hope it helps those who stumble upon this page, if not the OP.
At this moment, both Firefox and Chrome support sharing the entire screen, or part of it (some application window which you can select), with peers through WebRTC as a MediaStream, just like your camera/microphone feed, so there is no option to let the other party take control of your desktop yet. Other than that, there is another catch: your website has to be running over https, and in both Firefox and Chrome the users are going to have to install extensions.
You can give it a try in Muaz Khan's Screen-sharing Demo; the page contains the required extensions too.
P.S.: If you do not want to install an extension to run the demo in Firefox (there is no way to escape extensions in Chrome), you just need to modify two flags:
Go to about:config.
Set media.getusermedia.screensharing.enabled to true.
Add *.webrtc-experiment.com to the media.getusermedia.screensharing.allowed_domains flag.
Refresh the demo page and click on the share screen button.
To the best of my knowledge, it's not possible right now with any of the browsers, though the Google Chrome team has said that they're eventually intending to support this scenario (see the "Screensharing" bullet point on their roadmap); and I suspect that this means that eventually other browsers will follow, presumably with IE and Safari bringing up the tail. But all of that is probably out somewhere past February, which is when they're supposed to finalize the current WebRTC standard and ship production bits. (Hopefully Microsoft's last-minute spanner in the works doesn't screw that up.) It's possible that I've missed something recent, but I've been following the project pretty carefully, and I don't think screensharing has even made it into Chrome Canary yet, let alone dev/beta/prod. Opera is the only browser that has been keeping pace with Chrome on its WebRTC implementation (Firefox seems to be about six months behind), and I haven't seen anything from that team either about screensharing.
I've been told that there is one way to do it right now, which is to write your own webcam driver, so that your local screen appears to the WebRTC getUserMedia() API as just another video source. I don't know that anybody has done this - and of course, it would require installing the driver on the machine in question. By the time all is said and done, it would probably just be easier to use VNC or something along those lines.
const constraints = { video: true, audio: false };
navigator.mediaDevices.getDisplayMedia(constraints).then((stream) => {
  // todo...
});
Now you can do that, but note that Safari is different from Chrome with regard to audio.
It is possible. I have worked on this and built a demo for screen share. During the session the watcher can access your mouse and keyboard: if he moves his mouse then your mouse also moves, and if he types on his keyboard, it is typed into your PC.
View this code; this code is for screen share...
These days you can share the screen like this; you don't need any extensions.
const getLocalScreenCaptureStream = async () => {
  try {
    const constraints = { video: { cursor: 'always' }, audio: false };
    const screenCaptureStream = await navigator.mediaDevices.getDisplayMedia(constraints);
    return screenCaptureStream;
  } catch (error) {
    console.error('failed to get local screen', error);
  }
};
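For example, the returned stream could be previewed in a local video element like this (the element id is just a placeholder):

// Hypothetical <video id="localScreen" autoplay muted> element for a local preview.
getLocalScreenCaptureStream().then((stream) => {
  if (stream) {
    document.getElementById('localScreen').srcObject = stream;
  }
});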