WebRTC ontrack: how to tell if it is a screen sharing session?

function OnTrack(e) {
    if (e.track.kind === "audio") {
    }
    else if (e.track.kind === "video") {
    }
    // A screen sharing track also has e.track.kind === 'video'
};
In the code above we can distinguish between audio and video. But how can I tell if the video stream is actually coming from a screen sharing session?

Modify the SDP
I found a workaround by modifying the SDP. I hope someone can come up with a better solution than this.

The getSettings() method returns the properties of the track, including cursor, displaySurface and logicalSurface, which are only present on a screen sharing track.
With displaySurface as an example:
function OnTrack(e) {
    let settings = e.track.getSettings()
    if (e.track.kind === "audio") {
    }
    else if (e.track.kind === "video") {
        if (settings.displaySurface && (
            settings.displaySurface === "application" ||
            settings.displaySurface === "browser" ||
            settings.displaySurface === "monitor" ||
            settings.displaySurface === "window")) {
            // Screen Sharing
        }
    }
};
The values of displaySurface are application, browser, monitor, window.
You can also get displaySurface from the getConstraints() method of the track.
Update
It turned out you need to pass these constraints/settings when calling getDisplayMedia(constraints):
These constraints apply to MediaTrackConstraints objects specified as
part of the DisplayMediaStreamConstraints object's video property when
using getDisplayMedia() to obtain a stream for screen sharing.
Also note, from the MDN page:
Not all user agents support all of these surface types.
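For example, a minimal sketch of requesting the screen capture with an explicit displaySurface constraint on the caller side (the function and variable names here are illustrative, not part of the original question, and whether displaySurface can be passed as a constraint depends on the browser):
// Caller side: request a screen capture so the receiving track's
// getSettings() can report displaySurface. `pc` is assumed to be an
// existing RTCPeerConnection.
async function startScreenShare(pc) {
    const stream = await navigator.mediaDevices.getDisplayMedia({
        video: { displaySurface: "monitor" } // or "window", "browser", "application"
    });
    // Send the screen track to the remote peer alongside the existing media
    stream.getVideoTracks().forEach(track => pc.addTrack(track, stream));
    return stream;
}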
More information
MediaStreamTrack getSettings()
MediaTrackSettings
MediaTrackSettings DisplaySurface
MediaStreamTrack getConstraints()
MediaDevices getDisplayMedia()

Related

Agora Web SDK Screen share not returning video track

I have integrated a Screen Share function into my web conference. Screen Share content shows for users who are in the session before Screen Share starts, but it does not work for users who joined the session after Screen Share started.
Below is the logic for getting video tracks when a new user joins the session.
// Add current users
this.meetingSession.remoteUsers.forEach(async ru => {
    if (ru.uid.search('screen_') > -1) {
        this.getScreenShare(ru);
        return;
    }
    let remoteVideo = await this.meetingSession.subscribe(ru, 'video');
    this.setVideoAudioElement(ru, 'video');
    let remoteAudio = await this.meetingSession.subscribe(ru, 'audio');
    this.setVideoAudioElement(ru, 'audio');
})
async getScreenShare (user) {
    ...
    this.currentScreenTrack = user.videoTrack;
    // Here user.videoTrack is undefined
    console.log(user)
    ...
},
After the new user's session is created, I'm getting the current users' video tracks from the "remoteUsers" object inside the session object. There is no problem with a regular user's video track, but the Screen Share object says "hasVideo" is true while "videoTrack" is undefined.
Agora Web SDK meetingSession.remoteUsers Screen Share Object
Is it by design that videoTrack is not included in meetingSession.remoteUsers for Screen Share?
I'm wondering what method people are using to show Screen Share content for users who joined the session during Screen Share.
It would be great if someone could give me a suggestion about this.
"agora-rtc-sdk-ng": "^4.6.2",
I figured it out.
I just needed to subscribe to the remote user.
this.meetingSession.remoteUsers.forEach(async ru => {
    if (ru.uid.search('screen_') > -1) {
        // Just needed to subscribe the user...
        await this.meetingSession.subscribe(ru, 'video');
        this.getScreenShare(ru);
        return;
    }
    let remoteVideo = await this.meetingSession.subscribe(ru, 'video');
    this.setVideoAudioElement(ru, 'video');
    let remoteAudio = await this.meetingSession.subscribe(ru, 'audio');
    this.setVideoAudioElement(ru, 'audio');
})
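After subscribing, user.videoTrack is populated, so getScreenShare can render it. A minimal sketch of what that might look like (the element id 'screen-share' is illustrative and assumes a matching container exists in the DOM):
async getScreenShare (user) {
    // subscribe() has already been awaited, so videoTrack is now defined
    this.currentScreenTrack = user.videoTrack;
    // Render the remote screen-share track into a container element
    this.currentScreenTrack.play('screen-share');
},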

React Native - Directional Pad support for Android TV App

I want to build an Android TV app using React Native. I have followed the recommendations in this document: Building For TV Devices.
After updating the AndroidManifest.xml file, I ran the application using the command line: react-native run-android. The app runs without any issue; however, when I tried to use the directional pad in the Android TV (720p) API 23 emulator, it didn't work. I was expecting to catch the events listed in the code below and write the respective text to the console for each event. On top of that, the component used for text didn't get highlighted or focused when I tried to navigate using the directional pad.
I am reaching out to the community to see if anyone has had this issue in the past, what it turned out to be, and what you did to resolve it. Also, since I am listing my steps below, please let me know if I am missing something.
Please let me know if you need any extra information in order to help me.
react-native init Dpad
cd Dpad
Update code based on Building For TV Devices
Start Android TV (720p) API 23 emulator.
react-native run-android
ANNEX:
Android TV (720p) API 23
Here is the code:
import React, { Component } from 'react';
import { Text, View } from 'react-native';
import Channel from '../channel/channel.component';
import styles from './presentation.component.styles';
var TVEventHandler = require('TVEventHandler');

export default class Grid extends Component {
    constructor(props) {
        super(props);
        this.state = {
            command: 'undefined'
        }
    }
    setcomand(command) {
        this.setState(() => { return { command: command }; });
    }
    _tvEventHandler: null;
    _enableTVEventHandler() {
        this._tvEventHandler = new TVEventHandler();
        this._tvEventHandler.enable(this, function(cmp, evt) {
            // cmp is the component instance passed to enable()
            if (evt && evt.eventType === 'right') {
                cmp.setcomand('Press Right!');
            } else if (evt && evt.eventType === 'up') {
                cmp.setcomand('Press Up!');
            } else if (evt && evt.eventType === 'left') {
                cmp.setcomand('Press Left!');
            } else if (evt && evt.eventType === 'down') {
                cmp.setcomand('Press Down!');
            }
        });
    }
    _disableTVEventHandler() {
        if (this._tvEventHandler) {
            this._tvEventHandler.disable();
            delete this._tvEventHandler;
        }
    }
    componentDidMount() {
        this._enableTVEventHandler();
        console.warn("component did mount");
    }
    componentWillUnmount() {
        this._disableTVEventHandler();
        console.warn("component Will Unmount");
    }
    render() {
        return (
            <View style={styles.container}>
                <Text>{this.state.command}</Text>
                <Channel name="Globo" description="It's a Brazilian TV channel for news"/>
                <Channel name="TVI" description="It's a Portuguese TV channel for news"/>
                <Channel name="TVI" description="It's a Portuguese TV channel for news"/>
            </View>
        );
    }
}
I've also been struggling with this problem for a month and still can't find any help or solution.
I'm testing this on the Android Studio emulator and also on a few real Android TV boxes with real remote d-pads.
I still can't figure out whether it's a React Native problem (bug) or whether Android TV devices simply don't emit a response (keyCode) for the directional d-pad arrows.
I can reproduce events like focus, blur, select, fastForward, playPause and rewind, but there is no way to get events like "left".
I searched a lot on Google and other sites; you are the first one I've found struggling with the same issue.
I feel like no one cares about Android TV in React Native.
You can also comment on my issue thread on the React Native GitHub page:
https://github.com/facebook/react-native/issues/20924
I hope we figure it out soon.
Cheers
I was not able to verify that the directional pad works with React Native; that was the goal of this demo. I am learning to build Android TV apps using React Native, and so far the directional pad has been a big challenge, given that TV users won't use touch-screen events.
However, I couldn't find out why my app was not responding to the directional pad keys (left, right, up and down). There was no code error.
Have you tried using the directional pad to navigate in your React Native app?
Thank you,
Dave -
I decided to develop an Android TV app using React Native because of the video the React Native team shared [https://www.youtube.com/watch?v=EzIQErHhY20] and the tutorial page [https://facebook.github.io/react-native/docs/building-for-apple-tv]. I think that's everything we have; other than that we won't get further support.
GOOD NEWS - I have started a new project from scratch using React Native version 0.57.0, node version v10.7 and npm version 4.6.1. Also, for navigation, I am using react-navigation version 2. I was able to see that the directional pad in my emulator was working; however, I was not able to see the focus on the element I am navigating to (left, right, down, up).
I will be working to see how I can fix the focus issue.
Let's keep sharing our progress, and feel free to reach out.
Thank you,
Justimiano Alves
Use this code and you can see the console output in the debugger:
_tvEventHandler: any;
_enableTVEventHandler() {
    var self = this;
    this._tvEventHandler = new TVEventHandler();
    this._tvEventHandler.enable(this, function (cmp, evt) {
        console.log("kcubsj" + evt.eventType)
        if (evt && evt.eventType === 'right') {
            console.log('right');
        } else if (evt && evt.eventType === 'up') {
            console.log('up');
        } else if (evt && evt.eventType === 'left') {
            console.log('left');
        } else if (evt && evt.eventType === 'down') {
            console.log('down');
        } else if (evt && evt.eventType === 'select') {
            //self.press();
        }
    });
}
_disableTVEventHandler() {
    if (this._tvEventHandler) {
        this._tvEventHandler.disable();
        delete this._tvEventHandler;
    }
}
componentDidMount() {
    this._enableTVEventHandler();
}
componentWillUnmount() {
    this._disableTVEventHandler();
}
I have really good news about support for the D-pad arrow events (up, down, left, right).
It turned out that one of the Android TV contributors to React Native is a person from my country. I reached out to him and told him about this problem. He checked it out, and there is indeed code missing for that.
He made a pull request to support it in React Native. It should be fixed in one of the upcoming releases (he said it might take about a month).
Temporarily, I know how to handle this (add the code and recompile the Java files); I already tested it and it works great. All events are now working. If you really need that support now and don't want to wait, I can share how to do that.
Cheers
Yes, I would like to see your solution, because I am able to navigate using the D-pad but I can't see which element I am navigating to. I need to highlight or show focus on the element that I am navigating to.
Having console.log inside the TVEventHandler callback seems to break it when running without the remote JS debugger on.
I have observed that the D-pad does not work if there is no focusable component. To solve this, I placed a transparent TouchableOpacity component on my screen; after that, the D-pad started working (see the sketch after the code below). My code for the D-pad key events is given below:
enableTVEventHandler() {
    this.tvEventHandler = new TVEventHandler();
    this.tvEventHandler.enable(this, (cmp, { eventType, eventKeyAction }) => {
        // eventKeyAction is an integer representing button press (key down) and
        // release (key up): "key up" is 1, "key down" is 0.
        if (eventType === 'playPause' && eventKeyAction === 0) {
            console.log('play pressed')
        } else if (eventType === 'fastForward' && eventKeyAction === 0) {
            console.log('forward pressed')
        } else if (eventType === 'rewind' && eventKeyAction === 0) {
            console.log('rewind pressed')
        } else if (eventType === 'select' && eventKeyAction === 0) {
            console.log('select pressed')
        } else if (eventType === 'left' && eventKeyAction === 0) {
            console.log('left pressed')
        } else if (eventType === 'right' && eventKeyAction === 0) {
            console.log('right pressed')
        } else if (eventType === 'up' && eventKeyAction === 0) {
            console.log('up pressed')
        } else if (eventType === 'down' && eventKeyAction === 0) {
            console.log('down pressed')
        }
    });
}
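The transparent focusable component mentioned above could look roughly like this, inside the same component's render() (a minimal sketch, with TouchableOpacity and View imported from 'react-native'; the style values are illustrative, not the author's exact code):
render() {
    return (
        <View style={{ flex: 1 }}>
            {/* Invisible but focusable element so the TV focus engine has a
                target and D-pad key events reach TVEventHandler */}
            <TouchableOpacity
                activeOpacity={1}
                style={{ width: 1, height: 1, opacity: 0 }}
                onPress={() => {}}
            />
            {/* ...rest of the screen... */}
        </View>
    );
}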

How to check if a user's device is using fingerprint/face as the unlock method [React Native] [Expo]

I'm using React Native based on the Expo toolkit to develop an app, and I want to know how I can check whether the user is using fingerprint (Touch ID on iPhone) or face detection (Face ID on iPhone X and later) to unlock the device.
I already know how to check if the device has the required hardware using the Expo SDK, as follows:
let hasFPSupport = await Expo.Fingerprint.hasHardwareAsync();
But I need to check whether the user chose fingerprint/face as the unlock method on their device, instead of a pattern or PIN.
Thanks
Here's an update to Donald's answer that takes into account Expo's empty string for the model name of the new iPhone XS. It also takes into account the Simulator.
const hasHardwareSupport =
    (await Expo.LocalAuthentication.hasHardwareAsync()) &&
    (await Expo.LocalAuthentication.isEnrolledAsync());

let hasTouchIDSupport
let hasFaceIDSupport

if (hasHardwareSupport) {
    if (Constants.platform.ios) {
        if (
            Constants.platform.ios.model === '' ||
            Constants.platform.ios.model.includes('X')
        ) {
            hasFaceIDSupport = true;
        } else {
            if (
                Constants.platform.ios.model === 'Simulator' &&
                Constants.deviceName.includes('X')
            ) {
                hasFaceIDSupport = true;
            }
        }
    }
    hasTouchIDSupport = !hasFaceIDSupport;
}
EDIT: Expo released an update that fixes the blank model string. However, you might want to keep a check for that just in case the next iPhone release cycle causes the same issue.
Currently, you could determine that a user has Face ID by checking Expo.Fingerprint.hasHardwareAsync() and Expo.Fingerprint.isEnrolledAsync(), and then also checking that they have an iPhone X using Expo.Constants.platform (docs here).
So:
const hasHardwareSupport = await Expo.Fingerprint.hasHardwareAsync() && await Expo.Fingerprint.isEnrolledAsync();
if (hasHardwareSupport) {
    const hasFaceIDSupport = Expo.Constants.platform.ios && Expo.Constants.platform.ios.model === 'iPhone X';
    const hasTouchIDSupport = !hasFaceIDSupport;
}
In case you tried the above answer and it's not working, please note that as of the time of my post Expo's documentation has changed:
import * as LocalAuthentication from 'expo-local-authentication';
let compatible = await LocalAuthentication.hasHardwareAsync()
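A minimal sketch of the same check with the newer package, plus one way to tell fingerprint and face support apart via supportedAuthenticationTypesAsync (the helper function name is illustrative, not from the answers above):
import * as LocalAuthentication from 'expo-local-authentication';

async function getBiometricStatus() {
    // Device has the sensor AND the user has enrolled a biometric
    const hasHardware = await LocalAuthentication.hasHardwareAsync();
    const isEnrolled = await LocalAuthentication.isEnrolledAsync();
    // Which biometric types the hardware supports (fingerprint, face, ...)
    const types = await LocalAuthentication.supportedAuthenticationTypesAsync();
    return {
        canUseBiometrics: hasHardware && isEnrolled,
        supportsFingerprint: types.includes(LocalAuthentication.AuthenticationType.FINGERPRINT),
        supportsFace: types.includes(LocalAuthentication.AuthenticationType.FACIAL_RECOGNITION),
    };
}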
We can check if the device has enrolled fingerprints:
await Expo.Fingerprint.isEnrolledAsync()
So this can be used to reach the objective as follows:
let hasFPSupport = await Expo.Fingerprint.hasHardwareAsync() && await Expo.Fingerprint.isEnrolledAsync();

Xamarin camera not on main navigation page

I've managed to get the camera going cross-platform using Xamarin and this tutorial:
Camera access with Xamarin.Forms
I'm now trying to get it working on a different navigation form (the camera functionality would be several forms away from the main page). However, the device-specific code accesses many things wired up to the App instance, which I'm struggling to wire up from another form. Does anyone know of a good camera example that isn't on the main page? I've been coding C# for years, but I'm new to Xamarin, and the camera stuff seems to be the hardest to get going. Thanks in advance.
Jeff
Use the Media plugin:
takePhoto.Clicked += async (sender, args) =>
{
    await CrossMedia.Current.Initialize();

    if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
    {
        DisplayAlert("No Camera", ":( No camera available.", "OK");
        return;
    }

    var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
    {
        Directory = "Sample",
        Name = "test.jpg"
    });

    if (file == null)
        return;

    await DisplayAlert("File Location", file.Path, "OK");

    image.Source = ImageSource.FromStream(() =>
    {
        var stream = file.GetStream();
        file.Dispose();
        return stream;
    });
};

WebRTC: Switch from Video Sharing to Screen sharing during call

Initially, I had two different webpages:
one was for video calls and
the other was for screen sharing.
Now, I want to do both of them in one page.
Here is the scenario:
During a live call, a user wants to stop sharing his/her video and start sharing the screen.
Afterwards, he/she wishes to turn off screen sharing and start video sharing again.
For clarity, here are some questions I want to ask:
On Caller Side:
1) How can I change my local stream from video to screen and vice versa?
2) Once it is done, how can I assign it to the local video element?
On Callee Side:
1) How do I handle it if the stream I am receiving is changed from video to screen?
2) How do I handle it if the stream I am receiving has stopped? I mean, now I can receive neither video nor screen (just audio).
Kindly help me in this regard. If there is any open source code available, kindly share the links too.
Just for your reference, I was trying to handle it using the following code (I know this is naive and won't work):
function handleUserMedia(newStream) {
    var localvideo = document.getElementById("localvideo");
    localvideo.src = URL.createObjectURL(newStream);
    localStream = newStream;
    sendMessage('got user media');
    if (isInitiator) {
        maybeStart();
    }
}

function handleUserMediaError(error) {
    console.log(error);
}

var video_constraints = {video: true, audio: true};
var screen_constraints = {video: { mandatory: { chromeMediaSource: 'screen' } }};

getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
//getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);

$scope.btnLabel = 'Share Screen';
$scope.toggleSelected = function () {
    $scope.selected = !$scope.selected;
    if ($scope.selected) {
        getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Video';
    } else {
        getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Screen';
    }
};
Check this demo:
https://www.webrtc-experiment.com/demos/switch-streams.html
and the relevant tutorial:
https://www.webrtc-experiment.com/docs/how-to-switch-streams.html
Simply renegotiate the peer connection on both users' sides!
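For reference, a minimal sketch of one way to switch the outgoing video from camera to screen in current browsers. This uses RTCRtpSender.replaceTrack instead of the renegotiation approach from the linked demo; pc and localVideo are assumed to be your existing RTCPeerConnection and local video element:
async function switchToScreen(pc, localVideo) {
    // Ask the user to pick a screen/window/tab to share
    const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    const screenTrack = screenStream.getVideoTracks()[0];

    // Swap the track on the sender that currently carries the camera video
    const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
    await sender.replaceTrack(screenTrack);

    // Show the new stream locally as well
    localVideo.srcObject = screenStream;

    // When the user stops sharing (e.g. via the browser UI), switch back
    screenTrack.onended = () => { /* switch back to the camera here */ };
}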