Agora Web SDK Screen share not returning video track

I have integrated a screen-share function into my web conference app. Screen-share content shows for users who are already in the session before screen sharing starts, but it does not work for users who join the session after screen sharing has started.
Below is the logic for getting video tracks when a new user joins the session.
// Add current users
this.meetingSession.remoteUsers.forEach(async ru => {
    if (ru.uid.search('screen_') > -1) {
        this.getScreenShare(ru);
        return;
    }
    let remoteVideo = await this.meetingSession.subscribe(ru, 'video');
    this.setVideoAudioElement(ru, 'video');
    let remoteAudio = await this.meetingSession.subscribe(ru, 'audio');
    this.setVideoAudioElement(ru, 'audio');
})
async getScreenShare (user) {
    ...
    this.currentScreenTrack = user.videoTrack;
    // Here user.videoTrack is undefined
    console.log(user)
    ...
},
After the new user's session is created, I get each current user's video track from the "remoteUsers" array on the session object. There is no problem with a regular user's video track, but the screen-share object says "hasVideo" is true while "videoTrack" is undefined.
(Screenshot: Agora Web SDK meetingSession.remoteUsers screen-share object)
Is it by design that videoTrack is not included in meetingSession.remoteUsers for screen share?
I'm wondering what method people use to show screen-share content to users who join the session while screen sharing is in progress.
It would be great if someone could give me a suggestion about this.
"agora-rtc-sdk-ng": "^4.6.2",

I figured it out.
I just needed to subscribe to the remote user.
this.meetingSession.remoteUsers.forEach(async ru => {
    if (ru.uid.search('screen_') > -1) {
        // Just needed to subscribe the user...
        await this.meetingSession.subscribe(ru, 'video');
        this.getScreenShare(ru);
        return;
    }
    let remoteVideo = await this.meetingSession.subscribe(ru, 'video');
    this.setVideoAudioElement(ru, 'video');
    let remoteAudio = await this.meetingSession.subscribe(ru, 'audio');
    this.setVideoAudioElement(ru, 'audio');
})
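For reference, a minimal sketch of what getScreenShare can look like once the subscription has resolved and user.videoTrack is populated (the "screen-share" container id is an assumption for illustration, not from the question):

async getScreenShare (user) {
    // videoTrack is only set after subscribe() has resolved
    this.currentScreenTrack = user.videoTrack;
    // RemoteVideoTrack.play renders the track into the element with the given id
    this.currentScreenTrack.play('screen-share');
},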

Related

How to add Video track and remove it using simple-peer

I am using simple-peer in my video chat web application. If both users are in an audio-only call, how can I add a video track, and how can I disable it again? If I use replaceTrack, I get this error:
error Error: [object RTCErrorEvent]
    at makeError (index.js:17)
    at RTCDataChannel._channel.onerror (index.js:490)
I show a profile picture when video is not enabled for a user. When video is enabled, I want to replace this picture with the video for all people in the call.
If both users enabled audio only, the stream contains only an audio track. You can add a black placeholder video track (a disabled/ended video track) to the stream from the start, and later swap it for the real camera track with replaceTrack, which solves this issue.
For more info, visit:
https://blog.mozilla.org/webrtc/warm-up-with-replacetrack/
Code from the above link:
let silence = () => {
    let ctx = new AudioContext(), oscillator = ctx.createOscillator();
    let dst = oscillator.connect(ctx.createMediaStreamDestination());
    oscillator.start();
    return Object.assign(dst.stream.getAudioTracks()[0], {enabled: false});
}

let black = ({width = 640, height = 480} = {}) => {
    let canvas = Object.assign(document.createElement("canvas"), {width, height});
    canvas.getContext('2d').fillRect(0, 0, width, height);
    let stream = canvas.captureStream();
    return Object.assign(stream.getVideoTracks()[0], {enabled: false});
}

let blackSilence = (...args) => new MediaStream([black(...args), silence()]);
video.srcObject = blackSilence();
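Building on that, here is a hedged sketch of swapping the black placeholder for a real camera track once the user enables video (it assumes peer is a simple-peer instance and stream is the blackSilence() MediaStream that was passed to it):

async function enableCamera(peer, stream) {
    const camStream = await navigator.mediaDevices.getUserMedia({ video: true });
    const camTrack = camStream.getVideoTracks()[0];
    const oldTrack = stream.getVideoTracks()[0]; // the black placeholder
    // simple-peer's replaceTrack swaps the outgoing track without renegotiation
    peer.replaceTrack(oldTrack, camTrack, stream);
    // keep the local stream object consistent
    stream.removeTrack(oldTrack);
    stream.addTrack(camTrack);
    oldTrack.stop();
}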

A better way to handle async saving to backend server and cloud storage from React Native app

In my React Native 0.63.2 app, after the user uploads images of artwork, the app does 2 things:
1. save the artwork record and image records on the backend server
2. save the images into cloud storage
These 2 things are related and both have to succeed together. Here is the code:
const clickSave = async () => {
    console.log("save art work");
    try {
        // save artwork to backend server
        let art_obj = {
            _device_id,
            name,
            description,
            tag: (tagSelected.map((it) => it.name)),
            note: '',
        };
        let img_array = [], oneImg;
        imgs.forEach(ele => {
            oneImg = {
                fileName: "f" + helper.genRandomstring(8) + "_" + ele.fileName,
                path: ele.path,
                width: ele.width,
                height: ele.height,
                size_kb: Math.ceil(ele.size / 1024),
                image_data: ele.image_data,
            };
            img_array.push(oneImg);
        });
        art_obj.img_array = [...img_array];
        art_obj = JSON.stringify(art_obj);
        // assemble images
        let url = `${GLOBAL.BASE_URL}/api/artworks/new`;
        await helper.getAPI(url, _result, "POST", art_obj); //<<==#1. send artwork and image records to backend server
        // save images to cloud storage
        var storageAccessInfo = await helper.getStorageAccessInfo(stateVal.storageAccessInfo);
        if (storageAccessInfo && storageAccessInfo !== "upToDate")
            // update the context value
            stateVal.updateStorageAccessInfo(storageAccessInfo);

        let bucket_name = "oss-hz-1"; //<<<
        const configuration = {
            maxRetryCount: 3,
            timeoutIntervalForRequest: 30,
            timeoutIntervalForResource: 24 * 60 * 60
        };
        const STSConfig = {
            AccessKeyId: storageAccessInfo.accessKeyId,
            SecretKeyId: storageAccessInfo.accessKeySecret,
            SecurityToken: storageAccessInfo.securityToken
        }
        const endPoint = 'oss-cn-hangzhou.aliyuncs.com'; //<<<
        const last_5_cell_number = _myself.cell.substring(_myself.cell.length - 5);
        let filePath, objkey;
        img_array.forEach(item => {
            console.log("init sts");
            AliyunOSS.initWithSecurityToken(STSConfig.SecurityToken, STSConfig.AccessKeyId, STSConfig.SecretKeyId, endPoint, configuration)
            //console.log("before upload", AliyunOSS);
            objkey = `${last_5_cell_number}/${item.fileName}`; // virtual subdir and file name
            filePath = item.path;
            AliyunOSS.asyncUpload(bucket_name, objkey, filePath).then((res) => { //<<==#2 send images to cloud storage with callback. But no action required after success.
                console.log("Success : ", res) //<<== not really necessary to have console output
            }).catch((error) => {
                console.log(error)
            })
        })
    } catch (err) {
        console.log(err);
        return false;
    };
};
The concern with the code above is that those 2 async calls may take a long time to finish, and the user may end up waiting too long. After clicking the save button, the user may just want to move on to the next page of the UI and leave everything else running behind the scenes. Is there a way to do that? Would removing the await (#1) and the callback (#2) accomplish it?
If you want to do both tasks in the background, then you can't use await. I see that you are using await when sending the images to the backend, so remove that and use .then().catch(); you don't need to remove the callback on #2.
If you need to make sure #1 finishes before doing #2, then you will need to move the code for #2 into #1's promise-resolution code (inside the .then()).
Now, for catching errors: you will need some sort of error handling that alerts the user that an error occurred and that they should trigger another upload. One thing you can do is show a red banner. I'm sure there are packages out there that can do that for you.
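To make that concrete, here is a minimal sketch of the fire-and-forget shape described above (it reuses the question's own variables and helpers; showErrorBanner is a hypothetical error-display helper):

const clickSave = () => {
    const url = `${GLOBAL.BASE_URL}/api/artworks/new`;
    helper.getAPI(url, _result, "POST", art_obj) // #1: no await, so the UI is not blocked
        .then(() => {
            // #2 only starts after #1 has succeeded
            img_array.forEach(item => {
                const objkey = `${last_5_cell_number}/${item.fileName}`;
                AliyunOSS.asyncUpload(bucket_name, objkey, item.path)
                    .catch(err => showErrorBanner(err)); // hypothetical banner helper
            });
        })
        .catch(err => showErrorBanner(err));
    // return immediately; the user can navigate away while the work continues
};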

appleAuthRequestResponse returning null for fullName and email

I am confused why my snippet of code below shows null for email and fullName in the console after the user is authenticated successfully. I have read the documentation carefully and tried every possible thing I could. Any help would be highly appreciated.
async function onAppleButtonPress() {
    // performs login request
    const appleAuthRequestResponse = await appleAuth.performRequest({
        requestedOperation: AppleAuthRequestOperation.LOGIN,
        requestedScopes: [AppleAuthRequestScope.EMAIL, AppleAuthRequestScope.FULL_NAME],
    });
    // api getting current state of the user
    const credentialState = await appleAuth.getCredentialStateForUser(appleAuthRequestResponse.user);
    if (credentialState === AppleAuthCredentialState.AUTHORIZED) {
        // user is authenticated
        console.log("email is", appleAuthRequestResponse.email);
        console.log("full name is", appleAuthRequestResponse.fullName);
    }
}
You can still retrieve the e-mail from the identityToken provided by appleAuthRequestResponse with any JWT decoder like jwt-decode:

import jwt_decode from 'jwt-decode';

const { identityToken, nonce } = appleAuthRequestResponse;
const { email } = jwt_decode(identityToken);
console.log(email);
Apple only returns the full name and email on the first login; it will return null on subsequent logins, so you need to save that data.
To receive these again, go to your device settings: Settings > Apple ID, iCloud, iTunes & App Store > Password & Security > Apps Using Your Apple ID, tap on your app and tap Stop Using Apple ID. You can now sign in again and you'll receive the full name and email.
Source here.
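As a hedged sketch of the save-on-first-login approach (using @react-native-async-storage/async-storage as an example store; any persistence layer works):

import AsyncStorage from '@react-native-async-storage/async-storage';

// Apple only includes email/fullName on the very first sign-in,
// so persist them immediately when they are present
if (appleAuthRequestResponse.email) {
    await AsyncStorage.setItem('appleUser', JSON.stringify({
        email: appleAuthRequestResponse.email,
        fullName: appleAuthRequestResponse.fullName,
    }));
}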

Xamarin camera not on main navigation page

I've managed to get the camera going cross-platform using Xamarin and this tutorial:
Camera access with Xamarin.Forms
I'm now trying to get it working on a different navigation form (the camera functionality would be several forms away from the main page). However, the device-specific code accesses many things wired up to the App instance, which I'm struggling to wire up from another form. Does anyone know of a good camera example that isn't on the main page? I've been coding C# for years but I'm new to Xamarin, and the camera stuff seems to be the hardest to get going. Thanks in advance.
Jeff
Use the Media plugin:
takePhoto.Clicked += async (sender, args) =>
{
    await CrossMedia.Current.Initialize();

    if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
    {
        DisplayAlert("No Camera", ":( No camera available.", "OK");
        return;
    }

    var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
    {
        Directory = "Sample",
        Name = "test.jpg"
    });

    if (file == null)
        return;

    await DisplayAlert("File Location", file.Path, "OK");

    image.Source = ImageSource.FromStream(() =>
    {
        var stream = file.GetStream();
        file.Dispose();
        return stream;
    });
};

WebRTC: Switch from Video Sharing to Screen sharing during call

Initially, I had two different webpages:
One was to do a video call, and
the other was to do screen sharing.
Now, I want to do both of them on one page.
Here is the scenario:
During a live call, a user wants to stop sharing his/her video and start sharing the screen.
Afterwards, he/she wishes to turn off screen sharing and resume video sharing.
For clarity, here are the questions I want to ask:
On the caller side:
1) How can I change my local stream from video to screen, and vice versa?
2) Once that is done, how can I assign it to the local video element?
On the callee side:
1) How do I handle it if the stream I am receiving changes from video to screen?
2) How do I handle it if the stream I am receiving has stopped? I mean, now I can receive neither video nor screen (just audio).
Kindly help me in this regard. If there is any open-source code available, kindly share links to it too.
Just for your reference, I was trying to handle it using the following code. (I know this is naive and won't work.)
function handleUserMedia(newStream){
    var localvideo = document.getElementById("localvideo");
    localvideo.src = URL.createObjectURL(newStream);
    localStream = newStream;
    sendMessage('got user media');
    if (isInitiator) {
        maybeStart();
    }
}

function handleUserMediaError(error){
    console.log(error);
}

var video_constraints = {video: true, audio: true};
var screen_constraints = {video: { mandatory: { chromeMediaSource: 'screen' } }};

getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
//getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);

$scope.btnLabel = 'Share Screen';
$scope.toggleSelected = function () {
    $scope.selected = !$scope.selected;
    if ($scope.selected) {
        getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Video';
    } else {
        getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Screen';
    }
};
Check this demo:
https://www.webrtc-experiment.com/demos/switch-streams.html
and the relevant tutorial:
https://www.webrtc-experiment.com/docs/how-to-switch-streams.html
Simply renegotiate the peer connections on both users' sides!
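Alternatively, in modern browsers you can avoid renegotiation entirely with RTCRtpSender.replaceTrack. A minimal sketch (it assumes an existing RTCPeerConnection pc that is already sending a camera track; switchBackToCamera is a hypothetical helper):

async function switchToScreen(pc) {
    // getDisplayMedia replaces the old chromeMediaSource constraint hack
    const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    const screenTrack = screenStream.getVideoTracks()[0];

    // find the sender currently transmitting video
    const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');

    // swap the outgoing track without renegotiating the connection
    await sender.replaceTrack(screenTrack);

    // show the new track locally too
    document.getElementById("localvideo").srcObject = screenStream;

    // when the user stops sharing via the browser UI, switch back to camera
    screenTrack.onended = () => switchBackToCamera(pc); // hypothetical helper
}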