Agora.io user can't display the remote user, but can send and display his own video

I am using the Agora SDK on the web. It works completely fine in the Google Chrome and Firefox browsers on macOS and Windows, but in Safari the user can send his own video and see himself, yet he can't see the person he is in a video call with.

Agora's Web SDK leverages the WebRTC API, and for this reason the Safari browser has some limitations. Your question does not include much detail or code, so it's difficult to make a definitive diagnosis, but the issue sounds like the browser's autoplay policy is blocking the remote video stream from playing. Usually the autoplay policy does not affect local streams (because a local stream does not play its own audio), so you only need to deal with the remote streams.
There are two options for working around the autoplay policy.
Option one: bypass the autoplay block when the playback fails.
When detecting an autoplay block, instruct the user to click on the webpage to resume the playback:
stream.play("agora_remote"+ stream.getId(), function(err){
if (err && err.status !== "aborted"){
// The playback fails. Guide the user to resume the playback by clicking.
document.querySelector("#agora_remote"+ stream.getId()).onclick=function(){
stream.resume().then(
function (result) {
console.log('Resume succeeds: ' + result);
}).catch(
function (reason) {
console.log('Resume fails: ' + reason);
});
}
}
});
Option two: bypass the autoplay block in advance.
If you prefer dealing with the autoplay block in advance, choose one of the following methods:
Method one: Play the stream without sound first by calling Stream.play("ID", { muted: true }), because autoplay without sound is allowed; unmute after the user has interacted with the page (see the sketch below).
Method two: In your UI design, instruct the user to interact with the webpage, either by clicking or touching, to play the stream.
For more details about these two implementations for bypassing the autoplay policy in advance, I would recommend taking a look at Agora's "Deal with autoplay policy" guide.
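A minimal sketch of method one, assuming a v3 Web SDK remote stream object with Stream.unmuteAudio() available (the unmute button's id is hypothetical):
stream.play("agora_remote" + stream.getId(), { muted: true }); // muted autoplay is allowed
document.getElementById("unmute-button").onclick = function () {
  // A click is a user gesture, so audio playback may start now.
  stream.unmuteAudio();
};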

Related

Twilio remote video is dark on iOS Safari

I am using Twilio Video to create a video/chat application, and the remote video tracks are displayed dark in iOS Safari (using Safari Technology Preview). (One-way video.)
I was thinking the issue is with the browser's autoplay policy, but I think this should not be the case, since the audio track plays while the video track remains dark.
Also, I make sure that the user presses a "Join Call" button, to guarantee user interaction; that allows the rendering of a component which runs this React useEffect.
The codec of the video is H264, to ensure all Safari users can join the Room (a Group Room).
useEffect(() => {
  const canConnectToRoom = !room && !isConnectingToRoom && token && roomName
  if (canConnectToRoom) {
    connectToRoom(token, {
      video: false,
      name: roomName,
    })
  }
}, [room, isConnectingToRoom, token, roomName, connectToRoom])
Any help would be appreciated, thanks.
UPDATE 1:
TO RECEIVE THE REMOTE VIDEO STREAM
1. Render a RemoteParticipant component. (The collapsed code renders a fallback UI when the remote camera is disabled; that fallback shows correctly, but when the remote camera is enabled there is just a dark screen.)
2. Extract the participant's publications.
3. Render the publication's tracks.
4. Render the video track. (A generic sketch of steps 2-4 follows below.)
5. Warnings in the Safari console.
The warnings in the console are printed until I allow microphone access. I tried joining the room after allowing microphone access, which eliminates the console warnings, but the remote video is still dark.
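For reference, a hedged sketch of what steps 2-4 commonly look like with twilio-video's documented API (the asker's actual component code is collapsed above; appending to document.body is illustrative only):
// Attach already-subscribed remote video tracks and any that arrive later.
room.participants.forEach((participant) => {
  participant.tracks.forEach((publication) => {
    if (publication.isSubscribed && publication.track.kind === 'video') {
      document.body.appendChild(publication.track.attach());
    }
  });
  participant.on('trackSubscribed', (track) => {
    if (track.kind === 'video') {
      document.body.appendChild(track.attach());
    }
  });
});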

How to post a video on the FB timeline using the Share Dialog with the JavaScript SDK?

I have developed an application, and I have to upload a video to the FB timeline from it through the Facebook JavaScript SDK with the help of the FB.ui method.
I have shared the part of my code with which I tried to post a video to the Facebook timeline. When I use this code, the video gets uploaded as a link; it navigates to a new tab and plays when I click on that link. (My video type is mp4.)
FB.ui({
  method: 'feed',
  display: 'popup',
  type: 'mp4',
  source: filePath,
  picture: filePath,
}, function (response) {
  if (response && !response.error_message) {
    alert('Posting completed.');
  } else {
    alert('Error while posting.');
  }
});
I expect the video to play on my timeline instead of being posted as a link.
That expectation is simply unfounded – this isn’t supposed to work this way, and never has.
You would need to share a link to an HTML document that has the video embedded via Open Graph meta tags (og:video:url, og:video:type, og:video:width, og:video:height); see https://developers.facebook.com/docs/sharing/webmasters#video
But Facebook has begun limiting the occasions on which it actually plays such videos inline, so even if you implement this properly and technically correctly, there is no longer any guarantee it will play in the news feed; users clicking on such a post might simply get redirected to your external site to play the video there.
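In practice that means sharing the URL of such a page rather than the video file itself. A minimal sketch using the JS SDK's Share Dialog (the URL is a hypothetical placeholder for your own page carrying the og:video tags):
FB.ui({
  method: 'share',
  href: 'https://example.com/video-page.html', // page with og:video meta tags
}, function (response) {
  // handle the dialog response here
});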

WebRTC: View self-view while muting outgoing video in a call

Currently, the video mute functionality in WebRTC is achieved by setting the enabled property of a video track to false:
stream.getVideoTracks().forEach(function (track) {
  track.enabled = false;
});
But the above code not only mutes the outgoing video; the local self-view, which is rendered from that same local stream, also gets black frames.
Is there a way to ONLY mute the outgoing video frames, but still be able to show a local self-view?
There's no easy way yet. Once MediaStreamTrack.clone() is supported by browsers, you could clone the video track to get a second instance of it with a separately controllable mute property, and send one track to your self-view and the other to the peer connection. This would let you turn video off locally and remotely independently.
Today, the only workarounds I know of would be to call getUserMedia twice on Chrome (this should work on https at least, where permissions are persisted so the user won't be prompted twice), which would get you two tracks you could video-mute independently; or, on Firefox, you could use RTCRtpSender.replaceTrack() with a second "fake" video stream from getUserMedia using the non-standard { video: true, fake: true } constraint like this.
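A minimal sketch of the clone() approach once browsers support it (selfViewVideo, peerConnection, and muteButton are hypothetical names for your own elements and connection):
navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  var localTrack = stream.getVideoTracks()[0];
  // The clone has its own independently controllable enabled flag.
  var sendTrack = localTrack.clone();
  // The self-view keeps rendering the original track.
  selfViewVideo.srcObject = new MediaStream([localTrack]);
  // Send the clone to the peer; disabling it blacks out the remote side only.
  peerConnection.addTrack(sendTrack, stream);
  muteButton.onclick = function () {
    sendTrack.enabled = !sendTrack.enabled;
  };
});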

Does video.js have a callback if unsupported?

I'm using video.js for my HTML5 video, but on older devices (such as BlackBerry OS4), neither the HTML5 video nor the Flash fallback works.
Is there any way to detect this - some sort of onError callback within video.js itself, or any other way that I can detect that the video isn't supported?
I've found this code, which works for older BB, but it also flags older IE, which would run happily with the Flash fallback.
function supportsVideo() {
  return !!document.createElement('video').canPlayType;
}
Any help or pointers would be appreciated.
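Not a video.js API, but one hedged sketch is to count the Flash fallback as supported too, so that older IE isn't flagged (the plugin probes below are common heuristics, not part of video.js):
function supportsFlash() {
  // Most browsers expose Flash via navigator.plugins...
  if (navigator.plugins && navigator.plugins['Shockwave Flash']) {
    return true;
  }
  // ...while older IE needs an ActiveX probe.
  try {
    return !!new ActiveXObject('ShockwaveFlash.ShockwaveFlash');
  } catch (e) {
    return false;
  }
}
function videoIsSupported() {
  return supportsVideo() || supportsFlash();
}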

Screen sharing with WebRTC?

We're exploring WebRTC but have seen conflicting information on what is possible and supported today.
With WebRTC, is it possible to recreate a screen sharing service similar to join.me or WebEx where:
You can share a portion of the screen
You can give control to the other party
No downloads are necessary
Is this possible today with any of the WebRTC browsers? How about Chrome on iOS?
The chrome.tabCapture API is available for Chrome apps and extensions.
This makes it possible to capture the visible area of the tab as a stream which can be used locally or shared via RTCPeerConnection's addStream().
For more information see the WebRTC Tab Content Capture proposal.
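A minimal sketch of that flow from an extension's background page, assuming the "tabCapture" permission is declared in the manifest (peerConnection is a hypothetical, already-created RTCPeerConnection):
chrome.tabCapture.capture({ audio: false, video: true }, function (stream) {
  if (!stream) {
    console.error(chrome.runtime.lastError);
    return;
  }
  // Share the captured tab with the peer; addStream is the legacy API
  // mentioned above (newer code would use addTrack).
  peerConnection.addStream(stream);
});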
Screensharing was initially supported for 'normal' web pages using getUserMedia with the chromeMediaSource constraint – but this has been disallowed.
EDIT 1 April 2015: Edited now that screen sharing is only supported by Chrome in Chrome apps and extensions.
You probably know that screen capture (not tabCapture) is available in Chrome Canary (26+). We just recently published a demo at https://screensharing.azurewebsites.net
Note that you need to run it under https:// and pass this video constraint to getUserMedia:
video: {
  mandatory: {
    chromeMediaSource: 'screen'
  }
}
You can also find an example here: https://html5-demos.appspot.com/static/getusermedia/screenshare.html
I know I am answering a bit late, but I hope it helps those who stumble upon this page, if not the OP.
At this moment, both Firefox and Chrome support sharing the entire screen, or part of it (some application window which you can select), with peers through WebRTC as a MediaStream, just like your camera/microphone feed; so there is no option to let the other party take control of your desktop yet. Other than that, there is another catch: your website has to be served over https, and in both Firefox and Chrome the users have to install extensions.
You can give it a try in Muaz Khan's Screen-sharing Demo; the page contains the required extensions too.
P.S.: If you do not want to install an extension to run the demo, in Firefox (there is no way to escape extensions in Chrome) you just need to modify two flags:
go to about:config
set media.getusermedia.screensharing.enabled to true.
add *.webrtc-experiment.com to the media.getusermedia.screensharing.allowed_domains flag.
refresh the demo page and click on the share screen button.
To the best of my knowledge, it's not possible right now with any of the browsers, though the Google Chrome team has said that they're eventually intending to support this scenario (see the "Screensharing" bullet point on their roadmap), and I suspect that this means that eventually other browsers will follow, presumably with IE and Safari bringing up the tail. But all of that is probably out somewhere past February, which is when they're supposed to finalize the current WebRTC standard and ship production bits. (Hopefully Microsoft's last-minute spanner in the works doesn't screw that up.) It's possible that I've missed something recent, but I've been following the project pretty carefully, and I don't think screensharing has even made it into Chrome Canary yet, let alone dev/beta/prod. Opera is the only browser that has been keeping pace with Chrome on its WebRTC implementation (Firefox seems to be about six months behind), and I haven't seen anything from that team about screensharing either.
I've been told that there is one way to do it right now, which is to write your own web camera driver, so that your local screen appears to the WebRTC getUserMedia() API as just another video source. I don't know whether anybody has done this - and of course, it would require installing the driver on the machine in question. By the time all is said and done, it would probably just be easier to use VNC or something along those lines.
navigator.mediaDevices.getDisplayMedia(constraints).then((stream) => {
  // use the captured screen stream here
});
Nowadays you can do it this way, but note that Safari handles audio capture differently from Chrome.
It is possible. I have worked on this and built a demo for screen sharing, during which the watcher can access your mouse and keyboard: if he moves his mouse, your mouse also moves, and if he types on his keyboard, it is typed into your PC.
View this code; it is for screen share...
These days you can share the screen with this; you do not need any extensions.
const getLocalScreenCaptureStream = async () => {
  try {
    const constraints = { video: { cursor: 'always' }, audio: false };
    const screenCaptureStream = await navigator.mediaDevices.getDisplayMedia(constraints);
    return screenCaptureStream;
  } catch (error) {
    console.error('failed to get local screen', error);
  }
};
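A hypothetical usage example, attaching the captured stream to a <video> element on the page:
getLocalScreenCaptureStream().then((stream) => {
  if (stream) {
    const video = document.querySelector('video'); // assumes a <video> tag exists
    video.srcObject = stream;
    video.play();
  }
});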