How to select audioOutput with OpenTok - webrtc

I am building a simple WebRTC app with OpenTok.
I need to be able to select camera, audio input and audio output.
Currently that doesn't seem easily possible.
See opentok-hardware-setup
https://github.com/opentok/opentok-hardware-setup.js/issues/18
I am loading OpenTok and opentok-hardware-setup.js in my index.html file.
All looks great and I can select the microphone and camera, BUT not the speaker out, aka audiooutput.
<script src="https://static.opentok.com/v2/js/opentok.min.js"></script>
From the console, I tried
OT.getDevices((err, devices) => { console.debug(devices)});
and observed that you can't get the audioOutput devices:
(4) [{…}, {…}, {…}, {…}]
0: {deviceId: "default", label: "Default - Built-in Microphone", kind: "audioInput"}
1: {deviceId: "b183634b059298f3692aa7e5871e6a463127701e21e320742c48bda99acdf925", label: "Built-in Microphone", kind: "audioInput"}
2: {deviceId: "4b441035a4db3c858c65c30eabe043ae1967407b3cc934ccfb332f0f6e33a029", label: "Built-in Output", kind: "audioInput"}
3: {deviceId: "05415e116b36584f848faeef039cd06e5290dde2e55db6895c19c8be3b880d91", label: "FaceTime HD Camera", kind: "videoInput"}
whereas you can get them using navigator.mediaDevices.enumerateDevices()
Any pointers?

Disclosure: I'm an employee at TokBox :). OpenTok does not currently provide a way to specify the audio output device. This is still an experimental API and only works in Chrome. When the API is standardised and has wider browser support, we will make it easier.
In the meantime, it's pretty easy to do this using native WebRTC. There is a good sample for this at https://webrtc.github.io/samples/src/content/devices/multi/ and the source code can be found at https://github.com/webrtc/samples/blob/gh-pages/src/content/devices/multi/js/main.js
In summary, you use the enumerateDevices method, as you found. Then you use the setSinkId() method on the video element: https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/setSinkId
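For context, here is a minimal sketch (not part of the original answer) of how the deviceId used in the snippets below could be obtained; it assumes an async context and simply picks the first audiooutput device:
const devices = await navigator.mediaDevices.enumerateDevices();
const audioOutputs = devices.filter((device) => device.kind === 'audiooutput');
// deviceId used for setSinkId below; fall back to the default output device
const deviceId = audioOutputs.length > 0 ? audioOutputs[0].deviceId : 'default';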
You can get access to the videoElement by listening to the videoElementCreated event on the subscriber like so:
subscriber.on('videoElementCreated', (event) => {
  if (typeof event.element.sinkId !== 'undefined') {
    event.element.setSinkId(deviceId)
      .then(() => {
        console.log('successfully set the audio output device');
      })
      .catch((err) => {
        console.error('Failed to set the audio output device ', err);
      });
  } else {
    console.warn('device does not support setting the audio output');
  }
});

So,
The answer given by @Adam Ullman is no longer valid, since there is now a separate audio element created alongside the video element, which prevents us from using the setSinkId method of the video element.
I found a solution that consists of finding the audio element from the video element and using its own setSinkId.
Code:
const subscriber_videoElementCreated = async (event) => {
  const videoElem = event.element;
  const audioElem = Array.from(videoElem.parentNode.childNodes).find((child) => child.tagName === 'AUDIO');
  if (audioElem && typeof audioElem.sinkId !== 'undefined') {
    try {
      await audioElem.setSinkId(deviceId);
    } catch (err) {
      console.log('Could not update speaker ', err);
    }
  }
};
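To tie this together, the handler is attached the same way as in the earlier answer (a sketch, assuming subscriber and deviceId are already in scope):
subscriber.on('videoElementCreated', subscriber_videoElementCreated);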

OpenTok (now Vonage) now provides an API for doing exactly this, as of version 2.22.
It is not supported in all browsers (e.g. Safari), but for browsers that support setSinkId there is now a uniform API which wraps the functionality handily.
https://tokbox.com/developer/guides/audio-video/js/#setAudioOutput
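A rough sketch of what that looks like, assuming the OT.getAudioOutputDevices() and OT.setAudioOutputDevice() methods from the linked guide (verify the exact names and signatures against the current docs):
// assumes an async context and opentok.js 2.22+
const outputs = await OT.getAudioOutputDevices();
const headphones = outputs.find((device) => /headphone/i.test(device.label));
if (headphones) {
  await OT.setAudioOutputDevice(headphones.deviceId);
}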

Related

Expo-Camera for local simulator development is not supported, how to work around this?

expo-camera is not supported by any of the simulators. Therefore, when using this component in development & on the simulator, it results in an error and data is not stored as it would on the actual device.
Possible Unhandled Promise Rejection. Error: Video recording is not supported on a simulator
This is causing an issue in terms of verifying functionality and the "happy path".
I added an interim check for a video field and then return the basic camera component, so at least our flow works, but I don't think it's a long-term solution:
if (
  process.env.NODE_ENV === 'development' &&
  field.kind === ComponentTypes.VIDEO
) {
  return <CameraImage />
}
How do you work around this intended behavior, in terms of getting data to store? All I can think of doing for this is adding a catch and faking the data:
cameraRef?.current
  .recordAsync(options)
  .then(vid => {
    console.log({ vid });
    setRecordedVideo(vid);
  })
  .catch(() => {
    if (process.env.NODE_ENV === 'development') {
      setRecordedVideo({
        uri: someFakeUri,
      });
    }
  })
  .finally(() => setIsRecording(false));
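If you want to keep that fallback out of the call site, one option is to wrap the recording in a helper that stubs the result when not running on a real device. A rough sketch, assuming expo-device is installed and someFakeUri is a placeholder you define yourself:
import * as Device from 'expo-device';

// Records for real on a device; returns stubbed data on the simulator in development.
const recordVideo = async (cameraRef, options) => {
  if (!Device.isDevice && process.env.NODE_ENV === 'development') {
    return { uri: someFakeUri }; // placeholder, not a real recording
  }
  return cameraRef.current.recordAsync(options);
};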

how to add screen share function using PeerJS?

Currently, I am working on a WebRTC project where you can make and receive calls. I also want to add screen-share functionality to it.
Can anyone provide me with a good documentation link?
I am currently following the official documentation of PeerJS.
I was able to do audio/video calling but am stuck on the screen-sharing part.
Help me!
You need to get a stream, just like you do with getUserMedia, and then you give that stream to PeerJS.
It should be something like this:
var displayMediaOptions = {
  video: {
    cursor: "always"
  },
  audio: false
};
navigator.mediaDevices.getDisplayMedia(displayMediaOptions)
  .then(function (stream) {
    // add this stream to your peer
  });
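To fill in that comment: with PeerJS you pass the captured stream to peer.call() the same way you would a camera stream. A sketch, assuming peer is your existing Peer instance and remotePeerId is the id of the peer you want to share with (both are placeholder names):
navigator.mediaDevices.getDisplayMedia(displayMediaOptions)
  .then(function (screenStream) {
    // Start a call that carries the screen-share stream instead of the camera.
    var call = peer.call(remotePeerId, screenStream);
    // Optionally listen for a stream coming back from the other side.
    call.on('stream', function (remoteStream) {
      console.log('remote peer answered with a stream', remoteStream);
    });
  })
  .catch(function (err) {
    console.error('getDisplayMedia failed: ', err);
  });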
I'm working with and learning about WebRTC. From what I've read, I think the solution here probably hinges on getDisplayMedia. That's also what this React, Node and peerJS tutorial suggests (though I haven't tried it myself yet).
let screenShare = document.getElementById('shareScreen');
screenShare.addEventListener('click', async () => {
  captureStream = await navigator.mediaDevices.getDisplayMedia({
    audio: true,
    video: { mediaSource: "screen" }
  });
  // Instead of adminId, pass the peerId of the peer that will receive captureStream in the call
  myPeer.call(adminId, captureStream);
})
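On the receiving side, the screen-share arrives like any other PeerJS call; a sketch (the element id and variable names are illustrative):
myPeer.on('call', (call) => {
  // Answer without sending a stream back (pass your own stream if you want two-way media).
  call.answer();
  call.on('stream', (screenStream) => {
    const screenVideo = document.getElementById('screenVideo'); // illustrative element id
    screenVideo.srcObject = screenStream;
    screenVideo.play();
  });
});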

Is it possible to share both text message and image using expo-sharing?

Sharing.shareAsync(url, options)
Opens action sheet to share file to different applications which can handle this type of file.
Arguments
url (string) -- Local file URL to share.
options (object) --
A map of options:
mimeType (string) -- sets mimeType for Intent (Android only)
dialogTitle (string) -- sets share dialog title (Android and Web only)
UTI (string) -- (Uniform Type Identifier) the type of the target file (iOS only)
This is what they say on their page. I don't see any option to share a text message along with a local image.
Is there any way to share both the image and a text message?
According to the docs, you should be able to use it like so for Android:
const url = '<image-to-be-shared-local-url>';
const messageText = 'Text that you want to share goes here';
const options = {
  mimeType: 'image/jpeg',
  dialogTitle: messageText,
};
Sharing.shareAsync(url, options);
But I would recommend using react-native-share, as it is more widely used and has more options for you to experiment with.
Here is the library documentation
Hope this helps :)
react-native-share is still not compatible with Expo. You have to use either expo-sharing or Share from react-native.
Example:
ShareMessage = async () => {
  if (Platform.OS === "android") {
    Share.share({
      message: API.base_url + this.state.ShareProductName, // supported on Android
      url: this.state.share_images[0], // not supported on Android
      title: this.state.ShareProductName,
    })
      .then((result) => console.log(result))
      .catch((errorMsg) => console.log(errorMsg));
    return;
  } else if (Platform.OS === "ios") {
    Share.share({
      message: API.base_url + this.state.ShareProductName,
      url: this.state.share_images[0],
      title: this.state.ShareProductName, // not supported on iOS
    })
      .then((result) => console.log(result))
      .catch((errorMsg) => console.log(errorMsg));
    return;
  }
};

React Native Uploading Video to YouTube (Stuck at Processing)

I am attempting to upload video files to YouTube using their v3 API and their MediaUploader. It works in my ReactJS application, but not in my React Native application. When uploading via React Native, the upload completes, then stalls at 100%. In my YouTube account, I can see the new video file, but it is stuck at "Video is still being processed."
I believe the issue may be that I need to send a video file and not an object with a video uri but I don't know how to get around that.
I am using the YouTube MediaUploader from the CORS example at https://github.com/youtube/api-samples/blob/master/javascript/cors_upload.js I am using an OAuth 2.0 client Id, and this setup works correctly when using the ReactJS app via my website. I am using React Native Expo with Camera, which returns me an Object with a URI, for example:
Video File: Object {
"uri": "file:///var/mobile/Containers/Data/Application/353A7969-E2A8-4C80-B641-C80B2B029555/Library/Caches/ExponentExperienceData/%2540dj_walksalot%252Fwandereo/Camera/E971DFEC-AB3E-4B6D-892F-9027AFE47A1A.mov",
}
This file can be viewed in the application, and I can even successfully send this to my server for playback on the web app and in the React Native app. However, sending this object in the MediaUploader does not work. It will take an appropriate amount of time to upload, but then sits at 100%, while my YouTube account will show it has received the video with the correct metadata, but the video itself remains stuck at "Video is still being processed."
video_file: Object {
"uri": "file:///var/mobile/Containers/Data/Application/353A7969-E2A8-4C80-B641-C80B2B029555/Library/Caches/ExponentExperienceData/%2540dj_walksalot%252Fwandereo/Camera/E971DFEC-AB3E-4B6D-892F-9027AFE47A1A.mov",
}
export const uploadToYouTube = (access_token, video_file, metadata) => async (dispatch) => {
  ...cors_upload...
  var uploader = new MediaUploader({
    baseUrl: `https://www.googleapis.com/upload/youtube/v3/videos?part=snippet%2Cstatus&key=API_KEY`,
    file: video_file,
    token: access_token,
    metadata: metadata,
    contentType: 'video/quicktime',
    // contentType: 'application/octet-stream',//"video/*",
    // contentType = options.contentType || this.file.type || 'application/octet-stream';
    params: {
      part: Object.keys(metadata).join(',')
    },
    onError: function (data) {
      // onError code
      let err = JSON.parse(data);
      dispatch(returnErrors(err.message, err.code));
      console.log('Error: ', err);
    },
    onProgress: function (progressEvent) {
      // onProgress code
      let percentCompleted = Math.round((progressEvent.loaded * 100) / progressEvent.total);
      dispatch({
        type: UPLOAD_PROGRESS,
        payload: percentCompleted
      });
    },
    onComplete: function (data) {
      console.log('Complete');
      // onComplete code
      let responseData = JSON.parse(data);
      dispatch({
        type: UPLOAD_YOUTUBE_VIDEO,
        payload: JSON.parse(data)
      });
      dispatch({
        type: UPLOAD_PROGRESS,
        payload: 0
      });
    }
  });
  uploader.upload();
}
Similar to my currently-working web app, after completing the upload, the "onComplete" function should fire, and YouTube should process the video. This does not happen. I believe it's because I'm attaching an object with a URI and not the actual file.
I was able to solve this from a post at Expert Mill by Joe Edgar at https://www.expertmill.com/2018/10/19/using-and-uploading-dynamically-created-local-files-in-react-native-and-expo/
By using fetch and .blob() I was able to convert the URI object to a data object and upload. Additional code:
const file = await fetch(video_file.uri);
const file_blob = await file.blob();
No need to install RNFetchBlob since this is in the Expo SDK.
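The resulting blob can then be passed to the uploader in place of the { uri } object; a sketch based on the code from the question (callbacks omitted for brevity):
const file = await fetch(video_file.uri);
const file_blob = await file.blob();

const uploader = new MediaUploader({
  baseUrl: `https://www.googleapis.com/upload/youtube/v3/videos?part=snippet%2Cstatus&key=API_KEY`,
  file: file_blob, // pass the Blob instead of the object with a uri
  token: access_token,
  metadata: metadata,
  contentType: 'video/quicktime',
  params: { part: Object.keys(metadata).join(',') },
  // onError / onProgress / onComplete as in the original code
});
uploader.upload();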

difference between udid and client_identification_sequence

I cannot understand some things about push notifications with QuickBlox.
I have a chat (a webchat) in a WebView in a Xamarin app (I know it isn't a very clever approach).
I am trying to create a subscription via JavaScript,
but I cannot understand how to work out
the udid and client_identification_sequence:
var params = {
  notification_channels: 'gcm',
  device: {
    platform: 'android',
    udid: '538a068a-d66a-44d4-86c8-18ffed7f20d8'
  },
  push_token: {
    environment: 'development',
    client_identification_sequence: ''
  }
};
QB.pushnotifications.subscriptions.create(params, function (err, res) {
  debugger;
  if (err) {
    debugger;
    // error
  } else {
    debugger;
    // success
  }
});
I've tried to calculate the udid with "Xam.Plugin.DeviceInfo",
but how do I get the client_identification_sequence?
Should I take this value from APNs (for Apple push notifications)? If so, where?
I have the same problem in the Xamarin project:
var d = await wbWrapper.SubscribeForPushNotificationAsync([pushtoken], CrossDeviceInfo.Current.GenerateAppId());
Thank you
udid - it's your device's unique identifier. It can actually be anything that uniquely identifies the current particular device.
client_identification_sequence - it's your push token.
For Android, it's the registration ID (or registration token).
For iOS, it's the device token.
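Putting that together with the code from the question, the params would be filled in roughly like this (a sketch; how you hand the FCM registration token from the Xamarin side into the WebView is up to you, and fcmRegistrationToken / deviceUdid are illustrative names):
// values passed in from the native (Xamarin) layer
var deviceUdid = '538a068a-d66a-44d4-86c8-18ffed7f20d8'; // any stable per-device id
var fcmRegistrationToken = '<FCM registration token obtained on the native side>';

var params = {
  notification_channels: 'gcm',
  device: {
    platform: 'android',
    udid: deviceUdid
  },
  push_token: {
    environment: 'development',
    client_identification_sequence: fcmRegistrationToken
  }
};

QB.pushnotifications.subscriptions.create(params, function (err, res) {
  if (err) {
    // handle error
  } else {
    // subscription created
  }
});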