Opentok react native not publishing - react-native

I'm trying to build an app with the opentok-react-native library.
When the app is running, the publisher component becomes a black square and the subscriber component shows my camera video.
In the logs I can see that the stream is created, with a stream ID, and it apparently is working. But when I go to my TokBox account I can't see any data on my dashboard.
Here is my current code: https://github.com/victor0402/opentok-demo
The important part is:
import React, {Component} from 'react';
import {View} from 'react-native';
import {OT, OTPublisher, OTSession, OTSubscriber} from "opentok-react-native";

export default class App extends Component {
  //Publisher token
  token = 'TOKEN HERE';
  //Routed session ID
  session = 'SESSION ID HERE';
  apiKey = 'API KEY';

  constructor(props) {
    super(props);
    this.state = {
      streamProperties: {},
    };
    this.publisherProperties = {
      publishVideo: true,
      publishAudio: true,
      cameraPosition: 'front'
    };
    this.publisherEventHandlers = {
      streamCreated: event => {
        console.log('publisherEventHandlers: streamCreated.... updating state');
        const streamProperties = {
          ...this.state.streamProperties,
          [event.streamId]: {
            subscribeToAudio: true,
            subscribeToVideo: true,
            style: {
              width: 400,
              height: 300,
            },
          }
        };
        this.setState({streamProperties});
      },
      streamDestroyed: event => {
        console.log('Publisher stream destroyed!', event);
      }
    };
    this.subscriberProperties = {
      subscribeToAudio: true,
      subscribeToVideo: true,
    };
    this.sessionEventHandlers = {
      streamCreated: event => {
        console.log('sessionEventHandlers : streamCreated');
      },
      streamDestroyed: event => {
        console.log('Stream destroyed!!!!!!', event);
      },
    };
    this.subscriberEventHandlers = {
      error: (error) => {
        console.log(`There was an error with the subscriber: ${error}`);
      },
    };
  }

  render() {
    OT.enableLogs(true);
    return (
      <View>
        <OTSession apiKey={this.apiKey} sessionId={this.session} token={this.token}
                   eventHandlers={this.sessionEventHandlers}>
          <OTPublisher
            properties={this.publisherProperties}
            eventHandlers={this.publisherEventHandlers}
            style={{height: 100, width: 100}}
          />
          <OTSubscriber
            properties={this.subscriberProperties}
            eventHandlers={this.subscriberEventHandlers}
            style={{height: 100, width: 100}}
            streamProperties={this.state.streamProperties}
          />
        </OTSession>
      </View>
    );
  }
}
So, is there something wrong with my code?
How can I publish a video and get the results and recordings on my account dashboard?

TokBox Developer Evangelist here.
The usage metrics take about 24 hours to populate in the dashboard, which is why you're not seeing them immediately.
I also reviewed the code you shared, and it looks like you're setting the streamProperties object in the streamCreated callback for OTPublisher. Please note that the publisher's streamCreated event only fires when your own publisher starts publishing. Since you're using streamProperties to set properties for the subscriber, you should set this data in the streamCreated event callback for OTSession, because that event is dispatched when a new stream (other than your own) is created in the session. Your code would then look something like this:
this.sessionEventHandlers = {
  streamCreated: event => {
    const streamProperties = {
      ...this.state.streamProperties,
      [event.streamId]: {
        subscribeToAudio: true,
        subscribeToVideo: true,
        style: {
          width: 400,
          height: 300,
        },
      }
    };
    this.setState({streamProperties});
  },
};
Lastly, to check that everything is working correctly, I recommend using the OpenTok Playground Tool and connecting with the same session ID as the one in your React Native application.

Hope you are using the updated version:
"opentok-react-native": "^0.17.2"

Related

Issue sending & receiving streams between two clients in LiveKit's React Native SDK

I'm trying to build on the example app provided by LiveKit. So far I've implemented everything like the example app, and I've been successful in connecting to a room on the example website: I receive audio from the website, but I don't get the video stream, and I also can't send audio or video at all.
Steps to reproduce the behavior:
1. Add the following to index.js:
import { registerRootComponent } from "expo";
import { registerGlobals } from "livekit-react-native";
import App from "./App";
registerRootComponent(App);
registerGlobals();
2. Render the following component in App.tsx:
import { Participant, Room, Track } from "livekit-client";
import {
useRoom,
useParticipant,
AudioSession,
VideoView,
} from "livekit-react-native";
import { useEffect, useState } from "react";
import { Text, ListRenderItem, StyleSheet, FlatList, View } from "react-native";
import { ParticipantView } from "./ParticipantView";
import { RoomControls } from "./RoomControls";
import type { TrackPublication } from "livekit-client";
const App = () => {
// Create a room state
const [, setIsConnected] = useState(false);
const [room] = useState(
() =>
new Room({
publishDefaults: { simulcast: false },
adaptiveStream: true,
})
);
// Get the participants from the room
const { participants } = useRoom(room);
const url = "[hard-coded-url]";
const token =
"[hard-coded-token";
useEffect(() => {
let connect = async () => {
// If you wish to configure audio, uncomment the following:
await AudioSession.configureAudio({
android: {
preferredOutputList: ["speaker"],
},
ios: {
defaultOutput: "speaker",
},
});
await AudioSession.startAudioSession();
await room.connect(url, token, {});
await room.localParticipant.setCameraEnabled(true);
await room.localParticipant.setMicrophoneEnabled(true);
await room.localParticipant.enableCameraAndMicrophone();
console.log("connected to ", url);
setIsConnected(true);
};
connect();
return () => {
room.disconnect();
AudioSession.stopAudioSession();
};
}, [url, token, room]);
// Setup views.
const stageView = participants.length > 0 && (
<ParticipantView participant={participants[0]} style={styles.stage} />
);
const renderParticipant: ListRenderItem<Participant> = ({ item }) => {
return (
<ParticipantView participant={item} style={styles.otherParticipantView} />
);
};
const otherParticipantsView = participants.length > 0 && (
<FlatList
data={participants}
renderItem={renderParticipant}
keyExtractor={(item) => item.sid}
horizontal={true}
style={styles.otherParticipantsList}
/>
);
const { cameraPublication, microphonePublication } = useParticipant(
room.localParticipant
);
return (
<View style={styles.container}>
{stageView}
{otherParticipantsView}
<RoomControls
micEnabled={isTrackEnabled(microphonePublication)}
setMicEnabled={(enabled: boolean) => {
room.localParticipant.setMicrophoneEnabled(enabled);
}}
cameraEnabled={isTrackEnabled(cameraPublication)}
setCameraEnabled={(enabled: boolean) => {
room.localParticipant.setCameraEnabled(enabled);
}}
onDisconnectClick={() => {
// navigation.pop();
console.log("disconnected");
}}
/>
</View>
);
};
function isTrackEnabled(pub?: TrackPublication): boolean {
return !(pub?.isMuted ?? true);
}
const styles = StyleSheet.create({
container: {
flex: 1,
alignItems: "center",
justifyContent: "center",
},
stage: {
flex: 1,
width: "100%",
},
otherParticipantsList: {
width: "100%",
height: 150,
flexGrow: 0,
},
otherParticipantView: {
width: 150,
height: 150,
},
});
export default App;
The components used here are mostly the same as what's in the example; I've removed the screen-sharing logic and the messages.
5. I run the app using an Expo development build.
6. It will log that it's connected; you'll be able to hear sound from the remote participant, but not see any video or send any sound.
7. If I try to add
await room.localParticipant.enableCameraAndMicrophone();
in the useEffect, I get the following error:
Possible Unhandled Promise Rejection (id: 0):
Error: Not implemented.
getSettings#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:103733:24
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:120307:109
generatorResume#[native code]
asyncGeneratorStep#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21908:26
_next#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21927:29
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21932:14
tryCallTwo#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26656:9
doResolve#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26788:25
Promise#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26675:14
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21924:25
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:120173:52
generatorResume#[native code]
asyncGeneratorStep#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21908:26
_next#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21927:29
tryCallOne#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26648:16
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26729:27
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:27687:26
_callTimer#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:27602:17
_callReactNativeMicrotasksPass#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:27635:17
callReactNativeMicrotasks#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:27799:44
__callReactNativeMicrotasks#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21006:46
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:20806:45
__guard#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:20986:15
flushedQueue#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:20805:21
flushedQueue#[native code]
Expected behavior
This should both receive & send video and audio streams between the two clients

[expo-notifications][managed workflow][EAS Build][Android] custom sound not playing in locally scheduled notifications

I am scheduling notifications locally with a custom sound, but the custom sound is not playing. In fact, no notification alert is shown, even though shouldShowAlert and shouldPlaySound are set to true in setNotificationHandler. It should also be mentioned that the color also remains the same even though I have added the custom color in the expo-notifications plugin in app.json, as well as in the notification channel and the notification content input.
I checked the notification settings of the device on which I installed the APK. The notification channel is present and the default sound of this channel is also set to the custom sound. However, it just doesn't play when the notification comes.
Relevant Code:
import { StatusBar } from "expo-status-bar";
import React from "react";
import { Text, View, Platform, Button } from "react-native";
import * as Notifications from "expo-notifications";
Notifications.setNotificationHandler({
handleNotification: async () => ({
shouldShowAlert: true,
shouldPlaySound: true,
shouldSetBadge: false,
}),
});
export default function App() {
React.useEffect(() => {
setNotificationChannelAsync();
}, []);
return (
<View
style={{
flex: 1,
alignItems: "center",
justifyContent: "space-around",
}}
>
<Text>This App is for Testing Notifications</Text>
<Button
title="Press to schedule a notification"
onPress={async () => {
await scheduleNotification();
}}
/>
<StatusBar style="auto" />
</View>
);
}
const setNotificationChannelAsync = () => {
if (Platform.OS === "android") {
Notifications.setNotificationChannelAsync("sound", {
name: "sound notification",
importance: Notifications.AndroidImportance.HIGH,
vibrationPattern: [0, 250, 250, 250],
lightColor: "#FF231F7C",
sound: "adhan.wav",
});
}
};
async function scheduleNotification() {
await Notifications.scheduleNotificationAsync({
content: {
title: "You've got mail! 📬",
body: "Here is the notification body",
data: { data: "goes here" },
sound: "adhan.wav",
color: "#FF231F7C",
},
trigger: { seconds: 5, channelId: "sound" },
});
}
The following is the GitHub repo of a minimum reproducible example:
https://github.com/basit3407/testing-custom-sound-notifications
I think it was because of the default notification channel. I deleted all the channels, set up a new channel with a new channel identifier, and it started working.
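For reference, a minimal sketch of that approach using the expo-notifications channel APIs; the new channel identifier "sound-v2" and the resetSoundChannel name are illustrative, not from the original code:

import { Platform } from "react-native";
import * as Notifications from "expo-notifications";

// Delete every existing channel, then recreate one under a fresh identifier so the
// old channel's stored sound/vibration settings cannot shadow the new configuration.
async function resetSoundChannel() {
  if (Platform.OS !== "android") return;
  const channels = await Notifications.getNotificationChannelsAsync();
  for (const channel of channels) {
    await Notifications.deleteNotificationChannelAsync(channel.id);
  }
  await Notifications.setNotificationChannelAsync("sound-v2", {
    name: "sound notification",
    importance: Notifications.AndroidImportance.HIGH,
    vibrationPattern: [0, 250, 250, 250],
    lightColor: "#FF231F7C",
    sound: "adhan.wav", // must match the sound file bundled via the expo-notifications plugin
  });
}

Any notification scheduled afterwards would then use trigger: { seconds: 5, channelId: "sound-v2" } so it targets the recreated channel.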
Okay, for whatever reason, the Android notification sound will only play if you set the vibrate property to false when you schedule your notification, like so:
async function scheduleNotification() {
await Notifications.scheduleNotificationAsync({
content: {
title: "You've got mail! 📬",
body: "Here is the notification body",
data: { data: "goes here" },
sound: "adhan.wav",
color: "#FF231F7C",
vibrate: false,
},
trigger: { seconds: 5, channelId: "sound" },
});
}

React Native Gifted Chat + Firestore not showing messages correctly?

I am trying to create a chat feature in my react native app. I am using react-native-gifted-chat and saving the messages in firestore. Here is the behavior that is occurring:
When I send a message, ALL the messages re-render and some of them are duplicates. I only have 3 messages sent so far, but all these duplicates are making me wonder why the entire thing is re-rendering and why there are duplicates when it does re-render.
The code:
class Chat extends React.Component {
constructor(props) {
super(props)
this.state = {
messages: [],
currentUser: null,
isLoading: true,
messageID: ""
}
}
//---------------------------------------------------------------
async componentDidMount (){
// get user info from firestore
let userUID = Firebase.auth().currentUser.uid
await Firebase.firestore().collection("users").doc(userUID).get()
.then(doc => {
data = doc.data()
this.setState({
currentUser: {
name: data.username,
avatar: data.profilePic,
_id: doc.id,
},
})
})
const messages = []
await Firebase.firestore().collection("chat")
.orderBy("createdAt", "desc")
.limit(50)
.onSnapshot(querySnapshot => {
querySnapshot.forEach((res) => {
const {
user,
text,
createdAt,
} = res.data();
messages.push({
key: res._id,
user,
text,
createdAt,
});
})
this.setState({
messages,
isLoading: false,
});
})
}
//Load 50 more messages when the user scrolls
//
//Add a message to firestore
onSend = async(message) => {
await Firebase.firestore().collection("chat")
.add({
user: {
_id: this.state.currentUser._id,
name: this.state.currentUser.name,
avatar: this.state.currentUser.avatar,
},
})
.then(ref => this.setState({messageID: ref.id}))
await Firebase.firestore().collection("chat")
.doc(this.state.messageID)
.set({
_id: this.state.messageID,
text: message[0].text,
createdAt: message[0].createdAt
}, { merge: true })
}
render() {
if(this.state.isLoading){
return(
<View style = {{backgroundColor: '#000000', flex: 1}}>
<ActivityIndicator size="large" color="#9E9E9E"/>
</View>
)
}
return (
<View style={{backgroundColor: '#000000', flex: 1}}>
<GiftedChat
showUserAvatar={true}
renderUsernameOnMessage={true}
messages={this.state.messages}
onSend={message => this.onSend(message)}
scrollToBottom
/>
</View>
)
}
}
Some notes:
Every time the component mounts, the messages array pushes the messages to the state array.
The component mounts when I send a message, thus re-rendering the array of messages
Each message ID is unique and generated by firebase using "Add"
Let me know how I can fix this issue! thanks
The duplication is because of just a single line:
const messages = []
Move this line inside the listener, i.e. onSnapshot():
await Firebase.firestore().collection("chat")
.orderBy("createdAt", "desc")
.limit(50)
.onSnapshot(querySnapshot => {
const messages = []
// rest of your code which is having forEach loop
});
The issue was that the messages array was created only once when the component mounted, and you kept pushing elements into that same array on every snapshot.
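A minimal sketch of the corrected listener inside componentDidMount, assuming the same Firebase handle and "chat" collection used above; storing the unsubscribe function as this.unsubscribe is an illustrative addition:

// Fresh `messages` array for every snapshot, so each update replaces state
// instead of appending to an array created once at mount time.
this.unsubscribe = Firebase.firestore().collection("chat")
  .orderBy("createdAt", "desc")
  .limit(50)
  .onSnapshot(querySnapshot => {
    const messages = [];
    querySnapshot.forEach(doc => {
      const { user, text, createdAt } = doc.data();
      messages.push({
        _id: doc.id, // GiftedChat expects a unique _id per message
        user,
        text,
        createdAt,
      });
    });
    this.setState({ messages, isLoading: false });
  });

Note that onSnapshot returns an unsubscribe function rather than a promise, so there is nothing to await here, and calling this.unsubscribe() in componentWillUnmount stops the listener when the chat screen is closed.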

Getting a spinning blank screen and Error: Camera is not ready yet. Wait for 'onCameraReady' callback using expo Camera component

I'm new to web development and I'm trying to build an image recognition app using expo for testing. My code for the camera is below. On screen load, I get a black screen (not the camera) with my "capture" button. When I click on capture, I get the error:
Unhandled promise rejection: Error: Camera is not ready yet. Wait for 'onCameraReady' callback.
My code is below
import React from 'react';
import { Dimensions, Alert, StyleSheet, ActivityIndicator } from 'react-native';
// import { RNCamera } from 'react-native-camera';
import CaptureButton from './CaptureButton.js'
import { Camera } from 'expo-camera';
export default class AppCamera extends React.Component {
constructor(props){
super(props);
this.state = {
identifiedAs: '',
loading: false
}
}
takePicture = async function(){
if (this.camera) {
// Pause the camera's preview
this.camera.pausePreview();
// Set the activity indicator
this.setState((previousState, props) => ({
loading: true
}));
// Set options
const options = {
base64: true
};
// Get the base64 version of the image
const data = await this.camera.takePictureAsync(options)
// Get the identified image
this.identifyImage(data.base64);
}
}
identifyImage(imageData){
// Initialise Clarifai api
const Clarifai = require('clarifai');
const app = new Clarifai.App({
apiKey: '8d5ecc284af54894a38ba9bd7e95681b'
});
// Identify the image
app.models.predict(Clarifai.GENERAL_MODEL, {base64: imageData})
.then((response) => this.displayAnswer(response.outputs[0].data.concepts[0].name)
.catch((err) => alert(err))
);
}
displayAnswer(identifiedImage){
// Dismiss the acitivty indicator
this.setState((prevState, props) => ({
identifiedAs:identifiedImage,
loading:false
}));
// Show an alert with the answer on
Alert.alert(
this.state.identifiedAs,
'',
{ cancelable: false }
)
// Resume the preview
this.camera.resumePreview();
}
render () {
const styles = StyleSheet.create({
preview: {
flex: 1,
justifyContent: 'flex-end',
alignItems: 'center',
height: Dimensions.get('window').height,
width: Dimensions.get('window').width,
},
loadingIndicator: {
flex: 1,
alignItems: 'center',
justifyContent: 'center',
}
});
return (
<Camera ref={ref => { this.camera = ref; }} style={styles.preview}>
<ActivityIndicator size="large" style={styles.loadingIndicator} color="#fff" animating={this.state.loading}/>
<CaptureButton buttonDisabled={this.state.loading} onClick={this.takePicture.bind(this)}/>
</Camera>
)
}
}
Could someone kindly point me in the right direction to fix this error?
https://docs.expo.dev/versions/latest/sdk/camera/#takepictureasyncoptions
Note: Make sure to wait for the onCameraReady callback before calling this method.
So you might resolve this by adding an onCameraReady prop to the Camera component, as that document describes.
I'm facing an issue like this myself, and it is not resolved yet... I hope my advice works well.
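A minimal sketch of that suggestion, staying with the class component above; the isCameraReady flag and the handleCameraReady name are illustrative, not part of expo-camera itself:

// In the constructor, add a readiness flag (illustrative name):
//   this.state = { identifiedAs: '', loading: false, isCameraReady: false };

handleCameraReady = () => {
  this.setState({ isCameraReady: true });
};

takePicture = async () => {
  // Guard: takePictureAsync throws "Camera is not ready yet" when called too early.
  if (!this.camera || !this.state.isCameraReady) {
    return;
  }
  this.camera.pausePreview();
  this.setState({ loading: true });
  const data = await this.camera.takePictureAsync({ base64: true });
  this.identifyImage(data.base64);
};

// In render(), wire the callback up and optionally keep the button disabled until ready:
//   <Camera
//     ref={ref => { this.camera = ref; }}
//     onCameraReady={this.handleCameraReady}
//     style={styles.preview}
//   >
//     <CaptureButton
//       buttonDisabled={this.state.loading || !this.state.isCameraReady}
//       onClick={this.takePicture}
//     />
//   </Camera>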

React - WEBRTC - Peer to Peer - Video call - Doesn't seem to work

I have been trying to make a video call inside a React Native app, currently using react-native-webrtc, which is the mainstream lib for this kind of project.
I'm pretty new to this, but based on the minimal example of p2p video (found here), I wrote code trying to make it work across different networks.
The example creates a connection between one streamer and one receiver, but both in the same page execution, same network, same everything.
In my case I need both users to stream video and receive video, from different networks.
The problem is, I can't find a decent place to read about and understand how the negotiation actually works in this scenario.
Code sample:
/**
 * @format
 * @flow
 */
import React, { useEffect } from 'react';
import firebase from '../firebase.config';
import { useSelector } from 'react-redux';
import {
View,
SafeAreaView,
Button,
StyleSheet,
Dimensions,
Text,
} from 'react-native';
import { RTCPeerConnection, RTCView, mediaDevices } from 'react-native-webrtc';
import store from '../redux/store';
import { Actions } from 'react-native-router-flux';
import { User } from 'models';
const oUserService = new User().getService(firebase);
const oCurrentReceivedStreamingService = new User().getService(
firebase,
store,
'currentReceivedStreaming',
);
const viewport = Dimensions.get('window');
const Streaming = {
call: (caller, receiver, localDescription) => {
return {
status: 'pending',
users: {
caller: {
uid: caller.uid,
localDescription,
},
receiver: {
uid: receiver.uid,
localDescription: '',
},
},
};
},
answer: (receiver, localDescription) => {
return {
...receiver.streaming,
status: 'ongoing',
users: {
...receiver.streaming.users,
receiver: {
...receiver.streaming.users.receiver,
localDescription,
},
},
};
},
close: streaming => {
return {
...streaming,
status: 'closed',
};
},
};
const configuration = {
iceServers: [
{ url: 'stun:stun.l.google.com:19302' },
// { url: 'stun:stun1.l.google.com:19302' },
// { url: 'stun:stun2.l.google.com:19302' },
// { url: 'stun:stun3.l.google.com:19302' },
// { url: 'stun:stun4.l.google.com:19302' },
// { url: 'stun:stun.ekiga.net' },
// { url: 'stun:stun.ideasip.com' },
// { url: 'stun:stun.iptel.org' },
// { url: 'stun:stun.rixtelecom.se' },
// { url: 'stun:stun.schlund.de' },
// { url: 'stun:stunserver.org' },
// { url: 'stun:stun.softjoys.com' },
// { url: 'stun:stun.voiparound.com' },
// { url: 'stun:stun.voipbuster.com' },
// { url: 'stun:stun.voipstunt.com' },
],
};
export default function App({ user, receiver, caller, session }) {
const currentUserStore = useSelector(s => s.currentUserStore);
const userStreamingStore = useSelector(s => s.userStreamingStore);
const currentReceivedStreaming = useSelector(
s => s.currentReceivedStreaming,
);
const [localStream, setLocalStream] = React.useState();
const [remoteStream, setRemoteStream] = React.useState();
const [cachedLocalPC, setCachedLocalPC] = React.useState();
const [cachedRemotePC, setCachedRemotePC] = React.useState();
useEffect(() => {
oCurrentReceivedStreamingService.get(caller.uid);
}, [receiver, caller, user, session]);
let localPC, remotePC;
const startLocalStream = async () => {
const isFront = true;
const devices = await mediaDevices.enumerateDevices();
const facing = isFront ? 'front' : 'back';
const videoSourceId = devices.find(
device => device.kind === 'videoinput' && device.facing === facing,
);
const facingMode = isFront ? 'user' : 'environment';
const constraints = {
audio: true,
video: {
mandatory: {
minWidth: (viewport.height - 100) / 2,
minHeight: (viewport.height - 100) / 2,
minFrameRate: 30,
},
facingMode,
optional: videoSourceId ? [{ sourceId: videoSourceId }] : [],
},
};
const newStream = await mediaDevices.getUserMedia(constraints);
setLocalStream(newStream);
return Promise.resolve(newStream);
};
const startCall = async () => {
try {
let newStream = await startLocalStream();
oCurrentReceivedStreamingService.get(session.user.uid);
localPC = new RTCPeerConnection(configuration);
remotePC = new RTCPeerConnection(configuration);
localPC.onicecandidate = e => {
try {
if (e.candidate) {
remotePC.addIceCandidate(e.candidate);
}
} catch (err) {
console.error(`Error adding remotePC iceCandidate: ${err}`);
}
};
remotePC.onicecandidate = e => {
try {
if (e.candidate) {
localPC.addIceCandidate(e.candidate);
}
} catch (err) {
console.error(`Error adding localPC iceCandidate: ${err}`);
}
};
remotePC.onaddstream = e => {
if (e.stream && remoteStream !== e.stream) {
setRemoteStream(e.stream);
}
};
localPC.addStream(newStream);
const offer = await localPC.createOffer();
await localPC.setLocalDescription(offer);
oUserService.patch(currentReceivedStreaming.current.uid, {
streaming: Streaming.call(
currentReceivedStreaming.current,
user,
localPC.localDescription,
),
});
} catch (err) {
console.error(err);
}
setCachedLocalPC(localPC);
setCachedRemotePC(remotePC);
};
const answerCall = async (oUser, oCaller) => {
try {
let newStream = await startLocalStream();
localPC = new RTCPeerConnection(configuration);
remotePC = new RTCPeerConnection(configuration);
localPC.onicecandidate = e => {
try {
if (e.candidate) {
remotePC.addIceCandidate(e.candidate);
}
} catch (err) {
console.error(`Error adding remotePC iceCandidate: ${err}`);
}
};
remotePC.onicecandidate = e => {
try {
if (e.candidate) {
localPC.addIceCandidate(e.candidate);
}
} catch (err) {
console.error(`Error adding localPC iceCandidate: ${err}`);
}
};
remotePC.onaddstream = e => {
if (e.stream && remoteStream !== e.stream) {
setRemoteStream(e.stream);
}
};
localPC.addStream(newStream);
await remotePC.setRemoteDescription(oCaller.localDescription);
let remoteStreams = remotePC.getRemoteStreams();
remoteStreams.map(s => {
console.log(s);
setRemoteStream(s);
});
await localPC.setRemoteDescription(oCaller.localDescription);
const offer = await localPC.createOffer();
// const offer = await localPC.createAnswer();
await localPC.setLocalDescription(offer);
oUserService.patch(currentReceivedStreaming.current.uid, {
streaming: Streaming.answer(
currentReceivedStreaming.current,
localPC.localDescription,
),
});
} catch (err) {
console.error(err);
}
setCachedLocalPC(localPC);
setCachedRemotePC(remotePC);
};
useEffect(() => {
if (currentReceivedStreaming.current.uid) {
let current = currentReceivedStreaming.current;
if (current.streaming) {
if (
current.streaming.status === 'closed' ||
current.streaming.status === 'rejected'
) {
// Actions.popTo('dashboard');
}
if (current.streaming.status === 'pending') {
if (
current.streaming.users.receiver.uid ===
session.user.uid
) {
answerCall(current, current.streaming.users.caller);
}
}
if (current.streaming.status === 'ongoing' && remotePC) {
if (
current.streaming.users.caller.uid === session.user.uid
) {
remotePC.setRemoteDescription(
current.streaming.receiver.localDescription,
);
}
}
}
}
}, [currentReceivedStreaming.current]);
const closeStreams = () => {
try {
if (cachedLocalPC) {
cachedLocalPC.removeStream(localStream);
cachedLocalPC.close();
}
if (cachedRemotePC) {
cachedRemotePC.removeStream(remoteStream);
cachedRemotePC.close();
}
setLocalStream();
setRemoteStream();
setCachedRemotePC();
setCachedLocalPC();
oUserService
.patch(currentReceivedStreaming.current.uid, {
streaming: {
...currentReceivedStreaming.current.streaming,
status: 'closed',
},
})
.then(() => Actions.popTo('dashboard'));
} catch (e) {
console.log('ERROR', e);
}
};
useEffect(() => {
if (!localStream && caller.uid === session.user.uid) {
startCall();
}
}, [currentUserStore.current.streaming]);
return (
<SafeAreaView style={styles.container}>
{/* {!localStream && (
<Button
title="Click to start stream"
onPress={startLocalStream}
/>
)} */}
{/* {localStream && (
<Button
title="Click to start call"
onPress={startCall}
disabled={!!remoteStream}
/>
)} */}
<View style={styles.rtcview}>
{localStream && (
<RTCView
style={styles.rtc}
streamURL={localStream.toURL()}
/>
)}
</View>
<Text>{!!remoteStream && 'YES'}</Text>
<View style={styles.rtcview}>
{remoteStream && (
<RTCView
style={styles.rtc}
streamURL={remoteStream.toURL()}
/>
)}
</View>
<Button title="Click to stop call" onPress={closeStreams} />
</SafeAreaView>
);
}
const styles = StyleSheet.create({
container: {
backgroundColor: '#313131',
justifyContent: 'space-between',
alignItems: 'center',
height: '100%',
paddingVertical: 30,
},
text: {
fontSize: 30,
},
rtcview: {
justifyContent: 'center',
alignItems: 'center',
height: '40%',
width: '80%',
backgroundColor: 'black',
borderRadius: 10,
},
rtc: {
width: '80%',
height: '100%',
},
});
In a nutshell, how does a video call between two browsers look from a developer's point of view?
After all the preliminary preparation and creation of the necessary JavaScript objects in the first browser, the WebRTC method createOffer() is called, which returns a text packet in SDP format (or, in the future, a JSON-serializable object if the ORTC flavour of the API displaces the "classical" one). This packet describes what kind of communication the developer wants (voice, video or data) and which codecs are available.
Now comes the signaling. The developer must somehow (really, it is written in the specification!) pass this offer packet to the second browser, for example using your own server on the Internet and a WebSocket connection from both browsers.
After receiving the offer in the second browser, the developer passes it to WebRTC with the setRemoteDescription() method, then calls createAnswer(), which returns the same kind of SDP text packet, but for the second browser and taking into account the packet received from the first one.
Signaling continues: the developer passes the answer packet back to the first browser.
After receiving the answer in the first browser, the developer passes it to WebRTC using the already mentioned setRemoteDescription() method, after which WebRTC in both browsers is minimally aware of each other. Can they connect? Unfortunately, no. In fact, everything is just beginning.
WebRTC in both browsers now begins to analyze the state of the network connection (the standard does not actually say when to do this, and many browsers start probing the network immediately after the corresponding objects are created, so as not to add delay when connecting). When creating the WebRTC objects in the first step, the developer should at least pass the address of a STUN server. This is a server that, in response to a UDP packet asking "what is my IP", returns the IP address from which that packet was received. WebRTC uses the STUN server to learn its "external" IP address, compare it with the "internal" one, and see whether there is NAT in between, and if so, which ports the NAT uses to route UDP packets.
From time to time, WebRTC in each browser fires the onicecandidate callback with an ICE candidate carrying information for the other connection participant: internal and external IP addresses, ports used by NAT, and so on. The developer uses signaling to transfer these candidates between the browsers, and each received candidate is handed to WebRTC with the addIceCandidate() method.
After a while, WebRTC will establish a peer-to-peer connection, or it will fail if NAT gets in the way. For such cases the developer can also supply the address of a TURN server, which acts as a relay: both browsers then send their UDP packets with voice or video through it. A STUN server can be found for free (Google runs one, for example), but a TURN server you will have to run yourself; nobody is interested in relaying terabytes of video traffic for free.
In conclusion: at minimum you will need a STUN server, and at maximum a TURN server as well, or both if you do not know what kind of network configuration your users will have.
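To make that flow concrete, here is a minimal sketch of one side of the negotiation with react-native-webrtc. sendSignal() and onSignal() are hypothetical helpers standing in for whatever signaling transport you pick (WebSocket, Firestore document, etc.); they are not part of the library.

import {
  RTCPeerConnection,
  RTCIceCandidate,
  RTCSessionDescription,
  mediaDevices,
} from 'react-native-webrtc';
import { sendSignal, onSignal } from './signaling'; // hypothetical signaling transport

// One peer connection per device -- the other end lives on the other device,
// unlike the single-page sample where localPC and remotePC share one process.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// Publish local media into the connection (addStream matches the older
// react-native-webrtc API used in the code above).
async function addLocalMedia() {
  const stream = await mediaDevices.getUserMedia({ audio: true, video: true });
  pc.addStream(stream);
}

// Caller side: create an offer and ship it through signaling.
async function makeOffer() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal({ type: 'offer', sdp: pc.localDescription });
}

// Trickle ICE: forward every local candidate to the other peer.
pc.onicecandidate = event => {
  if (event.candidate) {
    sendSignal({ type: 'candidate', candidate: event.candidate });
  }
};

// Handle whatever arrives from the other peer over signaling.
onSignal(async message => {
  if (message.type === 'offer') {
    await pc.setRemoteDescription(new RTCSessionDescription(message.sdp));
    const answer = await pc.createAnswer(); // the callee answers; it does not create a second offer
    await pc.setLocalDescription(answer);
    sendSignal({ type: 'answer', sdp: pc.localDescription });
  } else if (message.type === 'answer') {
    await pc.setRemoteDescription(new RTCSessionDescription(message.sdp));
  } else if (message.type === 'candidate') {
    await pc.addIceCandidate(new RTCIceCandidate(message.candidate));
  }
});

Compared with the code in the question, the key differences are that each device creates only one RTCPeerConnection, the receiving side calls createAnswer() instead of createOffer(), and every description and ICE candidate crosses the network through the signaling channel.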
You can read this blog for more detailed information. I find it very informative.