Agora.io broadcast speaker video/audio to another channel(s) - react-native

We are developing the following scenario and, for some reason, can only get the speakers' audio to stream. Can someone advise on how to implement this, or whether it's even possible with the Agora React Native SDK:
I just wanted to check that what we are doing is possible in Agora. So to clarify we have:
Channel 1: speaker 1, maybe speaker 2 - they chat with each other as normal
Channel 2: audience 1, audience 2 - they video chat with each other as normal, and also receive speaker 1's and speaker 2's video and audio from Channel 1 (the speakers cannot communicate with the Channel 2 audience)
Channel 3+: same as Channel 2 but with different audience members.
Basically, the speakers can be seen and heard by the audience, but they cannot hear the audience. At the moment we have this working, but the speakers' video doesn't show to the audience, only the sound.
cheers
Mike

What you described is possible using the multi-channel features of the SDK. The only limitation with multi-channel is that you can only publish your local video to one channel at a time; you can, however, subscribe to multiple videos (from multiple channels).
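A rough sketch of how that could look with the react-native-agora v4 `joinChannelEx` API. The option field names (`publishCameraTrack`, `publishMicrophoneTrack`, `autoSubscribeAudio`, `autoSubscribeVideo`) are from my reading of the SDK and may need adjusting for your version; the wiring function and its argument names are illustrative, not from the thread:

```javascript
// Media options for the channel a user actively publishes into.
function publishOptions() {
  return {
    publishCameraTrack: true,
    publishMicrophoneTrack: true,
    autoSubscribeAudio: true,
    autoSubscribeVideo: true,
  };
}

// Media options for a channel we only watch/listen to (the speaker channel).
function subscribeOnlyOptions() {
  return {
    publishCameraTrack: false,
    publishMicrophoneTrack: false,
    autoSubscribeAudio: true,
    autoSubscribeVideo: true,
  };
}

// Hypothetical wiring for an audience member; `engine` would come from
// createAgoraRtcEngine() in react-native-agora.
function joinAsAudience(engine, uid, audienceChannel, speakerChannel, token) {
  // Publish camera/mic into the audience member's own channel...
  engine.joinChannelEx(
    token,
    { channelId: audienceChannel, localUid: uid },
    publishOptions()
  );
  // ...and join the speaker channel subscribe-only, so the speakers can be
  // seen and heard but never receive anything from the audience.
  engine.joinChannelEx(
    token,
    { channelId: speakerChannel, localUid: uid },
    subscribeOnlyOptions()
  );
}
```

With this shape, the speakers only ever join Channel 1 and publish normally; the "can't hear the audience" requirement falls out of the audience never publishing into Channel 1.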

Related

WEBRTC + PEERJS - Stream from 2 cameras same time from one user

Hi, I currently have a web app where users can stream video from their cameras to each other, but I have a question:
Is it possible for one user to stream from two cameras at the same time and send both streams to another user?
I'm using JavaScript with peer.js, node.js, socket.io and express.
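One way this can work (a sketch, not from the thread): enumerate the video input devices, open a separate getUserMedia stream per camera via a `deviceId` constraint, and place one PeerJS call per stream. `remotePeerId` and the helper names are placeholders; the browser APIs (`enumerateDevices`, `getUserMedia`) and `peer.call` are standard:

```javascript
// Pure helper: choose up to two distinct video input devices from an
// enumerateDevices() result.
function pickTwoCameras(devices) {
  return devices
    .filter((d) => d.kind === 'videoinput')
    .slice(0, 2)
    .map((d) => d.deviceId);
}

// Browser-side wiring (needs a page context; not runnable in plain Node).
async function callWithTwoCameras(peer, remotePeerId) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameraIds = pickTwoCameras(devices);
  for (const deviceId of cameraIds) {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { deviceId: { exact: deviceId } },
    });
    // PeerJS supports multiple independent calls to the same peer id,
    // one per stream; the callee answers each `call` event separately.
    peer.call(remotePeerId, stream);
  }
}
```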

Using pre-recorded audio instead of text-to-speech along with Watson Conversation bot

I built a conversation bot with text-to-speech, but no matter how well I tune it, the voice sounds robotic.
I think it would be simpler to have the conversation bot pick a pre-recorded audio and stream it back to the user.
Does anyone see issues with this?
Is there already an example of this so I don't reinvent the wheel?
This functionality needs to be implemented on the client side of the application. Watson Conversation Service can return a text answer and for example an index of the audio record you want to play.
This index then needs to be picked up by the client application communicating with Watson Conversation Service (e.g. a web page in node.js) and the audio record can be played to the user.
As for examples: the Conversation Service docs link to GitHub projects that integrate Watson Conversation Service with Node.js web applications. These can be extended by adding the audio records and the functionality that plays those records to the user.
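The client-side lookup the answer describes could be as small as this sketch. The field name `output.audio_index` and the clip paths are made up for illustration; the only assumption from the answer is that the service returns some index alongside the text:

```javascript
// Pre-recorded clips, indexed by the value the bot returns.
const AUDIO_CLIPS = [
  '/audio/greeting.mp3',
  '/audio/ask_account_number.mp3',
  '/audio/goodbye.mp3',
];

// Resolve a Conversation response to a playable clip, falling back to
// null so the caller can use text-to-speech when no clip is defined.
function clipForResponse(response) {
  const index = response.output && response.output.audio_index;
  if (Number.isInteger(index) && index >= 0 && index < AUDIO_CLIPS.length) {
    return AUDIO_CLIPS[index];
  }
  return null;
}
```

In a browser client the returned path would then be handed to an audio element, e.g. `new Audio(path).play()`.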

WebRTC Video & PSTN integration

This is a broad question - are there any solutions for WebRTC video and PSTN integration? The requirements are:
Multi-party WebRTC video conference (SFU or MCU, not peer-to-peer)
Ability to join the conference via PSTN endpoints (telephones) - obviously with audio-only fallback
Prefer a paid service (like TokBox or Twilio) rather than a roll-your-own solution
We are currently using TokBox; however, it does not provide PSTN integration. Since the call signalling is entirely hidden under the TokBox API, it seems unlikely that we could add (some kind of) WebRTC-to-PSTN gateway and make it work. Twilio has a video offering, but it is in its infancy right now (peer-to-peer only, seemingly with a limit of 4 participants).
Since we are a web-app company and not an infrastructure company, I'd prefer a solution that handles the infrastructure part (like TokBox and Twilio do), but I am open to other solutions as well, if that's what it'll take.
Thank you.
Avinash,
Twilio Video does not currently support PSTN integration, and there is currently a 4-participant limit for video chat.
The product is still in beta and constantly evolving, so I'd suggest this group to you for keeping up with updates.

How to implement group video chat using vLine API

I am new to WebRTC and the vLine API. I need to implement group video chat, just like Google Hangouts, using the vLine API and PHP. Please suggest any available examples or solutions, or how to go about it.
Thanks
Tanjum
We have a PHP example on GitHub: https://github.com/vline/vline-php-example (More examples available here).
With the current API you can implement group chat with a mesh topology, where each user establishes a connection with every other user (using person.startMedia() on each person you want to include in the conference). Due to bandwidth and CPU constraints, this won't scale well beyond four people or so.
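The scaling limit above is easy to see in numbers: in a mesh, each client encodes and uploads a stream to each of the n - 1 others and decodes n - 1 incoming streams, and the conference as a whole carries n(n - 1)/2 pairwise connections. A small sketch (the `connectMesh` loop is hypothetical wiring around the `person.startMedia()` call mentioned above):

```javascript
// Per-client cost in a mesh conference of n participants.
function meshStreamsPerClient(n) {
  return { uplinks: n - 1, downlinks: n - 1 };
}

// Total pairwise connections in the whole conference.
function meshConnections(n) {
  return (n * (n - 1)) / 2;
}

// Hypothetical setup loop: start media with every other person in the roster.
function connectMesh(people) {
  people.forEach((person) => person.startMedia());
}
```

At n = 4 each client already handles 3 uplinks and 3 downlinks (6 connections overall); at n = 8 it is 7 each (28 overall), which is why mesh falls over quickly without an SFU/MCU in the middle.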
We have a better conferencing solution in development (that won the Best Conferencing Award at the WebRTC Expo), but it's currently only available to a select group of beta testers.

One to many video Audio conferencing - webrtc - openTok

I searched Google for this but could not find a suitable answer, so I am posting here for help.
I want to implement video streaming with multiple participants connected. While googling this topic I found that WebRTC provides similar functionality, but I want to make sure WebRTC can support all my requirements.
I want to build an application that supports a large number of participants in a conference (around 10,000).
I want one participant to broadcast their video and audio streams while the others just watch and listen to that stream.
Also, when prompted, only one participant at a time will be able to communicate with the broadcaster, managed by one participant (an administrator). The administrator will decide who can communicate with the broadcaster.
Is the same possible with any other web API? I found OpenTok, but I am not confident whether it provides any moderation features (i.e. the feature of having an administrator who manages things).
Has anybody worked on a similar concept, or does anyone have any related information?
Let me know if I am not clear or if any further details are required.
Any help would be useful,
Thanks in anticipation
Hardik - I am Product Manager at TokBox, the makers of the OpenTok platform. Good news: TokBox can fulfill virtually all of your requirements, but with a few caveats.
TokBox has been building a video chat/conferencing platform for years, long before WebRTC even existed in fact. In that time we have supported many customers with almost your exact requirements on OpenTok, a platform that is based on Flash (Major League Baseball is one such customer). Building applications on this architecture has the added advantage of solving virtually all of the interop issues that exist when connecting people using different devices and browsers. It is based on Flash however, which technically doesn't meet your WebRTC requirement. So you know, there's that.
WebRTC is where it's at though, which is why we created OpenTok for WebRTC in 2012. It was a complete rewrite of the platform that not only provides higher quality video, but also gives developers more hooks and far more control over how exactly they integrate video and audio chat into their primary customer experience.
Currently in beta (as of this writing in June 2013) are two new components in our WebRTC infrastructure. The first we refer to as Mantis, which solves many of the challenges associated with hosting large multi-party calls. The other is Cloud Raptor, which gives developers access to a stream of events stemming from a WebRTC session, and through which developers can issue events and commands of their own. Raptor is what enables you for example to moderate calls, boot participants, and control whose audio and video streams are broadcast to all the other participants.
So, TokBox has what you need. In the short term we can help you get up and running using OpenTok pretty quickly. Then we can discuss with you how to get you onto OpenTok for WebRTC and into our Mantis and Raptor beta program.