Integrating RTCMultiConnection with phone and mobile calls - webrtc

I am using the library https://github.com/muaz-khan/RTCMultiConnection to create a web-based audio broadcasting application, like a radio. Now my customer needs to take the audio stream of his phone and mobile calls and broadcast it along with his own audio stream.
So I wonder whether there is any way to do that without depending on other services.
Any ideas would be highly appreciated.
Thanks in advance.

Related

Developing a web chat app using WebRTC: looking for an API for voice changing

I'm developing a web chat app using WebRTC. I want to know whether we can change the user's voice during a live call in WebRTC, and whether there is any API for a live-call voice changer. Let me know, thanks.
What you are looking for is the Insertable Streams API. It allows you to access the media stream and apply transformations to it.
Check out this example, which applies a low-pass filter to the audio track. There's a link to the code at the bottom of the page.
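As a rough illustration (not the linked sample itself), here is a minimal sketch of the "breakout box" flavour of insertable streams; MediaStreamTrackProcessor and MediaStreamTrackGenerator are currently Chromium-only, and the pass-through transform below is where the filter would go:

```js
// Minimal sketch of audio insertable streams ("breakout box"), assuming a
// Chromium-based browser that supports MediaStreamTrackProcessor and
// MediaStreamTrackGenerator. The transform is a pass-through; real DSP
// (e.g. the low-pass filter from the linked sample) would go inside it.
async function makeProcessedStream() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const [track] = stream.getAudioTracks();

  const processor = new MediaStreamTrackProcessor({ track });
  const generator = new MediaStreamTrackGenerator({ kind: "audio" });

  const transformer = new TransformStream({
    transform(audioData, controller) {
      // audioData is a WebCodecs AudioData frame: copy its samples out with
      // audioData.copyTo(...), filter them, and enqueue a new frame here.
      controller.enqueue(audioData); // pass-through in this sketch
    },
  });

  processor.readable.pipeThrough(transformer).pipeTo(generator.writable);

  // The generator is itself an audio MediaStreamTrack.
  return new MediaStream([generator]);
}
```

The generator behaves like any other audio track, so the processed stream can be attached to an RTCPeerConnection in place of the raw microphone track.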

Voice call connection routing React-Native

I am trying to build an application which works this way: I, as a user, want to start a call with another user, and I want the connection to be made at random, so it connects me to one of the many other clients out there. Likewise, when other clients start a call, they should each be connected to another random client, and so on. I want those calls to be made inside an application (such as WhatsApp), not as regular phone calls.
Now, the question is: is Twilio a good approach for this purpose?
If so, can you tell me which of their features would fit my app best?
Thanks for any suggestions!
Twilio developer evangelist here.
Twilio would be a good approach for doing this within your own application. I'd recommend building it with Twilio Video, as it allows cross-platform communication via audio or video (in your case you may not need the video, but it will give you the best audio quality).
As an example, my colleague Dominik built a video roulette application. Its interface was built in JavaScript for the web, but the idea would be the same for a native app. The code for the server-side part of the application should give some insight into how to connect random pairings.
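As a rough sketch of the pairing idea (not Dominik's actual code), a Node.js token endpoint could keep a list of rooms waiting for a second participant and either fill one or open a new one; the /token route and the waitingRooms list are invented for illustration:

```js
// Sketch of random pairing using the Twilio Node helper library.
// See the linked video-roulette repo for a real implementation.
const express = require("express");
const AccessToken = require("twilio").jwt.AccessToken;
const VideoGrant = AccessToken.VideoGrant;

const app = express();
const waitingRooms = []; // room names with exactly one participant waiting

app.get("/token", (req, res) => {
  let room;
  if (waitingRooms.length > 0) {
    room = waitingRooms.pop(); // pair the caller with someone waiting
  } else {
    room = `room-${Date.now()}`; // nobody waiting: open a new room
    waitingRooms.push(room);
  }

  const token = new AccessToken(
    process.env.TWILIO_ACCOUNT_SID,
    process.env.TWILIO_API_KEY,
    process.env.TWILIO_API_SECRET,
    { identity: req.query.identity || `user-${Date.now()}` }
  );
  token.addGrant(new VideoGrant({ room }));

  res.json({ room, token: token.toJwt() });
});

app.listen(3000);
```

Each client fetches a token, connects to the room it was given, and ends up paired with whoever was waiting there.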
It's also possible to integrate Twilio Video with CallKit (iOS) and ConnectionService (Android) so that outbound calls to other devices ring like a real incoming call.

Using pre-recorded audio instead of text-to-speech along with Watson Conversation bot

I built a conversation bot with text-to-speech, but no matter how well I tune it, the voice sounds robotic.
I think it would be simpler to have the conversation bot pick a pre-recorded audio clip and stream it back to the user.
Does anyone see issues with this?
Is there already an example of this so I don't reinvent the wheel?
This functionality needs to be implemented on the client side of the application. Watson Conversation Service can return a text answer together with, for example, an index of the audio recording you want to play.
This index then needs to be picked up by the client application communicating with Watson Conversation Service (e.g. a web page served by Node.js), and the audio recording can be played to the user.
As for examples: the Conversation Service docs link to GitHub projects that integrate Watson Conversation Service with Node.js web applications; these can be extended with the audio recordings and the functionality to play them to the user.
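To make the client-side idea concrete, a browser sketch might look like this; the /api/message relay endpoint, the output.audioIndex field, and the clip file names are all invented for illustration (audioIndex would be set in the dialog node's response JSON on the Watson side):

```js
// Browser-side sketch: play a pre-recorded clip chosen by the bot.
// The endpoint, the audioIndex field, and the clip names are hypothetical.
const clips = ["greeting.mp3", "fallback.mp3", "goodbye.mp3"];

async function sendMessage(text) {
  const res = await fetch("/api/message", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: { text } }),
  });
  const reply = await res.json();

  const index = reply.output && reply.output.audioIndex;
  if (typeof index === "number" && clips[index]) {
    new Audio(clips[index]).play(); // stream the pre-recorded answer
  } else {
    console.log(reply.output.text); // fall back to the text answer
  }
}
```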

UWP Send Image to Whatsapp

I am trying to develop an app that can send an image to WhatsApp, but all the solutions I have seen share text only.
As far as I know, Windows Phone 8.1/Windows 8.1 use the DataTransferManager.
Is there any way to share an image to WhatsApp in UWP, or a better solution?
Thank you.
If you want to use the share contract to send content to WhatsApp, I can recommend this example:
How to share an image
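For reference, a minimal share-contract sketch in a JavaScript (WinJS) UWP app might look like this; the asset path is hypothetical, and WhatsApp will only appear in the share UI if it registers as a share target for images:

```js
// Minimal share-contract sketch, assuming a JavaScript (WinJS) UWP app.
const DataTransfer = Windows.ApplicationModel.DataTransfer;

const dtm = DataTransfer.DataTransferManager.getForCurrentView();
dtm.addEventListener("datarequested", (e) => {
  const request = e.request;
  const deferral = request.getDeferral(); // keep the request alive for async work

  request.data.properties.title = "Shared image";
  Windows.Storage.StorageFile.getFileFromApplicationUriAsync(
    new Windows.Foundation.Uri("ms-appx:///images/photo.png") // hypothetical asset
  ).then((file) => {
    request.data.setStorageItems([file]); // share the image as a storage item
    deferral.complete();
  });
});

// Open the share UI; the handler above fills in the data package.
DataTransfer.DataTransferManager.showShareUI();
```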
As for an API, I have found only this article:
WhatsApp cofounder: Sorry developers, no API for you

Can you obtain the audio stream data sent to the system output device using Core Audio?

Is it possible to obtain a stream of audio data arriving at the system output (speakers, headphones, etc.) using CoreAudio or another framework?
Example: you're listening to a song in iTunes while watching a YouTube video, all while playing a computer game that makes sounds of its own, all of which play through your computer's speakers (probably terribly annoying). My app would need to receive the entire mix as streaming data.
Thanks in advance.
Not at a user application's Core Audio or other app framework level. Some audio output capture/snoop apps may do this with a kernel extension (kext), or perhaps a replacement audio hardware driver.