Record audio from Agora.io live streaming to a file in the user's browser

I would like to record the audio of the whole group call session without using the Agora Recording API.
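If a client-side recording is acceptable, one common approach (not Agora-specific) is to mix every participant's audio track with the Web Audio API and feed the mix into a MediaRecorder. Below is a minimal sketch, assuming you can obtain a MediaStreamTrack for the local user and for each remote user from the SDK; the element names and file name are my own placeholders.

```typescript
// Mix all participants' audio tracks into one stream and record it client-side.
const audioContext = new AudioContext();
const mixDestination = audioContext.createMediaStreamDestination();

// Call this once per participant (local and remote) with their audio track.
function addTrackToMix(track: MediaStreamTrack): void {
  const source = audioContext.createMediaStreamSource(new MediaStream([track]));
  source.connect(mixDestination);
}

// Record the mixed stream and offer the result as a download when stopped.
const recorder = new MediaRecorder(mixDestination.stream);
const chunks: Blob[] = [];
recorder.ondataavailable = (event) => chunks.push(event.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: recorder.mimeType });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "session-audio.webm";
  link.click();
};
recorder.start();
// ... later, call recorder.stop() when the call ends.
```

Note that this records only what this particular browser receives, so it captures audio only while this client remains in the call.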

Related

Can we mute remote audio at source while using Agora Web SDK API?

I am working on an application where the host needs to mute other participants. I have tried the remotestream.muteAudio() API, but this doesn't mute the audio at the source: if the host mutes Participant A, the audio is muted only for the host, and Participant B can still hear Participant A.
I went through the API documentation and couldn't find the solution.
https://docs.agora.io/en/faq/API%20Reference/web/interfaces/agorartc.stream.html#muteaudio
Can this be achieved by any ways?
Hi there, you need to use the Agora RTM SDK to do that.
You can create an RTM channel with the same name as the RTC channel. Alternatively, you can use peer-to-peer messaging (also part of RTM) if that suits your use case better.
The host can then send a message to the user concerned, asking them to mute. On receiving this message, the user calls muteAudio() on their own stream, thereby muting the audio at the source. All of this is done programmatically.
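To make that concrete, here is a rough sketch of the signalling flow with the RTM Web SDK. The App ID, the `MUTE:<uid>` message format, and the shape of the localStream object are my own assumptions, not part of the original answer.

```typescript
import AgoraRTM from "agora-rtm-sdk";

const APP_ID = "<your-agora-app-id>"; // placeholder
const rtmClient = AgoraRTM.createInstance(APP_ID);

// Each participant joins an RTM channel with the same name as the RTC channel
// and listens for a mute command addressed to them.
async function joinSignalling(
  uid: string,
  channelName: string,
  localStream: { muteAudio(): void }
) {
  await rtmClient.login({ uid });
  const channel = rtmClient.createChannel(channelName);
  await channel.join();

  channel.on("ChannelMessage", (message) => {
    if ("text" in message && message.text === `MUTE:${uid}`) {
      localStream.muteAudio(); // mute at the source, so no one hears this user
    }
  });
  return channel;
}

// Host side: ask participant `targetUid` to mute themselves.
async function requestMute(
  channel: { sendMessage(msg: { text: string }): Promise<void> },
  targetUid: string
) {
  await channel.sendMessage({ text: `MUTE:${targetUid}` });
}
```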

Spotify Web API: add a new controllable device to the Spotify devices list

I am trying to set up a new device that can use Spotify functionality over the Spotify Web API.
There is an API call, https://developer.spotify.com/web-api/get-a-users-available-devices/, with which I can get the available devices.
The problem is that there is no call to add a new device to this list. The new device is not my smartphone but an external Spotify-certified speaker.
The native Spotify application adds the external speaker in some unknown way.
I tried to capture, with Wireshark, the packets the speaker sends while using Spotify, but I only see some mDNS broadcasts.
Question:
Is it possible to add the external device via the Spotify Web API? Does anyone know how the Spotify application implements this registration process?
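For reference, reading the current device list over the Web API looks like the sketch below; the access token is a placeholder you would obtain through OAuth with the user-read-playback-state scope. To my knowledge, devices appear in this list by registering through Spotify Connect on the device itself, not through a Web API call.

```typescript
// List the devices Spotify Connect already knows about for the current user.
async function listAvailableDevices(accessToken: string) {
  const response = await fetch("https://api.spotify.com/v1/me/player/devices", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const body = await response.json();
  return body.devices; // array of { id, name, type, is_active, ... }
}
```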

WebRTC won't play GarageBand-manipulated sound after it is redirected with Soundflower; only broken in Chrome

I am writing a web-based app that requires real-time audio manipulation, specifically a pitch shift of the user's voice. For now my prototype uses GarageBand to pitch-shift and Soundflower to redirect the audio as the input audio source in the browser. Then, using WebRTC (the SimpleWebRTC library), I send the user's webcam video and the manipulated audio stream to other browsers. This works great in Firefox, but I have had no luck with Chrome: the video channel is received fine, but the audio is silent. Any ideas?
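One thing worth trying on the Chrome side is to select the Soundflower input explicitly and to disable the browser's audio processing, which can attenuate loopback/virtual devices. A sketch follows; the device label match and the constraint values are assumptions, not a confirmed fix.

```typescript
// Pick the Soundflower virtual input explicitly instead of the default mic.
async function getSoundflowerStream(): Promise<MediaStream> {
  // Device labels are only populated after the user has granted mic access once.
  await navigator.mediaDevices.getUserMedia({ audio: true });
  const devices = await navigator.mediaDevices.enumerateDevices();
  const soundflower = devices.find(
    (d) => d.kind === "audioinput" && d.label.includes("Soundflower")
  );
  if (!soundflower) throw new Error("Soundflower input not found");

  return navigator.mediaDevices.getUserMedia({
    video: true,
    audio: {
      deviceId: { exact: soundflower.deviceId },
      // Browser audio processing is tuned for microphones and can gate out
      // loopback signals; disabling it sometimes helps with virtual devices.
      echoCancellation: false,
      noiseSuppression: false,
      autoGainControl: false,
    },
  });
}
```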

How to start and publish a Hangout On Air via API

I want to have "gimme air" slack command that would:
Create Hangout on Air and make it available for given google domain
Record it
Publish on YouTube as an unlisted video
Retrieve the link
Per this and this question, I can see it's possible to render the button into a webpage, but I'd like to start it automatically.
Is there a way to do that via an API?

Android WebRTC: stream data from a source other than the camera directly to a web browser

Use case: I am using the Android projection API to capture my Android device screen. The output is displayed in a SurfaceView. Next I want to project the SurfaceView data to a web browser using WebRTC.
I have seen lots of examples that use the device camera and stream its output to a web browser. How can I stream video being rendered on a SurfaceView/TextureView to a web browser?
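On the Android side this is normally done by feeding the MediaProjection output into the WebRTC library's own video source (the Android library ships a ScreenCapturerAndroid for this) rather than reading pixels back from the SurfaceView; the browser then receives it like any other remote track. A minimal sketch of the receiving page, assuming offer/answer and ICE signalling are handled elsewhere and that the page contains a #remote-screen video element:

```typescript
// Browser receiver only: signalling (offer/answer and ICE candidate exchange)
// is assumed to be handled elsewhere, e.g. over a WebSocket.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

pc.ontrack = (event) => {
  // Attach the incoming screen-capture stream from the Android peer.
  const video = document.querySelector<HTMLVideoElement>("#remote-screen");
  if (video && event.streams[0]) {
    video.srcObject = event.streams[0];
    void video.play();
  }
};
```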