Is it possible to create an audio visualizer with Agora WebSDK-NG? I'm looking for something similar to:
https://webrtc.github.io/samples/src/content/getusermedia/volume/
or
https://www.cssscript.com/audio-visualizer-with-html5-audio-element/
Thanks for any suggestions.
Yes, this is technically feasible using Agora's WebSDK-NG, because the SDK is built on top of WebRTC.
If you are looking to add this to the local user's interface, look into the documentation for local audio tracks; specifically, you will want to create an audio track locally so you can pass it to the visualizer.
Or, if you want to visualize the audio from a remote stream, you can take the user object (AgoraRTCRemoteUser) and call user.audioTrack to get the audio track.
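A minimal sketch of both paths, based on my reading of the SDK docs (it assumes you already have a joined AgoraRTCClient named client, and runs inside an async function):

import AgoraRTC from "agora-rtc-sdk-ng";

// Local path: create a microphone audio track to feed the visualizer
const localAudioTrack = await AgoraRTC.createMicrophoneAudioTrack();

// Remote path: subscribe to the remote user's audio first, then grab the track
await client.subscribe(user, "audio"); // `user` is the AgoraRTCRemoteUser
const remoteAudioTrack = user.audioTrack;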
Hermes's answer is correct. If you're looking for some template code to experiment with, I'd recommend starting with the basic live demo or looking at any of the other demos. Either way, what's important is that you need to create a local or remote audio track; once you have the track, you can create a MediaStream object and add the track to it as follows:
const audioContext = new AudioContext(); // set up an audio context first (Web Audio API)
const audioStream = new MediaStream();
audioStream.addTrack(remoteAudioTrack._mediaStreamTrack); // remote or local; getMediaStreamTrack() is the public accessor
const mediaSource = audioContext.createMediaStreamSource(audioStream);
const analyser = audioContext.createAnalyser();
mediaSource.connect(analyser);
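From there, a minimal sketch of sampling the analyser on every animation frame; the actual drawing is left as a stub, since it depends on your UI:

const data = new Uint8Array(analyser.frequencyBinCount);
function draw() {
  requestAnimationFrame(draw);
  analyser.getByteFrequencyData(data); // fills `data` with the current spectrum
  // render `data` to a canvas, CSS bars, etc.
}
draw();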
There's more to do here, but this should help you get started. If you're unfamiliar with the Web Audio API, I'd recommend starting with this video. The MDN web docs also provide all the information you need, plus some demos.
Good luck
I'm developing a web chat app using WebRTC. I want to know whether we can change the user's voice during a live call in WebRTC, and whether there is any API for a live-call voice changer. Let me know, thanks.
What you're looking for is the Insertable Streams API. It allows you to access the media stream and apply transformations to it.
Check out this example, which applies a low-pass filter to the audio track. There's a link to the code at the bottom of the page.
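If you want to experiment, here is a rough sketch of the breakout-box flavour of insertable streams (MediaStreamTrackProcessor / MediaStreamTrackGenerator, Chromium-only at the time of writing); the transform callback is where a voice effect would go:

// `micTrack` is an audio MediaStreamTrack from getUserMedia
const processor = new MediaStreamTrackProcessor({ track: micTrack });
const generator = new MediaStreamTrackGenerator({ kind: 'audio' });
const transformer = new TransformStream({
  transform(audioData, controller) {
    // inspect or modify the AudioData samples here (pitch shift, filter, etc.)
    controller.enqueue(audioData);
  }
});
processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
// `generator` is itself a MediaStreamTrack: send it over your peer connection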
I'm using Ant Media Server for streaming. My use case requires me to record the Live Streams as VODs so the users can access the content later as well.
As with the live streams, I want to apply adaptive bitrate settings to the VODs as well, so that users can get a resolution suited to their network.
I can't find any built-in solution for this yet. Can you please suggest how I can do this?
I'm using S3 to store the recordings.
Thanks.
Thank you for the question. As far as I understand, the live streams are recorded as VoD files.
I think the most efficient way is to do this through HLS. This way, the VoD files are recorded as HLS and multiple bitrates are available. There is no need to transcode again, and the recording can be played directly. Let me explain this solution step by step.
1. Set the HLS playlist type to event and settings.deleteHLSFilesOnEnded to false. Edit the red5-web.properties file for your application and set the following settings:
settings.hlsPlayListType=event
settings.deleteHLSFilesOnEnded=false
2. Restart the server:
sudo service antmedia restart
3. Add adaptive bitrates on the web panel.
4. Start live streaming and let Ant Media Server create the HLS (m3u8 and ts) files for each bitrate.
5. Stop live streaming.
Then you can watch the stream by pointing your player at the master m3u8 file, which is {STREAM_ID}_adaptive.m3u8. It can even be played directly by the embedded player, even when the stream is no longer live.
For more information, take a look at this wiki page about HLS playing.
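As a concrete illustration, here is a minimal hls.js snippet for playing the recorded adaptive VoD; the URL pattern below is an assumption, so adjust the host, port, and application name to your own deployment:

const video = document.querySelector('video');
if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource('https://YOUR_SERVER:5443/WebRTCApp/streams/STREAM_ID_adaptive.m3u8'); // assumed URL pattern
  hls.attachMedia(video);
}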
Please let me know if this approach helps you.
I want to get the live stream from a DJI Spark to my PC, because I want to run a deep learning model to do object detection in real time. Is there any way to get the live stream by using RTMP or RTP servers? I found that by using the Tellopy module in Python we can get the live feed from the DJI Tello.
If you mean built into the SDK, then the answer is no, but you could decode the video stream yourself and package it up accordingly, based on the needs of the server you wish to communicate with.
A few have done this work; the GO app has done it in order to support the various streaming sites, so it's known to be possible. It might take some work, though.
Best of luck!
Is it possible, with the Programmable Video API from Twilio, to build something that resembles Google Hangouts' behaviour of automatically focusing on the person talking?
I don't see any examples or notes about this in their documentation, and the GitHub repo for this doesn't seem to be frequented that much.
Would appreciate any help, thank you!
Twilio developer evangelist here.
You can build that sort of thing; however, it is currently out of scope for the Video SDK itself.
I haven't done this before, but I'd start by looking into analysing the audio coming from each participant in the chat. You can actually create audio sources from an existing <video> or <audio> element. In the case of Twilio Video, each track is created as a separate element, so you want to look for <audio> elements and use them:
var audioCtx = new AudioContext(); // one shared context is enough
var audioElements = document.querySelectorAll('audio');
audioElements.forEach(audio => {
  var source = audioCtx.createMediaElementSource(audio);
  var analyser = audioCtx.createAnalyser();
  source.connect(analyser); // use the analyser to measure this participant's volume
  analyser.connect(audioCtx.destination); // keep the audio audible
});
You then want to use the Web Audio API to analyse all the remote tracks, work out which one is making the most noise for a sustained period of time, and switch to that one. This blog post may help with the analysis. I've not seen anything that will help with the selection, but hopefully you can work it out from there.
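For instance, here is a rough sketch of polling each participant's analyser and focusing the loudest one (analysers and focusOn below are hypothetical names you'd wire up yourself):

// Hypothetical: `analysers` is an array of { analyser, participant } built in the loop above
function getVolume(analyser) {
  var data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(data);
  return data.reduce((sum, v) => sum + v, 0) / data.length; // rough volume measure
}

setInterval(() => {
  var loudest = null;
  var max = 0;
  analysers.forEach(({ analyser, participant }) => {
    var volume = getVolume(analyser);
    if (volume > max) { max = volume; loudest = participant; }
  });
  if (loudest) focusOn(loudest); // focusOn is your own UI-switching function
}, 500);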
Let me know if this helps at all.
I want to create an application capable of playing the audio of YouTube videos and also saving the downloaded content in a local cache, so that when the user decides to resume or replay a video, the app doesn't have to download part of the video again, only the remaining part. (The user can then decide what to do with the cache, and how to organize it.)
This is especially convenient on mobile (my main focus), but I'd like to create a desktop version too for experimental purposes.
So, my question is: does YouTube provide any API for this? In order to cache the downloaded content, my application needs to download the content itself rather than rely on any embedded player (remember that it is a native application). I have a third-party application on my Android system that plays YouTube videos, so I think it's possible, unless the developers used some sort of hack; that's what I don't know.
Don't confuse this with the web gdata info API or the embed API; that's not what I want. What I want is to handle the video transfer myself.
As far as I know, there is no official API for that. However, you could use libquvi to look up the URLs of the real video data, or you could have a look at how they do it and reimplement it yourself (see here).