Play audio stream using WebAudio API - webkit

I have a client/server audio synthesizer where the server (java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior will be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.), and currently it seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different than what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage that I would be most thrilled to know about it!
Thanks in advance,
Joe

How are you getting on? Did you resolve this question? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming input audio data, and Node.js on the server side, using WebSockets as the middleware to send the browser streaming PCM buffers.
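For what it's worth, here is a minimal browser-side sketch of that approach, assuming the server pushes raw mono 32-bit float PCM at a known sample rate over a WebSocket; the endpoint URL, sample rate and chunk format are placeholders, not anything your server actually exposes:

// Play raw PCM chunks arriving over a WebSocket with the Web Audio API.
// Assumes mono 32-bit float samples at SAMPLE_RATE; adjust to match the server.
const SAMPLE_RATE = 44100;                            // assumption: must match the generator
const ctx = new AudioContext();
const ws = new WebSocket("ws://localhost:8080/pcm");  // hypothetical endpoint
ws.binaryType = "arraybuffer";

let playHead = ctx.currentTime;                       // where the next chunk should start

ws.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  const samples = new Float32Array(event.data);
  if (samples.length === 0) return;

  const buffer = ctx.createBuffer(1, samples.length, SAMPLE_RATE);
  buffer.copyToChannel(samples, 0);

  const node = ctx.createBufferSource();
  node.buffer = buffer;
  node.connect(ctx.destination);

  // Schedule chunks back to back, with a small safety margin at the start
  // so a late packet doesn't cause an immediate underrun.
  playHead = Math.max(playHead, ctx.currentTime + 0.05);
  node.start(playHead);
  playHead += buffer.duration;
};

The running playHead is the important part; starting each buffer at ctx.currentTime directly would produce audible gaps between chunks.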

Related

Transfer perframe metadata from webrtc client to browser

I know this question has already been asked a number of times, long ago, but it has always remained unanswered.
I have a WebRTC client which transmits a stream through a server (Flashphoner) to the browser. I need a way to mark specific frames with a 4-byte label on the client side and parse this label in the browser using JS code.
Another theoretical option is to add textual/QR-code watermarks and parse them on the browser side using some OCR or QR-parser library. The problem is that I don't know how it is possible to access decoded frame data on the browser side. Any suggestions?
While something like this hasn't been possible in the past, the WebRTC Insertable Streams / Encoded Transform API (specification) allows it, although browser support varies.
https://webrtc.github.io/samples/src/content/insertable-streams/endtoend-encryption/ shows a sample that applies a trivial XOR encryption and, more importantly for your use case, adds a four-byte checksum to each frame.
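As a rough sketch of the sender side, using the Chromium createEncodedStreams() flavour of the API (the newer RTCRtpScriptTransform variant does the same work in a worker); the 4-byte label value and the track setup are purely illustrative:

// Append a 4-byte label to every encoded video frame before it is sent.
// Works only in Chromium-based browsers, and only if the peer connection
// is created with encodedInsertableStreams: true.
async function sendWithLabel(): Promise<void> {
  const pc = new RTCPeerConnection({ encodedInsertableStreams: true } as any);
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const sender = pc.addTrack(stream.getVideoTracks()[0], stream);

  const { readable, writable } = (sender as any).createEncodedStreams();
  const label = new Uint8Array([0x01, 0x02, 0x03, 0x04]);   // hypothetical marker

  const appendLabel = new TransformStream({
    transform(frame: any, controller) {
      const payload = new Uint8Array(frame.data);
      const extended = new Uint8Array(payload.length + label.length);
      extended.set(payload, 0);
      extended.set(label, payload.length);        // tack the label onto the end
      frame.data = extended.buffer;
      controller.enqueue(frame);
    },
  });

  readable.pipeThrough(appendLabel).pipeTo(writable);
  // ...signalling / offer-answer as usual...
}

On the receiving page you do the mirror image with receiver.createEncodedStreams(): read the last four bytes of each frame, act on them, and strip them before the frame reaches the decoder.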

Playing a Live stream from media server on android application

My setup is as follows:
OBS Studio to create the video feed
Ant Media Server to distribute the stream
Now I'm building an app that will display this stream and I'm currently using ExoPlayer; however, I'm having a hard time getting it to work for both RTMP and HLS. I read somewhere that I could embed a web player in my app instead; would that be easier? Here is my code for ExoPlayer:
// RTMP URL served by Ant Media Server
String url = "rtmp://192.168.1.244/WebRTCApp/379358104902020985845622";

// Bandwidth meter (declared here but not wired into the player in this snippet)
BandwidthMeter bandwidthMeter = new DefaultBandwidthMeter();

// Track selection
TrackSelection.Factory videoTrackSelectionFactory = new AdaptiveTrackSelection.Factory();
TrackSelector trackSelector = new DefaultTrackSelector(videoTrackSelectionFactory);

// Player and view
SimpleExoPlayer player = ExoPlayerFactory.newSimpleInstance(this, trackSelector);
PlayerView playerView = findViewById(R.id.simple_player);
playerView.setPlayer(player);

// Create the RTMP data source, wrap it in a media source and start playback
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory();
MediaSource videoSource = new ExtractorMediaSource.Factory(rtmpDataSourceFactory)
        .createMediaSource(Uri.parse(url));
player.prepare(videoSource);
player.setPlayWhenReady(true);
Any help on this would be much appreciated.
Most online video streaming services use Adaptive Bit Rate (ABR) streaming protocols to deliver the video, mainly HLS and DASH these days.
Most media players, like ExoPlayer, support these protocols well, although they are complex and evolving protocols, so there are always edge cases.
Many video conferencing applications use WebRTC which is a real time optimised protocol - the usual approach is to use a WebRTC client for this type of stream.
The difference between the two approaches from a streaming latency point of view, at a very high level, is:
ABR protocols prioritise quality and avoiding interruptions, and buffer enough of the video to try to guarantee uninterrupted playback. They are usually aimed at movie and live video streaming services. Even for low-latency implementations, the latency is measured in multiple seconds or more.
WebRTC prioritises latency and sacrifices quality if necessary. It is typically aimed at real-time-sensitive applications like video conferencing, where it is important not to fall behind the discussion even if that means a temporary video glitch or a brief interruption. Latency is usually sub-second.
Ant Media Server comes from the WebRTC side, although recent versions also support HLS/CMAF and Low Latency DASH (these are still generally higher latency than WebRTC, as noted above).
For your service, if you are able to use a DASH or HLS stream, you may find it an easier path with ExoPlayer. If you look at the demo app, for example, you will see DASH and HLS streams but no RTMP ones. You can easily extend or modify the demo app to play your own HLS or DASH stream, and this is often an easy way to start - look at the sample material in assets/media.exolist.json and add your own URL:
https://github.com/google/ExoPlayer/blob/aeb306a164911aa1491b46c2db4da0d329c83c65/docs/demo-application.md
However, ExoPlayer should also support RTMP via an extension if this is your preferred route - there is a specific extension that allows this:
https://github.com/google/ExoPlayer/blob/0ba317b1337eaa789f05dd6c5241246478a3d1e5/extensions/rtmp/README.md
In theory you simply need to add this dependency to your application:
if your application is using DefaultDataSource or DefaultDataSourceFactory, adding support for RTMP streams is as simple as adding a dependency to the RTMP extension
It would be worth checking the issues list in this repository for any recent issues and/or workarounds.

Is it possible to stream the output of an ffmpeg command to a client with dot net core?

I'm trying to take two videos and transform them with ffmpeg into a single video. It works great if you take the two videos, run them through ffmpeg and then serve that file up via an API. Unfortunately the upper range for these videos is ~20 minutes, and this method takes too long to create the full video (~30 seconds w/ ultrafast).
I had an idea to stream the output of the ffmpeg command to the client which would eliminate the need to wait for ffmpeg to create the whole video. I've tried to proof this out myself and haven't had much success. It could be my inexperience with streams, or this could be impossible.
Does anyone know if my idea to stream the in-progress output of ffmpeg is possible / feasible?
You should check out Hangfire. I used it for running the process in the background, and if you need a notification, SignalR will help you.
What do you mean by "streaming"? Serving the result of your command to an HTTP client on the fly? Or is your client some video player that plays the video (like a VLC player receiving a TCP stream of 4 IP cameras)?
Dealing with video isn't a simple task, and you need to choose your protocols, tools and even hardware carefully.
Based on the command that you sent as an example, you probably need some jobs that convert your videos.
Here's a complete article on how to use Azure Batch to process videos using ffmpeg. You can use any batching solution if you want (another answer suggests Hangfire, and that's OK too).
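That said, the in-progress streaming idea from the question is mechanically possible: pipe ffmpeg's stdout straight into the HTTP response instead of waiting for an output file. A minimal sketch of the mechanism, shown in Node.js purely for brevity (in .NET Core the rough equivalent is copying Process.StandardOutput.BaseStream into the response body); all paths, filter arguments and the port are placeholders:

// Pipe ffmpeg's stdout straight into an HTTP response so the client starts
// receiving video while the concat is still running.
import { spawn } from "node:child_process";
import { createServer } from "node:http";

createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "video/mp4" });

  const ffmpeg = spawn("ffmpeg", [
    "-i", "clip1.mp4",
    "-i", "clip2.mp4",
    "-filter_complex", "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]",
    "-map", "[v]", "-map", "[a]",
    // Plain MP4 can't be played while it is being written; fragmented MP4 can.
    "-movflags", "frag_keyframe+empty_moov",
    "-f", "mp4",
    "pipe:1",                                     // write to stdout, not to a file
  ]);

  ffmpeg.stdout.pipe(res);                        // bytes flow out as they are produced
  req.on("close", () => ffmpeg.kill("SIGKILL"));  // stop encoding if the client leaves
}).listen(8080);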

Server side real time analysis of video streamed from client

I'm trying to build a system for real-time analysis on server for video streamed from the client using WebRTC.
Here is what I currently have in mind. I would capture the webcam video stream from the client and send it (compressed using H.264?) to my server.
On my server, I would receive the stream and pass every raw frame to my C++ library for analysis.
The output of the analysis (box coordinates to draw) would then be sent back to the client via WebRTC or a separate WebSocket connection.
I've been looking online and found open-source media servers like Kurento and Mediasoup, but since I only need to read the stream (no dispatch to other clients), do I really need to use an existing server? Or could I build it myself, and if so, where do I start?
I'm fairly new to the WebRTC and video streaming world in general so I was wondering, does this whole thing sound right to you?
That depends on how real-time your requirements are. If you want 30-60 fps and near real-time, getting the images to the server via RTP is the best solution. Then you'll need things like a jitter buffer, depacketization, video decoders, and so on.
If you require only one image per second, grabbing it from the canvas and sending it via Websockets or HTTP POST is easier. https://webrtchacks.com/webrtc-cv-tensorflow/ shows how to do that in Python.
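For the second route, the client side amounts to drawing the webcam into a canvas on a timer and uploading each frame. A minimal sketch, where the /analyze endpoint, the JPEG quality and the one-second interval are all placeholder choices:

// Grab one webcam frame per second, encode it as JPEG and POST it to the server,
// which replies with the box coordinates to draw.
async function startFrameUpload(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    canvas.toBlob(async (blob) => {
      if (!blob) return;
      const res = await fetch("/analyze", { method: "POST", body: blob });
      const boxes = await res.json();             // e.g. [{ x, y, w, h }, ...]
      console.log(boxes);                         // draw them over the video here
    }, "image/jpeg", 0.8);
  }, 1000);
}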

Streaming API vs Rest API?

The canonical example here is Twitter's API. I understand conceptually how the REST API works; essentially it's just a query to their server for your particular request, to which you then receive a response (JSON, XML, etc.), great.
However, I'm not exactly sure how a streaming API works behind the scenes. I understand how to consume it. For example, with Twitter you listen for a response; from the response you listen for data, in which the tweets come in chunks. You build up the chunks in a string buffer and wait for a line feed, which signifies the end of a tweet. But what are they doing to make this work?
Let's say I had a bunch of data and I wanted to setup a streaming API locally for other people on the net to consume (just like Twitter). How is this done, what technologies? Is this something Node JS could handle? I'm just trying to wrap my head around what they are doing to make this thing work.
Twitter's streaming API is essentially a long-running request that's left open; data is pushed into it as and when it becomes available.
The repercussion of that is that the server will have to be able to deal with lots of concurrent open HTTP connections (one per client). A lot of existing servers don't manage that well, for example Java servlet engines assign one Thread per request which can (a) get quite expensive and (b) quickly hits the normal max-threads setting and prevents subsequent connections.
As you guessed the Node.js model fits the idea of a streaming connection much better than say a servlet model does. Both requests and responses are exposed as streams in Node.js, but don't occupy an entire thread or process, which means that you could continue pushing data into the stream for as long as it remained open without tying up excessive resources (although this is subjective). In theory you could have a lot of concurrent open responses connected to a single process and only write to each one when necessary.
If you haven't looked at it already the HTTP docs for Node.js might be useful.
I'd also take a look at technoweenie's Twitter client to see what the consumer end of that API looks like with Node.js, the stream() function in particular.
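To make that concrete, here is a minimal sketch of the producer side in Node.js: one long-lived HTTP response per client, with each new record written to every open response and terminated with a line feed, mirroring the chunk-plus-newline framing described in the question. The port and the fake record generator are placeholders:

// Keep one open HTTP response per consumer and push newline-delimited JSON
// records into it as they become available; the connection is never closed.
import { createServer, ServerResponse } from "node:http";

const clients = new Set<ServerResponse>();

createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" }); // no Content-Length, so Node chunks the body
  clients.add(res);
  req.on("close", () => clients.delete(res));                 // drop consumers that disconnect
}).listen(8080);

// Stand-in for a real event source: emit one record per second to every client.
setInterval(() => {
  const record = JSON.stringify({ text: "hello", at: Date.now() }) + "\n";
  for (const res of clients) res.write(record);
}, 1000);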