How to stream raw data from web audio into webrtc data channel - webrtc

I use getUserMedia to get the audio stream, and pass the stream into Web Audio using createMediaStreamSource. I then want to stream the raw audio data into a WebRTC data channel.
There isn't a data channel destination node in Web Audio. I've only been able to access the raw audio data from inside an audio worklet, but I don't know how to get that data into a data channel. How should I go about streaming raw audio from getUserMedia into a data channel?
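One way to bridge that gap, sketched below as a rough, untested example (the processor name raw-capture-processor and the file name raw-capture-processor.js are placeholders I made up): have an AudioWorkletProcessor post each 128-sample block back to the main thread over its MessagePort, then forward those buffers over the RTCDataChannel.

// raw-capture-processor.js — runs on the audio rendering thread
class RawCaptureProcessor extends AudioWorkletProcessor {
  process(inputs) {
    const channel = inputs[0][0]; // first input, first channel: a 128-sample Float32Array
    if (channel) {
      const copy = new Float32Array(channel); // copy, then transfer the buffer to the main thread
      this.port.postMessage(copy.buffer, [copy.buffer]);
    }
    return true; // keep the processor alive
  }
}
registerProcessor('raw-capture-processor', RawCaptureProcessor);

// main thread — wire getUserMedia -> worklet -> data channel
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('raw-capture-processor.js');

const source = ctx.createMediaStreamSource(stream);
const capture = new AudioWorkletNode(ctx, 'raw-capture-processor');
source.connect(capture);

const pc = new RTCPeerConnection(); // signaling omitted
const channel = pc.createDataChannel('raw-audio');

capture.port.onmessage = (event) => {
  // event.data is an ArrayBuffer of 32-bit float PCM samples
  if (channel.readyState === 'open') {
    channel.send(event.data);
  }
};

Note that this sends uncompressed float PCM, so you will likely want to downsample or encode the blocks, and batch them, before pushing them through the data channel.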

Related

Can isochronous streaming and control transfers be used simultaneously over USB-OTG without any data corruption or delay in the video stream?

Here, the control transfers are used for pausing video and starting video recording. We are using an iMX8 Mini eval board for streaming video to an Android phone via USB-OTG. We would like to know whether the video stream is affected by commands sent over the same USB-OTG connection.

rtmp vod, how to play multiple video files (mp4 or flv) continuously using a simple rtmp client

As the title says, are there any methods I can use to play multiple videos continuously using a simple RTMP client (my RTMP server is Wowza)? Here is the approach I am thinking of:
When the first video is about to finish, open a new thread to send a new createStream command and a new play command, get the video's RTMP packets, and put them into a buffer list; when the first video is finished, play the RTMP packets from the buffer list.
Can this approach work, or are there other recommended methods to achieve it? Any suggestions will be appreciated, thanks!
While the functionality is not built in, Wowza does have a module called StreamPublisher that lets you implement a server-side playlist. The source code for the module is available on GitHub. A scheduled playlist of multiple VOD files is streamed as a live stream, similar to a TV broadcast.

WebRTC - Reduce streamed audio volume

Suppose we get an audio stream using the getUserMedia (gUM) API. I am broadcasting this stream to other users using WebRTC.
Is it possible to reduce the volume of the audio being streamed?
Note: I am not looking to reduce the device mic volume, because I understand we cannot control that through the browser, and I don't want to.
Try a GainNode from Web Audio. But the best approach would be to do this on the receiving end.
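A minimal sketch of the GainNode approach (the 0.5 gain value and the variable names are just illustrative): route the microphone stream through Web Audio, attenuate it, and hand the attenuated stream to the peer connection instead of the raw one.

const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
const ctx = new AudioContext();

const source = ctx.createMediaStreamSource(micStream);
const gain = ctx.createGain();
gain.gain.value = 0.5; // attenuate to half volume
const destination = ctx.createMediaStreamDestination();

source.connect(gain);
gain.connect(destination);

// Broadcast the attenuated stream instead of the original mic stream
const pc = new RTCPeerConnection(); // signaling omitted
destination.stream.getAudioTracks().forEach(track => pc.addTrack(track, destination.stream));

On the receiving end, the simpler option is usually to set the volume property of the audio element that plays the remote stream.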

Is there a way to send two audio streams instead of a video and audio stream in webRTC?

I am just concerned with audio, and I'd like to send two audio streams to be synced on the receiver's side.
Now, for audio and video, the local stream can be obtained via
getUserMedia({'audio':true, 'video':constraints}, onUserMediaSuccess,
onUserMediaError);
Assuming I have two microphones, how do I get access to the two audio streams, and further, synchronize them at the receiver?
I think it will be possible soon.
var audio1 = new AudioStreamTrack(constraints1);
var audio2 = new AudioStreamTrack(constraints2);
var stream = new MediaStream([audio1, audio2]);
navigator.getUserMedia(stream, successCb, errorCb);
You'll be able to take both audio tracks and combine them into a single stream.
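For what it's worth, the modern promise-based API already gets close to this; here is a rough sketch, assuming you have looked up the two microphones' device IDs via enumerateDevices() (mic1Id and mic2Id are placeholders):

const stream1 = await navigator.mediaDevices.getUserMedia({ audio: { deviceId: { exact: mic1Id } } });
const stream2 = await navigator.mediaDevices.getUserMedia({ audio: { deviceId: { exact: mic2Id } } });

// Combine both audio tracks into a single MediaStream
const combined = new MediaStream([
  stream1.getAudioTracks()[0],
  stream2.getAudioTracks()[0],
]);

const pc = new RTCPeerConnection(); // signaling omitted
combined.getTracks().forEach(track => pc.addTrack(track, combined));

Each track still travels as its own RTP stream, so sample-accurate synchronization at the receiver is not guaranteed and may need to be handled by your application.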

Stream live audio data (from line-in) with Cocoa

I want to stream live recorded audio from the Mac's line in via an HTTP stream. When searching for this problem, I found a lot of tutorials about how to receive a stream, but nothing about sending one.
Is there any tutorial about sending a stream that is not based on a static local file? I found some information on CocoaHTTPServer, but I believe this server can only stream static files. Am I right?