How to stream WebRTC audio to an Icecast server?

How can I take an audio stream captured in the browser using WebRTC and stream it live via the Icecast/SHOUTcast protocols?

Use Liquidsoap plus webcaster.js:
https://github.com/webcast/webcaster
webcaster.js encodes the audio in the browser and sends it over a WebSocket to Liquidsoap, which can then re-encode the stream and relay it to Icecast.

Related

How to capture an RTP stream from WebRTC, convert it to HLS, and broadcast it to clients?

I want to receive RTP from WebRTC in the browser via a media server (e.g. Kurento ...), then convert it to an HLS stream that users can play through an HlsEndpoint.
WebRTC -> RTP -> HLS
What is the correct way to do this?
My aim is to create a live-streaming app that supports push streams using WebRTC. I'm working with RTMP and want WebRTC as an additional option.
Thanks, all.
Just use a media server to convert WebRTC to a live-streaming protocol like RTMP, HTTP-FLV, or HLS; please read this wiki.
WebRTC is not just RTP: you also need to transcode the audio from Opus to AAC, and handle things like the jitter buffer, NACK, and out-of-order packets.
For live streaming, RTMP is the de facto standard in the industry, so once you convert WebRTC to RTMP you get everything else: transcoding with FFmpeg, forwarding to YouTube, DVR to file, etc.
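For instance, once the stream is RTMP, standard tooling applies. A hypothetical FFmpeg relay to YouTube, spawned from Node.js (the URLs and stream key are placeholders):

    // Pull the RTMP stream produced by the media server, transcode audio to
    // AAC, and forward it to YouTube's RTMP ingest.
    const { spawn } = require('child_process');

    const ffmpeg = spawn('ffmpeg', [
      '-i', 'rtmp://localhost/live/mystream',      // placeholder source
      '-c:v', 'copy',                              // keep the video as-is
      '-c:a', 'aac',                               // ensure AAC audio
      '-f', 'flv',
      'rtmp://a.rtmp.youtube.com/live2/STREAM-KEY' // placeholder destination
    ]);

    ffmpeg.stderr.pipe(process.stderr);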
If you need to convert WebRTC to HLS or RTMP, you may check Ant Media Server.
The Community Edition also provides this.

Live streaming audio with WebRTC, browser => server

I'm trying to send an audio stream from my browser to a server (UDP; I also tried WebSockets).
I'm recording the audio stream with WebRTC, but I'm having problems transmitting the data from a Node.js client to my server.
Any ideas? Is it possible to send an audio stream to the server using WebRTC (OpenWebRTC)?
To get audio from the browser to the server, you have a few different possibilities.
Web Sockets
Simply send the audio data over a binary Web Socket to your server. You can use the Web Audio API with a ScriptProcessorNode (or, nowadays, an AudioWorklet) to capture raw PCM and send it losslessly. Or, you can use a MediaRecorder to record the MediaStream and encode it with a codec like Opus, which you can then stream over the Web Socket.
There is a sample for doing this with video over on Facebook's GitHub repo. Streaming audio only is conceptually the same thing, so you should be able to adapt the example.
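A minimal sketch of the MediaRecorder approach; the ingest URL and chunk interval are placeholders:

    // Capture the microphone, encode with MediaRecorder (Opus in WebM),
    // and relay each encoded chunk over a binary Web Socket.
    async function streamMicOverWebSocket() {
      const ws = new WebSocket('wss://example.com/ingest'); // placeholder URL
      ws.binaryType = 'arraybuffer';

      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });

      recorder.ondataavailable = (e) => {
        if (ws.readyState === WebSocket.OPEN) ws.send(e.data); // e.data is a Blob
      };
      ws.onopen = () => recorder.start(250); // emit a chunk every 250 ms
    }

Note that the chunks only make sense concatenated in order; each one is a slice of a single continuous WebM stream, not an independently playable file.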
HTTP (future)
In the near future, you'll be able to use a ReadableStream as the request body with the Fetch API, allowing you to make a normal HTTP PUT with a streaming source from the browser. This is essentially the same as the Web Socket approach, just without the Web Socket layer.
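A sketch of what that could look like in a browser that supports request streaming; the URL is a placeholder, and the duplex option is what the shipped API requires for streaming bodies:

    // Queue MediaRecorder chunks into a ReadableStream and use it as the
    // body of a streaming HTTP PUT.
    async function streamMicOverHttp(stream) {
      const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });
      const body = new ReadableStream({
        start(controller) {
          recorder.ondataavailable = async (e) =>
            controller.enqueue(new Uint8Array(await e.data.arrayBuffer()));
          recorder.onstop = () => controller.close();
        }
      });
      recorder.start(250);
      await fetch('https://example.com/ingest', { // placeholder URL
        method: 'PUT',
        body,
        duplex: 'half' // required when the request body is a stream
      });
    }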
WebRTC (data channel)
With a WebRTC connection and the server as a "peer", you can open a data channel and send that exact same PCM or encoded audio that you would have sent over Web Sockets or HTTP.
There's a ton of complexity added to this with no real benefit. Don't use this method.
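For illustration only (again, not recommended), the send side would look something like this, with signaling omitted:

    // Open a data channel to the server "peer" and push encoded audio into it.
    // "stream" is a getUserMedia() MediaStream as in the sketch above.
    const pc = new RTCPeerConnection();
    const dc = pc.createDataChannel('audio');
    const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });
    recorder.ondataavailable = async (e) => {
      if (dc.readyState === 'open') dc.send(await e.data.arrayBuffer());
    };
    dc.onopen = () => recorder.start(250);
    // ...offer/answer exchange with the server-side peer omitted.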
WebRTC (media streams)
WebRTC calls support direct handling of MediaStreams. You can attach a stream and let the WebRTC stack take care of negotiating a codec, adapting for bandwidth changes, dropping data that doesn't arrive, maintaining synchronization, and negotiating connectivity around restrictive firewall environments. While this makes things easier on the surface, that's a lot of complexity as well. There aren't any packages for Node.js that expose the MediaStreams to you, so you're stuck dealing with other software... none of it as easy to integrate as it could be.
Most folks going this route run GStreamer as an RTP server to handle the media component. I'm not convinced this is the best way, but it's the best way I know of at the moment.
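As a rough illustration, here is Node.js spawning a GStreamer pipeline that decodes plain Opus RTP. The port, payload type, and the assumption that something upstream has already removed the DTLS-SRTP layer are all mine:

    // Receive unencrypted Opus RTP on UDP port 5004, depacketize, decode, play.
    // Real WebRTC media is DTLS-SRTP, so in practice a media server sits in
    // front of this and forwards decrypted RTP.
    const { spawn } = require('child_process');

    const gst = spawn('gst-launch-1.0', [
      'udpsrc', 'port=5004',
      'caps=application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000,payload=111',
      '!', 'rtpjitterbuffer',
      '!', 'rtpopusdepay',
      '!', 'opusdec',
      '!', 'audioconvert',
      '!', 'autoaudiosink'
    ]);

    gst.stderr.pipe(process.stderr);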

Is it possible to deliver an RTSP stream via Kurento? (WebRTC to RTSP)

I want to use Kurento as a media server that takes WebRTC as input and provides an RTSP stream at a URL such as rtsp://kurento/streamName.
Is this possible?
I saw the https://github.com/lulop-k/kurento-rtsp2webrtc/ project, which does the opposite.
My final goal is to deliver the stream to mobile browsers via JSMPEG.
This is not possible; as the Kurento team says: "We can consume it, but not produce it."
As a common solution for this, you could stream from Kurento to a Wowza media server using an RtpEndpoint, and then re-stream RTSP from Wowza. In the KMS Google group there is a lot of content about the integration between the two.
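With the kurento-client library for Node.js, the bridge itself is just two endpoints connected on one pipeline. A sketch, where the KMS URL is assumed and ICE candidate exchange with the browser is omitted:

    // Bridge a WebRtcEndpoint into an RtpEndpoint on the same pipeline.
    const kurentoClient = require('kurento-client');

    async function bridgeWebRtcToRtp(browserSdpOffer, rtpSdpOffer) {
      const client = await kurentoClient('ws://localhost:8888/kurento'); // assumed KMS URL
      const pipeline = await client.create('MediaPipeline');

      const webRtcEp = await pipeline.create('WebRtcEndpoint');
      const rtpEp = await pipeline.create('RtpEndpoint');
      await webRtcEp.connect(rtpEp); // media flows WebRTC -> plain RTP

      // Answer the browser's offer and the RTP side's offer (e.g. from Wowza).
      const browserAnswer = await webRtcEp.processOffer(browserSdpOffer);
      const rtpAnswer = await rtpEp.processOffer(rtpSdpOffer);
      return { browserAnswer, rtpAnswer };
    }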

Which codec is used for online video Streaming?

I am using WebRTC with the Kurento media server. As far as I know, WebRTC supports VP8 for video streaming and Opus for audio streaming. So my question is: if I want to compress a stream that includes both audio and video, do I need to use both (VP8 and Opus)?
If you are streaming both audio and video, then yes: it will use both an audio codec (typically Opus) and a video codec (typically VP8, VP9, or H.264).
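You can ask the browser itself which codecs it is able to send:

    // List the codecs this browser can send for audio and video.
    const audio = RTCRtpSender.getCapabilities('audio').codecs.map(c => c.mimeType);
    const video = RTCRtpSender.getCapabilities('video').codecs.map(c => c.mimeType);
    console.log(audio); // e.g. ["audio/opus", "audio/PCMU", ...]
    console.log(video); // e.g. ["video/VP8", "video/VP9", "video/H264", ...]

The pair actually used for a given call is whatever both peers agree on during SDP negotiation.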

How does WebRTC implement synchronization of its audio and video streams from the remote peer?

WebRTC implements PeerConnection, as demonstrated at https://apprtc.appspot.com/.
How does WebRTC synchronize the audio and video streams it receives from the remote peer?
Normal RTP A/V sync is done using RTCP SR/RR reports and the timestamps in each SRTP packet.
See any VoIP application.
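As a worked example of the arithmetic (the field names are illustrative, and 32-bit RTP timestamp wraparound is ignored):

    // Map an RTP timestamp onto the sender's NTP timeline using the most
    // recent RTCP Sender Report (SR) for that stream.
    function rtpToNtpSeconds(rtpTimestamp, sr, clockRate) {
      // sr.ntpSeconds: NTP time carried in the SR
      // sr.rtpTimestamp: the RTP timestamp the SR pairs with that NTP time
      return sr.ntpSeconds + (rtpTimestamp - sr.rtpTimestamp) / clockRate;
    }

    // Opus audio uses a 48 kHz RTP clock and video uses 90 kHz; converting
    // both onto the sender's shared NTP timeline lets the receiver align them:
    // const audioTime = rtpToNtpSeconds(audioTs, audioSR, 48000);
    // const videoTime = rtpToNtpSeconds(videoTs, videoSR, 90000);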