Integrating Asterisk with WebRTC - ground up

I am trying to integrate Asterisk with WebRTC. A similar question was posted here, but it barely provides any solution.
I already have a basic WebRTC infrastructure in place which I have tested as a proof of concept. I use socket.io for signalling, coturn for STUN/TURN, and Node.js with supporting modules for my web server.
I use MySQL for session persistence.
My Asterisk installation works fine with SIP phones and a PRI card for my PSTN interface. My Asterisk, web server and other supporting servers run on the same box.
There are instructions on Asterisk here and on sip.js here (and on other similar products' sites) for integrating Asterisk with WebRTC.
From my reading there, it appears that Asterisk has a built-in web server for WSS support and uses pjproject for ICE and STUN/TURN, among other things.
I see that taking the approach here would mean duplicating the infrastructure.
I would like to implement an audio gateway from WebRTC to a SIP or DAHDI channel. This is essentially an audio call from the browser to a PSTN number or a SIP endpoint.
The way I see it is that with what I have in place, I will need the following:
A codec transcoder for audio (browser codec to Asterisk codec), possibly Kurento.
Some way to convert a WebRTC SDP to an Asterisk SDP.
Some way to "register" a logical WebRTC peer to the SIP proxy (Asterisk).
Some intermediate module for Asterisk to treat a WebRTC peer as a SIP endpoint.
Anything else?
I think this must have been implemented before, but I am unable to find any solution or discussion in this direction.
Am I on the wrong track?
Am I reinventing the wheel?
Any guidance will be most appreciated.

There is nothing to be "implemented" here. All the listed points are already implemented in Asterisk.
The links you mention mostly discuss old versions of Asterisk. I recommend using a recent guide for WebRTC on Asterisk 13.
A codec transcoder for audio (browser codec to Asterisk codec), possibly Kurento.
Transcoding is built into Asterisk by default. However, WebRTC also supports G.711 (PCMU and PCMA), so most probably you will never have to transcode.
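For example, restricting an endpoint to G.711 so that no transcoding is ever needed might look like this in pjsip.conf (a minimal sketch; the section name is a placeholder and the available options vary between Asterisk versions):

; pjsip.conf - codec settings for a hypothetical WebRTC endpoint
[webrtc_client]
type=endpoint
disallow=all   ; start from an empty codec list
allow=ulaw     ; G.711 PCMU - supported natively by WebRTC browsers
allow=alaw     ; G.711 PCMA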
Some way to convert a WebRTC SDP to an Asterisk SDP.
This is already handled by Asterisk and by all the popular WebRTC SIP clients (sip.js, webphone, sipml5) using RFC 7118 (SIP over WebSocket). Instead of using socket.io with your custom protocol, I would highly recommend using this. (socket.io uses WebSocket anyway in all modern browsers, and in a browser without WebSocket support, WebRTC will be missing too.)
Some way to "register" a logical WebRTC peer to the SIP proxy (Asterisk).
This is just the usual SIP REGISTER, sent over the WebSocket transport mentioned above.
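As an illustration, here is a minimal sketch of registering and placing a call with JsSIP (one of the popular clients; sip.js is analogous). The host, extension and password are placeholder values:

// Placeholder host, extension and password - substitute your own.
var socket = new JsSIP.WebSocketInterface('wss://asterisk.example.com:8089/ws');
var ua = new JsSIP.UA({
  sockets: [socket],
  uri: 'sip:6001@asterisk.example.com',
  password: 'secret'
});

ua.on('registered', function() {
  // An audio-only call to another extension (or a PSTN number via your dialplan).
  ua.call('sip:6002@asterisk.example.com', {
    mediaConstraints: { audio: true, video: false }
  });
});

ua.start(); // connects and sends REGISTER over the WebSocket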
Some intermediate module for Asterisk to treat a WebRTC peer as a SIP endpoint.
Nothing extra is needed for this. Follow the guide I mentioned above to set up a WebRTC extension (it is like any other SIP extension, and WebRTC can talk to SIP once it is configured); a sketch of the relevant configuration is shown below.
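A rough sketch of what that setup looks like with the pjsip channel driver. Certificate paths and section names are placeholders, and the exact option set depends on your Asterisk version, so treat this as an outline rather than a drop-in config:

; http.conf - enable Asterisk's built-in web server with TLS for WSS
[general]
enabled=yes
tlsenable=yes
tlsbindaddr=0.0.0.0:8089
tlscertfile=/etc/asterisk/keys/asterisk.pem

; pjsip.conf - a WSS transport plus the WebRTC-specific endpoint options
[transport-wss]
type=transport
protocol=wss
bind=0.0.0.0

[webrtc_client]
type=endpoint
use_avpf=yes              ; WebRTC requires the AVPF RTP profile
ice_support=yes           ; ICE is mandatory for WebRTC media
media_encryption=dtls     ; DTLS-SRTP
dtls_verify=fingerprint
dtls_setup=actpass
dtls_cert_file=/etc/asterisk/keys/asterisk.pem
; (plus the usual aors/auth sections and the codec settings shown earlier)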
Note that most probably you don't even need TURN or STUN for this if your Asterisk box has a public static IP (apart from some basic STUN, which is part of the ICE protocol and is already built into Asterisk).

Related

Do I need SIP + WebRTC

I am working on a WebRTC application which can receive calls from browsers; the call's source can be any phone number or an extension dialed from the WebRTC application. I am using a FreeSWITCH server for this purpose.
Can anyone help me understand whether this is achievable using only WebRTC, or do I need SIP + WebRTC via a library like sip.js or JsSIP?
You can create a calling application using WebRTC without SIP, but you will need to create or choose some form of signalling protocol. WebRTC can transport the audio and video packets for you, but it does not specify how to set up the connection between two peers.
Given you're intending to use FreeSWITCH, you may find that using SIP is the easiest option for you. FreeSWITCH plus one of the SIP JavaScript libraries you've mentioned solves your signalling requirements.
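To make the signalling point concrete, here is a minimal sketch of home-grown signalling over socket.io. The server URL and all event names are invented for the example; WebRTC defines only the offer/answer/candidate objects, not how you deliver them:

// Browser side - the socket.io event names here are made up for illustration.
const socket = io('https://signalling.example.com'); // placeholder server
const pc = new RTCPeerConnection();

// Trickle ICE candidates through the same signalling channel.
pc.onicecandidate = (e) => {
  if (e.candidate) socket.emit('candidate', e.candidate);
};
socket.on('candidate', (c) => pc.addIceCandidate(c));

// Caller: create an offer and ship it through the signalling channel.
pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => socket.emit('offer', pc.localDescription));

// Callee: answer an incoming offer the same way.
socket.on('offer', async (offer) => {
  await pc.setRemoteDescription(offer);
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  socket.emit('answer', pc.localDescription);
});
socket.on('answer', (answer) => pc.setRemoteDescription(answer));

With SIP over WebSocket, a library like sip.js or JsSIP replaces all of this hand-rolled plumbing.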

SIP-WebRTC gateway/bridge: Kurento OR openwebrtc OR Intel CS for webrtc

I am researching the implementation of a WebRTC-SIP gateway/bridge - that is, for example, making a WebRTC call to a SIP endpoint via a SIP server like Asterisk. I know that Asterisk already supports this, but I need an intermediary server for various needs like logging, recording, and integration with local auth/signalling and other app modules. I looked at Kurento, OpenWebRTC (Ericsson) and the lesser-known Intel Collaboration Suite for WebRTC.
I need a server-side solution that can interact with my Node application server. Specifically, the server API should be able to generate an SDP for an RTP endpoint and convert a WebRTC SDP to the more generic SDP used by legacy SIP servers, or have a way to bridge these two endpoints. I feel fairly confident that this is possible with Kurento (I saw a post on this), except that I am not aware of any jsSIP/sipML5-style API for Kurento; Kurento itself is not meant to provide signalling. For example, if the SDP generated by Kurento for an RtpEndpoint has to be used in a SIP call/INVITE, how would one implement that? For that matter, how would one initiate a SIP INVITE from Kurento? Are there third-party modules to do this?
Has anyone used any of the servers listed above for a similar use case?
This is a programming question: I am looking for server APIs to implement a WebRTC-to-SIP gateway/bridge covering media transcoding (if required), SDP transformation and SIP signalling.
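For what it's worth, the Kurento side of such a bridge might look roughly like the sketch below, using the kurento-client Node module. The Kurento server URL is a placeholder, the SIP leg is left to whichever Node SIP stack you pick, and this is an untested outline rather than a definitive implementation:

// Untested outline: bridge a browser's WebRTC offer to a plain-RTP SDP
// that can be used as the body of a SIP INVITE.
const kurento = require('kurento-client');

async function bridge(browserOfferSdp) {
  const client = await kurento('ws://localhost:8888/kurento'); // placeholder URL
  const pipeline = await client.create('MediaPipeline');
  const webRtcEp = await pipeline.create('WebRtcEndpoint');
  const rtpEp = await pipeline.create('RtpEndpoint');

  // Wire the two endpoints together in both directions.
  await webRtcEp.connect(rtpEp);
  await rtpEp.connect(webRtcEp);

  // Browser leg: consume the WebRTC offer, produce the WebRTC answer.
  const browserAnswer = await webRtcEp.processOffer(browserOfferSdp);
  await webRtcEp.gatherCandidates();

  // SIP leg: this SDP is plain RTP - send it as the INVITE body through
  // your SIP stack, then feed the 200 OK SDP back via rtpEp.processAnswer().
  const rtpOffer = await rtpEp.generateOffer();
  return { browserAnswer, rtpOffer };
}

The signalling itself (the INVITE transaction) still has to come from a SIP stack outside Kurento, which matches the observation in the question that Kurento is not meant to provide signalling.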

Server as WebRTC data channel peer

Are there currently solutions where your server can act as the peer of a WebRTC connection?
The reason I am interested in WebRTC is not the peer-to-peer part of it, but because it enables you to use UDP. You could let players participate in a fast-paced game like Quake without needing any plugins.
It seems that essentially this same question was asked before, but surely things must now be quite different as 2 years have passed.
Yes, it is possible to deploy your WebRTC peer code on a server. But since you need to run it on a server, it's essentially different from how you run WebRTC code within the browser, i.e. through JavaScript.
For a server-based WebRTC peer, you need to use the WebRTC native code, which is available for Windows, Mac OS X, Linux, Android and iOS. You can get it from https://webrtc.org/native-code/development/
Follow the instructions there to download and build the environment. Sample applications are also present in the repository at src/webrtc/examples and src/talk/examples.
In summary, you have to use the WebRTC source code that is embedded in the browser in your own application code and call the relevant methods/APIs for the WebRTC functionality.
I have answered a similar question at: WebRTC Data Channel server to clients UDP communication. Is it currently possible?
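As a lighter-weight alternative to building the native code yourself, there are Node bindings such as node-webrtc (the wrtc package) that expose the browser RTCPeerConnection API on the server. A minimal sketch; sendToBrowser is a placeholder for your own signalling:

// Minimal sketch using the node-webrtc ("wrtc") package.
const { RTCPeerConnection } = require('wrtc');

const pc = new RTCPeerConnection();

// Unordered, zero-retransmit channel: UDP-like semantics, which is what
// you want for fast-paced game state updates.
const channel = pc.createDataChannel('game', { ordered: false, maxRetransmits: 0 });
channel.onopen = () => channel.send('hello from the server');
channel.onmessage = (e) => console.log('player input:', e.data);

// The offer/answer exchange still rides on your own signalling channel.
pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => sendToBrowser(pc.localDescription)); // placeholder function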
We have implemented exactly this: a server/client way of using WebRTC. We also implemented data-port multiplexing, so that the server only needs to expose one data port for all RTCDataChannels.
Here is a 2018 update for you; your off-the-shelf solutions are:
Red5 Pro, Wowza, Kurento, Unreal Media Server, Flashphoner
Also note that in modern public networks TCP is not much slower than UDP, but UDP may have considerable packet loss, so try WebRTC over TCP for your Quake idea.

WebRTC Relay Server / Broadcast multiple clients

I've got WebRTC peer-to-peer working, but when I want to broadcast a single camera to multiple clients, peer-to-peer obviously isn't suitable.
I've found solutions like
http://lynckia.com
and
http://www.medooze.com/products/mcu/webrtc-support.aspx
but I can't get the first one set up (and it seems to have cross-browser issues), and the second feels like hitting a nail with a nuclear missile.
All I need is a relay; I don't need to decode/re-encode streams.
I just need:
The broadcaster to connect to the server (peer-to-peer)
The clients to connect to the server (peer-to-peer)
The server to relay the stream from the broadcaster to the clients
Is there any software out there that offers this and that I've missed? Is there a working and scalable alternative?
Thanks
Jitsi Videobridge works pretty much exactly as you describe.
On your server you can run Janus, to which your broadcaster can provide a stream via RTP.
Have a look at an example configuration file.
After writing a configuration file which defines how the server receives the stream from the broadcaster, you should be able to launch Janus in the background via its command-line interface:
$ janus --daemon --config=config_file.conf
Also, see the streaming test demo.
Note: I have not tested this thoroughly.
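For reference, a mountpoint definition of the kind that example configuration file describes might look like this (legacy INI format; the section name, id, ports and payload types are placeholders):

; janus.plugin.streaming.cfg - one RTP mountpoint fed by the broadcaster
[broadcast]
type = rtp
id = 1
description = Broadcaster camera relayed to viewers
audio = yes
audioport = 5002
audiopt = 111
audiortpmap = opus/48000/2
video = yes
videoport = 5004
videopt = 100
videortpmap = VP8/90000

The broadcaster pushes RTP to those ports, and Janus relays the packets to every viewer without decoding or re-encoding, which matches the "just a relay" requirement.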
Have a look at this GitHub repo inspired by Muaz Khan's WebRTC p2p scalable broadcast. This can work great on a LAN; on the internet, I am not sure how well it works as of now, though we are improving it as we go.
If you just want to broadcast from one peer to a set of peers that don't care about latency, the best solution is to convert WebRTC to live streaming, without transcoding, just muxing:
Peer(Publisher) ---WebRTC--> Server --RTMP/HLS/DASH--> Peers/Players
If this works for you, SRS is able to convert WebRTC to live streaming.
Live streaming lets you use a CDN or TCP to deliver the streams, but the latency is about 3-5 s, so this solution is only suitable when the Peers/Players never need to communicate back to the Peer(Publisher).
If you want all those peers to talk to each other, it's much more complex and needs a WebRTC SFU cluster, because there will be a huge number of streams: for example, allowing 100 peers to talk to each other yields 100x100 = 10k streams.
That is too complicated, so I don't think there is a good open-source solution right now (as of 2022-02).

HLS (HTTP Live Streaming) vs RTP (Real-time Transport Protocol) on UDP for mobile P2P?

I'm testing an audio/video P2P connection between mobile devices.
Studying WebRTC, I've noticed that NAT traversal (using a STUN server) and UDP hole punching are the keys to making P2P possible.
On the other hand, I've noticed that HLS (HTTP Live Streaming) on iOS devices is highly optimized for A/V live streaming and is widely available even on Android 4.x (unstable on 3.x).
So, here are my questions if I use HLS for mobile P2P:
a) HLS is a protocol on top of TCP (HTTP), not UDP, so isn't there a performance drawback?
See: TCP vs UDP on video stream
b) How about NAT traversal? Will it be easier since HLS is HTTP (port 80)?
I have read the Wikipedia article http://en.wikipedia.org/wiki/HTTP_Live_Streaming:
Since its requests use only standard HTTP transactions, HTTP Live Streaming is capable of traversing any firewall or proxy server that lets through standard HTTP traffic, unlike UDP-based protocols such as RTP. This also allows content to be delivered over widely available CDNs.
c) How about Android device compatibility? Are there many problems in invoking HLS distribution?
Thanks.
The reason why firewalls are not an issue for HLS is that it's a client-server protocol in which all requests are done via HTTP on port 80. If you are implementing a P2P application, you won't be able to bind it to a port below 1024 unless you have root privileges.
This means that exchanging data via HLS (port 80) won't work for P2P, unless you have a translation server in the middle, which defeats the purpose of P2P.
Comparing HTTP Live Streaming to P2P video streaming over UDP/RTP is almost like comparing apples and oranges. More like oranges and tangerines... read on.
HTTP Live Streaming was designed as a client-server protocol, with no P2P or NAT traversal considerations. The idea is that the streaming server is already on HTTP/TCP and accessible from the public internet, as if it were just an ordinary web server. The key feature of HLS is its ability to dynamically switch the bitrate based on how well the client receives the stream. If the client's connection to the server hiccups while streaming a 1080p video, it can transparently switch to sending a lower-bitrate video (and likely switch back to the higher bitrate if network conditions improve). A good example: Netflix.
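That bitrate switching is driven by a master playlist listing one variant stream per bitrate. A minimal example with made-up paths and bandwidths:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8

The client measures its own download throughput and simply requests segments from whichever variant it can sustain.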
WebRTC and ICE were designed to stream real-time video bidirectionally between devices that might both be behind NATs. As such, traversing a NAT through UDP is much easier than through TCP, and UDP lends itself to real-time use (less latency) better than TCP. Most video-chat clients (a la Skype) have dynamic bandwidth adjustments built into their codecs and protocols to achieve something similar to what HLS does.
I suppose you could combine TCP NAT traversal and HLS together. Doing HLS over UDP would imply building a TCP-like reliability layer on top of your UDP stream.
Hope this helps
http://www.garymcgath.com/streamingprotocols.html
HTTP Live Streaming
The new trend in streaming is the use of HTTP with protocols that support adaptive bitrates. This is theoretically a bad fit, as HTTP with TCP/IP is designed for reliable delivery rather than keeping up a steady flow, but with the prevalence of high-speed connections these days it doesn't matter so much. Apple's entry is HTTP Live Streaming, aka HLS or Cupertino streaming. It was developed by Apple for iOS and isn't widely supported outside of Apple's products. Long Tail Video provides a testing page to determine whether a browser supports HLS. Its specification is available as an Internet Draft. The draft contains proprietary material, and publishing derivative works is prohibited.
The only playlist format allowed is M3U Extended (.m3u or .m3u8), but the format of the streams is restricted only by the implementation.
I was able to achieve P2P on top of HLS using WebRTC on an Android device with the Mozilla Firefox browser and two other desktop browsers (Chrome and Firefox) in the same swarm.
Here's a screenshot of a presentation that I gave at my university: https://www.dropbox.com/s/zyfgs4o8al9ovd0/Screenshot%202014-07-17%2019.58.15.png
This screenshot was made by accessing http://bem.tv/demo.html.
If you want to know more, this is my master's project, and I'm publishing my progress at http://bem.tv and http://github.com/bemtv.