Do I need SIP + WebRTC?

I am working on a WebRTC application that can receive calls from browsers. The caller's source can be any phone number, or an extension dialed from the WebRTC application. I am using a FreeSWITCH server for this purpose.
Can anyone help me figure out whether this is achievable using only WebRTC, or whether I need SIP + WebRTC, i.e. a library like SIP.js or JsSIP?

You can create a calling application using WebRTC without SIP, but you will need to create or choose some form of signalling protocol. WebRTC can transport the audio and video packets for you, but it does not specify how to set up the connection between two peers.
Given that you're intending to use FreeSWITCH, you may find that using SIP is the easiest option for you. FreeSWITCH plus one of the SIP JavaScript libraries you've mentioned solves your signalling requirements.
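If you do go SIP-less, the signalling layer can be as small as a few JSON message types relayed over whatever channel you like (a WebSocket, for instance). A minimal sketch — the message type names here are our own invention, not any standard:

```javascript
// Home-grown signalling messages: just enough structure to carry SDP
// offers/answers and ICE candidates between two peers. The type names
// ("offer", "answer", "candidate", "bye") are an arbitrary convention.
const SIGNAL_TYPES = ['offer', 'answer', 'candidate', 'bye'];

function makeSignal(type, payload) {
  if (!SIGNAL_TYPES.includes(type)) {
    throw new Error('unknown signal type: ' + type);
  }
  return JSON.stringify({ type, payload });
}

function parseSignal(raw) {
  const msg = JSON.parse(raw);
  if (!SIGNAL_TYPES.includes(msg.type)) {
    throw new Error('unknown signal type: ' + msg.type);
  }
  return msg;
}

// In the browser you would feed these into an RTCPeerConnection, e.g.
// pc.setRemoteDescription(parseSignal(raw).payload) on receiving an "answer".
```

The point is only that *something* must shuttle session descriptions between the peers; once you need to interoperate with FreeSWITCH, SIP over WebSocket already does this job for you.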

Related

React Native - Connecting to remote WebRTC stream

We have a mobile application that has historically used RTSP streaming to let a user watch a live stream, currently published via Wowza Streaming Engine. We needed to lower stream latency, so we have gravitated towards WebRTC to achieve this.
The problem is that there seems to be a lack of documentation or examples regarding the implementation of a React Native WebRTC viewer that connects to a remote stream.
Does anyone out there have any documentation or code examples for this kind of implementation?
I do note there is a react-native-webrtc library; however, all its examples demonstrate connecting two peers on mobile phones with their video cameras, i.e. like FaceTime. We are after an example demonstrating someone on a phone connecting to a remote streaming server with a video feed.
Cheers,
If you want a WebRTC client to connect to a server, you need a server doing WebRTC with signalling that fits your needs. WebRTC doesn't care which signalling you use, so you have to choose it, or choose the platform you need.
There are a lot of different media servers and libraries that support WebRTC on the server side, each with its own specific signalling (e.g. FreeSWITCH, Kurento) or with no built-in signalling (e.g. mediasoup). Few will have a React Native version, as media streaming is not really something on the JavaScript/UI side, but you can do something with the react-native-webrtc library.
Twilio supports a lot of platforms and could be a good start if you are looking for a ready-to-use solution.

Can Janus WebRTC server implement server-side peer?

I've been reading about Janus, looked at the examples. I'm looking for a webRTC component that I can use in the following way:
Receive RTP video packets from some external sender
Become a WebRTC peer and connect to an external WebRTC signaling server, STUN, TURN, the usuals
Send the incoming RTP packets as a coherent video via the WebRTC peer connection to some other peer on a browser on the Internet
Is Janus the right tool? Maybe there are other tools? I would appreciate some direction.
Thanks!
I am not sure about Janus.
You can achieve these functionalities with LM Tools (lmtools.com) with easy configuration. It can receive RTP packets from an external sender and can send those packets per the WebRTC specification to another peer.
Please note that LM Tools is not free software, though you can have a free trial for one month.
Disclaimer: I work for LM Tools.
Can Janus WebRTC server implement server-side peer?
Yes, it can do that. What you are looking for is RTP forwarding; you will get more context and expert opinions from their friendly community Google group.
It sounds like you are looking for a gateway solution (separate RTP/RTCP streams converted to WebRTC's muxed RTP/RTCP).
For this you need to make changes in the Janus code or use a plugin supporting RTP/RTSP.
The current Janus server relays RTP/RTCP and messages between browsers.
https://janus.conf.meetecho.com/docs/
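As a concrete starting point: the Janus streaming plugin can ingest plain RTP pushed to it and serve that stream to browsers over WebRTC, which matches your use case. A sketch of a mountpoint in its configuration file (the older `janus.plugin.streaming.cfg` INI format; ports and payload types are placeholders — check the plugin's sample config for your release):

```ini
[rtp-sample]
type = rtp
id = 1
description = Audio/video pushed to Janus as plain RTP
audio = yes
audioport = 5002
audiopt = 111
audiortpmap = opus/48000/2
video = yes
videoport = 5004
videopt = 100
videortpmap = VP8/90000
```

An external sender then simply transmits RTP to ports 5002/5004 on the Janus host, and browser peers attach to the mountpoint via the plugin's API.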

VoIP App development in xamarin with Xmpp Server

I want to develop a VoIP app with Xamarin and an XMPP server.
So far the only things I have found are Openfire and Jitsi Meet for the server side, and Matrix for the client side. But Matrix has nothing to do with voice streaming and is just for text messaging, and Jitsi Meet doesn't have any SDK for the .NET client side.
I also found Red5 Pro, but it has client SDKs only for the native Android and iOS development platforms and nothing for Mono.
So what should I look for?
First, let's clarify some basics:
Openfire is an XMPP server. Basically, this is all you need on the server side for basic VoIP support.
Alternatives include ejabberd and Prosody.
Jitsi Meet essentially already is a VoIP app, so if you want to develop your own, you don't really need that.
Jitsi Videobridge, on the other hand, can be used to provide a relay server for video conferences. For your first steps with a simple VoIP app you won't need that either, but if you want your users to be able to create video conferences with many participants, then it helps.
(Explanation: Normally, when you create a P2P video conference, you have two options. Either all users send their video data to all participants, so everybody needs lots of bandwidth; or you pick one participant, the "host", who receives the video streams of every participant and sends them on to every other participant. In the second case, a normal participant only has to upload his own stream once and download the other participants' streams, whereas the host does most of the work, so only that one user needs high bandwidth.
Jitsi Videobridge can run on a server and act as this conference host (a server usually has much better bandwidth than a home user), so that none of the participants has to act as a host.
In simple VoIP applications without video this may not be necessary, as audio streams are usually much smaller than video streams.)
Now, as I said above, in order to write a VoIP app, you basically only need an XMPP server (Openfire, Prosody and ejabberd should all be sufficient for this use case), a client library that supports Jingle, and client libraries for the RTP media streams (transfer and display).
Jingle is the name of a XMPP protocol extension that enables the negotiation of P2P data streams as they are needed for a VoIP call.
The relevant protocol specifications:
XEP-0166: Jingle
XEP-0167: Jingle RTP Sessions
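For a feel of what Jingle negotiation looks like on the wire, here is a session-initiate stanza for an audio call, modelled on the examples in XEP-0166/XEP-0167 (the JIDs, session id and payload types are illustrative):

```xml
<iq from='romeo@montague.example/orchard'
    to='juliet@capulet.example/balcony'
    id='jingle1' type='set'>
  <jingle xmlns='urn:xmpp:jingle:1'
          action='session-initiate'
          initiator='romeo@montague.example/orchard'
          sid='a73sjjvkla37jfea'>
    <content creator='initiator' name='voice'>
      <description xmlns='urn:xmpp:jingle:apps:rtp:1' media='audio'>
        <payload-type id='0' name='PCMU' clockrate='8000'/>
        <payload-type id='8' name='PCMA' clockrate='8000'/>
      </description>
      <transport xmlns='urn:xmpp:jingle:transports:ice-udp:1'/>
    </content>
  </jingle>
</iq>
```

The `description` element advertises the RTP codecs on offer, while the `transport` element is where ICE candidates for the actual media path are exchanged; the media itself never travels over XMPP.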
So what you need to find is an XMPP library with support for the Jingle protocol. The C# Matrix XMPP SDK (not to be confused with the "Matrix protocol", which is a different protocol and has nothing to do with XMPP except for having a common goal) is one example of such a library. According to their website, there is support for Jingle, but I couldn't find any documentation about it.
However, as I mentioned above, Jingle is only about how to negotiate data streams, not the data streams and VoIP itself.
So what that library probably helps you with is parsing of the Jingle XMPP messages that are needed to set up a RTP data stream.
For displaying and transfering the RTP stream, however, you need additional libraries. For that, have a look at the following SO questions and answers:
Open Source .net C# library for Real Time transport Protocol
Streaming Avi files from C# using RTP
I hope I could give you some useful hints...

SIP-WebRTC gateway/bridge: Kurento OR openwebrtc OR Intel CS for webrtc

I am researching implementation of a WebRTC-SIP gateway/bridge. That is, for example, to make a WebRTC call to a SIP end point via a SIP server like Asterisk. I know that Asterisk already supports this but I need an intermediary server for various needs like logging, recording, integration with local auth/signalling and other app modules. I looked at Kurento, Openwebrtc (Ericson) and the lesser known Intel's Collaboration Suite for WebRTC.
I need a server-side solution to interact with my Node application server. Specifically, the server API should be able to generate an SDP for an RTP endpoint and convert a WebRTC SDP to the more generic SDP used by legacy SIP servers, or have a way to bridge these two endpoints. I feel comfortable that this is possible with Kurento, except that I am not aware of any jsSIP/sipML5-like API for Kurento; Kurento itself is not meant to provide signalling. For example, if the SDP generated by Kurento for its RtpEndpoint has to be used in a SIP call/INVITE, how would one implement that? For that matter, how would one initiate a SIP INVITE from Kurento? Are there third-party modules to do this?
Has anyone used the any of the servers listed above for a similar use case?
This is a programming question. I am looking for server APIs to implement a WebRTC to SIP gateway/bridge for media transcoding (if required), SDP transformation and SIP signalling.
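To make the "SDP transformation" part concrete: a browser's offer uses the `UDP/TLS/RTP/SAVPF` profile plus ICE/DTLS attributes that legacy SIP stacks reject. A real gateway's media server (Kurento's RtpEndpoint, for instance) has to terminate ICE and DTLS-SRTP on the media plane as well; the sketch below shows only the textual side of the conversion and is not a complete solution:

```javascript
// Illustrative only: strip WebRTC-specific attributes from an SDP so it
// resembles the plain RTP/AVP SDP a legacy SIP server expects. Rewriting
// the SDP alone is NOT enough in practice - the media itself must also be
// bridged from DTLS-SRTP/ICE to plain RTP by a media server.
function webrtcSdpToPlainRtp(sdp) {
  return sdp
    .split('\r\n')
    // Drop ICE, DTLS and rtcp-mux attributes that legacy stacks reject.
    .filter(line =>
      !/^a=(ice-ufrag|ice-pwd|fingerprint|setup|rtcp-mux|candidate)/.test(line))
    // Downgrade the secure AVPF profile on the m-line to plain RTP/AVP.
    .map(line => line.replace(/^(m=\w+ \d+ )UDP\/TLS\/RTP\/SAVPF/, '$1RTP/AVP'))
    .join('\r\n');
}
```

In a Kurento-based gateway you would instead let the RtpEndpoint generate its own plain-RTP SDP and hand that to your SIP stack for the INVITE, keeping your Node server responsible only for the SIP signalling.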

Integrating Asterisk with WebRTC - ground up

I am trying to integrate Asterisk with webRTC. There was a query posted here but it barely provides any solution.
I already have a basic webRTC infrastructure in place which I have tested for proof-of-concept. I use socket.io for signalling, COTURN for STUN/TURN with node.js and supporting modules for my web server.
I use MySQL for session persistence.
My asterisk installation works fine with SIP phones and a PRI card for my PSTN interface. My Asterisk, webserver and other supporting servers run on the same box.
There are instructions on Asterisk here and on sipjs here (and other similar products site) to integrate Asterisk with WebRTC.
From my reading there, it appears that Asterisk has a builtin webserver for wss support, uses pjproject for ICE, TURN/STUN servers, among other things.
I see that taking the approach here would mean duplicating the infrastructure.
I would like to implement an audio gateway from WebRTC to a SIP or DAHDI channel. This is essentially an audio call to a PSTN number or a SIP end-point from the browser.
The way I see it is that with what I have in place, I will need the following:
A codec transcoder for audio (browser codec to Asterisk codec), possibly Kurento.
Some way to convert a WebRTC SDP to an Asterisk SDP.
Some way to "register" a logical WebRTC peer to the SIP proxy (Asterisk).
Some intermediate module for Asterisk to treat a WebRTC peer as a SIP endpoint.
Anything else?
I think this must have been implemented before. I am unable to find any solution or discussion in this direction.
Am I on the wrong track?
Am I reinventing the wheel?
Any guidance will be most appreciated.
There is nothing to be "implemented" here. All the listed points are already implemented in Asterisk.
The links you mentioned mostly discuss old versions of Asterisk. I recommend using a recent guide for WebRTC on Asterisk 13.
A codec transcoder for audio (browser codec to Asterisk codec), possibly Kurento.
Transcoding is built into Asterisk by default. However, WebRTC also supports G.711 (PCMU and PCMA), so most probably you will never have to transcode.
Some way to convert a WebRTC SDP to an Asterisk SDP.
This is already handled by Asterisk and all the popular WebRTC SIP clients (sip.js, webphone, sipml5) using RFC 7118 (SIP over WebSocket). Instead of using socket.io with your custom protocol, I would highly recommend using this. (Socket.io uses WebSocket anyway in all modern browsers, and where WebSocket is not available, WebRTC will be missing too.)
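With RFC 7118 the SIP messages themselves are unchanged; only the transport token in the Via header differs. A REGISTER sent over a secure WebSocket looks roughly like this (addresses, tags and identifiers are placeholders, modelled on the RFC's examples):

```
REGISTER sip:proxy.example.com SIP/2.0
Via: SIP/2.0/WSS df7jal23ls0d.invalid;branch=z9hG4bK56sdasks
From: sip:alice@example.com;tag=asdyka899
To: sip:alice@example.com
Call-ID: asidkj3ss
CSeq: 1 REGISTER
Max-Forwards: 70
Contact: <sip:alice@df7jal23ls0d.invalid;transport=ws>;expires=600
Content-Length: 0
```

Because the client has no routable address of its own, the Via and Contact hosts use a `.invalid` placeholder; Asterisk routes responses back down the same WebSocket connection.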
Some way to "register" a logical WebRTC peer to the SIP proxy (Asterisk).
This is just the usual SIP REGISTER, sent over the WebSocket transport mentioned above.
Some intermediate module for Asterisk to think of a WebRTC peer as a SIP endpoint.
Nothing extra is needed for this. Follow the guide I mentioned above to set up a WebRTC extension (it is like any other SIP extension, and WebRTC can talk with SIP once configured).
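The WebRTC extension setup boils down to a handful of pjsip.conf options. A sketch for Asterisk 13 with chan_pjsip, following the Asterisk wiki's WebRTC tutorial; section names, certificate paths and credentials are placeholders, and your build must include res_pjsip and the WebSocket transport:

```ini
[transport-wss]
type=transport
protocol=wss
bind=0.0.0.0

[webrtc_client]
type=aor
max_contacts=1

[webrtc_client]
type=auth
auth_type=userpass
username=webrtc_client
password=change-me

[webrtc_client]
type=endpoint
aors=webrtc_client
auth=webrtc_client
context=default
; WebRTC requires AVPF, DTLS-SRTP and ICE:
use_avpf=yes
media_encryption=dtls
dtls_cert_file=/etc/asterisk/keys/asterisk.pem
dtls_verify=fingerprint
dtls_setup=actpass
ice_support=yes
media_use_received_transport=yes
rtcp_mux=yes
disallow=all
allow=ulaw
```

A browser client (sip.js, sipml5, etc.) then registers as `webrtc_client` over `wss://your-asterisk:8089/ws` and can dial any other extension or DAHDI/PSTN route in your dialplan.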
Note that most probably you don't even need TURN or STUN for this if your Asterisk has a public static IP (except for the basic STUN that is part of ICE and already built into Asterisk).