Can WebRTC be used for backend data transmission for IoT?

We are building a data transmission backend from IoT devices to our backend system hosted in the cloud. We found this W3C working draft that mentions the use case.
Does WebRTC have any advantage over a traditional API data push from the IoT device?
Most of the use cases covered in explanations on the internet are for peer-to-peer communication, for which WebRTC is a perfect fit.

I see a lot of developers using WebRTC for their IoT projects! Recently they have been the biggest group of contributors to Pion.
The nice things about using DataChannels (vs. other APIs):
Support in the browser (no need for a backend to bridge protocols).
You have different delivery options (out-of-order or lossy delivery for better performance), as sketched below.
Mandatory security: WebRTC always runs over DTLS, whereas in many other protocols security is optional.
Available in lots of languages (C, C++, Python, Java, Go...), and these aren't just FFI implementations, but first-class and pleasant to use.
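For example, here is a minimal browser-side TypeScript sketch of those delivery options (the channel labels and payload are made up; signaling is omitted):

```typescript
// Two channels on one connection: one tuned for fresh telemetry,
// one for commands that must arrive.
const pc = new RTCPeerConnection();

// Unordered + no retransmits: a lost or late sensor reading is dropped
// instead of delaying newer ones (UDP-like semantics over SCTP/DTLS).
const telemetry = pc.createDataChannel("telemetry", {
  ordered: false,
  maxRetransmits: 0,
});

// Default options give an ordered, reliable channel (TCP-like semantics).
const control = pc.createDataChannel("control");

telemetry.onopen = () => telemetry.send(JSON.stringify({ tempC: 21.4 }));
```

The same options exist in Pion and the other non-browser implementations, so an IoT device and a backend can negotiate exactly the reliability trade-off each stream needs.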

Related

SIP-WebRTC gateway/bridge: Kurento OR OpenWebRTC OR Intel CS for WebRTC

I am researching the implementation of a WebRTC-SIP gateway/bridge: that is, for example, making a WebRTC call to a SIP endpoint via a SIP server like Asterisk. I know that Asterisk already supports this, but I need an intermediary server for various needs like logging, recording, integration with local auth/signalling, and other app modules. I looked at Kurento, OpenWebRTC (Ericsson), and the lesser-known Intel Collaboration Suite for WebRTC.
I need a server-side solution to interact with my Node application server. Specifically, the server API should be able to generate an SDP for an RTP endpoint and convert WebRTC SDP to the more generic SDP used by legacy SIP servers, or have a way to bridge these two endpoints. I feel comfortable that this is possible with Kurento (I saw a post suggesting as much), except that I am not aware of any jsSIP/sipML5-style API for Kurento, and Kurento itself is not meant to provide signalling. For example, if the SDP generated by Kurento for the rtpEndpoint has to be used in a SIP call/INVITE, how would one implement that? For that matter, how would one initiate a SIP INVITE from Kurento? Are there third-party modules to do this?
Has anyone used any of the servers listed above for a similar use case?
This is a programming question. I am looking for server APIs to implement a WebRTC to SIP gateway/bridge for media transcoding (if required), SDP transformation and SIP signalling.

What is the technical difference between WebRTC communication and VoIP?

Can anyone explain the technical difference between WebRTC communication and VoIP communication?
The question doesn't exactly make sense because it makes the assumption that VoIP is a technical stack, but it's not - it's a concept. The concept of sending Voice (V) over (o) Internet Protocols (IP). This means that different technology stacks can be used for accessing/capturing the media, establishing connections, negotiating streams, and transmitting streams.
WebRTC is one such stack (set of APIs, methods, and standards) for VoIP.
VoIP (Voice over Internet Protocol) is a concept that came with the popularity of the internet. It involves using the internet to route voice telephony data, basically using existing IP infrastructure to transport audio streams without needing dedicated circuit-switched telephony. Over time, popular VoIP applications like Skype and Vonage, and many in enterprise telephony, came along.
VoIP has two parts: signalling, basically the controller part, and the actual media.
The actual media usually, but not necessarily, followed RTP (the Real-time Transport Protocol). RTP can carry both voice and video. The problem with RTP has been that browsers don't support it natively and it is not secure, so you usually needed some sort of plugin to make VoIP work inside a browser.
With WebRTC, popular browsers like Firefox, Chrome, and Opera now support a secure variant of RTP that can be invoked natively. Using WebRTC and browser JavaScript you can send voice, video, and screen (video-only) data to any other browser, which is cool.
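As a rough illustration of that last point, here is a browser-side TypeScript sketch (signaling, error handling, and any backend are omitted):

```typescript
// Sketch: capture mic + camera, and optionally the screen, then attach
// the tracks to a peer connection; the media then flows over secure RTP.
async function startCall(pc: RTCPeerConnection): Promise<void> {
  const media = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  for (const track of media.getTracks()) pc.addTrack(track, media);

  // Screen capture is video-only, as noted above.
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  for (const track of screen.getVideoTracks()) pc.addTrack(track, screen);
}
```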
VoIP (Voice over Internet Protocol) runs over DSL/cable modem connections, voice over Wi-Fi/3G (VoWiFi/3G), voice over LTE (VoLTE), and the Rich Communication Suite (RCS). VoIP is cloud-based: calls are sent as digital data and no dedicated cabling is needed, so any kind of internet connection can be used to make calls, from a plethora of devices.
WebRTC (Web Real-Time Communication) uses only the browser to communicate.
WebRTC is built around two main JavaScript APIs (chiefly getUserMedia for capture and RTCPeerConnection for transport).
WebRTC is an extension of VoIP to the browser world. It can reuse the existing VoIP infrastructure with incremental upgrades. This is good news for VoIP, as adoption of WebRTC only serves to increase overall VoIP proliferation.
Also, WebRTC is ideal for low-cost browser-based contact-center applications, while VoIP can serve embedded operator-driven VoLTE applications. Between them, WebRTC and VoIP can support a wide range of consumer and enterprise applications.

Open standard for native RTC with no plugins

Recently I have been working with WebRTC, and I'm wondering whether it would make more sense to implement a Real-Time Communication open standard at the native level.
Let's say that instead of a web browser API we have a native API that any native app, including the browser can leverage.
Part of the promise of WebRTC is to have RTC in the browser without plugins, but why stop there? Why not have RTC on any device with media capabilities, without plugins? There are many devices with media capabilities that will not run a web browser, e.g. wearables. It seems to me that the browser itself has become the plugin, and I think we need to get rid of it as far as RTC is concerned.
It sounds like OpenWebRTC is going in a similar direction but so far they are only working inside the browser.
Are there open standards for native RTC? So far it looks like RTCWEB is only concerned with the browser.
Are there any projects/initiatives for native implementations of an open standard for RTC?
WebRTC definition: WebRTC is split into two parts, complementary but separate. The W3C consortium is standardizing a JS API for browsers, named WebRTC. The IETF is standardizing the underlying protocols and what happens on the wire for interoperability; that effort is named RTCWEB.
The IETF's RTCWEB group defines everything you need to interoperate with a browser without being a browser yourself, e.g. for gateways and devices. This was made explicit at the latest meeting in Hawaii last November, and there is, for example, a corresponding draft.
On the client side, the implementations of the WebRTC JS API are built on top of C/C++ implementations. Those "native" (as in non-browser, C/C++) APIs can be used directly for servers, embedded devices, gateways, etc., or can be wrapped in different languages (Objective-C, Java) to provide "native" (as in mobile-native) APIs.
Note that BOTH openwebrtc.io and webrtc.org have a full implementation of WebRTC in C/C++ that you can use. OpenWebRTC provides iOS wrappers and WebKit wrappers (for Safari), but does not provide data channel support or ORTC API support, nor does it compile under Windows. webrtc.org supports all desktop OSes and provides wrappers for both iOS and Android. Its build tools are specific to Google's Chrome, though, unlike OpenWebRTC, which uses standard autotools, GitHub, and so on.
HTH
Currently there is no effort in this direction. The people on the WebRTC standardization committee have their hands full standardizing just the JavaScript API. As you know, the current spec is not final and is still being worked on, and now ORTC will generate even more work.
There are many reasons why no one is currently trying to standardize any form of native RTC. Here are some that come to my mind:
What exactly is native? JavaScript is native for the browsers. The Chrome version of WebRTC is in C++, but the OpenWebRTC one is in C. Android developers mostly use Java; iOS developers use Objective-C. Should there be standards for all these languages? That's going to take forever.
As I said, the standardization committee already has its hands full.
There is still quite a lot of experimentation going on with WebRTC, and standardization might prevent this.
The APIs of the native libraries tend to be very similar to the JS API anyway.

HLS (HTTP Live Streaming) vs RTP (Real-time Transport Protocol) over UDP for mobile P2P?

I'm testing audio/video P2P connections between mobile devices.
Studying WebRTC, I've noticed that NAT traversal (using a STUN server) and UDP hole punching are the keys to making P2P possible.
On the other hand, I've noticed that HLS (HTTP Live Streaming) on iOS devices is highly optimized for A/V live streaming, and is widely available even on Android 4.x (3.x is unstable).
So, here are my questions if I use HLS for mobile P2P:
a) HLS is a protocol on top of TCP (HTTP), not UDP, so isn't there a performance drawback?
See: TCP vs UDP on video stream
b) How about NAT traversal? Will it be easier since HLS is HTTP (port 80)?
I have read the Wikipedia article (http://en.wikipedia.org/wiki/HTTP_Live_Streaming):
Since its requests use only standard HTTP transactions, HTTP Live Streaming is capable of traversing any firewall or proxy server that lets through standard HTTP traffic, unlike UDP-based protocols such as RTP. This also allows content to be delivered over widely available CDNs.
c) How about Android device compatibility? Are there many problems in getting HTTP Live Streaming distribution working?
Thanks.
The reason firewalls are not an issue for HLS is that it's a client-server protocol where all requests are made via HTTP on port 80. If you are implementing a P2P application, you won't be able to bind to a port below 1024 unless you have root privileges.
This means that exchanging data via HLS (port 80) won't work for P2P, unless you have a translation server in the middle, which defeats the purpose of P2P.
Comparing HTTP Live Streaming to P2P video streaming over UDP/RTP is almost like comparing apples and oranges. More like oranges and tangerines... read on.
HTTP Live Streaming was designed as a client-server protocol, without P2P or NAT traversal in mind. The idea is that the streaming server speaks plain HTTP/TCP and is accessible from the public internet as if it were just an ordinary web server. The key feature of HLS is its ability to dynamically switch the bitrate based on how well the client receives the stream. If the client's connection to the server hiccups while streaming a 1080p video, it can transparently switch to a lower-bitrate video (and likely switch back to the higher bitrate if network conditions improve). A good example: Netflix.
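The switching works because the client first fetches a master playlist that lists the same content at several bitrates, then pulls segments from whichever variant its current bandwidth supports. An illustrative master playlist (the URIs and numbers are made up, not from any real deployment) looks like:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
```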
WebRTC and ICE were designed to stream real-time video bidirectionally between devices that might both be behind NATs. For that purpose, traversing a NAT is much easier over UDP than over TCP, and UDP also lends itself better to real time (lower latency) than TCP. Most video-chat clients (à la Skype) have dynamic bandwidth adjustment built into their codecs and protocols to achieve something similar to what HLS does.
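For reference, opting in to that STUN-based UDP traversal is a one-liner on the WebRTC side (a minimal sketch; the server shown is Google's well-known public STUN server):

```typescript
// ICE gathers server-reflexive candidates via STUN and attempts
// UDP hole punching before falling back to other candidate pairs.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
```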
I suppose you could combine TCP NAT traversal and HLS together, but doing HLS over UDP implies building a TCP-like reliability layer on top of your UDP stream.
Hope this helps
http://www.garymcgath.com/streamingprotocols.html
HTTP Live Streaming
The new trend in streaming is the use of HTTP with protocols that support adaptive bitrates. This is theoretically a bad fit, as HTTP with TCP/IP is designed for reliable delivery rather than keeping up a steady flow, but with the prevalence of high-speed connections these days it doesn't matter so much. Apple's entry is HTTP Live Streaming, aka HLS or Cupertino streaming. It was developed by Apple for iOS and isn't widely supported outside of Apple's products. Long Tail Video provides a testing page to determine whether a browser supports HLS. Its specification is available as an Internet Draft. The draft contains proprietary material, and publishing derivative works is prohibited.
The only playlist format allowed is M3U Extended (.m3u or .m3u8), but the format of the streams is restricted only by the implementation.
I was able to achieve P2P on top of HLS using WebRTC on an Android device running Mozilla Firefox, plus two desktop browsers (Chrome and Firefox), all in the same swarm.
Here's a screenshot from a presentation I gave at the university: https://www.dropbox.com/s/zyfgs4o8al9ovd0/Screenshot%202014-07-17%2019.58.15.png
The screenshot was made by accessing http://bem.tv/demo.html.
If you want to know more, this is my master's project, and I'm publishing my progress at http://bem.tv and http://github.com/bemtv.

iOS client app with Mac server

I am attempting to build a client/server game architecture and would like to begin testing the game using my local Mac as the server. I have found several articles on Bonjour, but that is for local network traffic only. My goal is to make this application work over the internet, connecting to a hosted application on a static address to exchange turn data. However, I'm at a loss as to which Cocoa APIs to use for this purpose. Should I use NSConnection, NSStream subclasses, or good ol' C sockets? The game state is already built in Objective-C and is ready to be set in motion once I have the server facilities ready. Any insight?
NSConnection, NSStream, and C sockets are built for different needs. You need to specify the needs of your game and the kind of service in order to get more targeted help. If you want to develop a client-server application that relies on the internet rather than the local network, Bonjour will not be able to help.
C sockets, and the Cocoa APIs that wrap them, are intended to operate on an open network stream between the client and the server. The advantage of an open stream is that the server can send data to the client without the client having requested it. NSURLConnection in Cocoa works differently: with it, you perform HTTP requests and receive formatted responses from a server.
If your application is based on HTTP requests, I recommend you take a look at NSURLConnection, or at AFNetworking as a third-party alternative. If your application relies on open streams, I recommend you take a look at CFNetwork from Apple (a C wrapper around BSD sockets that originates from the days when Macs had Carbon, with great performance, stability, and versatility) and GCDAsyncSocket, a third-party library wrapped around BSD sockets that supports Grand Central Dispatch, is Objective-C ready, and does the job wonderfully.
I hope I helped.
I suggest you use sockets, since they're not hard to use and are a standard approach. I've even written an asynchronous wrapper class around BSD sockets: https://github.com/H2CO3/TCPHelper
This is for simple, one-to-one TCP protocol connections, supporting both IPv4 and IPv6. You can send and receive raw NSData and possibly build a protocol around it.
Foundation classes such as NSURLConnection are not really meant for this purpose; rather, they are meant to interact with standard HTTP servers (I assume you don't want to implement a full HTTP server for a game).
NSNetServices may suit you just like CFNetwork, but the latter is a bit harder to use. If you'd like to use Foundation classes, I'd recommend NSNetServices.
Hope it helps.
There are many different ways to accomplish this. It really depends on how you'll be passing the data and what it will be used for.
First, I would set up a hostname that you can use for development purposes with your game. You can use a service like http://dyn.com/dns/ to set this up for your Mac, and then use a compiler setting to switch between the development and production URLs.
Next, I would recommend using TCP sockets for your game (using CocoaAsyncSocket - https://github.com/robbiehanson/CocoaAsyncSocket). This method should work fine for your use case. Since you are passing turn-based data (and since all of that data is vital) I would not recommend UDP sockets (though those would work if you were solely passing position, video, or audio data, where a dropped packet might not matter).
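Whichever socket library you pick, the main thing TCP does not give you for free is message framing. A common approach is a length prefix; here is a minimal sketch of that idea (shown in TypeScript/Node rather than Cocoa, purely to keep it short; the port number and JSON payload are made up):

```typescript
import * as net from "net";

// Sketch of a turn server: each message is a 4-byte big-endian length
// prefix followed by a JSON payload, so messages survive TCP's
// arbitrary stream chunking.
const server = net.createServer((socket) => {
  let buf = Buffer.alloc(0);
  socket.on("data", (chunk) => {
    buf = Buffer.concat([buf, chunk]);
    while (buf.length >= 4) {
      const len = buf.readUInt32BE(0);
      if (buf.length < 4 + len) break; // wait for the rest of the frame
      const turn = JSON.parse(buf.subarray(4, 4 + len).toString("utf8"));
      console.log("received turn:", turn);
      buf = buf.subarray(4 + len);
    }
  });
});

server.listen(4000, () => console.log("turn server listening on :4000"));
```

On the Cocoa client side, the same framing maps naturally onto GCDAsyncSocket's readDataToLength:withTimeout:tag: call, read once for the 4-byte prefix and once more for the body.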