Open standard for native RTC with no plugins

Recently I have been working with WebRTC, and I'm wondering whether it would make more sense to implement an open Real Time Communication standard at the native level.
Let's say that instead of a web browser API we have a native API that any native app, including the browser, can leverage.
Part of the promise of WebRTC is to have RTC in the browser without plugins, but why stop there? Why not have RTC on any device with media capabilities, without plugins? There are many devices with media capabilities that will not run a web browser, e.g., wearables. It seems to me that the browser itself has become the plugin, and as far as RTC is concerned I think we need to get rid of it.
It sounds like OpenWebRTC is going in a similar direction but so far they are only working inside the browser.
Are there open standards for native RTC? So far it looks like RTCWeb is only concerned with the browser.
Are there any projects/initiatives for native implementations of an open standard for RTC?

WebRTC definition: WebRTC is split into two parts, complementary but separate. The W3C consortium is standardizing a JS API for browsers, named WebRTC. The IETF is standardizing the underlying protocols and what happens on the wire for interoperability; that effort is named rtcweb.
The IETF's rtcweb group defines everything you need to interoperate with a browser without being a browser yourself, i.e. for gateways, devices, and so on. This was made explicit at the latest meeting in Hawaii last November, and there is, for example, a corresponding draft.
On the client side, the implementation of the WebRTC JS API is done on top of C/C++ implementations. Those "native" (as in non-browser, C/C++) APIs can be used directly for servers, embedded devices, gateways, etc., or can be wrapped in different languages (Objective-C, Java) to provide "native" (as in mobile native) APIs.
Note that BOTH openWebrtc.io and webrtc.org have a full implementation of WebRTC in C/C++ that you can use. OpenWebRTC provides iOS wrappers and WebKit wrappers (for Safari), but it does not provide data channel support or ORTC API support, nor does it compile under Windows. webrtc.org supports all desktop OSes and provides wrappers for both iOS and Android. Its build tools are specific to Google's Chrome, though, unlike OpenWebRTC, which uses standard autotools, GitHub, and so on.
HTH

Currently there is no effort in this direction. The people on the WebRTC standardization committee have their hands full standardizing just the JavaScript API. As you know, the current spec is not final and is still being worked on, and now ORTC will generate even more work.
There are many reasons why no one is currently trying to standardize any form of native RTC. Here are some that come to my mind:
What exactly is native? JavaScript is native for the browsers. The Chrome version of WebRTC is in C++, but the OpenWebRTC one is in C. Android developers mostly use Java, and iOS developers use Objective-C. Should there be standards for all these languages? That would take forever.
As I said, the standardization committee already has its hands full.
There is still quite a lot of experimentation going on with WebRTC; standardization might prevent this.
The APIs of the native libraries tend to be very similar to the JS API (see the sketch below).
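To illustrate that last point, here is a minimal sketch of the browser-side JS API that the native libraries roughly mirror (the caller side of a session). The signaling helper sendToRemotePeer is hypothetical and stands in for whatever channel you use to exchange SDP and ICE candidates:

    // Caller side of a WebRTC session using the browser JS API.
    // The native C/C++ libraries expose essentially the same concepts:
    // a peer connection, media tracks, an offer/answer exchange and ICE candidates.
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
    });

    // ICE candidates must be sent to the remote peer over your own signaling channel.
    pc.onicecandidate = (event) => {
      if (event.candidate) {
        sendToRemotePeer({ candidate: event.candidate }); // hypothetical signaling helper
      }
    };

    async function call() {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
      stream.getTracks().forEach((track) => pc.addTrack(track, stream));

      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      sendToRemotePeer({ sdp: pc.localDescription }); // hypothetical signaling helper
    }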

Related

Porting web application to Chrome OS (with multi-monitor support)

I'm in the process of porting an application to ChromeOS with the requirement that it should look and feel as native as possible. This means in particular that it should allow things such as multi-monitor support and USB support.
One possibility would be to implement it as a web application (since we have already a web client), but in this case I would need to add support for native features (again, multi-monitor support and USB device access), so I wonder what needs to be done in this case. My wild guess as a ChromeOS developer newbie is that I would need to extend the code with ChromeOS JavaScript features, and I don't know if this is possible.
Another possible approach would be to write an Android application, since I see that ChromeOS added support for Android applications (in this case I would have to write the code from scratch).
Finally, another option would be to write native code, which could be possible for example relying on a Crouton development environment, and reuse the code of a native C application.
What approach would you recommend to build a ChromeOS application starting from a web application or from a native one?
What approach would guarantee access to native features (multi-monitor support and USB)?
I discovered two APIs which should help with multi-monitor support:
https://developer.chrome.com/extensions/windows
https://developer.chrome.com/apps/system_display
The system.display API allows one to discover and monitor the current monitor layout, while the windows API allows one to create several windows in the same application. By combining the two I should be able to create a window for each monitor, if I go with the Chrome implementation route.
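As a rough sketch of that combination (assuming a context, such as a Chrome extension with the "system.display" permission, where both chrome.system.display and chrome.windows are available; index.html is just a hypothetical page of the app):

    // Open one window per connected display.
    chrome.system.display.getInfo((displays) => {
      displays.forEach((display) => {
        const { left, top, width, height } = display.bounds;
        chrome.windows.create({
          url: 'index.html', // hypothetical page of the app
          type: 'popup',
          left,
          top,
          width,
          height
        });
      });
    });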
Given that I already have a native implementation for Linux, Crostini (as opposed to Crouton) is also very appealing, since it provides a deeper level of integration with virtually no changes to the code and no need to maintain two different versions of the web client. The downsides are that it requires the user to create a Linux environment and install the application manually, and that it is not supported on all Chromebook devices (on many it never will be).
I still need to check what the performance overhead is. Also, the level of integration with USB I/O might be better than what is achievable through the Chrome APIs.

What are the pros and cons of implementing webRTC?

I would like to implement a video/audio call feature in the browser. The goal is to allow two users to communicate remotely without having to install any third-party component (by which I mean a piece of software or a browser extension).
I know about WebRTC, which is very popular today and free. However, it is quite difficult to implement and the documentation is hard to understand (not very beginner-friendly).
Here is the official WebRTC documentation, and honestly, where do you even start? https://webrtc.org/start/
If you have experience with WebRTC, could you share its positive and negative points? This would be very useful for the community.
Moreover, if you have experience with another library, I think it would be interesting to hear it.
Today there is no way to develop a call service in a website other than using WebRTC.
The alternatives are:
Use WebRTC
Use Flash (which is... dead)
Use a plugin (which is... dying as a mechanism in browsers)
Use an app you download (not exactly a service in a website)
Node.js is the way to go, but you will need to learn some new technology, especially when it comes to the backend.
The servers you will need are:
1. The traditional web application server
2. A signaling server (the one you plan on using Node.js for - you can use that for the web application server as well; a minimal sketch follows this list)
3. A STUN/TURN server (for NAT traversal)
4. Maybe a media server, depending on your use case
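As a rough idea of what item 2 can look like, below is a minimal sketch of a Node.js signaling relay built on the ws WebSocket package (my assumption; any WebSocket library will do). It simply forwards every message (SDP offers/answers, ICE candidates) to the other connected clients:

    // Minimal signaling relay: run with `node server.js` after `npm install ws`.
    const WebSocket = require('ws');

    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', (socket) => {
      socket.on('message', (message) => {
        // Forward every message (SDP offers/answers, ICE candidates)
        // to every other connected client. No rooms, auth or validation here.
        wss.clients.forEach((client) => {
          if (client !== socket && client.readyState === WebSocket.OPEN) {
            client.send(message.toString());
          }
        });
      });
    });

    console.log('Signaling server listening on ws://localhost:8080');

In a real deployment you would add rooms, authentication and message validation on top of such a relay.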
For some alternative open source and commercial products, you can check this WebRTC Developer Tools Landscape

Launching Chromecast from WebRTC

As indicated in an original question: Cast Sender running on Android Chrome or Firefox
Is there a way to launch Chromecast via WebRTC?
I have no doubt that, once launched, the CC will be as big as the original iPhone, and the only thing preventing full HTML5 functionality is the use of the DIAL protocol.
As I already know this isn't possible, @ali-naddaf, could you enter this as a feature request, by any chance?
WebRTC is working great on our dev CC devices; the only thorn is having to use the Chrome extension to launch it before using WebRTC and WebSockets to take over. Avoiding DIAL for launching pure web apps would be a real killer deal IMO.
BTW, I haven't been able to find a formal way to make feature requests so this Q/A mechanism was my approach.

Is it possible to use WebRTC to streaming video from Server to Client?

In WebRTC, I always see implementations of peer-to-peer and how to get video streaming from one client to another. What about server-to-client?
Is it possible for WebRTC to stream a video file from server to client?
(I am thinking about using WebRTC Native C++ API to create my own server application to connect to the current implementation on chrome or firefox browser client application.)
OK, and if it is possible, will it be faster than many current video streaming services?
Yes, it is possible, as the server can be one of the peers in that peer-to-peer session.
If you respect the protocols and send the video in SRTP packets using VP8, the browser will play it. To help you build these components on other applications or servers, you can check this page and this project as a guide.
Now, comparing WebRTC with other streaming services... it will depend on several variables like the codec or the protocol. But, for instance, comparing WebRTC (SRTP over UDP with the VP8 codec) against Flash (RTMP over TCP with the H264 codec), I would say that WebRTC wins.
The player will be Flash Player versus the native <video> tag.
The transport will be TCP versus UDP.
But of course, everything depends on what you are sending to the client.
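For completeness, here is a minimal sketch of the receiving browser side, where the "player" is just the native <video> element. The helpers handleServerOffer and sendAnswerToServer are hypothetical and represent your own signaling channel to the server:

    // Receive-only browser side: whatever the server sends over the peer
    // connection (e.g. SRTP/VP8) ends up in a plain <video> element.
    const pc = new RTCPeerConnection();

    pc.ontrack = (event) => {
      const video = document.querySelector('video'); // assumes a <video> tag in the page
      if (video.srcObject !== event.streams[0]) {
        video.srcObject = event.streams[0];
      }
    };

    // The server's offer arrives over your own signaling channel.
    async function handleServerOffer(offerSdp) { // hypothetical signaling callback
      await pc.setRemoteDescription({ type: 'offer', sdp: offerSdp });
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      sendAnswerToServer(pc.localDescription); // hypothetical signaling helper
    }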
I have written some apps and plugins using the native WebRTC API, and there isn't a lot of information out there yet, but here are a few useful resources to get you started:
QT Example: http://research.edm.uhasselt.be/jori/qtwebrtc
Native to Browser example: http://sourcey.com/webrtc-native-to-browser-video-streaming-example/
I started with the WebRTC Native C++ to Browser Video Streaming Example, but it no longer builds with the current WebRTC native code.
Then I made modifications, merging into a standalone process:
management of the peer connection (the peerconnection_server)
access to Video4Linux capture (the peerconnection_client).
Removing the stream that goes from the browser to the WebRTC Native C++ client gives a simple way to access a Video4Linux device through a WebRTC browser; it is available on GitHub as webrtc-streamer.
Live Demo
We are attempting to replace MJPEGs with WebRTC in our server software and have a prototype module for doing this, built from a smattering of components tied to the OpenWebRTC project. It has been an absolute bear to do, and we have frequent ICE negotiation errors (even over a simple LAN), but it mostly works.
We also built a prototype with the Google WebRTC module, but it had many dependencies. I find it easier to work with the OpenWebRTC modules because Google's stack is so tightly tied to general peer-to-peer scenarios in the browser.
I compiled the following from scratch:
libnice 0.1.14
gstreamer-sctp-1.0
usrsctp
Then I had to interact with libnice a bit directly to gather candidates, and I also had to write out the SDP files by hand. But the amount of control (being able to control the source of the pipeline) makes it worthwhile. The resulting pipeline (with two clients off one server source) is below:
Of course. I'm writing a program using the native WebRTC API which can join a conference as a peer and record both video and audio.
See: How to stream audio from browser to WebRTC native C++ application
And yes, you can definitely stream media from a native app.
I'm sure you can use dummy_audio_file to stream audio from a local file, and you can find a way to handle the video streaming on your own.
Yes, it is. We have developed a load test tool for Ant Media Server that can both publish and play streams. This tool can broadcast a media file. We used the same native WebRTC library that is used in Ant Media Server.
Sure, it's possible; you can convert a live stream to WebRTC, for example:
OBS/FFmpeg ---RTMP---> Server ---WebRTC--> Chrome/Client
In this scenario you get ultra-low-latency live streaming, about 600~800 ms, when playing the live stream via WebRTC. Please take a look at this demo.

Developing an out-of-process browser plugin on Mac OS X v10.6 -- restriction against platform APIs?

I'm currently developing a browser plugin for Mac OS X 10.6, and I am planning to use the Netscape plugin API (NPAPI) for portability across browsers and architectures. According to Apple's documentation, as of 10.6 such plugins run out of process to improve the integrity of the browser session. What I'm concerned about is the following directive they give in the documentation:
Use platform APIs sparingly. Wherever possible, you should use new
plug-in APIs to do what you need. If no such APIs exist, file bugs requesting them.
I'm not sure what the nature of this directive is. Is this advice to improve portability of the plugin, a reminder that accessing the operating system's other APIs can open up the possibility of crashing the client or corrupting a user's data, or an indication that access to the platform APIs is in some way "broken?"
It's portability advice. NPAPI, although not officially standardized, is fairly stable and already wraps some platform-specific APIs for you.
If you use NPAPI whenever possible, you avoid quite a bit of porting work, as happened relatively recently when Apple effectively deprecated Carbon during the transition to 64-bit.