As indicated in an original question: Cast Sender running on Android Chrome or Firefox
Is there a way to launch Chromecast via WebRTC?
I have no doubt that when launched, Chromecast will be as big as the original iPhone, and the only thing preventing full HTML5 functionality is the use of the DIAL protocol.
As I already know this isn't possible, #ali-naddaf, could you enter this as a feature request, by any chance?
WebRTC is working great on our dev CC devices, the only thorn is having to use the Chrome extension to launch it before using WebRTC and WebSockets to take over. Avoiding DIAL for launching pure web apps would be a real killer deal IMO.
BTW, I haven't been able to find a formal way to make feature requests so this Q/A mechanism was my approach.
Related
I'm trying to build a live video streaming application from a USB camera to an application running on a remote desktop. I've researched protocols like RTMP, RTSP, and WebRTC. According to my understanding, I can't use WebRTC since it only works in the browser, and I'm not building my application for a browser here. Please help me choose the right protocol and also the media server.
You can, and many applications do, use WebRTC outside the browser. WebRTC implementations are available for many different platforms including iOS, Android and embedded systems.
You can even use Headless Chrome if you want to use the Chrome APIs without the visual parts of the browser.
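As a rough sketch of that headless route, assuming the puppeteer package (any DevTools-protocol client would do) and Chrome's fake-media-device test flags; the URL is a placeholder:

// Sketch only: drive a headless Chrome that can run WebRTC pages.
// puppeteer and the placeholder URL are assumptions for illustration.
import puppeteer from 'puppeteer';

async function main(): Promise<void> {
  const browser = await puppeteer.launch({
    headless: true,
    args: [
      '--use-fake-ui-for-media-stream',     // auto-accept the camera/mic prompt
      '--use-fake-device-for-media-stream', // synthetic camera/mic for testing
    ],
  });

  const page = await browser.newPage();
  await page.goto('https://example.com/your-webrtc-app'); // placeholder URL

  // From here the page can create RTCPeerConnections exactly as it
  // would in a visible browser; close the browser when you are done.
  // await browser.close();
}

main().catch(console.error);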
I would like to implement a video / audio call feature from a browser. The goal is to allow two users to communicate remotely without having to install a third part (when I say third part, I'm talking about a software or an extension on a browser).
I know about WebRTC, which is very popular today and free. However, it is very difficult to implement, and the documentation is hard to understand (not very easy for a beginner).
Here is the official WebRTC documentation, and honestly, where do you even start? https://webrtc.org/start/
If you have experience with WebRTC, could you share it, including the positive or negative points? This would be very useful for the community.
Moreover, if you have experience with another library, I think it would be interesting to hear about it.
There is no way today to develop a call service in a website other than using WebRTC.
The alternatives are:
Use WebRTC
Use Flash (which is... dead)
Use a plugin (which is... dying as a mechanism in browsers)
Use an app you download (not exactly a service in a website)
Node.js is the way to go, but you will need to learn some new technology, especially when it comes to the backend.
The servers you will need are:
1. The traditional web application server
2. A signaling server (the one you plan on using Node.js for - you can use that for the web application server as well; a minimal sketch follows this list)
3. A STUN/TURN server (for NAT traversal)
4. Maybe a media server, depending on your use case
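To make the signaling piece (item 2) a little more concrete, here is a minimal relay sketch for Node.js using the ws package; the join/offer/answer/candidate message shape is only an assumption for illustration, not any standard:

// Minimal WebRTC signaling relay (sketch, not production-ready).
// Assumes the `ws` package; the message shape is purely illustrative.
import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 });
const rooms = new Map<string, Set<WebSocket>>();

wss.on('connection', (socket) => {
  let room: Set<WebSocket> | undefined;

  socket.on('message', (data) => {
    const msg = JSON.parse(data.toString());

    if (msg.type === 'join') {
      // Put the socket in a named room so peers can find each other.
      room = rooms.get(msg.room) ?? new Set();
      rooms.set(msg.room, room);
      room.add(socket);
      return;
    }

    // Relay SDP offers/answers and ICE candidates to the other peer(s).
    room?.forEach((peer) => {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify(msg));
      }
    });
  });

  socket.on('close', () => room?.delete(socket));
});

The same Node.js process could also serve the web application itself (item 1), which is one reason Node.js is a convenient choice here.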
For some alternative open source and commercial products, you can check this WebRTC Developer Tools Landscape
I have a chrome extension running in my browser. I also have a Mac OSX app I wrote in Swift/Objective-c in Xcode. I am wondering how this chrome extension can talk to the Mac OSX app on the same computer.
I am aware of the Chrome Extension API, but I do not know how I can capture, in Swift, the information that is sent by Chrome. Does anyone know how to do this?
Thanks
There are two broad approaches you can take.
Native Messaging API. This does have the limitation that Chrome must launch the process (and communicate with it via STDIO) - you cannot attach to an existing process. The upside: the communication channel is pretty secure.
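As a sketch of the extension side of this first approach (the host name com.example.mac_app is a placeholder that must match the native messaging host manifest installed next to your Mac app; Chrome delivers each message to the native process as length-prefixed JSON over STDIN):

// Extension-side sketch of the Native Messaging approach.
// "com.example.mac_app" is a placeholder host name.
const nativePort = chrome.runtime.connectNative('com.example.mac_app');

nativePort.onMessage.addListener((msg) => {
  console.log('Reply from the Mac app:', msg);
});

nativePort.onDisconnect.addListener(() => {
  console.log('Native host exited:', chrome.runtime.lastError?.message);
});

// Chrome serializes this as JSON, preceded by a 32-bit length,
// and writes it to the native process's STDIN.
nativePort.postMessage({ command: 'ping' });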
Your native app can expose a web server (or better yet, a WebSockets server) on a local port. The extension can then try to connect to this port and talk to your app. The downside is that anything (at least on the machine) can connect to your native app.
This is a frequently used approach; for example, 1Password or various IDE integrations work this way.
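For the second approach, the extension side is just a plain WebSocket client; the port below is an arbitrary placeholder for whatever port the Mac app's local server listens on:

// Extension-side sketch of the local WebSocket approach.
// Port 8766 is an arbitrary placeholder.
const ws = new WebSocket('ws://127.0.0.1:8766');

ws.addEventListener('open', () => {
  ws.send(JSON.stringify({ command: 'ping' }));
});

ws.addEventListener('message', (event) => {
  console.log('Reply from the Mac app:', event.data);
});

ws.addEventListener('error', () => {
  // The app is probably not running, or something blocked the port.
  console.log('Could not reach the native app on localhost.');
});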
You could combine the two approaches, using a "launcher" Native Host to start the app if it's not running.
Recently I have been working with WebRTC, and I'm wondering if it would make more sense to implement an open Real-Time Communication standard at the native level.
Let's say that instead of a web browser API we have a native API that any native app, including the browser can leverage.
Part of the promise of WebRTC is to have RTC on the browser without plugins but why stop there, why not have RTC on any device with media capabilities without plugins. There are many devices with media capabilities that will not run a web browser, e.g., wearables. It seems to me that the browser itself has become the plugin and I think we need to get rid of it as far as RTC is concerned.
It sounds like OpenWebRTC is going in a similar direction but so far they are only working inside the browser.
Are there open standards for native RTC? So far it looks like RTCWeb is only concerned with the browser.
Are there any projects/initiatives for native implementations of an open standard for RTC?
WebRTC definition: WebRTC is split into two parts, complementary but separate. The W3C consortium is standardizing a JS API for browsers, named WebRTC. The IETF is standardizing the underlying protocols and what happens on the wire for interoperability; that work is named rtcweb.
The IETF's rtcweb group defines everything you need to interoperate with a browser without being a browser yourself, e.g. for gateways and devices. This was made explicit at the latest meeting in Hawaii last November, and there is, for example, a corresponding draft.
On the client side, the implementation of the WebRTC JS API is done on top of C/C++ implementations. Those "native" (as in non-browser, C/C++) APIs can be used directly for servers, embeddable devices, gateways, etc., or can be wrapped in different languages (Obj-C, Java) to provide "native" (as in mobile native) APIs.
Note that BOTH openWebrtc.io and webrtc.org have a full implementation of WebRTC in C/C++ that you can use. OpenWebRTC provides iOS wrappers and WebKit wrappers (for Safari), but does not provide data channel support or ORTC API support, and does not compile under Windows. webrtc.org supports all desktop OSes and provides wrappers for both iOS and Android. Its build tools are specific to Google's Chrome, though, unlike OpenWebRTC, which uses standard autotools, GitHub, and so on.
HTH
Currently there is no effort in this direction. The people on the WebRTC standardization committee have their hands full standardizing just the JavaScript API. As you know, the current spec is not final and is still being worked on. And now ORTC will generate even more work.
There are many reasons why no one is currently trying to standardize any form of native RTC. Here are some that come to my mind:
What exactly is native? JavaScript is native for the browsers. The Chrome version of WebRTC is in C++, but the OpenWebRTC one is in C. Android developers mostly use Java, iOS developers use Objective-C. Should there be standards for all these languages? That's going to take forever.
As I said, the standardization committee already has its hands full.
There is still quite a lot of experimentation that goes on with WebRTC. Standardization may prevent this.
The API of the native libraries tends to be very similar to the JS API (a browser-side sketch is shown below).
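For instance, here is a minimal sketch of the browser JS API; the native libraries expose essentially the same concepts (peer connection, offer/answer, ICE candidates) under very similar names. The sendToSignaling function and the STUN server URL are placeholders for whatever signaling and ICE setup you actually use:

// Placeholder for whatever signaling transport you use (WebSocket, etc.).
declare function sendToSignaling(msg: unknown): void;

// The native libraries expose the same flow: create a peer connection,
// add tracks, create an offer, set the local description, exchange ICE.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

pc.onicecandidate = (event) => {
  if (event.candidate) {
    sendToSignaling({ type: 'candidate', candidate: event.candidate });
  }
};

async function startCall(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignaling({ type: 'offer', sdp: offer.sdp });
}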
In WebRTC, I always see peer-to-peer implementations and how to get video streaming from one client to another. What about server-to-client?
Is it possible for WebRTC to stream a video file from server to client?
(I am thinking about using the WebRTC Native C++ API to create my own server application that connects to the current client implementation running in a Chrome or Firefox browser.)
OK, if it is possible, will it be faster than many current video streaming services?
Yes it is possible as the server can be one of the peers in that peer-to-peer session.
If you respect the protocols and send the video in SRTP packets using VP8, the browser will play it. To help you build these components on other applications or servers, you can check this page and this project as a guide.
Now, comparing WebRTC with other streaming services... It will depend on several variables like the Codec or the protocol. But, for instance, comparing WebRTC (SRTP over UDP with VP8 Codec) against Flash (RTMP over TCP with H264 Codec), I would say that WebRTC wins.
The player will be Flash Player against the native <video> tag.
The transport would be TCP against UDP.
But of course, everything depends on what you are sending to the client.
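On the browser side of such a server-to-client session, the setup is just a receive-only peer connection. Here is a minimal sketch; the signaling exchange with the server is assumed and left out:

// Browser-side sketch: receive a stream from a server acting as the
// remote peer. How the offer/answer reaches the server depends on your
// signaling (WebSocket, HTTP POST, etc.) and is omitted here.
async function watchServerStream(): Promise<void> {
  const pc = new RTCPeerConnection();

  // Receive-only: the browser does not send any media to the server.
  pc.addTransceiver('video', { direction: 'recvonly' });
  pc.addTransceiver('audio', { direction: 'recvonly' });

  pc.ontrack = (event) => {
    const video = document.querySelector('video');
    if (video) {
      video.srcObject = event.streams[0];
    }
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Deliver the offer to the server over your own signaling and apply
  // whatever answer it returns (details depend on the server).
  // await pc.setRemoteDescription(answerFromServer);
}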
I have written some apps and plugins using the native WebRTC API, and there isn't a lot of information out there yet, but here are a few useful resources to get you started:
QT Example: http://research.edm.uhasselt.be/jori/qtwebrtc
Native to Browser example: http://sourcey.com/webrtc-native-to-browser-video-streaming-example/
I started with the WebRTC Native C++ to Browser Video Streaming Example, but it does not build anymore against the current WebRTC native code.
Then I made modifications, merging everything into a standalone process:
management of the peerConnection (the peerconnection_server)
access to Video4Linux capture (the peerconnection_client).
Removing the stream from the browser to the WebRTC Native C++ client gives a simple solution for accessing a Video4Linux device through a WebRTC browser; it is available on GitHub as webrtc-streamer.
Live Demo
We are attempting to replace MJPEGs with Webrtc for our server software and have a prototype module for doing this using a smattering of components tied to the Openwebrtc project. It has been an absolute bear to do, and we have frequent ICE negotiation errors (even over a simple LAN), but it mostly works.
We also built a prototype with the Google Webrtc module, but it had many dependencies. I find it easier to work with the Openwebrtc modules because Google's stuff is so tightly tied to general peer-to-peer scenarios on the browser.
I compiled the following from scratch:
libnice 0.1.14
gstreamer-sctp-1.0
usrsctp
Then I have to interact with libnice a bit directly to gather candidates, and also write out the SDP files by hand. But the amount of control (being able to control the source of the pipeline) makes it worthwhile. The resulting pipeline (with two clients off one server source) is below:
Of course. I'm writing a program using the native WebRTC API which can join a conference as a peer and record both video and audio.
see: How to stream audio from browser to WebRTC native C++ application
and you can definitely stream media from a native app.
I'm sure you can use dummy_audio_file to stream audio from a local file, and you can find a way to handle the video streaming part on your own.
Yes, it is. We have developed a load test tool to publish and play streams for Ant Media Server. This tool can broadcast a media file. We used the same native WebRTC library that is used in Ant Media Server.
Sure it's possible; you can convert a live stream to WebRTC, for example:
OBS/FFmpeg ---RTMP---> Server ---WebRTC---> Chrome/Client
For this scenario, it allows ultra-low-latency live streaming, about 600~800 ms, when playing the live stream over WebRTC. Please take a look at this demo.