I'm looking to use Chromium OS for a specific business application, but I need access to local serial and USB ports. My reading of the Chromium docs says NPAPI plugins are not supported in Chrome OS, only PPAPI (Pepper). I'm a bit confused about PPAPI, as all the docs discuss it in the context of Native Client, which, being a sandboxed environment, cannot access local resources.
So my basic question is: Is it possible to write a PPAPI browser plugin that will work like a regular NPAPI browser plugin to access local resources?
Currently, no. NPAPI plugins are on the way out, Native Client is a sandboxed environment, and Pepper's interfaces are designed to mirror those available elsewhere on the web platform (e.g., JavaScript). So a good place to look for future interfaces would be draft web standards and new HTML5/JS APIs, e.g. for gamepads (https://wiki.mozilla.org/GamepadAPI) or cameras (http://developers.whatwg.org/video-conferencing-and-peer-to-peer-communication.html).
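For instance, the draft Gamepad API linked above is exposed directly to page script with no plugin or Native Client module involved; a minimal sketch (browser support varies by version):

```typescript
// Minimal sketch of the (draft) Gamepad API from page script -- no plugin
// required; support varies by browser and version.
window.addEventListener("gamepadconnected", (event: GamepadEvent) => {
  console.log(`Gamepad connected: ${event.gamepad.id}`);
});

// Gamepad state is polled rather than event-driven, so sample it once
// per animation frame.
function pollGamepads(): void {
  for (const pad of navigator.getGamepads()) {
    if (pad) {
      console.log(pad.id, "axes:", pad.axes, "buttons pressed:",
        pad.buttons.filter((b) => b.pressed).length);
    }
  }
  requestAnimationFrame(pollGamepads);
}
requestAnimationFrame(pollGamepads);
```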
Related
I'm trying to build a live video streaming application from a USB camera to an application running on a remote desktop. I've researched protocols like RTMP, RTSP, and WebRTC. As I understand it, I can't use WebRTC, since it only works in the browser and I'm not building my application for a browser. Please help me choose the right protocol, and also a media server.
You can, and many applications do, use WebRTC outside the browser. WebRTC implementations are available for many different platforms including iOS, Android and embedded systems.
You can even use Headless Chrome if you want to use the Chrome APIs without the visual parts of the browser.
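For example, a plain Node.js process with no browser at all can drive a peer connection through the community node-webrtc bindings. A minimal sketch, assuming the "wrtc" npm package is installed (the package choice here is illustrative, not the only option):

```typescript
// Minimal sketch using the community "wrtc" (node-webrtc) package --
// an assumption: one of several non-browser WebRTC implementations.
import { RTCPeerConnection } from "wrtc";

async function createOffer(): Promise<string> {
  const pc = new RTCPeerConnection();
  // Creating a data channel is enough to drive SDP generation here;
  // a real camera-streaming app would add media tracks instead.
  pc.createDataChannel("demo");
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // WebRTC does not define signaling: this SDP would be sent to the
  // remote peer over a channel of your own (HTTP, WebSocket, ...).
  return pc.localDescription!.sdp;
}

createOffer().then((sdp) => console.log(sdp)).catch(console.error);
```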
I have a Chrome extension running in my browser. I also have a macOS app I wrote in Swift/Objective-C in Xcode. I am wondering how this Chrome extension can talk to the macOS app on the same computer.
I am aware of the Chrome extension APIs, but I don't know how I can capture the information that Chrome sends in Swift. Does anyone know how to do this?
Thanks
There are two broad approaches you can take (both are sketched in the code below).
1. The Native Messaging API. This does have the limitation that Chrome must launch the process (and communicate with it via stdio); you cannot attach to an already-running process. The upside: the communication channel is fairly secure.
2. Your native app can expose a web server (or, better yet, a WebSocket server) on a local port. The extension can then connect to this port and talk to your app. The downside is that anything (at least anything on the machine) can connect to your native app.
This is a frequently used approach; 1Password and various IDE integrations work this way, for example.
You could combine the two approaches, using a small "launcher" native messaging host to start the app if it's not already running.
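A minimal sketch of both approaches from the extension side; the host name "com.example.myapp" and port 9234 are placeholders you'd replace with your own:

```typescript
// Approach 1: Native Messaging. "com.example.myapp" is a placeholder; it
// must match the name in the native messaging host manifest installed on
// the machine. Chrome launches the host itself and talks to it over stdio.
chrome.runtime.sendNativeMessage(
  "com.example.myapp",
  { command: "ping" },
  (response) => {
    if (chrome.runtime.lastError) {
      console.error("Host error:", chrome.runtime.lastError.message);
      return;
    }
    console.log("Native app replied:", response);
  }
);

// Approach 2: connect to a local WebSocket server exposed by the native
// app. Port 9234 is arbitrary; remember that any local process can also
// connect, so the app should authenticate its clients.
const socket = new WebSocket("ws://127.0.0.1:9234");
socket.onopen = () => socket.send(JSON.stringify({ command: "ping" }));
socket.onmessage = (event) => console.log("App replied:", event.data);
```

On the native messaging side, Chrome writes each message to the host's stdin as a 32-bit message length in native byte order followed by that many bytes of UTF-8 JSON, and expects replies on stdout in the same framing; a Swift host would implement this as a read loop over FileHandle.standardInput.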
I already have a Java application, delivered as a Java applet from a web page, that interacts with a USB storage device to save and retrieve data.
Since Chrome has stopped supporting applets that run outside the sandbox, I need a new technique for interacting with the USB storage device from my web page. My application architect does not want to do this with JNLP, which is one option I have learned about from reading some articles.
I would like help building an application that my web page can use to access a USB storage device (pen drive), independent of the operating system and of the browser (cross-browser support: it should work at least on IE, Firefox, Chrome, and Safari).
I am OK with a native (OS-dependent) application, as long as there is a way to communicate with it from any browser installed on that machine.
I have recently been working with WebRTC, and I'm wondering whether it would make more sense to implement an open real-time communication standard at the native level.
Let's say that instead of a web browser API we had a native API that any native app, including the browser, could leverage.
Part of the promise of WebRTC is to have RTC on the browser without plugins but why stop there, why not have RTC on any device with media capabilities without plugins. There are many devices with media capabilities that will not run a web browser, e.g., wearables. It seems to me that the browser itself has become the plugin and I think we need to get rid of it as far as RTC is concerned.
It sounds like OpenWebRTC is going in a similar direction but so far they are only working inside the browser.
Are there open standards for native RTC? So far it looks like RTCWeb is only concerned about the browser.
Are there any projects/initiatives for native implementations of an open standard for RTC?
WebRTC definition: WebRTC is split into two parts, complementary but separate. The W3C is standardizing a JS API for browsers, named WebRTC. The IETF is standardizing the underlying protocols and what happens on the wire for interoperability; that effort is named rtcweb.
The IETF's rtcweb group defines everything you need to interoperate with a browser without being a browser yourself, i.e. for gateways, devices, .... This was made explicit at the latest meeting in Hawaii last November, and there is, for example, a corresponding draft.
On the client side, the WebRTC JS API is implemented on top of C/C++ implementations. Those "native" (as in non-browser, C/C++) APIs can be used directly for servers, embedded devices, gateways, etc., or can be wrapped in different languages (Objective-C, Java) to provide "native" (as in mobile-native) APIs.
Note that BOTH openwebrtc.io and webrtc.org have a full implementation of WebRTC in C/C++ that you can use. OpenWebRTC provides iOS wrappers and WebKit wrappers (for Safari), but does not provide data channel support or ORTC API support, and does not compile under Windows. webrtc.org supports all desktop OSes and provides wrappers for both iOS and Android. Its build tools are specific to Google's Chrome, though, unlike OpenWebRTC, which uses standard autotools, GitHub, ...
HTH
Currently there is no effort in this direction. The people on the WebRTC standardization committee have their hands full standardizing just the JavaScript API. As you know, the current spec is not final and is still being worked on, and now ORTC will generate even more work.
There are many reasons why no one is currently trying to standardize any form of native RTC. Here are some that come to my mind:
What exactly is native? JavaScript is native for browsers. The Chrome version of WebRTC is in C++, but the OpenWebRTC one is in C. Android developers mostly use Java; iOS developers use Objective-C. Should there be standards for all these languages? That would take forever.
As I said, the standardization committee already has its hands full.
There is still quite a lot of experimentation that goes on with WebRTC. Standardization may prevent this.
The APIs of the native libraries tend to be very similar to the JS API.
I'm currently developing a browser plugin for Mac OS X 10.6 and am planning to use the Netscape API (NPAPI) for portability across browsers and architectures. According to Apple's documentation, as of 10.6 such plugins run out-of-process to improve the integrity of the browser session. What concerns me is the following directive they give in the documentation:
Use platform APIs sparingly. Wherever possible, you should use new plug-in APIs to do what you need. If no such APIs exist, file bugs requesting them.
I'm not sure what the nature of this directive is. Is this advice to improve the portability of the plugin, a reminder that accessing the operating system's other APIs can open up the possibility of crashing the client or corrupting a user's data, or an indication that access to the platform APIs is in some way "broken"?
It's portability advice. NPAPI is, although not officially standardized, fairly stable and already wraps some platform-specific APIs for you.
If you use NPAPI whenever possible, you avoid quite a bit of porting work, as happened relatively recently when Apple effectively deprecated Carbon in the transition to 64-bit.