Porting web application to Chrome OS (with multi-monitor support)

I'm in the process of porting an application to ChromeOS, with the requirement that it should look and feel as native as possible. In particular, this means it should support features such as multi-monitor setups and USB device access.
One possibility would be to implement it as a web application (since we already have a web client), but in that case I would need to add support for native features (again, multi-monitor support and USB device access), so I wonder what that would involve. My wild guess as a ChromeOS developer newbie is that I would need to extend the code with ChromeOS JavaScript features, and I don't know if this is possible.
Another possible approach would be to write an Android application, since ChromeOS has added support for Android applications (in this case I would have to write the code from scratch).
Finally, another option would be to write native code, for example by relying on a Crouton development environment and reusing the code of a native C application.
What approach would you recommend to build a ChromeOS application starting from a web application or from a native one?
What approach would guarantee access to native features (multi-monitor support and USB)?

I discovered two APIs which should help with multi-monitor support:
https://developer.chrome.com/extensions/windows
https://developer.chrome.com/apps/system_display
The system.display API lets one discover and monitor the current display layout, while the windows API lets one create several windows in the same application. By combining the two, I should be able to create a window for each monitor if I go down the Chrome implementation route.
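As a rough illustration, here is a minimal sketch of how the two could combine inside a packaged Chrome App (where the app-window counterpart of the extensions windows API is chrome.app.window; the manifest needs the "system.display" permission, and window.html is a placeholder for the app's own page):
```js
// background.js of a packaged Chrome App (sketch).
chrome.app.runtime.onLaunched.addListener(function () {
  // Enumerate all connected displays via the system.display API.
  chrome.system.display.getInfo(function (displays) {
    displays.forEach(function (display) {
      var area = display.workArea;
      // Open one app window per display, positioned to fill that display.
      chrome.app.window.create('window.html', {
        id: 'monitor-' + display.id,
        outerBounds: {
          left: area.left,
          top: area.top,
          width: area.width,
          height: area.height
        }
      });
    });
  });
});
```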
Given that I already have a native implementation for Linux, Crostini (as opposed to Crouton) is also very appealing, since it provides a deeper level of integration with virtually no changes to the code and no need to maintain two different versions of the web client. The downsides are that it requires the user to create a Linux environment and install the application manually, and that it is not supported on all Chromebook devices (on many it never will be).
I still need to check what the performance overhead is. Also, the level of USB I/O integration might be higher than what is achievable through the Chrome APIs.
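For comparison, this is roughly what device access looks like through the chrome.usb Apps API (a sketch; it requires the "usb" permission in the manifest, and the vendor/product IDs below are placeholders):
```js
// Sketch: enumerating and opening a USB device from a packaged Chrome App.
// The "usb" permission is required; the IDs below are placeholders.
var DEVICE_FILTER = { vendorId: 0x1234, productId: 0x5678 };

chrome.usb.getDevices(DEVICE_FILTER, function (devices) {
  if (!devices || devices.length === 0) {
    console.log('No matching USB device found.');
    return;
  }
  chrome.usb.openDevice(devices[0], function (connection) {
    // `connection` can now be used with chrome.usb.controlTransfer,
    // chrome.usb.bulkTransfer, etc.; close it when done.
    console.log('Opened device, handle: ' + connection.handle);
    chrome.usb.closeDevice(connection);
  });
});
```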

Related

What are the pros and cons of implementing WebRTC?

I would like to implement a video/audio call feature in a browser. The goal is to allow two users to communicate remotely without having to install a third-party component (by which I mean a piece of software or a browser extension).
I know about WebRTC, which is very popular today and free. However, it is quite difficult to implement, and the documentation is hard to follow (not very beginner-friendly).
Here is the official WebRTC documentation, and honestly, where does one even start? https://webrtc.org/start/
If you have experience with WebRTC, could you share its positive and negative points? This would be very useful for the community.
Moreover, if you have experience with another library, I think it would be interesting to hear about it.
There is no way today to develop a call service in a website without using WebRTC.
The alternatives are:
Use WebRTC
Use Flash (which is... dead)
Use a plugin (which is... dying as a mechanism in browsers)
Use an app you download (not exactly a service in a website)
Node.js is the way to go, but you will need to learn some new technology, especially when it comes to the backend.
The servers you will need are:
1. The traditional web application server
2. A signaling server (the one you plan on using Node.js for - you can use it for the web application server as well; a minimal sketch follows this list)
3. A STUN/TURN server (for NAT traversal)
4. Maybe a media server, depending on your use case
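To make the signaling part concrete: signaling is just relaying opaque blobs (SDP offers/answers and ICE candidates) between peers; WebRTC does not mandate any transport. A minimal Node.js sketch using the `ws` WebSocket package (the package choice and the broadcast routing are my own simplifications):
```js
// Minimal WebSocket relay for WebRTC signaling (sketch, `ws` npm package).
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (message) => {
    // Relay every message (offer, answer, ICE candidate) to all other peers.
    // A real server would route by room/peer id instead of broadcasting.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    }
  });
});
```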
For some alternative open source and commercial products, you can check the WebRTC Developer Tools Landscape.

Programmable Control of Niagara

How can an external program control the Tridium Niagara framework? I see two options; which one is correct?
1) Niagara allows the addition of 3rd-party code to provide an API, and someone else has already done that, so we can use it.
2) Niagara allows 3rd-party code to provide an API, but we have to write our own.
Niagara installations can be configured to process many different network control protocols driven by an external process across the network, for example BACnet. The Niagara instance can be configured internally in many different ways to respond to control from across the network.
Niagara 4.x prominently features Web GUIs, including JavaScript client widgets and server-side JavaScript; alternatively, the server can respond to Web GUI activity with its other configuration and scripting methods.
For any real complexity beyond the bundled network drivers or HTTP, 3rd-party modules coded in Java are used. These are typically written as Niagara drivers, processing data over serial or sockets.
Niagara's APIs are mostly open, but Niagara is a complex environment. Completing Tridium's week-long developer training/certification is typically required to produce a proper module.
There are some external APIs that Tridium built into AX: oBIX and BajaScript.
I've written external oBIX programs in both Java and Python to pull data from a remote JACE. You'll have to add the oBIX service and export the points you want to see.
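To give an idea of the effort involved: oBIX is just HTTP plus a small XML vocabulary, so reading a point boils down to a GET against the exported point's href. A rough Node.js sketch (the host, credentials, and point path are placeholders for whatever your station actually exports):
```js
// Sketch: reading one exported point from a Niagara station's oBIX service.
// Host, credentials, and the point path below are placeholders.
const http = require('http');

const options = {
  host: '192.168.1.10',
  path: '/obix/config/Drivers/MyPoint/out/', // href of the exported point
  auth: 'obixUser:obixPass',                 // station credentials
};

http.get(options, (res) => {
  let xml = '';
  res.on('data', (chunk) => { xml += chunk; });
  res.on('end', () => {
    // oBIX returns something like <real name="value" val="72.5"/>;
    // pull out the val attribute.
    const match = /val="([^"]*)"/.exec(xml);
    console.log(match ? match[1] : 'no val attribute in response');
  });
});
```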
BajaScript is a JavaScript library Tridium uses to interact with the system as well. I believe they released BajaScript 2.0 not too long ago. http://www.bajascript.com
If those don't do what you like, you'll more than likely need to write your own API to handle it.

Open standard for native RTC with no plugins

Recently I have been working with WebRTC, and I'm wondering whether it would make more sense to implement a Real Time Communication open standard at the native level.
Let's say that instead of a web browser API we have a native API that any native app, including the browser can leverage.
Part of the promise of WebRTC is to have RTC in the browser without plugins, but why stop there? Why not have RTC on any device with media capabilities, without plugins? There are many devices with media capabilities that will never run a web browser, e.g. wearables. It seems to me that the browser itself has become the plugin, and I think we need to get rid of it as far as RTC is concerned.
It sounds like OpenWebRTC is going in a similar direction but so far they are only working inside the browser.
Are there open standards for native RTC? So far it looks like RTCWeb is only concerned with the browser.
Are there any projects/initiatives for native implementations of an open standard for RTC?
WebRTC definition: WebRTC is split into two complementary but separate parts. The W3C consortium is standardizing a JS API for browsers, named WebRTC. The IETF is standardizing the underlying protocols and what happens on the wire for interoperability; that effort is named rtcweb.
The IETF's rtcweb group defines everything you need to interoperate with a browser without being a browser yourself, i.e. for gateways, devices, .... This was made explicit at the latest meeting in Hawaii last November, and there is, for example, a corresponding draft.
On the client side, the implementation of the WebRTC JS API is done on top of C/C++ implementations. Those "native" (as in non-browser, C/C++) APIs can be used directly for servers, embeddable devices, gateways, etc., or can be wrapped in different languages (Obj-C, Java) to provide "native" (as in mobile) APIs.
Note that BOTH openWebrtc.io and webrtc.org have a full implementation of WebRTC in C/C++ that you can use. OpenWebRTC provides iOS wrappers and WebKit wrappers (for Safari), but does not provide data channel support or ORTC API support, nor does it compile under Windows. webrtc.org supports all desktop OSes and provides wrappers for both iOS and Android. Its build tools are specific to Google's Chrome, though, unlike OpenWebRTC, which uses standard autotools, GitHub, ...
HTH
Currently there is no effort in this direction. The people on the WebRTC standardization committee have their hands full standardizing just the JavaScript API. As you know, the current spec is not final and is still being worked on. And now ORTC will generate even more work.
There are many reasons why no one is currently trying to standardize any form of native RTC. Here are some that come to my mind:
What exactly is native? JavaScript is native for the browsers. The Chrome version of WebRTC is in C++, but the OpenWebRTC one is in C. Android developers mostly use Java, iOS developers use Objective-C. Should there be standards for all these languages? That would take forever.
As I said, the standardization committee already has its hands full.
There is still quite a lot of experimentation that goes on with WebRTC. Standardization may prevent this.
The APIs of the native libraries tend to be very similar to the JS API (see the sketch below).
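To illustrate that last point, here is the canonical offer side of the browser JS API; the C++ classes in webrtc.org (PeerConnection, CreateOffer, SetLocalDescription, ...) mirror it almost one-to-one. The sendToRemotePeer function is a placeholder for whatever signaling channel you use:
```js
// Sketch: offer side of a WebRTC session in the browser JS API.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// ICE candidates trickle in as they are gathered; forward each one
// to the remote peer over the signaling channel.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    sendToRemotePeer({ candidate: event.candidate }); // placeholder
  }
};

pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => sendToRemotePeer({ sdp: pc.localDescription }));
```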

Hybrid desktop/Modern UI apps

As far as I understand, Microsoft wants to make "having both desktop and Modern UI GUIs" available only to web browsers (am I mistaken here?).
Does that mean common apps will be developed twice, with e.g. Skype being available both as a pure desktop app and a pure Modern UI app? And if a user installs both, will the two instances share no data?
I can't imagine them shifting towards gesture-friendly/hybrid UIs while leaving full-blown desktop apps (not toy/phone-like/game apps, which can live in one space only) with no integration or entry points inside the Modern UI. Or maybe they want to participate in that "kill full-blown desktop apps" movement?
So is there a model for a desktop app, developed in whatever GUI toolkit, that wants some minimal integration with a small HTML/CSS/JS frontend in the Modern UI, e.g. providing a dashboard of favorite or recently accessed files, contacts, etc.?
Your first statement of "only in a browser" is not correct. Desktop applications don't change their current design paradigms. You can have browser-based apps on the desktop, of course. But full clients are still supported and still viable as a real solution to problems.
Your takeaway from that comment should be that desktop applications are not deprecated as people assert. The reality is, desktop applications are still the only solutions to many problems.
Your second assumption, about shared data, is not correct either. Skype shares lots of data with its companion app; not because of shared local storage, however, but because of the services it shares. My account and contacts are on the server. So, they share a lot.
Your takeaway from that comment should be that Windows 8 apps should not highly leverage local storage but should be built as service-oriented clients. To that end, your desktop applications should have already started to leverage this architecture, too.
Your third question (which is very cryptic) seems to be asking whether a desktop application and a companion Windows 8 app can share or integrate with each other. The answer is yes. Not only can they share the same service, but file associations, custom protocols, and some of the non-Store manifest capabilities allow for this explicitly. Line-of-business applications should have a companion app, if you ask me. The integration points are many, though not exhaustive. But there is no other way to leverage the new capabilities of Windows 8 without introducing a companion app, even if that app does very little (a sketch of the protocol-activation entry point follows below).
Your takeaway from that comment should be that Windows desktop applications and companion Windows apps are the preferred and anticipated development approach.
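For instance, the custom-protocol integration point looks roughly like this on the Windows 8 app side in WinJS (the "mycompanion" protocol name is a placeholder and must also be declared in the package manifest; the desktop application can then launch the app via a "mycompanion:..." URI, e.g. with ShellExecute):
```js
// Sketch: handling activation via a custom protocol in a WinJS app.
var activation = Windows.ApplicationModel.Activation;

WinJS.Application.onactivated = function (args) {
  if (args.detail.kind === activation.ActivationKind.protocol) {
    // args.detail.uri carries whatever the desktop app put in the URI,
    // e.g. "mycompanion:dashboard?file=recent".
    var uri = args.detail.uri;
    console.log('Activated via protocol: ' + uri.rawUri);
  }
};

WinJS.Application.start();
```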
Best of luck, thanks for the question.

Taking control of downloads without using Browser Extensions

I've seen download manager programs, including IDM, take control of downloads in browsers without having extensions installed in them; they call this "Advanced Browser Integration".
I was wondering if anyone can suggest an approach for a similar situation?
IDM only works on Windows and does its Advanced Browser Integration tricks using the Windows Filtering Platform, which is a Windows-specific service.
If you want to do something similar on Windows, you should study that platform.
On Unix systems, as far as I know, there isn't anything like the Windows Filtering Platform. Packet filtering and other firewall-like functionality happens in the kernel, and there are multiple implementations of it: which one is running (if any) depends on how the user decided to configure the system (even if ipf is almost guaranteed to be the one used on BSD and BSD-derived systems).
On Mac OS X specifically, you probably want to check Network Kernel Extensions. I'm not sure they are sufficient for what you want to do, but I suspect they are.