Capabilities of Airwatch's Workspace ONE Web browser

My company is evaluating Airwatch for Mobile Device Management. We have some internal web apps (PWAs using service workers). In other posts I read that administrators may restrict the use of the devices' default browsers, Safari and Chrome, and only allow Airwatch's Workspace ONE Web browser for internal web pages.
Now, my questions are: Does the Workspace ONE Web browser support Progressive Web Apps with service workers?
Additionally, is this browser based on another one, so that I can easily check what Workspace ONE Web is capable of (e.g. on caniuse)?

I would recommend that you review the capabilities directly from VMware. Starting here: https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/9.7/vmware-airwatch-guides-97/GUID-AW97-Features_Matrix.html
PWAs may function differently in Workspace ONE Web if specific controls are implemented that could impede PWA functionality, such as restrictions on local storage. However, you'll likely be able to determine whether this is a risk to your project by reading the documentation.
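If the documentation leaves any doubt, a quick empirical check is to serve a small test page from one of your internal origins and open it in Workspace ONE Web. Below is a minimal sketch in TypeScript that simply feature-detects the PWA building blocks in question; the `/sw.js` path is a placeholder for an empty worker script you would host yourself.

```typescript
// Minimal capability probe for a browser under evaluation (e.g. Workspace ONE Web).
// Serve this from an internal HTTPS origin and open it in the browser being tested.

function report(feature: string, supported: boolean): void {
  // Log to the console; a real test page might also append the result to the DOM.
  console.log(`${feature}: ${supported ? "supported" : "NOT supported"}`);
}

report("Service Worker API", "serviceWorker" in navigator);
report("Cache Storage API", "caches" in window);
report("IndexedDB", "indexedDB" in window);
report("Push API", "PushManager" in window);
report("Install prompt (beforeinstallprompt)", "onbeforeinstallprompt" in window);

// Optionally try to actually register a worker (assumes an sw.js file at the site root).
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .then((reg) => console.log("Service worker registered, scope:", reg.scope))
    .catch((err) => console.log("Service worker registration failed:", err));
}
```

Running the same page in Safari or Chrome alongside Workspace ONE Web gives you a direct comparison even without knowing which engine it wraps.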


How do I launch/publish my website? ASP.NET Core

I'm new to web development and just built my first website with .Net Core. It's primarily HTML, CSS, and JavaScript with a little C# for a contact form.
Without recommending any specific service providers (the question would be taken down), how do I go about deploying the website? The more details the better, as I have no idea what I'm doing, haha.
Edit: I am definitely going to go with a service provider; however, the business I am building the website for doesn't have a large budget, so I want to find the best provider at the lowest cost.
Daniel,
As you suspect, this is a bit of a loaded question, as there are so many approaches. One approach is to use App Services within Microsoft Azure. You can create a free trial Azure account to start that includes a $200.00 credit, which is more than enough to do all of this for free. Then, using the Azure Management Portal, create an App Service (also free) on an App Service Plan in a region that makes sense for you (e.g. US West). Once you do that, you can download what is called a Publish Profile from within the App Service's Management Portal in Azure.
If you're using Visual Studio, for example, you can then right-click your project and "Publish" it (deploy it to the cloud, i.e. the App Service you just created). One option in that process is to import an Azure Publish Profile, which you can do with the one you just downloaded. This makes it really simple. The Publish Profile is really just connection information for your Azure App Service (open it in Notepad to see). It will chug for a bit and then publish and load the app for you. You can also get to the hosted version of your app by clicking the URL of the app on the main page of the App Service management portal.
This may be oversimplifying what you need to do, but this is a valid direction to take. AWS and others have similar approaches.
Again, there are tons of ways to do this, but this is a free approach. :-) I don't consider Azure a service provider in the sense you asked us to avoid. Instead, I wanted to outline one turn-key approach with specific details on how to get there.
You can find specific steps in a lot of places, such as this link:
https://www.geeksforgeeks.org/deploying-your-web-app-using-azure-app-service/
DanielG's answer is useful, but you mentioned you don't want to use any services from a service provider.
Usually, there are only three ways to deploy the application:
The first is the app service offered by a service provider, as DanielG mentioned.
**Benefits of using service provider products:**
1. Very friendly to newbies; just follow the documentation to deploy the application in a few minutes.
2. It offers a very stable, scalable service that monitors the health of your website.
3. You can get their technical support.
**Shortcoming**
It is a paid service, and although Azure has a free quota, it will run out.
**Suggestion**
Officially launched (production) websites should use the services of a service provider.
The second is to host it yourself on a fixed IP address (though it seems fixed IPv4 addresses are rarely offered by network operators anymore).
**Benefits of using a fixed IP:**
If you have a fixed IP address, or your carrier supports IPv6, you can deploy the website yourself and make it accessible from the public network. If you also have a domain, it can support HTTPS as well.
**Shortcomings**
1. There are cybersecurity risks, and you are more vulnerable to attack.
2. Without proper website health monitoring, you have to investigate every problem yourself, and elastic scaling is very troublesome to achieve.
**Suggestion**
It is generally not recommended, because under normal circumstances there is no fixed IP. Broadband operators used to offer one, but most no longer do.
If you are interested, you can experiment with IPv6.
The last is to use a tunneling (NAT traversal) tool such as ngrok or frp.
**Benefits of using a tunneling tool:**
Free tunneling services such as ngrok generate a different URL on each run and have some limitations (for example, a new URL is issued after a certain period of time), but that is enough for testing.
Of course, you can purchase the paid tier of such a tool, which provides fixed URLs and supports HTTPS.
**Shortcomings (same as the second option)**
**Suggestion**
Functionally this is the same as the second option: the website still runs on your own hardware, and the tunneling tool (ngrok or frp) solves the lack of a fixed IP by providing a URL you can reach.
If there are few users and the demand on the web service is not high, this is a reasonable choice for individual users or small businesses, e.g. for internal office (OA) tools.

Which Google OAuth flow is good for a web application that is not reachable from the internet?

I wrote a small application which needs access to a Google API with OAuth2 and which typically runs on something like a Raspberry Pi to reduce power consumption, since it needs to run pretty much 24/7. The device my application runs on is typically connected to a LAN at home.
The user controls the application using a web interface from a PC/Tablet/... in the same network. However, the web application is reachable from the LAN only; it cannot (and, for security reasons, should not) be accessed over the Internet, because it sits behind a NAT and/or a firewall.
The documentation states that I have the following options:
Web server applications
This forces me to use a redirect URL which must be known in advance. Since my app is most likely accessed via a dynamic private IP address, there is no way to know the URL in advance.
Installed applications
Yes, that would work. I just need people to copy and paste the returned code into a web form of mine. However that is somewhat uncool.
Client-side (JavaScript) applications
This does not give me a refresh token which I totally need.
Applications on limited-input devices
Polling? Well... if it works... However, it requires the user to match a code shown on the device with a code displayed in the web browser. If I use that, I might just as well ask the user to copy & paste the code returned by the installed-app mode.
As far as I can see, copying & pasting the code with the installed-app flow is my best chance. Is it really? Or is there a possibility to get along without that step?
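For what it's worth, the "limited-input devices" (device) flow can be driven entirely by the app itself, so the user only types a short code on Google's verification page instead of copy/pasting an OAuth response into your form, and you do get a refresh token. A rough sketch in TypeScript (Node 18+ for the global fetch; the client ID/secret and scope are placeholders, and note that Google only permits this flow for a limited set of scopes):

```typescript
// Sketch of Google's OAuth "limited-input device" flow, run by the app on the
// Raspberry Pi itself. CLIENT_ID / CLIENT_SECRET are placeholders for your own
// OAuth client of type "TVs and Limited Input devices".
const CLIENT_ID = "your-client-id.apps.googleusercontent.com";
const CLIENT_SECRET = "your-client-secret";
const SCOPE = "https://www.googleapis.com/auth/drive.readonly"; // example scope

async function deviceFlow(): Promise<void> {
  // Step 1: ask Google for a device code + user code.
  const codeRes = await fetch("https://oauth2.googleapis.com/device/code", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ client_id: CLIENT_ID, scope: SCOPE }),
  });
  const code = await codeRes.json();

  // Step 2: show these on your LAN-only web UI instead of a copy/paste form.
  console.log(`Visit ${code.verification_url} and enter code ${code.user_code}`);

  // Step 3: poll the token endpoint until the user has approved (or it expires).
  while (true) {
    await new Promise((r) => setTimeout(r, (code.interval ?? 5) * 1000));
    const tokenRes = await fetch("https://oauth2.googleapis.com/token", {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        client_id: CLIENT_ID,
        client_secret: CLIENT_SECRET,
        device_code: code.device_code,
        grant_type: "urn:ietf:params:oauth:grant-type:device_code",
      }),
    });
    const token = await tokenRes.json();
    if (token.access_token) {
      // token.refresh_token is included here, which is what the question needs.
      console.log("Got tokens:", Object.keys(token));
      return;
    }
    if (token.error !== "authorization_pending" && token.error !== "slow_down") {
      throw new Error(`Device flow failed: ${token.error}`);
    }
  }
}

deviceFlow().catch(console.error);
```

Whether this is less "uncool" than copy & paste is a matter of taste, but it avoids the redirect-URL problem entirely.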

Web UI to manage computers in the network

I'm looking for a platform with a Web UI that allows me to do the following:
Maintain a list of computers and add/remove them based on their IP address.
Provide the SSH information for each machine.
Monitor whether the machines are up (ping?).
Restart the machines from the web UI, using the SSH information on the backend of the application.
I'm close to starting to build such an app myself, since I can't seem to find anything like it on the internet. Any clues as to whether such an application exists?
You might want to take a look at MeshCentral: https://meshcentral.com/ - you can add systems that you are managing and do some remote operations.
http://info.meshcentral.com/: MeshCentral is open source and is both a peer-to-peer technology with a wide array of uses and a web service targeted at remote monitoring and management of computers and devices. Users can manage all their devices from a single web site, no matter where the computers are located or whether they are behind routers or proxies.
If you are looking for source code, you could take a look at the "Open Manageability Developer's Toolkit": http://opentools.homeip.net/open-manageability. This tool was built for managing systems with Intel Active Management Technology, but it does a lot of what you are looking for. You can download the source and see if you can use any of it if you decide to write your own UI.
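If you do decide to roll your own, the backend part of your list is fairly small. A rough sketch in TypeScript (Node.js with the `ssh2` package; the host entry and credentials are made up, and the "up" check uses a TCP probe of the SSH port rather than a real ICMP ping, which would need elevated privileges):

```typescript
// Rough sketch of the backend described in the question: keep a list of machines,
// check whether they are up, and restart one over SSH.
import { Client } from "ssh2";
import * as net from "net";

interface Machine {
  name: string;
  host: string;
  sshPort: number;
  username: string;
  password: string; // a key file would be better in practice
}

// Placeholder inventory; a real app would persist this (file, DB, ...).
const machines: Machine[] = [
  { name: "build-box", host: "192.168.1.20", sshPort: 22, username: "admin", password: "secret" },
];

// "Up" check: try to open a TCP connection to the SSH port.
function isUp(m: Machine, timeoutMs = 2000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.createConnection({ host: m.host, port: m.sshPort, timeout: timeoutMs });
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("error", () => resolve(false));
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
  });
}

// Restart: run a reboot command over SSH using the stored connection info.
function restart(m: Machine): Promise<void> {
  return new Promise((resolve, reject) => {
    const conn = new Client();
    conn
      .on("ready", () => {
        conn.exec("sudo reboot", (err) => {
          conn.end();
          if (err) reject(err); else resolve();
        });
      })
      .on("error", reject)
      .connect({ host: m.host, port: m.sshPort, username: m.username, password: m.password });
  });
}

// Example: print the status of every machine (a web UI would call these functions).
for (const m of machines) {
  isUp(m).then((up) => console.log(`${m.name} (${m.host}): ${up ? "up" : "down"}`));
}
```

A small web framework on top of these two functions (list machines, show status, restart button) would cover the four requirements in the question.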

Share settings between related Windows Store Apps

We are currently planning to develop a suite of Windows Store Apps. They are independent and fully work alone, but they are related and act in concert. If a user has several of them, they should share some of their settings (and data), so that the user does not have to manually change these settings in every single one of them.
Is such a scenario even intended?
And how to implement it?
Registry: Does not work. Apps cannot access the registry.
ApplicationData (LocalFolder, LocalSettings etc.): Does not work. Apps cannot access the data of other apps.
Cloud services: Kind of works, but only when the machine is online. Our apps should work offline, too. And we would need to create/rent such a cloud service, which would cause additional costs.
KnownFolder.DocumentsLibrary: This –currently– looks like the only solution to me. The apps are already saving and sharing data there, so let's just save our settings there, too. But the name of the shared folder is one of the settings! And Windows Store Apps cannot create hidden files, so the user can see the settings file. This makes this solution a bit... "rough".
Any other ideas or additional information I have missed?
If you want them to sync with each other instantly, even when the device is offline, then that's your only option. Windows 8 Apps are not intended to share settings.
So much want of sharing.
Roaming API will only share with the SAME app, the SAME user, ANY W8 device.
SkyDrive will only share across ANY app, the SAME user, ANY device.
Using Azure (or any web service) will share across ANY app, ANY user, ANY device.
Don't do this
Don't use the registry, the API is not supported
Don't use the file system, the boundaries cause your app to be brittle
Don't use ApplicationData.AnyFolder, this is restricted to a single app GUID
You might as well get "instant" out of your vocabulary, man. That just doesn't happen. But you can have fast (let's call it near-instant); you can use sockets or SignalR to connect your client to some service out there with nearly instant responses. A less sophisticated approach would be to poll from your client, too. It has served developers for decades.
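If, despite the caveats above, you go with the shared file in KnownFolder.DocumentsLibrary (the only fully offline option mentioned in the question), a rough sketch of what the apps could share is below. It is written as TypeScript against the WinRT JavaScript projection with the `Windows` global declared as `any`; a C#/XAML app would use the equivalent `Windows.Storage` calls. The file name and settings keys are made up, and the manifest needs the Documents Library capability plus a `.json` file type association.

```typescript
// Sketch of the KnownFolders.DocumentsLibrary workaround from the question.
// The WinRT globals are only available inside a Windows Store app.
declare const Windows: any;

const SETTINGS_FILE = "MyAppSuite.settings.json"; // placeholder name

function saveSharedSettings(settings: Record<string, unknown>): any {
  const docs = Windows.Storage.KnownFolders.documentsLibrary;
  // Overwrite the shared file with the serialized settings object.
  return docs
    .createFileAsync(SETTINGS_FILE, Windows.Storage.CreationCollisionOption.replaceExisting)
    .then((file: any) => Windows.Storage.FileIO.writeTextAsync(file, JSON.stringify(settings)));
}

function loadSharedSettings(): any {
  const docs = Windows.Storage.KnownFolders.documentsLibrary;
  return docs
    .getFileAsync(SETTINGS_FILE)
    .then((file: any) => Windows.Storage.FileIO.readTextAsync(file))
    .then((text: string) => JSON.parse(text));
}

// Each app in the suite reads and writes the same file, so a setting changed in
// one app becomes visible to the others the next time they read it.
saveSharedSettings({ sharedFolderName: "OurSuiteData", theme: "dark" });
```

Note this gives you pull-based sharing only: each app sees changes the next time it reads the file, which matches the "don't expect instant" point above.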

How to separate development of client-side web UI and the server side

I'm in the process of providing a Web UI as an alternative to our current desktop UI for our C/S enterprise application.
When developing the client-side in our desktop version, UI developers could connect to any server so they only needed the client-side environment.
When developing a Web UI (Client-side JavaScript in the browser), we are bound by the browser's "Same origin policy" so the UI must talk to the same server from which the UI code is downloaded.
As far as I can see so far, the development scenario for the UI guys is:
The developer installs the server on the local machine and runs it.
The developer edits the HTML+JS+CSS files in the local installation.
The developer has to reinstall/update the server on the local machine each time the UI code needs to be tested against new server behaviour.
This does not seem too comfortable, at least compared to our previous C/S style development.
Are there any other ways you can suggest that would not require UI developers to install and update server-side components on their development machines?
Or anything else related that could simplify the development process?
Thanks :-)
Edit: adding some clarifications:
I'm mostly interested in the aspects of UI coding, not UI design.
I need a lot of server interaction - getting data from RESTful web services, which are developed in parallel - hence the need for an up-to-date server.
You haven't specified the development platform.
As far as pure HTML/JS/CSS is concerned, you don't need a server. The UI developer can fine-tune UI components locally.
The moment you want to talk to or integrate with the server (via AJAX, JSP, ASP...), you need to connect to a development server, as your changes now have to be served by the server.
Most UI fine-tuning can also be done from Firebug.
In our office, when changes to styling are required, we save the page as a local copy and send it to the UI designer; he makes his changes and we integrate them. So the UI designer doesn't have to maintain a development environment.
JSONP lets you work around the same-origin problem (with server support) -- check it out! If the front-end-in-the-browser developers are using a good framework such as jQuery or (my favorite) Dojo, JSONP should be no harder for them than plain JSON.
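For the front-end developers, JSONP boils down to injecting a `<script>` tag and letting the API server wrap its JSON in a named callback. A minimal hand-rolled sketch in TypeScript (the dev-server URL is a placeholder; jQuery and Dojo hide all of this behind their own APIs):

```typescript
// Minimal JSONP helper: loads the response as a <script>, so the API server must
// wrap its JSON in the named callback, e.g. jsonp_cb_123({...}).
function jsonp<T>(url: string, callbackParam = "callback"): Promise<T> {
  return new Promise((resolve, reject) => {
    const cbName = `jsonp_cb_${Date.now()}`;
    const script = document.createElement("script");
    // Register a temporary global callback that the injected script will call.
    (window as any)[cbName] = (data: T) => {
      delete (window as any)[cbName];
      script.remove();
      resolve(data);
    };
    script.src = `${url}${url.includes("?") ? "&" : "?"}${callbackParam}=${cbName}`;
    script.onerror = () => reject(new Error(`JSONP request to ${url} failed`));
    document.head.appendChild(script);
  });
}

// Usage: the UI developer's locally served page calls the shared server directly.
jsonp<{ items: string[] }>("http://dev-server.example.com/api/products")
  .then((data) => console.log(data.items));
```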
Develop on a shared server, but depending on the size of the team... that's challenging with respect to version control.
Or deploy automatically generated virtual machines with nightly builds, so the devs don't have to install, but always use a recent version.
In the case of UI developers depending on a common REST server, the UI development can be done on the local machine and the REST service should run on a central server. When changes are made to the REST service, these should be deployed to the central server (when stable) so all developers can use the newest version (this also helps with test data).
You could try using a proxy on the developer's machine where some paths redirect to the server and some paths redirect to local folders.
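A sketch of that proxy idea in TypeScript (Node.js with the `express` and `http-proxy-middleware` packages; the `/api` prefix, the local `./webui` folder and the dev-server URL are placeholders):

```typescript
// Serve the UI developer's working copy from disk and forward API calls to the
// shared development server, so the browser only ever sees a single origin.
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// Anything under /api goes to the shared dev server.
app.use(
  "/api",
  createProxyMiddleware({ target: "http://dev-server.example.com:8080", changeOrigin: true })
);

// Everything else (the HTML/JS/CSS being edited) comes from the local working copy.
app.use(express.static("./webui"));

app.listen(3000, () => console.log("UI dev proxy on http://localhost:3000"));
```

From the browser's point of view everything comes from `http://localhost:3000`, so the same-origin policy never gets in the way while the UI developer edits files locally.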
Hmm, I actually didn't really get any information on what kind of technology you're using. If - by UI developers - you mean designers, who have to take care of the CSS, layout, etc., then we do it the same way lud0h said. We (developers) send the UI designers a copy of the server-side-produced HTML pages. They then edit the HTML pages according to accessibility guidelines, CSS and layout, and send us back the outcome of their work. We then use their HTML pages to integrate them into our web applications.
If you don't just mean tuning CSS, but also writing JavaScript/Ajax functionality, you HAVE to use a server with which you're communicating. As you said, normally this is done by having a local environment which is similar to the server one. In .NET, Visual Studio '08 provides an internal web server; alternatively you have to install IIS locally. In Java environments you have to install Tomcat and related technologies. In my eyes this is a must. What you have to have is:
Versioning system (CVS, SVN,...) where developers commit regularly (minutes/hours)
local environments where developers check out the source from the repository and develop
Test server where you deploy on a daily basis (could be like daily builds) in order to test your running product
I guess this is what a professional development environment should consist of. The difference from C/S application development is that the web UI and web-client code are not as cleanly separable from the server side as a client UI is in a C/S environment. Unless you develop with technologies like GWT or Silverlight, which are quite similar to C/S, just running inside the browser but communicating over RPC calls or web services.
//Edit:
What I nearly forgot. Don't do something like developing on the server directly, meaning that all of the developers access the server's filesystem where the code, UI etc. lies!!
You can use CORS. It is a newer technique, just like Ajax but with the ability to make calls to other domains, so you will only need one UI on one server. I think this can help you.
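CORS is opt-in on the server side: the shared REST server sends a few response headers, and then the UI served from the developer's machine can call it cross-origin. A stand-in sketch in TypeScript (plain Node.js; whatever platform the real backend uses only needs to emit the equivalent headers, and the allowed origin here is a placeholder):

```typescript
// Stand-in API server that opts in to CORS so a UI served from another origin
// (e.g. the UI developer's local machine) can call it directly from the browser.
import { createServer } from "http";

const ALLOWED_ORIGIN = "http://localhost:3000"; // the UI developer's local origin

createServer((req, res) => {
  res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
  res.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");

  if (req.method === "OPTIONS") {
    // Preflight request: the headers above are the whole answer.
    res.writeHead(204);
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ message: "hello from the API server" }));
}).listen(8080);
```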