I am new to Worklight and am currently doing proofs of concept to understand the features and strengths of the platform for creating mobile web apps, hybrid apps and native apps.
Can IBM Worklight also be used for developing static information websites for multiple mobile devices?
Even if all you want to do is serve dynamic content from your server to the mobile device, there are some advantages to using Worklight. For example, by wrapping your site in a hybrid shell you gain a presence in the application stores (Apple iTunes and Google Play).
See "Module 45.1 – Worklight App as a Container For Server Generated Pages" (ftp://public.dhe.ibm.com/software/mobile-solutions/worklight/docs/v505/Module_45_1_-_Worklight_App_as_a_Container_for_Server_Generated_Pages.pdf) for more information on how to do this.
If you serve the content from within the Worklight application rather than from your static site, you gain a few advantages:
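For illustration only, a minimal sketch of the container idea (not necessarily the exact recipe from Module 45.1): once the hybrid shell starts, you point its WebView at your server-hosted site. The URL below is a placeholder.

```javascript
// main.js of the hybrid environment (sketch; the URL is a placeholder).
// wlCommonInit() is the standard Worklight hybrid entry point called after framework init.
function wlCommonInit() {
    // Navigate the embedded WebView to the server-generated (or static) site.
    // The app-store presence and device features still come from the hybrid wrapper.
    window.location.href = "https://your-server.example.com/mobile-site/";
}
```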
1) It works offline
2) Faster response time (no round trips (HTTP requests) to get the whole HTML, CSS, JavaScript, images)
At the end of the day, Worklight is meant for applications where there is interaction between backends and the client and usage of device capabilities (like location, camera, etc.), not just static content.
Can it be used to create static sites? Yes. Is that a good use of the software license? Probably not. There is a lot more power in Worklight than just creating a static site. I would suggest really understanding responsive web design and using that to create your mobile friendly sites.
I see WebRTC as the best way to develop it, but there are some paid frameworks on the market for establishing video chat between a wide range of clients, like Web-Web and Web-Mobile (iOS, Android, Windows, etc.).
The Web-to-Web communication flow is very simple to implement. Now I want the same for Web-to-Mobile and vice versa, without using any external frameworks built on top of native WebRTC. Please suggest the best approach to achieve this.
The latest Chrome on Android is WebRTC-friendly, which means that if you have a web app that implements WebRTC, it will work in Chrome on Android.
If you decide to create your own native app that implements WebRTC, here are some great sources:
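As a quick sanity check, here is a plain browser-side sketch (no Worklight or third-party framework involved) that runs the same in desktop Chrome and Chrome on Android; the "localVideo" element ID is a placeholder:

```javascript
// Minimal WebRTC capability check plus camera/microphone capture in the browser.
async function startLocalMedia() {
    if (!navigator.mediaDevices || !window.RTCPeerConnection) {
        console.warn("This browser does not support WebRTC");
        return;
    }
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    document.getElementById("localVideo").srcObject = stream; // preview the local camera
}

startLocalMedia().catch(err => console.error("getUserMedia failed:", err));
```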
iOS WebRTC: https://webrtc.org/native-code/ios/
Android WebRTC: https://webrtc.org/native-code/android/
Following the instructions in each allows you to build the native WebRTC framework, which you can later import into your native projects.
The WebRTC APIs are similar to the ones you are using in your web application. You will need to read more documentation for them, as you are using the official framework built from source rather than a third-party library.
Before starting, you need to review and test the platform to make sure it works well for all your target user categories. You can do that by reviewing references and also testing some existing apps for the user types you plan to support.
Since you mentioned a wide range of clients, you need to identify the limitations of WebRTC technology. You can also evaluate other technologies: for example, you could reliably serve most client types with mobile and web apps that use RTMP.
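Whatever you build on the native side, the signaling flow mirrors the web one. A rough browser-side sketch of the offer/answer exchange follows; the WebSocket URL, STUN server choice, and message format are assumptions, since signaling is entirely up to you:

```javascript
// Browser-side offer/answer sketch; the native peer (iOS/Android) does the mirror image.
// The signaling channel here is an assumed WebSocket to your own server.
const signaling = new WebSocket("wss://your-signaling.example.com");
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

// Forward our ICE candidates to the remote peer through the signaling channel.
pc.onicecandidate = event => {
    if (event.candidate) signaling.send(JSON.stringify({ candidate: event.candidate }));
};

// Caller side: create an offer and push it to the other peer.
async function call() {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signaling.send(JSON.stringify({ sdp: pc.localDescription }));
}

// Apply whatever the remote peer sends back (answer SDP or ICE candidates).
signaling.onmessage = async msg => {
    const data = JSON.parse(msg.data);
    if (data.sdp) await pc.setRemoteDescription(data.sdp);
    if (data.candidate) await pc.addIceCandidate(data.candidate);
};
```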
Given that IBM MobileFirst is a very advanced tool for developing hybrid mobile applications, I am curious to know whether we can develop both the mobile application and a responsive web application from a single code base. I know that MobileFirst provides different environments out of the box, i.e. Mobile Browser, Desktop Browser, Android, etc. I feel this opens an option for developers to build both a mobile and a responsive web application. But the following questions and functionalities make me rethink going forward with this approach for a practical implementation.
1) How far will MobileFirst be reusable and flexible in terms of:
* implementing session management for both applications
* Authentication and Authorisation for both applications:
- When I said Authorisation, I meant user level preferences
2) What are the steps that need to be followed to set up a project which effectively uses all the key features of Worklight to satisfy the above-mentioned requirements?
3) Post-development, what are the steps that need to be followed to successfully deploy the mobile and web applications (both of them will be using adapters to talk to services) into production?
Sorry for making the question so theoretical. I found it very interesting and wanted to know.
1) How far will MobileFirst be reusable and flexible in terms of:
* implementing session management for both applications
* Authentication and Authorisation for both applications:
- When I said Authorisation, I meant user level preferences
It depends on which version of MFPF you are using.
Pre-7.1, all session management is handled the same way on the server.
From 7.1 onward the server is session-independent, and currently Mobile Web and Desktop Browser are not supported; read more about session independence in the IBM Knowledge Center.
Authentication works pretty much the same for both.
2) What are the steps that need to be followed to set up a project which effectively uses all the key features of Worklight to satisfy the above-mentioned requirements?
There is no feature parity between the supported mobile environments and the web environments, so the answer depends on which particular features you end up using. The IBM Knowledge Center contains a feature parity table.
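For example, a rough sketch of the classic Worklight challenge-handler pattern, which the same client-side code can use in both the mobile and web environments; the realm, adapter, and procedure names are placeholders, and exact submit* signatures vary slightly between Worklight/MFPF versions:

```javascript
// Sketch of a client-side challenge handler shared by mobile and web environments.
// "MyAuthRealm" and "AuthAdapter"/"submitCredentials" are placeholder names.
var authHandler = WL.Client.createChallengeHandler("MyAuthRealm");

authHandler.isCustomResponse = function (response) {
    // Decide whether this server response is a login challenge from our realm.
    return !!(response && response.responseJSON && response.responseJSON.authRequired);
};

authHandler.handleChallenge = function (response) {
    // Collect credentials (your real login UI goes here) and send them back via an auth adapter.
    var user = prompt("User name:");
    var pass = prompt("Password:");
    authHandler.submitAdapterAuthentication(
        { adapter: "AuthAdapter", procedure: "submitCredentials", parameters: [user, pass] },
        {}
    );
};
```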
3) Post-development, what are the steps that need to be followed to successfully deploy the mobile and web applications (both of them will be using adapters to talk to services) into production?
That has nothing to do with whichever environments you choose to use; it is the same for all of them. Again, read the IBM Knowledge Center.
There are two environments that one can add to an IBM MobileFirst Platform Foundation 6.3 project, Mobile web app and Desktop Browser web page:
The intended purpose is obvious - they can be used to add a mobile browser web page and a desktop browser web page respectively. As I understand it, the contents are hosted from the MobileFirst server itself and are accessed over HTTP(S) through a browser, unlike the mobile environments, which are hosted inside a hybrid container.
However, what is the actual technical difference between these two environments (if any)? Are they just names or do they actually do different things? For example, does one inject CSS that the other doesn't? The default hybrid resources (CSS/JS/HTML) generated when one adds both of these environments are essentially the same.
They are indeed mostly the same.
Mobile Web is intended to be a "dedicated" environment for viewing in the mobile browser app of your smartphone/tablet.
Desktop Browser is intended to be a "dedicated" environment for viewing in your desktop browser, or as a URL to be used in a Facebook app, or something you could embed as part of a webpage, and so on.
You could also create a Mobile Web environment and use Media Queries to cater for different screen sizes, etc.
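For instance, a sketch in plain browser JavaScript of reacting to a viewport breakpoint from a single Mobile Web code base; the 768px breakpoint and the CSS class names are arbitrary placeholders:

```javascript
// Adapt the layout of one code base to different screen sizes using matchMedia.
var tabletOrWider = window.matchMedia("(min-width: 768px)");

function applyLayout(mq) {
    document.body.classList.toggle("two-column", mq.matches);      // wide screens
    document.body.classList.toggle("single-column", !mq.matches);  // phones
}

applyLayout(tabletOrWider);             // set the initial layout
tabletOrWider.addListener(applyLayout); // re-apply when the viewport crosses the breakpoint
```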
Going through the IBM Worklight product documentation, the product looks great for building hybrid or native applications. However, for building mobile web (with responsive web design), what specific advantages can one get from Worklight?
For (the) Mobile Web (environment), I don't think that at this time there is much left.
However, you do still:
get to use Worklight Adapters and their extensive integration abilities, which make it easier to connect to various backends (see the invocation sketch after this list)
use Cordova to access some device native capabilities
use the WL Client JavaScript API
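As an illustration of the first point, invoking an adapter from the Mobile Web environment looks the same as from any hybrid environment; the adapter and procedure names below are placeholders:

```javascript
// Sketch of calling a Worklight adapter procedure from the client.
// "NewsAdapter" and "getStories" stand in for your own adapter and procedure.
var invocationData = {
    adapter: "NewsAdapter",
    procedure: "getStories",
    parameters: ["technology"]
};

WL.Client.invokeProcedure(invocationData, {
    onSuccess: function (result) {
        console.log("Adapter returned:", result.invocationResult);
    },
    onFailure: function (error) {
        console.error("Adapter call failed:", error);
    }
});
```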
Are there any initiatives to implement/agree upon a standard API for connectivity between web browsers and client hardware?
Example: the iPhone has a GPS/camera/accelerometer in it. It'd be very cool if my web app could communicate with them (rather than me having to write a thick Objective-C application).
The closest thing I've seen to that is the Android phone API, which lets your programs access its hardware (relatively) painlessly. Google's pushing for it to become the new standard, but it's hardly the same thing as a web app (which, by most definitions, runs entirely in your browser).
The upcoming version of Firefox has an API to read your lat/long off a GPS device.
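That is the W3C Geolocation API as exposed to JavaScript; a minimal usage sketch:

```javascript
// Read the device position through the W3C Geolocation API, if the browser exposes it.
if ("geolocation" in navigator) {
    navigator.geolocation.getCurrentPosition(
        function (position) {
            console.log("lat:", position.coords.latitude, "long:", position.coords.longitude);
        },
        function (error) {
            console.warn("Could not get a position:", error.message);
        }
    );
} else {
    console.warn("Geolocation is not available in this browser");
}
```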
To add to my own question: Yahoo provides a geolocation service called FireEagle that could act as a mediator and provide similar functionality.
In essence, the phone communicates with a central Yahoo server, updating its location. Your web app can then determine your approximate location from that central server.