Worklight: Can we use two-phase commit in Java code called from a JS adapter? - ibm-mobilefirst

According to the documentation below, we can call Java code from a JavaScript adapter.
Calling Java code from a JavaScript adapter
http://www-01.ibm.com/support/knowledgecenter/?lang=en#!/SSZH4A_6.2.0/com.ibm.worklight.dev.doc/devref/t_calling_java_code_from_a_javas.html
We plan to install Worklight Server on the WAS full profile. The WAS full profile supports two-phase commit.
Transaction support in WebSphere Application Server
http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/cjta_trans.html?cp=SSAW57_8.5.5%2F3-2-7-3&lang=en
To call Java code from an adapter, we need to deploy it on the Worklight Server. Can we use two-phase commit in that Java code? Are there any limitations when using Java code on the Worklight Server?
Thanks in advance!

The only limitation I am aware of is that the WAS security context is not propagated to the Worklight adapter's thread. Generally speaking, though, the same capabilities exist and the same servlet API is available.
You can read more about Java vs. JavaScript in adapters in this question: Worklight Adapters - Java vs JavaScript
That said, two-phase commit has never been tested in practice, so it may or may not work, for the same reason as the security context issue mentioned above: a transaction is usually associated with a thread, and that thread is not available to Worklight adapters, which use their own thread pool.
The limitation mentioned above may be removed in a future release of Worklight, which in turn may make it possible to use two-phase commit.
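If you want to try it anyway, the code in question would look roughly like the hypothetical helper below (class name, JNDI names and tables are all made up), invoked from the adapter's JavaScript by its fully qualified class name as described in the page you linked. Whether the java:comp/UserTransaction lookup actually works on the adapter's own thread is exactly the untested part.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

/**
 * Hypothetical helper class deployed with the adapter and invoked from its
 * JavaScript. It tries to span two XA data sources in one global transaction,
 * which is exactly the part that is untested on Worklight adapter threads.
 */
public class TwoPhaseCommitHelper {

    public static boolean transferRecord(String id) throws Exception {
        InitialContext ctx = new InitialContext();
        // Assumes the container's JTA context is visible on the adapter thread.
        UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource source = (DataSource) ctx.lookup("jdbc/sourceXA"); // hypothetical XA data source
        DataSource target = (DataSource) ctx.lookup("jdbc/targetXA"); // hypothetical XA data source

        tx.begin();
        try (Connection c1 = source.getConnection();
             Connection c2 = target.getConnection();
             PreparedStatement del = c1.prepareStatement("DELETE FROM outbox WHERE id = ?");
             PreparedStatement ins = c2.prepareStatement("INSERT INTO inbox (id) VALUES (?)")) {
            del.setString(1, id);
            del.executeUpdate();
            ins.setString(1, id);
            ins.executeUpdate();
            tx.commit(); // WAS coordinates both XA resources: the two-phase commit
            return true;
        } catch (Exception e) {
            try {
                tx.rollback(); // roll back both branches on any failure
            } catch (Exception rollbackFailure) {
                // nothing left to roll back (e.g. the commit itself already rolled back)
            }
            throw e;
        }
    }
}
```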

Related

Replacing a polling event source when upgrading MobileFirst JavaScript adapter from 7.1 to 8.0

In our existing MFP 7.1 project, we rely on the polling event source in a JavaScript adapter to create a scheduler for an interval-based operation, such as watching a database table for new records to process on the server side. The implementation was based on the following guide:
http://www.ibm.com/support/knowledgecenter/SSHS8R_7.1.0/com.ibm.worklight.dev.doc/devref/t_configuring_a_polling_event_source.html
However, we discovered that the polling event source is nowhere to be found in the MFP 8.0 documentation, and the following document states that it is no longer supported:
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/product-overview/release-notes/deprecated-discontinued/
We would like to know the recommended approach for migrating from 7.1 to 8.0 when dealing with a polling event source such as this, and what alternative is suggested if there is no equivalent in MFP 8.0. Thanks.
Polling is indeed not supported in MobileFirst Foundation 8.0.
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/upgrading/migrating-push-notifications/
I don't have an official alternative, but since polling amounts to checking some backend for new content and, if there is any, having a notification dispatched, you can still create a service of your own that checks your backend for a new "record" or other new content and, if it finds any, constructs the JSON for that notification and sends it.
In v8.0 there are multiple REST endpoints you can use, together with confidential clients, to send it:
http://www.ibm.com/support/knowledgecenter/SSHS8R_8.0.0/com.ibm.worklight.apiref.doc/rest_runtime/c_restapi_runtime.html
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/authentication-and-security/confidential-clients/
You can also take a look at the following way of constructing a mechanism to send notifications using Node.js: https://mobilefirstplatform.ibmcloud.com/blog/2016/10/18/using-mff-8-push-service-rest-api-in-a-nodejs-based-server/
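As a rough illustration of the custom-service approach above, here is a minimal, unofficial sketch of a standalone service that polls a backend on a schedule and, when it finds new content, sends a push message through the runtime REST API with a confidential-client token. The server URL, app ID, credentials, token path, scope string and payload shape are all assumptions to verify against the REST API reference and confidential-clients tutorial linked above.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Stand-in for the discontinued polling event source: periodically check the
 * backend and, when new content is found, push a notification through the
 * MFP 8.0 runtime REST API using a confidential client.
 */
public class PollingNotifier {

    // Hypothetical values: replace with your server, app and confidential client.
    private static final String SERVER = "https://mfp.example.com:9443";
    private static final String APP_ID = "com.example.myapp";
    private static final String CLIENT_ID = "push-client";
    private static final String CLIENT_SECRET = "push-secret";

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Poll every 60 seconds, like the old event-source interval.
        scheduler.scheduleAtFixedRate(PollingNotifier::pollOnce, 0, 60, TimeUnit.SECONDS);
    }

    private static void pollOnce() {
        try {
            if (backendHasNewRecords()) {
                String token = fetchAccessToken();
                sendPush(token, "{\"message\":{\"alert\":\"New records are available\"}}");
            }
        } catch (Exception e) {
            e.printStackTrace(); // log and keep polling on the next tick
        }
    }

    // Replace with a real check against your database table / backend service.
    private static boolean backendHasNewRecords() {
        return false;
    }

    // Obtain an OAuth token for the confidential client (check the exact path
    // and scope against the REST API reference for your server version).
    private static String fetchAccessToken() throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(SERVER + "/mfp/api/az/v1/token").openConnection();
        con.setRequestMethod("POST");
        String basic = Base64.getEncoder()
                .encodeToString((CLIENT_ID + ":" + CLIENT_SECRET).getBytes(StandardCharsets.UTF_8));
        con.setRequestProperty("Authorization", "Basic " + basic);
        con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        con.setDoOutput(true);
        String body = "grant_type=client_credentials&scope=messages.write%20push.application." + APP_ID;
        try (OutputStream out = con.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        String json = new String(con.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        // Naive extraction of "access_token"; use a JSON library in real code.
        return json.replaceAll(".*\"access_token\"\\s*:\\s*\"([^\"]+)\".*", "$1");
    }

    // POST the notification payload to the push messages endpoint.
    private static void sendPush(String token, String payload) throws Exception {
        HttpURLConnection con = (HttpURLConnection)
                new URL(SERVER + "/imfpush/v1/apps/" + APP_ID + "/messages").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Authorization", "Bearer " + token);
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);
        try (OutputStream out = con.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Push API returned HTTP " + con.getResponseCode());
    }
}
```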

How to call TWS Beans from another Java EE server

How to call TWS beans from another Java EE server like JBoss or even WAS Liberty Profile?
I have no direct experience with JBoss or Liberty, but we have tried several times from Tomcat without success. It may be possible with Liberty, but as I said, I have never tried it.
This is one of the reasons we are moving to REST APIs, which make interoperability much easier. REST APIs were introduced on TWSd with 9.3 FP2, but are still not available on TWSz.
If you need them on TWSz, you can open an RFE to push for this feature.
If you don't have a product/release that natively supports REST APIs, a possible pattern is to implement your own REST APIs on top of the J2EE APIs, deploy them as an additional WAR on the engine/connector WAS, and then call these REST APIs from your JBoss or Liberty server.
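For what it's worth, a very rough sketch of that pattern could look like the JAX-RS facade below; the resource path, the TwsJobService interface and its statusOf method are placeholders standing in for whatever TWS bean or local API you call today.

```java
import javax.ejb.EJB; // or a JNDI lookup, depending on how the TWS beans are exposed
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

/**
 * Hypothetical JAX-RS facade deployed as an extra WAR on the engine/connector
 * WAS, so that JBoss or Liberty clients only need plain HTTP.
 */
@Path("/jobs")
public class JobResource {

    // Placeholder for whatever TWS session bean / local API you call today.
    @EJB
    private TwsJobService twsJobService;

    @GET
    @Path("/{jobId}/status")
    @Produces(MediaType.APPLICATION_JSON)
    public String getJobStatus(@PathParam("jobId") String jobId) {
        // Delegate to the J2EE API; the caller only sees JSON over HTTP.
        return "{\"jobId\":\"" + jobId + "\",\"status\":\"" + twsJobService.statusOf(jobId) + "\"}";
    }
}

// Placeholder interface standing in for the real TWS bean's local interface.
interface TwsJobService {
    String statusOf(String jobId);
}

// Activates JAX-RS for the WAR under the /api path.
@javax.ws.rs.ApplicationPath("/api")
class RestApplication extends javax.ws.rs.core.Application { }
```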

Worklight + WebSphere eXtreme Scale

I tried integrating these products based on this article and hit the same problem that is already documented there.
"invocation of javascript function 'getRSSFeeds' has failed: Could not initialize class com.ibm.websphere.objectgrid.ObjectGridManagerFactory
FWLSE0101E: Caused by: [project ExtremeScaleInWorklight]java.lang.NoClassDefFoundError: Could not initialize class com.ibm.websphere.objectgrid.ObjectGridManagerFactory"
It seems to be caused by a Java class collision involving log4j.
My solution was to create a separate Liberty server and install the WXS client for Liberty. This solved the problem, but then I can no longer use the WL Development Server, which makes development less efficient.
What is the best way to develop this kind of solution?
I have seen this integration of products on several slides, but I can't find an official guide on how to achieve this. Is there any?
Have you tried getting the IBM WebSphere eXtreme Scale Liberty profile developer tools 8.6 installed in your WL Development Server as well?
So WXS has two components: the client (libraries) and the server-side components. They can be housed in the same JVM for tests; in production this does not really make sense. The server side hosts the stored objects and enforces the 'grid management' policies that you may define in the XML config files.
Perhaps you can install the IBM WebSphere eXtreme Scale Liberty profile developer tools 8.6 in your WL Development Server as well and include them in the classpath.
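If the WXS client libraries do end up on the development server's classpath, the adapter-side Java code would be along the lines of the sketch below. The catalog endpoint, grid name and map name are placeholders, and the calls should be double-checked against the WXS client documentation.

```java
import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

/**
 * Minimal WXS client access from adapter-side Java code. Endpoint, grid and
 * map names are placeholders; this only works once the WXS client jars are
 * actually resolvable on the server's classpath.
 */
public class GridClient {

    public static Object readFromGrid(String key) throws Exception {
        ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();
        // Connect to the catalog service (host:port of the catalog server).
        ClientClusterContext context = manager.connect("wxshost:2809", null, null);
        ObjectGrid grid = manager.getObjectGrid(context, "Grid");
        Session session = grid.getSession();
        ObjectMap map = session.getMap("Map1");
        return map.get(key); // null if the key is not present
    }
}
```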

How to notify a user when GlassFish is down

I am using GlassFish v3.0.1 for my project. However, GlassFish seems to go down frequently. Therefore, I want to develop a mechanism that notifies me whenever GlassFish is down. Is there any option for this in GlassFish? If not, how can I achieve it? Also, how can I find out why GlassFish goes down? I cannot find a proper explanation in the logs.
I'm not aware of any options in GlassFish itself and I doubt there are any (it's usually hard for a process to know when it's dead :-). Write a script that tries to connect to the service (for example, using wget or curl), or use a system monitoring tool that watches processes.
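For example, here is a tiny external watchdog sketched in Java (a shell script around curl or wget would do just as well); the probed URL, the interval and the alert action are placeholders, and it must run as a separate process from the server it watches.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Tiny external watchdog: probe a GlassFish HTTP endpoint every 30 seconds
 * and "notify" (here just a log line; swap in mail/SMS/etc.) when it fails.
 */
public class GlassfishWatchdog {

    private static final String HEALTH_URL = "http://localhost:8080/"; // placeholder endpoint

    public static void main(String[] args) {
        Executors.newSingleThreadScheduledExecutor()
                 .scheduleAtFixedRate(GlassfishWatchdog::probe, 0, 30, TimeUnit.SECONDS);
    }

    private static void probe() {
        try {
            HttpURLConnection con = (HttpURLConnection) new URL(HEALTH_URL).openConnection();
            con.setConnectTimeout(5000);
            con.setReadTimeout(5000);
            int code = con.getResponseCode();
            if (code >= 500) {
                notifyAdmin("GlassFish answered with HTTP " + code);
            }
        } catch (Exception e) {
            notifyAdmin("GlassFish is unreachable: " + e);
        }
    }

    private static void notifyAdmin(String message) {
        // Placeholder: hook up e-mail, SMS, or your monitoring tool here.
        System.err.println("[ALERT] " + message);
    }
}
```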
To find out why GlassFish terminates, you must debug the problem. Here are some tips:
Add/enable more logging
Search the source code for System.exit(). This can terminate a Java app without any trace of why it happened. (this might help, too)
Check the standard output of the process
Look for crash dumps; see the documentation of the Java VM you're using.

Can a Java application server (WebLogic) manage a native executable?

Is it possible (...knowing full well that this is crazy and seriously ill-advised...) to have a J2EE application running in a Java app server (using WebLogic presently), and have a native executable process started, used, and stopped as part of this Java application's lifecycle? (Note: this is not JNI, it's actually a separate native process. It's unix/linux, but should also run on windows.) I haven't found any docs on the subject -- and for good reason, probably.
Background: The native process is actually some monolithic 3rd party software package that is un-hackable and there's no API other than stdin/stdout. The Java app requires the native app to perform certain services. I can easily wrap the native process via ProcessBuilder and start/stop and communicate with it (using stdin/stdout). For testing purposes I have a simple exe (C++) that communicates via stdin/stdout and can receive "start", "shutdown" and performs a simple "echo" service. (The "start" is a no-op, but simply returns "ok" if the native process started successfully.)
So, ideally, when the app server is started/shutdown, and/or the deployed Java app is started/shutdown, the associated native process can also be started/shutdown. And ideally, this can happen cleanly & reliably (no lingering processes after shutdown, all startup failures logged, the lifecycle timing issues synchronized).
If this actually worked, then "part 2" of the question would be if this could actually work in a cluster/failover environment. The native process could be tied to a platform and software-specific monitoring & management service, but I'd like to have everything bundled and managed with the Java app, if possible.
If GlassFish or any other OSGi-type environment would make this simpler, please feel free to let me know (it could be an option... I'd prefer GlassFish, but WLS is the blanket mandate.)
I'm trying to put together a proof-of-concept, but any clear answer "yes, I've done it" or "no, it won't work" would be much appreciated & a huge time-saver (with supporting doc links, if you have them).
Edit: just to clarify (the subject may be misleading): there is a considerable Java application running as well (which I've written & can freely modify as necessary); the 3rd party native process just performs a service that the Java application requires. I'm not merely trying to manage a native process via an app server.
The answer to part 1 is yes, it is absolutely possible to have a Java application server manage a native system process. It sounds like you've pretty much figured this out for yourself, if you're thinking about using a ProcessBuilder to spawn the external program and interact with it. That's pretty much the way to do it.
I have used exactly that kind of setup in the past to implement a media transcoding service on top of a Java server (the Java server spawned transcoding jobs via ffmpeg processes, monitoring their status and reporting back to the rest of the application on success/failure/etc.). How cleanly it can all be done depends upon how you implement it and upon the behavior of your external app (i.e. is it guaranteed to respond gracefully and quickly to a shutdown request?), but it will be very difficult (if not impossible) to get it completely perfect. At a minimum, if someone does a kill -9 on your Java server process, there is no way for you to gracefully shut down the native process, at least not until the server is restarted and you see that the native process is already running.
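For illustration, here is a rough sketch of that wiring using a ServletContextListener, so the native helper follows the web application's lifecycle; the command line, the "start"/"shutdown" protocol and the timeouts are placeholders based on the description in the question.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

/**
 * Ties the native helper's lifetime to the web application's lifecycle.
 * Command line, "start"/"shutdown" protocol and timeouts are placeholders
 * matching the kind of stdin/stdout contract described in the question.
 */
@WebListener
public class NativeProcessLifecycle implements ServletContextListener {

    private Process process;
    private BufferedWriter toProcess;
    private BufferedReader fromProcess;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        try {
            process = new ProcessBuilder("/opt/vendor/bin/helper") // placeholder path
                    .redirectErrorStream(true)
                    .start();
            toProcess = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
            fromProcess = new BufferedReader(new InputStreamReader(process.getInputStream()));

            String reply = send("start");
            if (!"ok".equals(reply)) {
                throw new IllegalStateException("native process failed to start: " + reply);
            }
        } catch (Exception e) {
            throw new RuntimeException("could not start native helper", e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            send("shutdown");                       // ask politely first
            if (!process.waitFor(10, TimeUnit.SECONDS)) {
                process.destroyForcibly();          // then make sure it is gone
            }
        } catch (Exception e) {
            if (process != null) {
                process.destroyForcibly();
            }
        }
    }

    // One request/response exchange over stdin/stdout.
    private synchronized String send(String command) throws Exception {
        toProcess.write(command);
        toProcess.newLine();
        toProcess.flush();
        return fromProcess.readLine();
    }
}
```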
The second part depends upon exactly what you mean by "work in a cluster/failover environment". In terms of managing the native process, if you can start it and interact with it in Java then you can also manage it in Java. But if you mean you want perfect failover behavior such that if the node with the native process on it goes down then a new node automatically resumes the process in the exact same state as it was before, then that may be very difficult or even impossible. But, if you abstract out interactions with the external process so that it just appears as a service that your Java code interacts with (for instance, perhaps by sending requests to some facade class that understands how to interact with and manage the external process) then you should be able to get some fairly good results.
The transcoding service that I implemented ran in a clustered environment (using JBoss/Tomcat), and the way it worked was that when a transcoding job was requested a message would be dispatched. This message would be received by a coordinating class that would manage the queue of transcode requests, spawning jobs as worker processes became available. The state of the queue was replicated across the cluster, so if the node running the ffmpeg processes went down the currently scheduled jobs would be remembered, and then resumed as soon as a suitable node was available again (the transcoding service was configurable so that it could be enabled/disabled per node). In practice the system proved to be quite robust.