MobileFirst 7.0 WLAPP deployment not synchronizing on other application server

We are using MobileFirst 7.0, running on 2 Tomcat application servers. When we deploy a new WLAPP, it deploys on one of the app servers but does not synchronize to the other app server until we restart the Tomcat service.
Is there a flag or option to synchronize it automatically on both app servers?
Thank you.
Best regards,
JM

The synchronization should happen automatically; it is controlled by the property
cluster.data.synchronization.taskFrequencyInSeconds in server.xml. Try reducing this value if it is set too high.
Also check the property ibm.worklight.admin.environmentid to make sure it is specified correctly, as per the documentation below:
https://www.ibm.com/support/knowledgecenter/#!/SSHS8R_7.0.0/com.ibm.worklight.deploy.doc/admin/t_using_JNDI_lookup_to_override_WL_properties.html
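A minimal sketch of how these properties can be declared as JNDI environment entries in a Tomcat server.xml. The property names are the ones mentioned above; the Context docBase/path and the values are placeholders to adapt to your farm (use the same environmentid on both servers):

<!-- Tomcat server.xml: JNDI environment entries for the MobileFirst runtime context (placeholder values) -->
<Context docBase="yourProject" path="/yourProject">
    <Environment name="ibm.worklight.admin.environmentid"
                 value="prodFarm" type="java.lang.String" override="false"/>
    <Environment name="cluster.data.synchronization.taskFrequencyInSeconds"
                 value="2" type="java.lang.String" override="false"/>
</Context>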

Related

Why MobileFirst 7.1 cannot auto-recover / reconnect after the network is lost / disconnected

IBM MobileFirst 7.1 is not auto-recovering after a network failure / loss of connection, even though all services/connections are back to normal.
We have a clustered/farm setup with 2 web and app servers (Tomcat). Both app servers are able to serve incoming transactions. We had an incident in which there was a network failure/lost connection, and during that time all transactions were pointing to 1 app server. Although all connections went back to normal, this 1 app server was still unable to connect to the configuration DB. What we did was turn off this failed server and try the app, which then pointed to the other app server, and the app worked. We then restarted the failed app server, tested the app, and it is now accepting transactions. The question is: why does it not auto-recover, and why does the Tomcat service need to be restarted? Is MobileFirst 7.1 designed/built with such behavior (no auto-recovery)?
The expectation is that it should auto-recover.
Please help and advise what can be checked/adjusted.
Thanks in advance.
Best regards,
Jonathan
The default DB configuration (datasource configuration) provided with MFP is not designed to auto-recover when there is a DB connectivity issue. You should be able to configure MFP to auto-reconnect by providing the correct data source configuration. See this article on how it is done for different app servers: https://www.techpaste.com/2016/04/jndi-autoreconnect-java-application-servers/
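On Tomcat, this usually means enabling connection validation on the datasource so that stale connections are discarded and re-created after an outage. A minimal sketch, assuming a DB2 backend (the driver, URL, and credentials are placeholders; jdbc/WorklightDS is the conventional MFP runtime datasource name):

<!-- Tomcat context.xml / server.xml: datasource with connection validation (placeholder values) -->
<Resource name="jdbc/WorklightDS" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.ibm.db2.jcc.DB2Driver"
          url="jdbc:db2://dbhost:50000/WRKLGHT"
          username="wluser" password="wlpassword"
          testOnBorrow="true"
          validationQuery="SELECT 1 FROM SYSIBM.SYSDUMMY1"/>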

Runtime synchronization failed in MobileFirst 7.1 on Bluemix container with Cloudant NoSQL DB

I followed the tutorial instructions:
Install MobileFirst Platform Server 7.1 on Bluemix (https://mobilefirstplatform.ibmcloud.com/labs/administrators/7.1/bluemix/)
I used Cloudant NoSQL DB as database.
It worked well for several days.
But after a weekend without use, it no longer works and I get this message on the MobileFirst Operations Console: Runtime synchronization failed.
I tried to restart the container and the database application server (Liberty), but I always get the same message.
I have to remove the container and repeat the whole procedure.
This is the third time it happens.
Try setting the JNDI property ibm.worklight.admin.farm.reinitialize to true in server.xml. This will re-initialize the farm entries; in other words, it will clear the stale entries when the application crashes.
Reference: List of JNDI Properties for MFP Administration
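A minimal sketch of how that entry is typically declared in a Liberty server.xml (placement alongside your existing MFP administration configuration is assumed):

<!-- Liberty server.xml: force the farm registration to be re-initialized on startup -->
<jndiEntry jndiName="ibm.worklight.admin.farm.reinitialize" value="true"/>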
It seems like you are using the Cloudant shared plan. Response times on the shared plan are not guaranteed, unlike the dedicated plan. To account for these vagaries, a fix was released for 7.1 that adds the resiliency needed to handle non-responses from the Cloudant shared plan. Please apply the latest iFix and this should be resolved.

Mule ESB Instance Monitoring

What is the best way to monitor Mule ESB instances? Is there a way I can get alerted when my Mule instance goes down for some reason? I have 4 instances of Mule running; how will I come to know if one of them goes down for some reason?
Thanks!
I assume you are running the Community Edition? (The Enterprise Edition provides a Management Console which allows you to define alerts.) If you are using CE, you can enable JMX monitoring on the instances and then use one of many ways to verify, based on the JMX info, whether your server is running. One way is to write your own application that retrieves JMX data programmatically and acts accordingly.
HTH
If you are using Mule EE, you can use MMC to monitor all your instances, as Gabriel has already suggested. My suggestion would be to install MMC inside Tomcat on a separate server. This is to ensure that even if your Mule server crashes or goes down, your MMC is still running and can send you alerts about your Mule server downtime. You can refer to the link below for details on how to set up server-down and server-up alerts.
https://developer.mulesoft.com/docs/display/current/Working+With+Alerts
Additionally, I would recommend using MMC with database persistence to ensure you have the ability to recover the MMC workspace even if your MMC server crashes. You can read about MMC setup with DB persistence at the link below.
https://developer.mulesoft.com/docs/display/current/Configuring+MMC+for+External+Databases+-+Quick+Reference
If you don't have Mule EE, you may want to explore other tools or custom alerting applications, as suggested by Gabriel.
HTH
You can set up a JMX agent by adding the following lines to your "conf/wrapper.conf" file:
wrapper.java.additional.19=-Dcom.sun.management.jmxremote
wrapper.java.additional.20=-Dcom.sun.management.jmxremote.port=10055
wrapper.java.additional.21=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.22=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.23=-Djava.rmi.server.hostname=127.0.0.1
Don't forget to change the values accordingly. You can also enable SSL and authentication with a few extra lines.
Once your monitoring platform is set up, you can activate its Java pollers and start the server.

Behavior of WL.Server.createEventSource in a Worklight Cluster Environment

Let's assume I have a cluster of 2 Worklight servers sharing the same WL runtime.
On that runtime, I've installed an application with an adapter that creates an event source.
Just like in this IBM article:
https://www.ibm.com/developerworks/community/blogs/worklight/entry/configuring_a_polling_event_source_to_send_push_notifications?lang=en
My question is: what will happen in a cluster environment?
Will repeated work ensue?
In other words, will my two WL Servers both be polling for events?
Or perhaps that functionality writes a task to the WL DB that the WL Servers poll regularly to check for work if no instance is taking care of it, so that only one server at a time would be "the event source"?
I'm working with IBM Worklight 6.2 and WebSphere Liberty Profile 8.5.5.
Thanks in advance!
Here's my attempt to answer this after some consultation:
My question is: what will happen in a cluster environment? Will repeated work ensue? In other words, will my two WL Servers both be polling for events?
While the Worklight Servers share the same runtime, they are still considered 2 separate instances. This means that each of them will attempt to perform a polling action. This is considered OK.
However, it is important to note that the backend system being polled should likely be smart enough to handle the situation where 2 polling attempts are made for the same message.
If the backend doesn't know how to handle polling properly, the same message can be pulled more than once. This is true even if you have a single event source running. So this is something to keep in mind.

jboss - how to automate retrying deployment of war file

When running JBoss 7.1 as a Windows service (or not), it occasionally takes more than one try to successfully deploy a war file. This is not a problem when starting JBoss manually, since restarts are easy. However, when JBoss runs as a Windows service and is restarted automatically (due to a Windows patch), JBoss itself may launch, but the war may not.
Is there any way to cause jboss to retry deploying the war after it fails the first time - for example, by changing a setting in standalone.xml?
There are two ways to fix your problem.
1) Go to standalone.xml (or whatever configuration you are running), find the deployment-scanner element and add/modify the deployment-timeout attribute, in seconds (see the sketch after this answer).
2) Deploy your application as a managed deployment; you can do that by deploying through the admin console or via the CLI with the deploy command. The deployment will then be "managed", will always be deployed, and won't use the deployment scanner and its timeouts.
I recommend using managed deployments, as the deployment scanner is not really recommended for production environments because it adds additional I/O load on the filesystem.
It is great for development/testing scenarios but should be avoided in production if possible.
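For option 1, a minimal sketch of the deployment-scanner change in standalone.xml (the subsystem namespace version and the 300-second timeout are assumptions; adjust to your installation):

<!-- standalone.xml: give deployments up to 300 seconds before the scanner marks them as failed -->
<subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
    <deployment-scanner path="deployments" relative-to="jboss.server.base.dir"
                        scan-interval="5000" deployment-timeout="300"/>
</subsystem>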