Why MobileFirst 7.1 cannot auto-recover / reconnect after the network is lost / disconnected

IBM MobileFirst 7.1 does not auto-recover after a network failure / loss of connection, even though all services/connections are back to normal.
We have a clustered/farm setup with two web and app servers (Tomcat). Both app servers are able to serve incoming transactions. We had an incident in which there was a network failure/loss of connection, and during that time all transactions were pointed to one app server. Although all connections went back to normal, this one app server was still unable to connect to the configuration DB. What we did was turn off this failed server and try the app, which then pointed to the other app server, and the app worked. We then restarted the failed app server, tested the app, and it is now accepting transactions again. The question is: why does it not auto-recover, and why does the Tomcat service need to be restarted? Is MobileFirst 7.1 designed/built to behave this way (not auto-recovering)?
The expectation is that it should auto-recover.
Please help and advise what can be checked/adjusted.
Thanks in advance.
Best regards,
Jonathan

The default DB configuration (datasource configuration) provided with MFP is not designed to auto-recover when there is a DB connectivity issue. You should be able to configure MFP for auto-reconnect by providing the correct data source configuration. See this article on how this is done for different app servers: https://www.techpaste.com/2016/04/jndi-autoreconnect-java-application-servers/
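For example, on Tomcat (which the question uses) the reconnect behavior comes from the connection pool's validation settings rather than from MFP itself. Below is a minimal sketch, assuming the MFP databases are defined as Resource entries using Tomcat's default DBCP pool; the JNDI name, driver, URL, credentials and the DB2-style validation query are placeholders for whatever your installation already uses (on Tomcat 8+ the attribute is maxTotal instead of maxActive):

<!-- Validate connections before handing them out and evict broken ones,
     so the pool reconnects by itself once the database is reachable again. -->
<Resource name="jdbc/WorklightDS"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.ibm.db2.jcc.DB2Driver"
          url="jdbc:db2://dbhost:50000/WRKLGHT"
          username="wluser" password="wlpassword"
          maxActive="50" maxIdle="10" maxWait="30000"
          validationQuery="SELECT 1 FROM SYSIBM.SYSDUMMY1"
          testOnBorrow="true"
          testWhileIdle="true"
          timeBetweenEvictionRunsMillis="30000"/>

With testOnBorrow enabled, a connection that died during the network outage is discarded and replaced the next time the runtime asks for one, instead of being handed back broken.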

Related

WebLogic server goes down when any database glitch occurs

Dear all,
I am facing a problem with my WebLogic 12c server. I have 3 managed nodes and one admin server, and I have configured an Oracle datasource. Whenever a network glitch or database downtime happens, the WebLogic server also goes down completely. Is there any configuration I can make so that the server holds on until everything comes back?
The servers try to restart automatically, but they remain in the STARTING state. I then have to start all the servers again, which takes time. Your suggestions will help me a lot. Thanks in advance.
If you don't want your servers to restart automatically, the server failure action can probably be tweaked:
{managed_server} -> Configuration -> Overload -> Failure Action
However, you should probably be more interested in exploring whether you are missing connection timeout and connection testing configuration for your datasource.
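These settings live in the connection pool section of the datasource (Services -> Data Sources -> {datasource} -> Configuration -> Connection Pool in the console, or in the JDBC module descriptor under config/jdbc/). A minimal sketch of the relevant descriptor fragment, with example values only, would be:

<!-- Keep retrying the database instead of failing the server: start with an
     empty pool, test connections before use, and retry creation every 30 s. -->
<jdbc-connection-pool-params>
  <initial-capacity>0</initial-capacity>
  <max-capacity>50</max-capacity>
  <test-connections-on-reserve>true</test-connections-on-reserve>
  <test-table-name>SQL SELECT 1 FROM DUAL</test-table-name>
  <connection-creation-retry-frequency-seconds>30</connection-creation-retry-frequency-seconds>
  <seconds-to-trust-an-idle-pool-connection>10</seconds-to-trust-an-idle-pool-connection>
</jdbc-connection-pool-params>

With initial-capacity set to 0 and a creation retry frequency configured, the datasource can deploy even while the database is down and reconnects on its own once it is back, instead of the outage cascading into the managed server.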

MobileFirst 7.0 WLAPP deployment not synchronizing on other application server

We are using MobileFirst 7.0 running on 2 Tomcat application servers. When we deploy a new WLAPP, it is deployed on one of the app servers but does not synchronize to the other app server until we restart the Tomcat service.
Is there a flag / option to automatically synchronize on both app servers?
Thank you.
Best regards,
JM
The synchronization should happen automatically; it is controlled by the property cluster.data.synchronization.taskFrequencyInSeconds in server.xml. Try reducing this value if it is set too high.
Also check the property ibm.worklight.admin.environmentid to make sure it is specified correctly, as per the documentation below:
https://www.ibm.com/support/knowledgecenter/#!/SSHS8R_7.0.0/com.ibm.worklight.deploy.doc/admin/t_using_JNDI_lookup_to_override_WL_properties.html
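On Tomcat, both properties are set as JNDI environment entries in server.xml. A minimal sketch, assuming the runtime and the Administration Services are deployed as separate contexts (context paths, docBase values and the environment id below are placeholders):

<!-- Administration Services and every runtime in the farm must share the same
     environment id; the sync frequency controls how often runtimes pick up changes. -->
<Context path="/worklightadmin" docBase="worklightadmin">
  <Environment name="ibm.worklight.admin.environmentid" value="prodFarm"
               type="java.lang.String" override="false"/>
</Context>
<Context path="/worklight" docBase="worklight">
  <Environment name="ibm.worklight.admin.environmentid" value="prodFarm"
               type="java.lang.String" override="false"/>
  <Environment name="cluster.data.synchronization.taskFrequencyInSeconds" value="2"
               type="java.lang.String" override="false"/>
</Context>

The same entries need to exist on both Tomcat servers; if the environment id differs between them, each server manages its own deployments and the WLAPP will not propagate.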

Behavior of WL.server.createEventSource on a Worklight Cluster Environment

Let's assume I have a cluster of 2 Worklight servers sharing the same WL runtime.
On that runtime, I've installed an application with an adapter that has a createEventSource function, just like in this IBM article:
https://www.ibm.com/developerworks/community/blogs/worklight/entry/configuring_a_polling_event_source_to_send_push_notifications?lang=en
My question is: what will happen in a cluster environment?
Will repeated work ensue?
In other words, will my two WL Servers both be polling for events?
Or perhaps that functionality writes a task to the WL DB that the WL Servers check regularly for work no other instance is handling, so that only one server at a time would be "the event source"?
I'm working with IBM Worklight 6.2 and WebSphere Liberty Profile 8.5.5.
Thanks in advance!
Here's my attempt to answer this after some consultation:
My question is: what will happen in a cluster environment? Will
repeated work ensue? In other words, will my two WL Servers both be
polling for events?
While the Worklight Servers share the same runtime, they are still considered 2 separate instances. This means that each of them will attempt to perform the polling action. This is considered OK.
However, it is important to note that the backend system being polled should be smart enough to handle a situation where 2 polling attempts are made for the same message.
If the backend doesn't handle polling properly, the same message can be pulled more than once. This is true even if you have a single event source running, so it is something to keep in mind.

ODBC connection re establishes after application pool recycle

I have a web service application which connects to databases through the ODBC SQL Native Client and SQL Server drivers. All of a sudden the application stopped connecting to the database, throwing error 08001, but when I did an application pool recycle it started working. Now it is happening intermittently and has become a headache for me. It can't be a memory problem, as it once happened immediately after an app pool recycle, but was again corrected after one more app pool recycle. I don't know what is happening, as none of the error logs give any clue :(. Please help me...
The first step is to be able to diagnose what is going on; you cannot fix what you cannot measure. To do this I would enable pooling in the data source console for the driver, then add the counters to Performance Monitor to see what the connection pool is doing.
I'm not sure what the relationship between IIS application pool processes and ODBC connections is, but we are seeing some unexpected behaviour in this area. Also, the ODBC connection performance counters are visible if I connect to the driver through a locally installed console application, but I cannot see any performance counter activity for connections made via the web service app pool in IIS. Odd!?

MQ With WLS Foreign Server

I am facing two issues when I try to connect to MQ, which is deployed on a remote server, from WebLogic Server (WLS) by creating a Foreign Server.
1. When I try to connect to the MQ queue manager in bindings mode (after importing the .bindings file) I keep getting the below error in the WLS console:
java.lang.UnsatisfiedLinkError: no mqjbnd05 in java.library.path
2. If I switch the transport to Client I keep getting:
JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name 'localhost'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
Has anyone seen this, and are there any performance implications which dictate the use of client over bindings and vice versa?
TIA
Finally I was able to resolve this. I had to recreate the .bindings file in client mode, with changes to IVTsetup.bat, which is most likely present in
C:\Program Files\IBM\WebSphere MQ\java\bin. I had to run this:
def qcf(psQCF) TRANSPORT(CLIENT) HOST(SMEKA) PORT(1415) CHANNEL(ps_SRV_CHANNEL) QMGR(psQM)
to generate the .bindings file.
Refer to this link for more details:
http://publib.boulder.ibm.com/infocenter/wbihelp/v6rxmx/index.jsp?topic=/com.ibm.wbia_adapters.doc/doc/peoplesoft/peopleso103.htm
Where the question states that I try to connect to MQ which is deployed on a Remote Server from Weblogic Server I assume this means that WLS and WMQ are on two different hosts. If that is the case, then a bindings mode connection (which relies on shared memory segments) won't work.
The client mode connection appears to be using a CF that is pointed to localhost rather than the IP or hostname of the WMQ server. This would work for an application on the same host as the queue manager but not when the app and QMgr are on separate servers.
As far as choosing between client and bindings mode, the answer is that if the QMgr is local, use bindings. This provides the highest reliability, the best performance, and XA transactionality. When using client mode, two-phase XA commit is not supported without the Extended Transactional Client. Per the JMS specification, there is an ambiguity that can exist if an app loses the connection during a COMMIT call. Depending on how the app handles this, it's possible to end up with duplicate messages. (The JMS spec refers to these as "functionally duplicate.") This ambiguity is much less likely to occur with a bindings mode connection since there is no network latency and not even any traversal of the IP stack or interface. So use bindings mode where possible.
UPDATE:
Removed note about Extended Transactional Client being a chargeable component. As of April 24th, XTC is free of charge for all versions of WMQ on all platforms.
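For reference, the Foreign Server that consumes the regenerated .bindings file is defined in a WebLogic JMS system module. A minimal sketch, assuming the .bindings file sits in a directory on the WLS host, with placeholder names, paths and JNDI bindings throughout:

<!-- Excerpt of a JMS system module (e.g. config/jms/mq-module-jms.xml). The file-system
     JNDI context reads the .bindings file generated with JMSAdmin, and the foreign
     connection factory/destination map the MQ objects into the WLS JNDI tree. -->
<foreign-server name="WMQForeignServer">
  <initial-context-factory>com.sun.jndi.fscontext.RefFSContextFactory</initial-context-factory>
  <connection-url>file:///opt/mq/jndi</connection-url>
  <foreign-connection-factory name="psQCF">
    <local-jndi-name>jms/psQCF</local-jndi-name>
    <remote-jndi-name>psQCF</remote-jndi-name>
  </foreign-connection-factory>
  <foreign-destination name="psQueue">
    <local-jndi-name>jms/psQueue</local-jndi-name>
    <remote-jndi-name>psQueue</remote-jndi-name>
  </foreign-destination>
</foreign-server>

Because the queue manager is remote, the connection factory inside the .bindings file must have been defined with TRANSPORT(CLIENT) and the real MQ hostname rather than localhost, as described above.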