MuleSoft MMC Deployment unreconciled - mule

We have a clustered MuleSoft server. All the applications are deployed correctly and working perfectly, but the deployments show in red on the on-premise MMC.
Each deployment says "Deployment unreconciled" when hovering over it. mule_ee.log doesn't show anything specific. Where can I find the logs that explain why MMC is giving this message?

Which version of Mule MMC are you using? We had the same issue, and MuleSoft suggested upgrading MMC to the latest version, i.e. 3.8.2. This seems to be a known bug in MMC version 3.5.2.
The deployments show red for the following reasons:
- Servers are shown as down in MMC but they're up and running.
- Alerts for server down are triggered even though the server was never down.
- Any operation over the servers is throwing an error as they're detected as being down.
- Applications may appear as offline when they're actually deployed.
Most of the problems listed above are caused by issues in the network connecting MMC to the Mule instances. MMC periodically pings the instances, and this ping is responsible for updating the server statuses. To avoid these problems, the ping's socket timeout can be tuned to a value that suits the current network.
To do this, edit the file /WEB-INF/classes/META-INF/applicationContext.xml and modify the bean with id="pinger" to include a new constructor argument of type int (as the last argument). The value defines the socket timeout of the server status call, expressed in milliseconds:
<bean id="pinger" class="com.mulesoft.mmc.heartbeat.Pinger">
<constructor-arg ref="serverManager" />
<constructor-arg ref="statusServiceAdaptor" />
<constructor-arg ref="eventManager" />
<constructor-arg ref="pingServerExecutor" />
<constructor-arg type="int"><value>10000</value></constructor-arg>
</bean>
After making this change, please restart MMC and check if the servers still experience the same problem.
Note: the default ping timeout is 1000 ms (one second) for MMC version 3.6.x and 5000 ms for MMC 3.7.x.
Regards,
Sanjeet Pandey

The yellow icon (for unreconciled) means the application is still being deployed; wait until deployment completes.
See documentation below
https://docs.mulesoft.com/tcat-server/v/7.1.0/deploying-applications

Related

How to monitor ActiveMQ queue in Apache James

We're using Apache James 3.0-beta4, which uses embedded ActiveMQ 5.5.0 for its FIFO message queue, and sometimes messages get stuck, so we need to monitor it. Is there any way to monitor an ActiveMQ queue, e.g. the queue size and the most recent message-id in the queue (if possible)?
In the James spring-server.xml I found this:
<amq:broker useJmx="true" persistent="true" brokerName="james"
            dataDirectory="filesystem=file://var/store/activemq/brokers"
            useShutdownHook="false" schedulerSupport="false" id="broker">
    <amq:destinationPolicy>
        <amq:policyMap>
            <amq:policyEntries>
                <!-- Support priority handling of messages -->
                <!-- http://activemq.apache.org/how-can-i-support-priority-queues.html -->
                <amq:policyEntry queue=">" prioritizedMessages="true"/>
            </amq:policyEntries>
        </amq:policyMap>
    </amq:destinationPolicy>
    <amq:managementContext>
        <amq:managementContext createConnector="false"/>
    </amq:managementContext>
    <amq:persistenceAdapter>
        <amq:amqPersistenceAdapter/>
    </amq:persistenceAdapter>
    <amq:plugins>
        <amq:statisticsBrokerPlugin/>
    </amq:plugins>
    <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:0" />
    </amq:transportConnectors>
</amq:broker>
Also, an old part from the README:
- Telnet Management has been removed in favor of JMX with client shell
- More metrics counters available via JMX
...
* Monitor via JMX (launch any JMX client and connect to URL=service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi)
which is confusing as to how to use it.
This is part of a bigger "monolith" project which is now being recreated as microservices but still needs to be supported ;) All was fine until mid-March.
It looks like ActiveMQ management and monitoring is not possible here because no JMX connector is exposed: the managementContext above sets createConnector="false".
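For what it's worth, if a JMX connector were enabled (either by setting createConnector="true" on the managementContext, or by starting the JVM with the usual com.sun.management.jmxremote flags as in the README), the queue could be inspected programmatically. A minimal sketch, assuming the README's JMX URL on port 9999 and a hypothetical queue named "spool"; the object-name pattern is the pre-5.8 style that ActiveMQ 5.5 uses:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JamesQueueMonitor {
    public static void main(String[] args) throws Exception {
        // JMX URL from the James README; adjust host/port to your setup
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // "james" matches brokerName in spring-server.xml above;
            // "spool" is a hypothetical destination name - browse the
            // org.apache.activemq domain in jconsole to find the real ones
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:BrokerName=james,Type=Queue,Destination=spool");
            System.out.println("QueueSize = " + mbs.getAttribute(queue, "QueueSize"));
            System.out.println("EnqueueCount = " + mbs.getAttribute(queue, "EnqueueCount"));
        } finally {
            connector.close();
        }
    }
}
Alternatively, once a connector is enabled, jconsole or VisualVM can browse the same MBeans interactively.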

How to start ActiveMQ WebConsole at the same server every time?

I have 3 virtual machines, each of them running ZooKeeper and ActiveMQ.
Every time I start ActiveMQ, the ActiveMQ WebConsole starts on a different server. I want to start the ActiveMQ WebConsole on the same server every time, so I don't need to dig through the logs to figure out which of them is running the WebConsole.
This is how my jetty.xml is configured:
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
<!-- the default port number for the web console -->
<property name="host" value="0.0.0.0"/>
<property name="port" value="8161"/>
</bean>
This is not possible, as the embedded web server runs on whichever broker is currently the master.
You can look at alternative web consoles that allow remote management, such as hawtio, which can connect to remote servers. You can start hawtio on your local computer, have it run on some other host, or start it separately on one of those 3 nodes, etc.
http://hawt.io/
Running a local hawtio as Claus advises is a great option.
If you want to stick with the web console, you can actually have it connect to the current master broker.
You will need to start the console in non-embedded mode and set (at least) three system properties; typically this involves deploying the web-console .war inside Tomcat or similar.
webconsole.jms.url=failover:(tcp://serverA:61616,tcp://serverB:61616)
webconsole.jmx.url=service:jmx:rmi:///jndi/rmi://serverA:1099/jmxrmi,service:jmx:rmi:///jndi/rmi://serverB:1099/jmxrmi
webconsole.type=properties
An old article discusses using the embedded web console for failover as well; I don't know whether it applies in all details to current versions.
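As a sketch of where those properties go, assuming the web-console .war runs inside Tomcat: they can be passed as JVM system properties, e.g. in bin/setenv.sh (serverA/serverB are the placeholder hostnames from the properties above):
# bin/setenv.sh - pass the web-console settings as JVM system properties
CATALINA_OPTS="$CATALINA_OPTS \
  -Dwebconsole.type=properties \
  -Dwebconsole.jms.url=failover:(tcp://serverA:61616,tcp://serverB:61616) \
  -Dwebconsole.jmx.url=service:jmx:rmi:///jndi/rmi://serverA:1099/jmxrmi,service:jmx:rmi:///jndi/rmi://serverB:1099/jmxrmi"
export CATALINA_OPTS
The failover: JMS URL lets the console follow whichever broker is currently master.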

Mule ESB Instance Monitoring

What is the best way to monitor Mule ESB instances? Is there a way I can get alerted when my Mule instance goes down for some reason? I have 4 instances of Mule running; how will I come to know if one of them went down for some reason?
Thanks!
I assume you are running the Community Edition? (The Enterprise Edition provides a Management Console which allows you to define alerts.) If you are using CE, then you can enable JMX monitoring on the instances and then use one of many ways to verify, based on the JMX info, whether your server is running. One way is to write your own application that retrieves JMX data programmatically and acts accordingly.
HTH
If you are using Mule EE, you can use MMC to monitor all your instances, as Gabriel has already suggested. My suggestion would be to install MMC inside Tomcat on a separate server. This ensures that even if your Mule server crashes or goes down, your MMC is still running and can send you alerts about your Mule server's downtime. You can refer to the link below for details on how to set up server down and up alerts.
https://developer.mulesoft.com/docs/display/current/Working+With+Alerts
Additionally, I would recommend using MMC with database persistence to ensure you can recover the MMC workspace even if your MMC server crashes. You can read about setting up MMC with DB persistence at the link below.
https://developer.mulesoft.com/docs/display/current/Configuring+MMC+for+External+Databases+-+Quick+Reference
If you don't have Mule EE, you may want to explore other tools or custom alerting applications, as suggested by Gabriel.
HTH
You can set up a JMX agent by adding the following lines to your conf/wrapper.conf file:
wrapper.java.additional.19=-Dcom.sun.management.jmxremote
wrapper.java.additional.20=-Dcom.sun.management.jmxremote.port=10055
wrapper.java.additional.21=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.22=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.23=-Djava.rmi.server.hostname=127.0.0.1
Don't forget to change the values accordingly. You can also enable SSL and authentication with a few extra lines.
Once your monitoring platform is set up, you can activate its Java pollers and start the server.
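To illustrate the "write your own application" idea from the earlier answer, here is a minimal, hypothetical liveness check against the JMX agent configured above (port 10055, authentication disabled); anything beyond "can I connect" - heap usage, flow statistics, etc. - can be read from the MBeans once connected:
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MuleLivenessCheck {
    public static void main(String[] args) throws Exception {
        // Host of the Mule instance; port 10055 matches wrapper.conf above
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":10055/jmxrmi");
        try {
            JMXConnector connector = JMXConnectorFactory.connect(url);
            // A successful connection means the Mule JVM and its JMX agent are up
            System.out.println(host + " is UP (" + connector.getConnectionId() + ")");
            connector.close();
        } catch (Exception e) {
            // Hook your alerting (mail, SMS, monitoring tool, ...) in here
            System.err.println(host + " appears DOWN: " + e);
            System.exit(1);
        }
    }
}
Run one check per instance (e.g. from cron) to cover all 4 servers.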

MQ With WLS Foreign Server

I am facing two issues when I try to connect to MQ, which is deployed on a remote server, from WebLogic Server (WLS) by creating a Foreign Server.
1. When I try to connect to the MQ queue manager in Bindings mode (after importing the .bindings file), I keep getting the below error in the WLS console:
java.lang.UnsatisfiedLinkError: no mqjbnd05 in java.library.path
2. If I switch the transport to Client, I keep getting:
JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name 'localhost'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
Has anyone seen this, and are there any performance implications which dictate the use of client over bindings and vice versa?
TIA
Finally I was able to resolve this. I had to recreate the .bindings file in client mode, with changes to IVTsetup.bat, which is most likely present in
C:\Program Files\IBM\WebSphere MQ\java\bin. I had to run this:
def qcf(psQCF) TRANSPORT(CLIENT) HOST(SMEKA) PORT(1415) CHANNEL(ps_SRV_CHANNEL) QMGR(psQM)
to generate the .bindings file.
Refer to this link for more details:
http://publib.boulder.ibm.com/infocenter/wbihelp/v6rxmx/index.jsp?topic=/com.ibm.wbia_adapters.doc/doc/peoplesoft/peopleso103.htm
Where the question states "I try to connect to MQ which is deployed on a Remote Server from Weblogic Server", I assume this means that WLS and WMQ are on two different hosts. If that is the case, then a bindings-mode connection (which relies on shared memory segments) won't work.
The client-mode connection appears to be using a CF that points to localhost rather than the IP or hostname of the WMQ server. This would work for an application on the same host as the queue manager, but not when the app and the QMgr are on separate servers.
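To illustrate, a client-mode CF built programmatically must point at the WMQ host, not localhost. A sketch using the WMQ JMS classes with the same placeholder values as the JMSAdmin command above (WMQConstants is from the MQ v7 client jars; older clients use JMSC.MQJMS_TP_CLIENT_MQ_TCPIP instead):
import javax.jms.Connection;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ClientModeCheck {
    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // client, not bindings
        cf.setHostName("SMEKA");            // the remote WMQ host, NOT localhost
        cf.setPort(1415);
        cf.setChannel("ps_SRV_CHANNEL");
        cf.setQueueManager("psQM");
        Connection conn = cf.createConnection();
        try {
            conn.start();
            System.out.println("Connected to queue manager psQM");
        } finally {
            conn.close();
        }
    }
}
Pointing the host at the real queue manager is exactly what fixes the "host name 'localhost'" error quoted in the question.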
As far as choosing between client and bindings mode: if the QMgr is local, use bindings. This provides the highest reliability, the best performance, and XA transactionality. When using client mode, two-phase XA commit is not supported without the Extended Transactional Client. Per the JMS specification, an ambiguity can exist if an app loses the connection during a COMMIT call; depending on how the app handles this, it's possible to end up with duplicate messages. (The JMS spec refers to these as "functionally duplicate.") This ambiguity is much less likely to occur with a bindings-mode connection since there is no network latency, nor even any traversal of the IP stack or network interface. So use bindings mode where possible.
UPDATE:
Removed note about Extended Transactional Client being a chargeable component. As of April 24th, XTC is free of charge for all versions of WMQ on all platforms.

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. Messages sent to either server all flow as expected. I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below in weblogic-application.xml:
<start-mdbs-with-application>false</start-mdbs-with-application>
Setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot process.
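For context, a sketch of the full descriptor; per the WebLogic docs the element sits under <ejb> (verify the exact schema for your WLS version):
<!-- META-INF/weblogic-application.xml -->
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
    <ejb>
        <!-- Defer MDB startup until the server's listen port is open -->
        <start-mdbs-with-application>false</start-mdbs-with-application>
    </ejb>
</weblogic-application>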
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This is applicable to versions up to 12c and later.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in a Warning state. Once JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.