I'm looking for advice on how to manually (i.e. without using Runtime Manager - RM) deploy a Mule application package on an on-premises Mule cluster. The official documentation suggests using RM for this purpose, either via the GUI, the CLI, or the API. However, RM is not available in our environment.
I can manually deploy the package on a single node by copying it to the /apps folder. But this way the application is only deployed on a single node, not on the cluster.
I've tried using the AMC agent REST API for this purpose, with the same result: it only deploys to a single node.
So, what's the correct way of manually deploying a Mule application on a Mule server cluster without using Anypoint RM?
We are on Mule 4.4 EE.
Copy the application jar file into the apps directory of every node. Mule clusters do not transfer applications between nodes.
Alternatively, you can use the Runtime Manager Agent, however it also works on a per-node basis: you need to send the same deployment request to each node.
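If you want to script the copy-to-every-node approach, a minimal sketch could look like the one below (the artifact name and the per-node apps paths are placeholders; in practice you might copy over SSH/SCP instead of mounted paths):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;

// Sketch: copy the same application jar into every node's apps directory.
// Assumes each node's $MULE_HOME/apps is reachable as a local or mounted path.
public class DeployToAllNodes {

    public static void main(String[] args) throws Exception {
        Path appJar = Path.of("target/my-app-1.0.0-mule-application.jar"); // hypothetical artifact

        List<Path> appsDirs = List.of(
                Path.of("/mnt/node1/opt/mule/apps"),  // hypothetical mount of node 1
                Path.of("/mnt/node2/opt/mule/apps")); // hypothetical mount of node 2

        for (Path appsDir : appsDirs) {
            // Dropping the jar into apps/ triggers that node's poll-based deployment.
            Files.copy(appJar, appsDir.resolve(appJar.getFileName()),
                    StandardCopyOption.REPLACE_EXISTING);
            System.out.println("Copied " + appJar.getFileName() + " to " + appsDir);
        }
    }
}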
Each connector may or may not be cluster-aware. Read each connector's documentation to understand how it behaves. In particular, the documentation of the VM connector states:
When running in cluster mode, persistent queues are instead backed by the memory grid. This means that when a Mule flow uses VM Connector to publish content to a queue, Mule runtime engine (Mule) decides whether to process that message in the same origin node or to send it out to the cluster to be picked up and processed by another node.
You can register the multiple nodes through the AMC agent on the CloudHub control plane, create a server group, and deploy the code through the control plane Runtime Manager; it does the job of deploying the same app to the n nodes.
What is the best way to monitor Mule ESB instances? Is there a way I can get alerted when my Mule instance goes down for some reason? I have 4 instances of Mule running, and how will I know if one of them goes down for some reason?
Thanks!
I assume you are running Community Edition? (Enterprise Edition provides a Management Console which allows you to define alerts.) If you are using CE, then you can enable JMX monitoring on the instances and then use one of many ways to verify, based on the JMX info, whether your server is running. One way is to write your own application that retrieves JMX data programmatically and acts accordingly.
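For example, a rough sketch of such a check, assuming JMX remote is enabled on each instance (the hostnames and port below are placeholders):

import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: treat a successful JMX connection as "instance is up" and
// hook your alerting (mail, chat webhook, ...) into the failure branch.
public class MuleInstanceCheck {

    public static void main(String[] args) {
        String[] instances = {"mule-node1:1099", "mule-node2:1099"}; // placeholder hosts/ports

        for (String instance : instances) {
            String url = "service:jmx:rmi:///jndi/rmi://" + instance + "/jmxrmi";
            JMXConnector connector = null;
            try {
                connector = JMXConnectorFactory.connect(new JMXServiceURL(url));
                System.out.println(instance + " is UP");
            } catch (Exception e) {
                System.err.println(instance + " appears DOWN: " + e.getMessage());
            } finally {
                if (connector != null) {
                    try { connector.close(); } catch (Exception ignored) { }
                }
            }
        }
    }
}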
HTH
If you are using Mule EE, you can use MMC to monitor all your instances, as Gabriel has already suggested. My suggestion would be to install MMC inside Tomcat on a separate server. This ensures that even if your Mule server crashes or goes down, your MMC is still running and can send you alerts about your Mule server downtime. You can refer to the link below for details on how to set up server down and up alerts.
https://developer.mulesoft.com/docs/display/current/Working+With+Alerts
Additionally, I would recommend using MMC with database persistence, so that you can recover the MMC workspace even if your MMC server crashes. You can refer to the link below for MMC setup with DB persistence.
https://developer.mulesoft.com/docs/display/current/Configuring+MMC+for+External+Databases+-+Quick+Reference
If you don't have Mule EE, you may want to explore other tools or custom alerting applications as suggested by Gabriel.
HTH
You can set up a JMX agent by adding the following lines to your conf/wrapper.conf file:
wrapper.java.additional.19=-Dcom.sun.management.jmxremote
wrapper.java.additional.20=-Dcom.sun.management.jmxremote.port=10055
wrapper.java.additional.21=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.22=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.23=-Djava.rmi.server.hostname=127.0.0.1
Don't forget to change the values accordingly. You can also enable SSL and authentication with a few extra lines.
Once your monitoring platform is set up, you can activate its Java pollers and start the server.
I have a question out of curiosity and have been searching for an answer without any result. The GlassFish documentation says:
If the GlassFish Server instance on which the application client is deployed participates in a cluster, the GlassFish Server finds all currently active IIOP endpoints in the cluster automatically. However, a client should have at least two endpoints specified for bootstrapping purposes, in case one of the endpoints has failed.
but I am wondering how this list is created.
I've done some tests with a stand-alone client that runs in its own JVM and makes RMI calls to an application deployed in a GlassFish cluster. I can see from the logs that the IIOP endpoints list is completed automatically and set as the com.sun.appserv.iiop.endpoints system property. However, if I stop a server instance or start another one while the client is running, the list remains the one that was created when the JVM started.
GlassFish clustering is managed by the GMS (Group Management Service) which usually uses UDP Multicast, but can use TCP where that is not available.
See section 4 "Administering GlassFish Server Clusters" in the HA Administration Guide (PDF)
The Group Management Service (GMS) enables instances to participate in a cluster by detecting changes in cluster membership and notifying instances of the changes. To ensure that GMS can detect changes in cluster membership, a cluster's GMS settings must be configured correctly.
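On the client side, the documentation quoted in the question means you normally seed the bootstrap list yourself with at least two endpoints, roughly like this (host names, port and JNDI name are placeholders; the property can equally be passed on the command line as a -D option):

import javax.naming.InitialContext;

// Sketch: give the client ORB two known instances to bootstrap from;
// the currently active endpoint list is then discovered automatically.
public class ClusterClient {

    public static void main(String[] args) throws Exception {
        System.setProperty("com.sun.appserv.iiop.endpoints",
                "node1.example.com:3700,node2.example.com:3700");

        InitialContext ctx = new InitialContext();
        Object ref = ctx.lookup("java:global/my-app/MyRemoteBean"); // hypothetical JNDI name
        System.out.println("Looked up: " + ref);
    }
}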
I am new to Mule ESB.
I have downloaded the Mule Management Console (MMC). I have developed one simple application in which messages are put into a queue, which are then read by another flow. Now what I want is to have two MMC instances running so that, when one Mule instance is stopped, all the messages flow through the second instance.
Please let me know how to approach this. Can I do it on my local machine? I mean, can I create a cluster on my local machine?
For this you need Mule-based clustering. Refer to:
http://www.mulesoft.org/documentation/display/current/Mule+High+Availability+HA+Clusters
http://www.mulesoft.org/documentation/display/current/1+-+Installing+the+Demo+Bundle
Hope this helps!
A Mule server doesn't allow you to link itself with two MMC instances at the same time; that's because you register your Mule server with MMC. Refer to this: http://www.mulesoft.org/documentation/display/current/Architecture+of+the+Mule+Management+Console
There is a way to have another MMC instance, but that MMC instance has to be passive while the other is running. MuleSoft has documented this here: http://www.mulesoft.org/documentation/display/current/Persisting+MMC+Data+to+Oracle
So, in your case, when one MMC instance is stopped you can start the other one.
We create and publish a service through the web console (/publisher). The environment is a 2-node WSO2 AM cluster with a load balancer (HAProxy) on top of it.
When we invoke the service (via SoapUI) through this load balancer, one request succeeds, the next one fails, and so on.
IMHO: The cluster configuration should be correct. I can see the published service on both nodes if I start the /publisher app on each node.
axis2.xml:
- Hazelcast clustering is enabled
- Using multicast
master-datasources.xml:
- pointing to an Oracle database
api-manager.xml:
- pointing to the JDBC string in master-datasources.xml
Does anyone have any tips?
It seems that the Synapse artifact is not deployed to both Gateways correctly. Can you go to /repository/deployment/server/synapse-config/default/api and check whether both nodes have an XML file for the published API? If you haven't enabled the Deployment Synchronizer, the artifact will only be created on one node.
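If you want to script that check, something as simple as the sketch below would do (run it on each gateway node; the carbon home path is an assumption):

import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: list the Synapse API artifacts present on this gateway node.
public class SynapseArtifactCheck {

    public static void main(String[] args) throws Exception {
        Path apiDir = Path.of("/opt/wso2am/repository/deployment/server/synapse-config/default/api");
        try (DirectoryStream<Path> xmls = Files.newDirectoryStream(apiDir, "*.xml")) {
            xmls.forEach(p -> System.out.println("Found API artifact: " + p.getFileName()));
        }
    }
}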
I am trying to write scripts (using Java) to deploy my Mule application onto the cluster, so that the application gets deployed on the Mule ESB servers under the cluster.
I have already written code to deploy my Mule application on a Mule ESB server using the MMC REST API (http://www.mulesoft.org/documentation/display/current/MMC+REST+API).
Now my next target is to deploy the application on an MMC cluster.
Can anyone please suggest a way to deploy a Mule application on the cluster from Java code (using the API)?
Thanks in Advance.
The MMC REST API allows you to deploy to a cluster the same way you deploy to a standalone server:
http://www.mulesoft.org/documentation/display/current/Deployments
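As a rough Java sketch (the host, credentials, ids and the exact JSON field names are assumptions here; verify the payload against the Deployments page above for your MMC version):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Sketch: create a deployment through the MMC REST API, targeting a cluster id
// instead of an individual server id so MMC pushes the app to every node.
public class MmcClusterDeployment {

    public static void main(String[] args) throws Exception {
        String mmcBase = "http://mmc-host:8080/mmc/api"; // placeholder MMC URL
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());

        String body = "{ \"name\": \"my-cluster-deployment\","
                + " \"clusterIds\": [\"<cluster-id>\"],"              // assumed field name
                + " \"applications\": [\"<application-version-id>\"] }";

        HttpRequest request = HttpRequest.newBuilder(URI.create(mmcBase + "/deployments"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}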
Instead of Java code, why don't you try Maven? A Maven script can directly create the application zip and deploy it to the MMC cluster. All you need is to write the script in the POM file instead of a Java class.
There is a Maven plugin that you can use to deploy via MMC:
https://github.com/NicholasAStuart/Maven-Mule-REST-Plugin
mule-mmc-rest-plugin:deploy
This will:
- delete an existing Mule application archive from the MMC Repository if the version contains "SNAPSHOT"
- upload the Mule application archive to the MMC Repository
- delete an existing deployment having the same application name
- create a new deployment with the uploaded archive, targeting the given serverGroup
- perform a deploy request to make MMC deploy into the target server group
I used it and it works (but you may need to make some customizations to it).