Is there any API available to deploy an application on a Mule Management Console (MMC) cluster?

I am trying to write scripts (in Java) to deploy my Mule application on top of the cluster, so that the application gets deployed on the Mule ESB servers in the cluster.
I have already written code to deploy my Mule application to a standalone Mule ESB server using the MMC REST API (http://www.mulesoft.org/documentation/display/current/MMC+REST+API).
Now my next target is to deploy the application on an MMC cluster.
Can anyone suggest a way to deploy a Mule application on a cluster from Java code (using an API)?
Thanks in advance.

The MMC REST API lets you deploy to a cluster the same way you deploy to a standalone server:
http://www.mulesoft.org/documentation/display/current/Deployments
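For example, creating a cluster deployment from Java boils down to the same POST to the deployments resource, with a cluster id in place of server ids. Here is a minimal sketch using java.net.http; the base URL, credentials, cluster id, and versionId are placeholders, and the exact JSON field names should be verified against the MMC REST API documentation for your MMC version:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.List;

public class MmcClusterDeployment {

    // Same JSON shape as a server-group deployment, but targeting
    // "clusterIds" instead of server ids (check the MMC REST API docs
    // for your version -- the field names here are an assumption).
    static String buildDeploymentJson(String name, String clusterId, List<String> versionIds) {
        StringBuilder apps = new StringBuilder();
        for (String id : versionIds) {
            if (apps.length() > 0) apps.append(',');
            apps.append('"').append(id).append('"');
        }
        return "{\"name\":\"" + name + "\","
             + "\"clusterIds\":[\"" + clusterId + "\"],"
             + "\"applications\":[" + apps + "]}";
    }

    // Build the create-deployment request with HTTP Basic auth.
    static HttpRequest buildCreateRequest(String mmcBaseUrl, String user, String pass, String json) {
        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        return HttpRequest.newBuilder(URI.create(mmcBaseUrl + "/api/deployments"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        // Placeholder cluster id and application versionId (the versionId
        // is returned when the app archive is uploaded to the MMC repository).
        String json = buildDeploymentJson("myClusterDeployment",
                "local$_cluster_id", List.of("local$_version_id"));
        HttpRequest create = buildCreateRequest("http://localhost:8080/mmc", "admin", "admin", json);
        System.out.println(create.uri() + " -> " + json);
        // Sending the request (and the follow-up POST to
        // /api/deployments/{id}/deploy) requires a live MMC instance:
        // HttpClient.newHttpClient().send(create, HttpResponse.BodyHandlers.ofString());
    }
}
```

After creating the deployment, MMC expects a second request against the deployment's deploy action to actually push it to the cluster, exactly as with a server group.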

Instead of Java code, why not try Maven? A Maven build can create the application zip and deploy it to the MMC cluster directly; all you need to do is write the script in the pom.xml file instead of a Java class.

There is a maven plug in that you can use to deploy via MMC:
https://github.com/NicholasAStuart/Maven-Mule-REST-Plugin
mule-mmc-rest-plugin:deploy
This will:
- delete an existing Mule application archive from the MMC repository if the version contains "SNAPSHOT"
- upload the Mule application archive to the MMC repository
- delete an existing deployment having the same application name
- create a new deployment with the uploaded archive, targeting the given serverGroup
- perform a deploy request to make MMC deploy to the target server group
I used it and it works (but you may need to make some customizations).
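As a sketch, the plugin configuration in the pom.xml might look like the following; the coordinates, version, and parameter names are illustrative and should be verified against the plugin's README:

```xml
<plugin>
    <groupId>org.mule.tools</groupId>
    <artifactId>mule-mmc-rest-plugin</artifactId>
    <version>1.2.0</version>
    <configuration>
        <!-- Placeholder values: point these at your own MMC instance -->
        <mmcApiUrl>http://mmc-host:8080/mmc/api</mmcApiUrl>
        <mmcUsername>admin</mmcUsername>
        <mmcPassword>admin</mmcPassword>
        <serverGroup>Production</serverGroup>
        <name>my-mule-app</name>
    </configuration>
</plugin>
```

With that in place, `mvn mule-mmc-rest-plugin:deploy` performs the upload-and-deploy sequence described above.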


How to manually deploy Mule application package on the on-premises cluster?

I'm looking for advice on how to manually (i.e. without using Runtime Manager - RM) deploy a Mule application package on an on-premises Mule cluster. The official documentation suggests using RM for this, via the GUI, CLI, or API. However, RM is not available in our environment.
I can manually deploy the package on a single node by copying it to the /apps folder. But this way the application is only deployed on a single node, not on the cluster.
I've tried using the AMC agent rest API for the purpose with the same result - it only deploys on a single node.
So, what's the correct way of manually deploying a mule application on the Mule servers cluster without using Anypoint RM?
We are on Mule 4.4 EE.
Copy the application jar file into the apps directory of every node. Mule clusters do not transfer applications between nodes.
Alternatively, you can use the Runtime Manager Agent; however, it also works on a per-node basis. You need to send the same request to each node to deploy.
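Since the agent works per node, scripting the deployment means issuing the same request once per node. A minimal sketch in Java; the agent port and the /mule/applications/{name} endpoint path are assumptions to verify against the Runtime Manager Agent REST API documentation for your agent version:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.ArrayList;
import java.util.List;

public class PerNodeDeploy {

    // Build one identical deploy request per cluster node. The endpoint
    // path is an assumption -- check the Runtime Manager Agent REST docs.
    static List<HttpRequest> buildDeployRequests(List<String> nodeBaseUrls,
                                                 String appName, byte[] appJar) {
        List<HttpRequest> requests = new ArrayList<>();
        for (String base : nodeBaseUrls) {
            requests.add(HttpRequest.newBuilder(
                        URI.create(base + "/mule/applications/" + appName))
                    .header("Content-Type", "application/octet-stream")
                    .PUT(HttpRequest.BodyPublishers.ofByteArray(appJar))
                    .build());
        }
        return requests;
    }

    public static void main(String[] args) {
        // Placeholder node URLs; in practice read the jar bytes from disk.
        List<HttpRequest> reqs = buildDeployRequests(
                List.of("https://node1:9999", "https://node2:9999"),
                "my-app", new byte[0]);
        reqs.forEach(r -> System.out.println(r.method() + " " + r.uri()));
        // Actually sending each request requires reachable agents:
        // HttpClient.newHttpClient().send(r, HttpResponse.BodyHandlers.ofString());
    }
}
```

This mirrors the "copy the jar to every node's apps directory" approach: either way, each node must receive the artifact individually.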
Each connector may or may not be cluster aware. Read each connector documentation to understand how they behave. In particular the documentation of the VM connector states:
When running in cluster mode, persistent queues are instead backed by the memory grid. This means that when a Mule flow uses VM Connector to publish content to a queue, Mule runtime engine (Mule) decides whether to process that message in the same origin node or to send it out to the cluster to be picked up and processed by another node.
You can register the multiple nodes through the AMC agent on the CloudHub control plane, create a server group, and deploy the code through the control plane's Runtime Manager; it handles deploying the same app to all n nodes.

Dependence on External Maven and Git Resources

Application information:
Spring Cloud Data Flow Server Cloudfoundry 1.0.0.RELEASE (DIY built with Spring Cloud Config Server dependencies)
Spring Cloud Config Server
PCF Elastic Runtime 1.7.x
I'm curious about the extent to which applications and the deployer depend on the Git repo and Maven artifact repository I'm binding my SCDF instance and my Spring Cloud Config Server instance to in PCF.
My suspicion is that the Maven repo is only used at deployment time, when an artifact needs to be downloaded for installation and deployment in the PCF space. Also, I'm thinking the Git repo is probably cloned by the Config Server whenever an application initialization or refresh event occurs that requires re-reading the configuration information stored in Git.
Is this true, or are there ongoing dependencies that would require high availability for these external resources? My question relates to disaster recovery planning, and how quickly these specific resources need to be recovered for Spring Cloud Data Flow and its deployed streams to continue working under adverse conditions.
My suspicion is that the Maven repo is only used at deployment time, when an artifact needs to be downloaded for installation and deployment in the PCF space.
Yes - The applications are resolved and downloaded upon stream deployment request and the resolved apps are cached and reused upon redeployments.
I'm thinking the Git repo is probably cloned by the Config Server whenever an application initialization
True - For a given URI of a configuration source, the server will clone the repository and make its configurations available to all the client applications bound to it.
Both of these behaviors are driven by the application bootstrap event. As for the config server, if you're running it as a service in Cloud Foundry, it's up to the platform to reliably serve the properties to the bound applications.

How do i migrate my application from WebSphere to JBOSS EAP 6.2.0.GA?

I have my web application currently hosted on WebSphere Application Server 7.0.0.0.
Now I want to migrate my application to JBoss EAP 6.2.0.GA.
Has anyone done this before? I need help with the issues below.
I want to create following equivalent components in JBOSS.
1) Oracle data source
--> To create an Oracle data source, we first need to create an Oracle JDBC provider, so I also need to know how to create the equivalent of this in JBoss.
2) Queue
3) Activation Specification
4) Shared library to contain configuration files and third-party jars.
Knowledge of how to deploy applications on JBoss would be an added advantage.
Yes, I have done some googling and found the links below:
http://www.redhat.com/f/pdf/jboss/JBoss_WebSphereMigrationGuide.pdf
https://docs.jboss.org/author/display/AS72/How+do+I+migrate+my+application+from+WebSphere+to+AS+7
But the links don't have any practical knowledge.
I have migrated from WebSphere to JBoss/WildFly 10.
Not sure about the other versions of JBoss, but WildFly 10 has a configuration XML which you can use to configure your server.
I configured the database, queues, queue factories, and namespace bindings using this configuration XML, which is available as part of the server installation itself.
The file is present at this location:
YOUR_SERVER_INSTALLATION_HOME/standalone/configuration/standalone.xml
There are multiple configurations possible, and you can customize them as per your needs as well. You can refer to the documentation below for customization.
https://docs.jboss.org/author/display/WFLY10/Subsystem+configuration
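For instance, the WebSphere "Oracle JDBC provider + data source" pair maps to a driver plus datasource entry in the datasources subsystem of standalone.xml. A sketch, where the host, service name, JNDI name, module name, and credentials are all placeholders for your own values:

```xml
<subsystem xmlns="urn:jboss:domain:datasources:4.0">
    <datasources>
        <datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS">
            <connection-url>jdbc:oracle:thin:@//dbhost:1521/ORCL</connection-url>
            <driver>oracle</driver>
            <security>
                <user-name>appuser</user-name>
                <password>secret</password>
            </security>
        </datasource>
        <drivers>
            <!-- The driver references a JBoss module you create for the
                 Oracle JDBC jar; the module name here is an example. -->
            <driver name="oracle" module="com.oracle.ojdbc">
                <driver-class>oracle.jdbc.OracleDriver</driver-class>
            </driver>
        </drivers>
    </datasources>
</subsystem>
```

Queues and activation specifications live in the messaging subsystem of the same file, and shared libraries are handled as JBoss modules rather than WebSphere-style shared library definitions.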

WSO2 AM 1.6: Publish service to cluster

We create and publish a service through the web console (/publisher). The environment is a 2-node WSO2 AM cluster with a load balancer (HAProxy) on top of it.
When we invoke the service (SoapUI), via this load balancer, one request succeeds and the next one fails and so on.
IMHO the cluster configuration should be correct: I can see the published service on both nodes if I open the /publisher app on each node.
axis2.xml:
- Hazelcast clustering is enabled
- using multicast
master-datasources.xml:
- pointing to an Oracle database
api-manager.xml:
- pointing to the JDBC string in master-datasources.xml
Does anyone have any tips?
It seems that the Synapse artifact is not deployed on both gateways correctly. Go to /repository/deployment/server/synapse-config/default/api and check whether both nodes have an XML file for the published API. If you haven't enabled the Deployment Synchronizer, the artifact will only be created on one node.
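For Carbon-based products of that generation (AM 1.6), the Deployment Synchronizer is configured in repository/conf/carbon.xml. A sketch of the relevant block, where the SVN URL and credentials are placeholders; note that AutoCommit should typically be true only on the node where artifacts are created (the publisher/manager node):

```xml
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>      <!-- true on the publisher node only -->
    <AutoCheckout>true</AutoCheckout>  <!-- true on all nodes -->
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/wso2-depsync</SvnUrl>
    <SvnUser>depsync-user</SvnUser>
    <SvnPassword>depsync-password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```

With this enabled on both gateway nodes, the published API's Synapse XML should appear under synapse-config/default/api on each node after synchronization.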

jboss - how to automate retrying deployment of war file

When running jboss 7.1 as a windows service (or not), it occasionally takes more than one try to successfully deploy a war file. This is not a problem when starting jboss manually since restarts are easy. However, when jboss runs as a windows service and it is restarted automatically (due to a windows patch), jboss itself may launch, but the war may not.
Is there any way to cause jboss to retry deploying the war after it fails the first time - for example, by changing a setting in standalone.xml?
There are two ways to fix your problem.
1) Go to standalone.xml (or whichever configuration you are running), find the deployment-scanner element, and add/modify the deployment-timeout attribute (in seconds).
2) Deploy your application as a managed deployment; you can do that by deploying through the admin console or via the CLI with the deploy command. The deployment will then be "managed", will always be deployed, and won't use the deployment scanner and its timeouts.
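As a sketch of option 1, the deployment-scanner element in standalone.xml might look like this (the scan-interval and deployment-timeout values are illustrative; the subsystem namespace version depends on your JBoss release):

```xml
<subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
    <!-- deployment-timeout is in seconds; raise it so slow startups
         after a Windows-patch restart don't mark the war as failed -->
    <deployment-scanner path="deployments" relative-to="jboss.server.base.dir"
                        scan-interval="5000" deployment-timeout="600"/>
</subsystem>
```

For option 2, a managed deployment via the CLI is simply `deploy /path/to/your.war` from jboss-cli, after which the server tracks the deployment itself across restarts.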
I recommend deploying as a managed deployment, as the deployment scanner is not really recommended for production environments: it adds additional I/O load on the filesystem.
It is great for development/testing scenarios but should be avoided in production if possible.