How to configure a running Mule service's properties dynamically? - mule

I would like a recommendation or idea for a way to configure properties of a running Mule service dynamically, i.e. I want the service to pick up new settings without having to restart Mule. Typical properties/settings I would like to change are FTP connector user IDs, passwords, service URLs, etc.
Any idea would be welcome.
Regards, Ola

Use the URI endpoint format to address endpoints dynamically. In simple cases you may be able to use the message properties in a TemplateEndpointRouter.
Otherwise, you need to write a component that composes the URI and sends the message to the dynamic endpoint using the MuleEventContext or MuleClient (see the sketch after the links below).
See here:
http://www.mulesoft.org/documentation/display/MULE2USER/Outbound+Routers#OutboundRouters-TemplateEndpointRouter
http://www.mulesoft.org/documentation/display/MULE2USER/Using+the+Mule+Client#UsingtheMuleClient-PerforminganEventRequestCall
http://www.mulesoft.org/documentation/display/MULE2USER/Mule+Endpoint+URIs
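For illustration, here is a minimal sketch of such a component, assuming the Mule 2-era client API covered by the links above. The class name, the ftp.user/ftp.password/ftp.host property names, and the /outbox path are all made up; in practice the current settings would come from the message or from whatever external source you reconfigure at runtime.

```java
import org.mule.api.MuleEventContext;
import org.mule.api.MuleMessage;
import org.mule.api.lifecycle.Callable;
import org.mule.module.client.MuleClient;

public class DynamicFtpDispatcher implements Callable {

    public Object onCall(MuleEventContext eventContext) throws Exception {
        MuleMessage message = eventContext.getMessage();

        // Illustrative property names: read the current credentials from the
        // message, or from whatever source you keep up to date at runtime.
        String user = (String) message.getProperty("ftp.user");
        String password = (String) message.getProperty("ftp.password");
        String host = (String) message.getProperty("ftp.host");

        // Compose the endpoint URI at runtime and dispatch the payload to it.
        String uri = String.format("ftp://%s:%s@%s/outbox", user, password, host);
        MuleClient client = new MuleClient(eventContext.getMuleContext());
        client.dispatch(uri, message.getPayload(), null);
        return null;
    }
}
```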

Mule exposes all service configuration via JMX, but I don't see any obvious way to reconfigure the connectors without a restart; internally they manage pools of connections.
If there is only a limited set of configurations, you can create a connector for each and reconfigure the routes via JMX attributes.
If it needs to be fully dynamic, you will likely have to implement your own service component to manage the FTP connection. Exposing the connection management, configuration, and restarting via JMX should be pretty straightforward.
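If you go the custom-component route, the JMX side is just the standard platform MBean server. Here is a minimal sketch; the MBean name, attributes, and restart operation are illustrative, not anything provided by Mule.

```java
// FtpConnectionManagerMBean.java -- standard MBean interface: writable attributes plus a restart operation
public interface FtpConnectionManagerMBean {
    String getUser();
    void setUser(String user);
    void setPassword(String password);
    void restart();
}

// FtpConnectionManager.java -- the component that owns the FTP connection pool
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class FtpConnectionManager implements FtpConnectionManagerMBean {

    private volatile String user;
    private volatile String password;

    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }
    public void setPassword(String password) { this.password = password; }

    // Close the existing FTP connections and reconnect with the current settings.
    public void restart() {
        // ... tear down and rebuild the connection pool here ...
    }

    // Call this once from the component's startup code; the attributes and the
    // restart operation are then available from any JMX console.
    public void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(this, new ObjectName("com.example:type=FtpConnectionManager"));
    }
}
```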

Related

PCF / Cloud connector for Rabbit management API

All,
I'm running a simple SpringBoot app in PCF using a Rabbit on-demand service. The auto reconfiguration of the ConnectionFactory for the internal Rabbit service works just fine.
However I need a list of all queues on the Rabbit host. AFAIK this is only available through a call to the Rabbit management plugin (a REST API), see RabbitManagementTemplate::getQueues. This class expects an http URI with credentials.
I know the URI and credentials are exposed through the vcap.services variables as "http_api_uri", but I wonder if there's a more elegant way to get an instance of RabbitManagementTemplate via Spring Cloud Connectors / auto-reconfiguration magic instead of manually reading the env vars and writing custom bean config.
It seems the ConnectionFactory only knows about the AMQP interface, and cannot create a RabbitManagementTemplate?
Thanks!
Spring Cloud Connectors won't help you here. It doesn't support setting up RabbitManagementTemplate, only a ConnectionFactory.
You don't have to parse the env yourself; you can use the flattened properties that Boot provides, such as vcap.services.rabbitmq.credentials.http_api_uri. But you'll need to configure a RabbitManagementTemplate yourself using those Boot properties.
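For example, a minimal sketch of such a configuration, assuming Spring AMQP 1.5+'s RabbitManagementTemplate(uri, username, password) constructor and a bound service instance named "rabbitmq"; adjust the vcap.services key and credential names to match your actual service binding.

```java
import org.springframework.amqp.rabbit.core.RabbitManagementTemplate;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitManagementConfig {

    // The property keys below are the flattened vcap.services properties Boot
    // exposes; they depend on how your Rabbit service instance is named.
    @Bean
    public RabbitManagementTemplate rabbitManagementTemplate(
            @Value("${vcap.services.rabbitmq.credentials.http_api_uri}") String apiUri,
            @Value("${vcap.services.rabbitmq.credentials.username}") String username,
            @Value("${vcap.services.rabbitmq.credentials.password}") String password) {
        return new RabbitManagementTemplate(apiUri, username, password);
    }
}
```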

Starting Mule without mule-config.xml and loading the mule-config programmatically

I have a requirement for Mule instances on different machines to interact. I used the TCP inbound endpoint and it works perfectly if I configure everything in mule-config.xml. My problem is that I don't want to use mule-config.xml; I want to load the configuration programmatically. If anyone has a solution, please share.
Thanks.
The whole point of using Mule is that you can configure your flows declaratively rather than writing complex code yourself. Having said that, you can programmatically configure endpoints provided they are outbound endpoints, or you use mulerequester:request (see the sketch below).
You cannot programmatically configure inbound endpoints.
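As an illustration of the outbound side, here is a minimal sketch assuming the Mule client API (org.mule.module.client.MuleClient). The class and method names are made up, and the remote Mule instance is assumed to still declare its TCP inbound endpoint in its own configuration.

```java
import org.mule.api.MuleMessage;
import org.mule.module.client.MuleClient;

public class TcpBridge {

    private final MuleClient client;

    public TcpBridge(MuleClient client) {
        this.client = client;
    }

    // Build the TCP endpoint address in code instead of declaring it in
    // mule-config.xml, then make a synchronous outbound call to the remote Mule.
    public Object callRemoteMule(String host, int port, Object payload) throws Exception {
        String url = String.format("tcp://%s:%d", host, port);
        MuleMessage reply = client.send(url, payload, null);
        return reply == null ? null : reply.getPayload();
    }
}
```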

Best way to deploy a WCF service on AWS when using MSMQ

We have a set of WCF services that use MSMQ. We use the static web.config file to tell the services where the MSMQ host is.
Moving to AWS, we now need to dynamically specify the MSMQ host address. We figure we can pick between 2 options:
1) Write a script to update the web.config files when spinning up the AWS instances.
2) Drop the config files and implement a helper function that will resolve the MSMQ host address at runtime.
Does anyone have insight on which approach would be better or considered best practice?
Thanks!
We ended up using solution #1.
The script was trivial to write, and now we can use environment variables anywhere in our web.config files (not just to set the MSMQ endpoints).
Keeping the MSMQ config in the web.config files also lets us change queue technology if/when needed by using other bindings (e.g. RabbitMQ) with no changes to the source code.
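For illustration, here is a minimal sketch of the idea behind such a script. The original was presumably a PowerShell or batch script run at instance startup, and the %NAME% placeholder convention is an assumption, not what the poster actually used; this version is written in Java purely to show the substitution logic.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConfigExpander {

    public static void main(String[] args) throws Exception {
        Path config = Paths.get(args[0]);              // e.g. the path to web.config
        String text = Files.readString(config);

        // Replace every %NAME% token with the value of the NAME environment variable;
        // leave the token untouched if the variable is not set.
        Matcher m = Pattern.compile("%([A-Z0-9_]+)%").matcher(text);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = System.getenv(m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(value != null ? value : m.group(0)));
        }
        m.appendTail(out);

        Files.writeString(config, out.toString());
    }
}
```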

Can a WCF Service access other ServiceHosts running in the same process?

I would like to create a service whose job is to monitor other services that are running within the same process, and then report basic information like health or service dependencies. I'm having trouble figuring out the best way for my monitoring service to access detailed information about the other services without having to have each service publish its metadata or expose some custom endpoint the monitoring service can communicate with. If I load the configuration and read through it I can get most of the way there but this approach has a few weaknesses:
1) Getting the absolute URI for each endpoint can be difficult, especially when using IIS hosting or file-less activation.
2) Any configuration that was done programmatically would not be readable by the monitoring service.
What I'd like to be able to do is to somehow access the ServiceDescription to get all the information I need about each ServiceHost, without requiring any work on the part of the service designer to give it to me. Is something like this possible?
If you've checked Channs' links and are convinced you need to roll your own health-monitoring infrastructure, you'll probably need to either derive from ServiceHost, or go all out and derive from ServiceHostFactoryBase, or possibly do both depending on what you need to implement. These will give you access to the ServiceDescription instance for each service as it is spun up.
One alternative would be to use WCF's built-in health monitoring and performance monitoring capabilities. This works at the individual service level though.

WCF Intermediary to enable calls between 2 endpoints behind routers without router configuration

I'm developing a synchronization service using WCF and Sync Framework, and I have it working when the endpoints can communicate directly.
The next step I need to implement is synchronizing 2 endpoints that are both behind routers whose IPs change constantly. I am thinking about a publicly available intermediary that forwards the calls between the 2 endpoints. My biggest problem is that I cannot rely on the users to configure port forwarding on their routers, so I cannot open a connection directly from the other endpoint or from the intermediary.
My idea is based on FogCreek's CoPilot and other remote-assistance solutions (LogMeIn, TeamViewer, etc.), which work without any router configuration.
How would you implement it?
You need something like the relay in Azure. I would try to implement it this way:
Your intermediary will provide two operations:
Push - the client calls this operation when publishing new data for synchronization. The data is stored on the service until the other client downloads it.
Pull - each client calls this operation regularly to download any published data stored on the intermediary.
Routers with changing IPs should not be a problem, because the client is always the one initiating the connection.
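For illustration, here is a minimal, transport-agnostic sketch of that store-and-forward idea. The class and method names are made up; in WCF the push and pull would become two service operations keyed by some peer or session identifier.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SyncRelay {

    // One mailbox (queue of pending change sets) per peer.
    private final Map<String, Queue<byte[]>> mailboxes = new ConcurrentHashMap<>();

    /** Push: the publishing endpoint parks a change set for the target peer. */
    public void push(String targetPeerId, byte[] changeSet) {
        mailboxes.computeIfAbsent(targetPeerId, id -> new ConcurrentLinkedQueue<>())
                 .add(changeSet);
    }

    /** Pull: each endpoint periodically drains everything queued for it. */
    public List<byte[]> pull(String peerId) {
        List<byte[]> batch = new ArrayList<>();
        Queue<byte[]> queue = mailboxes.get(peerId);
        if (queue != null) {
            byte[] item;
            while ((item = queue.poll()) != null) {
                batch.add(item);
            }
        }
        return batch;
    }
}
```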
If you are not limited to the HTTP protocol, you can implement this with the Net.Tcp binding and duplex communication. In that case your intermediary will be able to forward synchronized data immediately, but this solution can add complexity when dealing with sessions and connections.