Configure MuleSoft shared distributed memory

I am working on a MuleSoft application that I have deployed to Mule servers on two different physical machines. The servers are bound together to form a cluster.
In clustering mode, the servers are said to share a common distributed memory, so that if one machine goes down, the other machine takes over its tasks. In other words, they maintain a common distributed memory between them.
Is there any way to configure the size of the common distributed memory that the cluster leverages?
As traffic grows and more applications are added, I guess the memory threshold for the cluster will need to be raised.
Or, if not, do we ever have to modify the amount of memory that a MuleSoft cluster uses at all?
Please help me out.
Thanks

In clustered scenarios all object stores are replaced with clustered object stores. Clustered object stores use the shared memory grid created by the clustering code to persist information, meaning there is no file-system-level persistence. If one node suffers an outage, the other nodes in the cluster remain active and keep the object store information in the shared memory grid, which makes persistence on the file system unnecessary.
Additionally, since object stores use the name of the application as part of storage information, if you want to keep them across re-deployments, the newly deployed application must have the same name as the previous one. Please see below as a reference:
Scenario a:
1. Current application name: test
2. New application name: test
- Object store values will be preserved from 1 to 2.
Scenario b:
1. Current application name: test-v1
2. New application name: test-v2
- Object store values will not be preserved from 1 to 2.
Note: prior to Mule 3.5.0, the in-memory store was the default. As of Mule 3.5.0, the persistent store is the default.
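As a rough illustration of why application code does not change, here is a minimal sketch against the Mule 3.x Java API (the store name, key, and value are invented for illustration); the same calls are backed by files on a standalone runtime and by the shared memory grid in a cluster:

    import java.io.Serializable;
    import org.mule.api.MuleContext;
    import org.mule.api.config.MuleProperties;
    import org.mule.api.store.ObjectStore;
    import org.mule.api.store.ObjectStoreException;
    import org.mule.api.store.ObjectStoreManager;

    public class ClusterStoreSketch {
        // Call from code that already has a MuleContext (e.g. a custom component).
        public static void recordStatus(MuleContext muleContext) throws ObjectStoreException {
            ObjectStoreManager osm = muleContext.getRegistry()
                    .get(MuleProperties.OBJECT_STORE_MANAGER);
            ObjectStore<Serializable> store = osm.getObjectStore("order-status-store");
            // store() throws if the key already exists; remove() first if you need to overwrite.
            store.store("order-42", "PROCESSED");
            // In a cluster this retrieve succeeds on any node, even if another node wrote the entry.
            Serializable status = store.retrieve("order-42");
            System.out.println(status);
        }
    }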

MuleSoft offers active-active clustering here, so we need not worry about which server has to do the work: when one server is down, the other keeps working. As for memory, the distributed memory grid lives inside each node's JVM heap, so its consumption is governed by the ordinary JVM memory settings of the Mule runtime.
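So if the grid needs more room, you raise the JVM heap of each node rather than a separate "cluster memory" setting. A minimal sketch, assuming a standalone Mule runtime using the standard Tanuki wrapper (the numbers are purely illustrative, in MB):

    # $MULE_HOME/conf/wrapper.conf
    wrapper.java.initmemory=1024
    wrapper.java.maxmemory=2048

Give every node in the cluster the same heap settings, since each node typically holds its own share of the grid plus backup copies of its peers' entries.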

Related

redis cluster - is a proxy or cluster supporting library necessary to interact with a cluster?

So, I'm designing a distributed system with multiple redis instances to break up a large amount of streaming writes, but finding it difficult to get a clear picture of how things work.
From what I've read, it seems that a properly configured cluster will automatically shard the keyspace and redirect requests made on the 'wrong' instance (say key 'A' maps to instance 1 but is set on instance 2; the request will be redirected to instance 1). Am I correct in assuming this?
If so, what advantages does an extra proxy and/or library cluster support give me over simply just connecting to one redis instance and letting it do all the work of figuring out where the SETS and GETS should be done?
Cluster support on the client means the client learns where the data is stored and remembers it; the next time it tries to read or write a key, it goes straight to the correct instance, which improves performance.
It's like calling directory enquiries every time you want to call a business versus just knowing the number of the business.
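As a rough sketch of what a cluster-aware client looks like, assuming the Jedis Java client and a cluster node reachable on 127.0.0.1:7000 (both are illustrative):

    import java.util.HashSet;
    import java.util.Set;
    import redis.clients.jedis.HostAndPort;
    import redis.clients.jedis.JedisCluster;

    public class ClusterClientSketch {
        public static void main(String[] args) throws Exception {
            // Seed the client with one known node; it discovers the rest of the
            // topology and the slot-to-node map from the cluster itself.
            Set<HostAndPort> seedNodes = new HashSet<>();
            seedNodes.add(new HostAndPort("127.0.0.1", 7000));

            JedisCluster cluster = new JedisCluster(seedNodes);
            // The client hashes the key, looks up which node owns that slot,
            // and sends the command straight there instead of being redirected.
            cluster.set("A", "some value");
            System.out.println(cluster.get("A"));
            cluster.close();
        }
    }

A proxy still works, but every request then pays an extra hop; the cluster-aware client only pays that cost when the slot map changes (it follows the MOVED redirect once and updates its map).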

Are Mule ESB object stores persistent across redeploys?

Mule ESB CE supports object stores, which can be set to persistent. From here I also know that the stores are application-specific if defined in the application XMLs.
Unfortunately, I was unable to find any information if any data will be lost when:
1. Mule is restarted
2. Mule is killed
3. The application is re-deployed
I'm almost sure that (1) has no impact on the data. I suppose the object store is also kill-agnostic. What about the application being redeployed? I think there are 2 scenarios here:
Object store is defined on app-level
Object store is defined on domain-level
Am I right that in the 1st scenario data will be lost, while the latter will retain the data across application redeploys?
I'm working on Mule 3.5.0 CE.
Any help & references will be appreciated.
For 1, 2, and 3, data should be persistent and available upon restart/redeploy, etc. The only issue is changing the application name: since object stores use the application name as part of the persisted storage information, if you want the data to be available across redeploys, the newly deployed application must have the same name as the previous one.
In no case will data be lost from a queue until it has been retried (depending on configuration) and moved to the DLQ.
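As a minimal sketch of how to check this (Mule 3.5 Java API; the store name and key are invented for illustration), you can ask the object store manager for a persistent store by name and read a value back after a restart or redeploy:

    import java.io.Serializable;
    import org.mule.api.MuleContext;
    import org.mule.api.config.MuleProperties;
    import org.mule.api.store.ObjectStore;
    import org.mule.api.store.ObjectStoreException;
    import org.mule.api.store.ObjectStoreManager;

    public class StoreCheck {
        public static Serializable readBack(MuleContext muleContext) throws ObjectStoreException {
            ObjectStoreManager osm = muleContext.getRegistry()
                    .get(MuleProperties.OBJECT_STORE_MANAGER);
            // 'true' requests a persistent (file-backed) store named "my-app-store".
            ObjectStore<Serializable> store = osm.getObjectStore("my-app-store", true);
            return store.contains("lastProcessedId")
                    ? store.retrieve("lastProcessedId")   // survives restart/kill/redeploy
                    : null;
        }
    }

Remember the caveat above: rename the application and the runtime treats this as a different store.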

IBM Worklight 6.2. Analytics topology. Master and data Nodes

I'm reading about production topology for the Analytics part of Worklight 6.2.
https://www-01.ibm.com/support/knowledgecenter/api/content/SSZH4A_6.2.0/com.ibm.worklight.monitor.doc/monitor/t_setting_up_production_cluster.html
It explains that nodes can act both as Master Node or as Data Node or only as one of them.
My question is: why should we configure dedicated nodes (master OR data) instead of configuring all the nodes as both master AND data?
I assume the single node acting as master will perform worse in its data role, but on the other hand the configuration will be simpler and the high availability will be higher.
Thank you.
Your assumption is correct.
A master node is responsible for handling communication between the data nodes. The data nodes are responsible for indexing data. Having dedicated master and data nodes allows each to focus its processing time and memory on its specific task. However, as you mentioned, in some cases it's not worth complicating the configuration for this.
Another reason is that it's not necessary to put a master node on a high-performing machine, so you can reserve the better machines for the data nodes.
The analytics console uses Elasticsearch under the covers. It is worth looking up the benefits and drawbacks of choosing master and data nodes in Elasticsearch, since it is open source and there are several resources available for it.
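If you want to see what the master/data split means at the Elasticsearch level, here is a minimal sketch using the Elasticsearch 1.x Java API (Worklight normally drives this through its own configuration properties, so the cluster name and role combinations below are illustrative assumptions):

    import org.elasticsearch.common.settings.ImmutableSettings;
    import org.elasticsearch.node.Node;
    import static org.elasticsearch.node.NodeBuilder.nodeBuilder;

    public class NodeRolesSketch {
        public static void main(String[] args) {
            // A dedicated master: coordinates the cluster but holds no index data.
            Node masterOnly = nodeBuilder()
                    .clusterName("analytics-cluster")
                    .settings(ImmutableSettings.settingsBuilder()
                            .put("node.master", true)
                            .put("node.data", false))
                    .node();

            // A dedicated data node: indexes and stores documents, never elected master.
            Node dataOnly = nodeBuilder()
                    .clusterName("analytics-cluster")
                    .settings(ImmutableSettings.settingsBuilder()
                            .put("node.master", false)
                            .put("node.data", true))
                    .node();

            // Leaving both settings at their defaults (true) gives the combined
            // master+data node discussed in the question.
            masterOnly.close();
            dataOnly.close();
        }
    }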
Edit:
As you can imagine, there is no one-size-fits-all configuration. The configuration depends on several factors, such as:
How long you wish to keep data stored
How many machines you have to dedicate to analytics
How verbose your client logs have been set
Your preferences between availability and performance
In my personal tests, I typically keep each node as both a data and a master node. It's possible that in the future we will document how the different configurations affect performance.

Couchbase node failure

My understanding could be amiss here. As I understand it, Couchbase uses a smart client to automatically select which node to write to or read from in a cluster. What I DON'T understand is, when this data is written/read, is it also immediately written to all other nodes? If so, in the event of a node failure, how does Couchbase know to use a different node from the one that was 'marked as the master' for the current operation/key? Do you lose data in the event that one of your nodes fails?
This sentence from the Couchbase Server Manual gives me the impression that you do lose data (which would make Couchbase unsuitable for high availability requirements):
With fewer larger nodes, in case of a node failure the impact to the application will be greater
Thank you in advance for your time :)
By default, when data is written into Couchbase, the client returns success as soon as the data has been written to one node's memory. After that, Couchbase persists it to disk and replicates it.
If you want to ensure that the data has been persisted to disk, most client libraries provide functions that let you do that. With the help of those functions you can also ensure that the data has been replicated to another node. This mechanism is called observe.
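For example, with the classic Couchbase Java SDK (1.x), a write can be made to block until it is persisted and replicated; a rough sketch, assuming a bucket named "default" with no password on localhost:

    import java.net.URI;
    import java.util.Arrays;
    import com.couchbase.client.CouchbaseClient;
    import net.spy.memcached.PersistTo;
    import net.spy.memcached.ReplicateTo;

    public class DurableWriteSketch {
        public static void main(String[] args) throws Exception {
            CouchbaseClient client = new CouchbaseClient(
                    Arrays.asList(URI.create("http://127.0.0.1:8091/pools")),
                    "default", "");

            // Observe the write: the future completes successfully only once the value
            // is persisted to disk on the active node AND replicated to one replica.
            boolean ok = client.set("user:42", 0, "{\"name\":\"demo\"}",
                    PersistTo.MASTER, ReplicateTo.ONE).get();

            System.out.println("durable write succeeded: " + ok);
            client.shutdown();
        }
    }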
When one node goes down, it should be failed over. Couchbase Server can do that automatically when an auto-failover timeout is set in the server settings. For example, if you have a 3-node cluster, the stored data has 2 replicas, and one node goes down, you will not lose data. If a second node fails, you still will not lose the data: it will be available on the last node.
If a node that was the master for a key goes down and is failed over, another live node becomes the master for that key. In your client you point to all servers in the cluster, so if it is unable to retrieve data from one node, it tries to get it from another.
Also, if you have only 2 nodes at your disposal, you can install 2 separate Couchbase servers, configure XDCR (cross datacenter replication) between them, and check server availability yourself with HA proxies or something similar. That way you get a single IP to connect to (the proxy's), which will automatically serve data from the live server.
Couchbase is indeed a good system for HA requirements.
Let me explain in a few sentences how it works. Suppose you have a 5-node cluster. The application, using the client API/SDK, is always aware of the topology of the cluster (and of any change to that topology).
When you set/get a document in the cluster, the client API uses the same algorithm as the server to choose which node the document should be written to. So the client selects the node using a CRC32 hash of the key and writes to that node. Then, asynchronously, the cluster copies 1 or more replicas to the other nodes (depending on your configuration).
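A rough sketch of that key-to-partition idea (the 1024-partition count and the plain modulo are simplifying assumptions here; the real mapping, including the exact bit handling and the partition-to-node table, is done for you by the client library):

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    public class PartitionSketch {
        // Every client that hashes a key the same way picks the same partition,
        // and therefore the same active node, without asking a proxy first.
        static int partitionFor(String key, int numPartitions) {
            CRC32 crc = new CRC32();
            crc.update(key.getBytes(StandardCharsets.UTF_8));
            return (int) (crc.getValue() % numPartitions);
        }

        public static void main(String[] args) {
            System.out.println(partitionFor("user:42", 1024));
        }
    }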
Couchbase has only 1 active copy of a document at a time, so it is easy to stay consistent: applications get and set that active copy.
In case of failure, the server has some work to do. Once the failure is discovered (automatically or by a monitoring system), a "fail over" occurs. This means that the replicas are promoted to active and it is now possible to work as before. Usually you then rebalance the cluster to distribute the data properly.
The sentence you are quoting simply means that the fewer nodes you have, the bigger the impact of a failure/rebalance will be, since you have to route the same number of requests to a smaller number of nodes. You do not lose data, though ;)
You can find some very detailed information about this way of working on the Couchbase CTO's blog:
http://damienkatz.net/2013/05/dynamo_sure_works_hard.html
Note: I work as a developer evangelist at Couchbase.

Cache Regions in Velocity/AppFabric using WCF

I have a service-based architecture where a web farm full of ASP clients hits an application server farm of WCF services. Obviously all the database access is done by the WCF services. Now I would like to cache my frequently used database-retrieved objects using Velocity at the service tier level. I am considering making each physical application server also part of the cache cluster.
According to the Velocity documentation, if I use regions, objects are stored only on a single host. I actually wouldn't have any problem if each host kept its own cache, provided that I could somehow synchronize them.
So my questions are
If I create one region on one host is it also created on another one?
When I clear a cache region, is it cleared on one host only?
If I subscribe to a region level notification on all the hosts, can I catch events of one host on another one?
In this scenario should I use regions at all or stay away from them?
I hope my questions are clear. Actually, I am more interested in a solution to my problem than in answers to my questions.
Yes, you are right in reading the doc: the region will exist only on one host.
" I actually wouldn't have any problem if each host kept it's own cache provided that I could somehow synchronize them."
When you say synchronize, do you mean when HA is enabled? Velocity would actually take care of that, if that's what you meant.
For the questions:
1. No.
2. Yes.
3. Notifications will be sent to the client, so I am not sure if there is any way to send notifications to another host.
4. Regions give you search capabilities but take away HA. In your case, you could use the advantages of HA.
Having regions does not necessarily mean that you don't have HA. If you create your own cache (and don't use the 'default' one) you can create it with Secondaries = 1 (HA on).
Now let's say you have 4 cache hosts; when you define a region, it will have both a primary and a secondary host, so each action on the region will be applied to both.
Shany
Named caches distribute across participating nodes. Named regions live on a single node. Regions can be HA, but they cannot take full advantage of distributed cache scaling, as their object load does not distribute across participating nodes in the cluster. Also, using named caches with HA requires three nodes minimum, rather than two nodes if you used the "default" cache only.