WSO2 ESB High Availability / Clustering Environment System Requirements

I want information on the WSO2 ESB clustering system requirements for a production deployment on Linux.
I went through the following link: ESB clustering.
I understand that more than one copy of the WSO2 ESB would be extracted and set up on a single server for the worker nodes, and similarly on the other server for the manager (DepSync and admin) and worker nodes.
Can someone suggest what the system requirements of each server would be in this case?
The system prerequisites link suggests:
Memory - 2 GB, with a 1 GB heap size
Disk - 1 GB
I assume this is to handle one ESB instance (worker or manager node).
Thanks in advance,
Sai.

As a minimum, the system requirement would be 2 GB for the ESB worker JVM, plus appropriate memory for the OS (assume 2 GB for Linux in this case), which comes to 4 GB minimum per server. Of course, depending on the type of work done and the load, this requirement might increase.
The worker/manager separation is a separation of concerns. Hence, in a typical production deployment you might have a single manager node (same specs) and two worker nodes, where only the worker nodes handle traffic.

Related

One Mule app server in a cluster pulling most of the messages from MQ

My Mule application is comprised of 2 nodes running in a cluster, and it listens to an IBM MQ cluster (basically connecting to 2 queue managers). There are situations where one Mule node pulls more than 80% of the messages from the MQ cluster while the other node picks up the remaining 20%. This is causing CPU performance issues.
We have double-checked that all load balancing is configured properly, and yet from time to time we get the CPU performance problem. Can anybody give some ideas about the possible reason for it?
Example: in the last scenario, there were 200,000 messages in the queue, and the Mule server on node 2 picked up 92% of the messages from the queue within a few minutes.
This issue has been fixed now. We got to the root cause: our Mule application running on MULE_NODE01 reads from and writes to WMQ_NODE01, and similarly for node 2. One of the Mule nodes (let's say MULE_NODE02) reads from the Linux/Windows file system and puts huge messages onto its corresponding WMQ_NODE02. It is then IBM MQ that tries to push the extra load to the other WMQ node to balance the workload. That's why MULE_NODE01 reads all those loaded files from WMQ_NODE01 and causes the CPU usage alerts.
@JoshMc, your clue helped a lot in understanding the issue; thanks a lot for helping.
It is the WMQ node in a cluster that tries to push the extra load to the other WMQ node; it seems this is how MQ works internally.
To solve this, we are now connecting our Mule nodes to an MQ gateway rather than making 1-to-1 connections.
This could be solved by avoiding the race condition caused by multiple listeners:
Configure the listener in the cluster to run on the primary node only.
Republish the message to a persistent VM queue.
Move the logic to another flow that can be triggered via a VM listener, and let the Mule cluster do the load balancing.

RabbitMQ problems with a node in a cluster

I have some problems with my RabbitMQ HA cluster.
The problems are as follows:
I have 3 nodes in the cluster.
Nodes 2 and 3 are joined with node 1.
When there is load, it goes to node 1 and almost all of its RAM is used.
If I switch nodes, all the load goes to the next node, but RAM usage is lower than on node 1.
Memory investigation shows that at that moment all the RAM is held by the RabbitMQ binary, yet the binary itself uses only 1 GB of memory while 5 GB is allocated.
If I switch back, node 1 again uses more RAM than the other nodes.
What is the problem in this case?
Can anybody help me to solve this issue?
If you need more information or screenshots I can send them to you.
RabbitMQ 3.6.10
Erlang 20.3
Traffic to RabbitMQ goes via an HAProxy located on the same server as RabbitMQ.

Transferring results from Zookeeper to webserver

In my project I am computing about 10-100 MB of data on a ZooKeeper worker. I then use HTTP PUT to transfer the data from the worker process to my webserver, from which it eventually gets delivered to the client. Is there any way to transfer that data using ZooKeeper or Curator, or am I on my own to get the data out of the worker process and onto a process outside my ensemble?
I wouldn't recommend using ZooKeeper to transfer data, especially data of such relatively large size. It is not really designed for that. ZooKeeper works best when it is used to synchronize distributed processes or to store relatively small configuration data shared among multiple hosts.
There is a hard limit of 1 MB per znode, and if you try to push it to that limit, ZooKeeper clients may get timeouts and go into a disconnected state while the ZooKeeper service processes the large chunk of data.
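If the ensemble still needs to coordinate the hand-off, a common pattern is to keep the bulky payload outside ZooKeeper (the HTTP PUT to the webserver already does this) and publish only a small pointer znode for consumers to read. A minimal Curator sketch; the connection string, znode path, and URL below are illustrative, not from the question:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.nio.charset.StandardCharsets;

public class ResultPointer {
    public static void main(String[] args) throws Exception {
        // Connect to the ensemble (hypothetical host names).
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        // The 10-100 MB payload itself lives on the webserver; ZooKeeper
        // only stores a tiny pointer to it, safely under the 1 MB znode limit.
        String resultUrl = "http://webserver.example/results/job-42";
        client.create()
              .creatingParentsIfNeeded()
              .forPath("/jobs/job-42/result",
                       resultUrl.getBytes(StandardCharsets.UTF_8));
        client.close();
    }
}
```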

Apache Hadoop, HBase and Nutch component distribution for a 4-server cluster

I have 4 systems. I want to crawl some data. For that, I first need to configure a cluster. I am confused about the placement of components.
Should I place all the components (Hadoop, Hive, HBase, Nutch) on one machine and add the other machines as nodes in Hadoop?
Should I place HBase on one machine, Nutch on another, and Hadoop on a third, and add the fourth machine as a Hadoop slave?
Should HBase be in pseudo-distributed mode or fully distributed mode?
How many slaves should I add to HBase if I run it in fully distributed mode?
What would be the best approach? Please guide me step by step (for HBase and Hadoop).
Say you have 4 nodes: n1, n2, n3 and n4.
You can install Hadoop and HBase in distributed mode.
If you are using Hadoop 1.x:
n1 - Hadoop master [NameNode and JobTracker]
n2, n3 and n4 - Hadoop slaves [DataNodes and TaskTrackers]
For HBase, you can choose n1 or any other node as the Master node. Since Master nodes are usually not CPU/memory intensive, all masters can be deployed on a single node in a test setup; however, in production it is good to have each Master deployed on a separate node.
Let's say n2 is the HBase Master; the remaining 3 nodes can act as RegionServers (see the client sketch below for how clients find them).
Hive and Nutch can reside on any node.
Hope this helps; for a test setup this should be good to go.
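One reason the Master's placement is so flexible: HBase clients bootstrap from the ZooKeeper quorum rather than from the Master's address. A minimal client sketch, assuming a ZooKeeper ensemble on n2, n3 and n4 and an existing table crawl_data with column family cf (both made up for illustration):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientSketch {
    public static void main(String[] args) throws Exception {
        // Clients only need the ZooKeeper quorum; they discover the
        // Master and the RegionServers from there.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "n2,n3,n4");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("crawl_data"))) {
            // Write one crawled URL into the (hypothetical) cf family.
            table.put(new Put(Bytes.toBytes("row1"))
                    .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("url"),
                               Bytes.toBytes("http://example.com")));
        }
    }
}
```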
Update:
For Hadoop 2.x, since your cluster size is small, NameNode HA deployment can be skipped.
NameNode HA would require two nodes, one each for the active and standby NameNodes.
A ZooKeeper quorum again requires an odd number of nodes, so a minimum of three nodes would be needed.
A journal quorum also requires a minimum of 3 nodes.
But for a cluster this small, HA might not be a major concern. So you can keep:
n1 - NameNode
n2 - ResourceManager (YARN)
The remaining nodes can act as DataNodes; try not to deploy anything else on the YARN node.
The rest of the deployment for HBase, Hive and Nutch would remain the same.
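As a quick sanity check of the layout above, a client reaches the cluster through the NameNode on n1. A minimal sketch, assuming the Hadoop 2.x default NameNode RPC port of 8020:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSanityCheck {
    public static void main(String[] args) throws Exception {
        // Point the client at the NameNode from the layout above.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://n1:8020");

        // Listing the root directory confirms the NameNode is
        // answering client RPCs.
        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}
```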
In my opinion, you should install Hadoop in fully distributed mode so that jobs can run in parallel and much faster, as the MapReduce tasks will be distributed across the 4 machines. Of course, Hadoop's master node should run on a single machine.
If you need to process a big amount of data, it's a good choice to install HBase on one single machine and Hadoop on the other 3.
You can make all of the above very easy using tools/platforms with a very friendly GUI, like Cloudera Manager and Hortonworks. They will help you control and maintain your cluster better, and they also provide health monitoring, cluster analytics, and e-mail notifications for every error that occurs in your cluster.
Cloudera Manager
http://www.cloudera.com/content/cloudera/en/products-and-services/cloudera-enterprise/cloudera-manager.html
Hortonworks
http://hortonworks.com/
In these two links you can find more guidance on how to construct your cluster.

Real world example of Apache Helix, Zookeeper, Mesos and Erlang?

I am new to:
Apache ZooKeeper: ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
Apache Mesos: Apache Mesos is a cluster manager that simplifies the complexity of running applications on a shared pool of servers.
Apache Helix: Apache Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes.
Erlang Language: Erlang is a programming language used to build massively scalable, soft real-time systems with requirements on high availability.
It sounds to me that Helix and Mesos are both useful for cluster management. How are they related to ZooKeeper? It would be better if someone gave me a real-world example of their usage.
I am also curious to know how BOINC distributes tasks to its clients. Does it use any of the above technologies? (Forget about Erlang.)
I just need a brief overview of it :)
Erlang was built by Ericsson and designed for use in phone systems. By design, it runs hundreds, thousands, or even tens of thousands of small processes that handle tasks by sending information between them instead of sharing memory or state. This enables all sorts of interesting features that are great for high-availability distributed systems, such as:
Hot code reloading. Each process is paused, its relevant module code is swapped out, and it is resumed where it left off, so deploys can happen without restarting or causing significant interruption.
Easy distributed messaging and clustering. Sending a message to a local process or a remote one is fairly seamless in most instances.
Process-local GC. Garbage collection happens in each process independently instead of as a global stop-the-world event as in Java, aiding low-latency results.
Supervision trees and complex process hierarchies with monitoring/management.
A few concrete real-world examples that make great use of Erlang would be:
MongooseIM, a highly performant and incredibly scalable distributed XMPP/chat server.
Riak, a distributed key/value store.
Mesos, on the other hand, you can think of as a platform for effectively turning a datacenter of servers into a shared platform for teams and developers. Say I, as a company, own a datacenter with 10,000 physical servers and have 1,000 engineers developing hundreds of services; I need a good way to let the engineers deploy and manage services across that hardware without needing to worry about the servers directly. Mesos is an abstraction layer on top of the physical servers that allows you to share and intelligently allocate resources.
As a user of Mesos, I might say that I have Service X. It's an executable bundle that lives in location Y. Each instance of Service X needs 4 GB of RAM and 2 cores, and I need 8 instances, which will be attached to a load balancer. You can specify this in configuration and deploy based on that config. Mesos will find hardware that has enough RAM and CPU capacity available to handle each instance of that service and start it running in each of those locations.
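To make that spec concrete: with Marathon (a popular scheduler that runs on top of Mesos), the Service X description could be posted as JSON to Marathon's REST API. A hedged sketch; the host name, app id, and bundle command are invented for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeployServiceX {
    public static void main(String[] args) throws Exception {
        // Marathon-style app definition: 8 instances, 2 cores and
        // 4 GB of RAM each; Mesos finds machines with spare capacity.
        String appJson = """
                {
                  "id": "/service-x",
                  "cmd": "./bin/service-x",
                  "cpus": 2,
                  "mem": 4096,
                  "instances": 8
                }
                """;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://marathon.example:8080/v2/apps"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(appJson))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```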
It can handle a lot of other, more complex aspects of orchestrating them as well, but that's probably a bit too in-depth for this :)
ZooKeeper's most common use cases are service discovery and configuration management. You can think of it, fundamentally, a bit like a nested key-value store, where services can look at predefined paths to see where other services currently live.
A simple example: I have a web service using a shared database cluster. I know a simple name for that database cluster and where its configuration lives in ZooKeeper. I can look up (or repeatedly poll) that path in ZooKeeper to check the addresses of the active database hosts. On the other side, if I take a database node out of rotation and replace it with a new one, the config in ZooKeeper gets updated with the new address, and anything continually watching it will detect the change and switch its connections accordingly.
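A minimal sketch of that poll/watch pattern using Apache Curator's NodeCache recipe; the connection string, znode path, and payload format are all hypothetical:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.NodeCache;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.nio.charset.StandardCharsets;

public class DbHostsWatcher {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Cache and watch the config node holding the active DB hosts.
        NodeCache cache = new NodeCache(client, "/services/db-cluster/hosts");
        cache.getListenable().addListener(() -> {
            if (cache.getCurrentData() != null) {
                String hosts = new String(cache.getCurrentData().getData(),
                        StandardCharsets.UTF_8);
                // In a real service you would repoint the connection pool here.
                System.out.println("Active database hosts changed: " + hosts);
            }
        });
        cache.start();

        Thread.sleep(Long.MAX_VALUE); // keep watching
    }
}
```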
A more complex use case for ZooKeeper is how Kafka uses it (or did at the time I last used Kafka). Kafka has topics, and topics have many partitions. Each consumer of each topic uses ZooKeeper to save a checkpoint per partition after it has read and processed up to a certain point. That way, if the consumer crashes or is restarted, it knows where to pick up in the stream.
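For flavor, this is roughly what reading such a checkpoint looked like with the old (pre-0.9) Kafka layout, where consumer offsets lived in ZooKeeper at /consumers/<group>/offsets/<topic>/<partition> as a plain number; the group, topic, and host below are hypothetical:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.nio.charset.StandardCharsets;

public class ReadKafkaCheckpoint {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Offset for partition 0 of topic "events", consumer group "my-group".
        byte[] raw = client.getData()
                .forPath("/consumers/my-group/offsets/events/0");
        long offset = Long.parseLong(
                new String(raw, StandardCharsets.UTF_8).trim());
        System.out.println("A restart would resume partition 0 at " + offset);

        client.close();
    }
}
```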
I don't know about Mesos and the Erlang language, but this article might help you with Helix and ZooKeeper.
This article tells us:
ZooKeeper is responsible for gluing all the parts together, whereas Helix is the cluster management component that registers all cluster details (the cluster itself, nodes, and resources).
The article is about clustering in jBPM using Helix and ZooKeeper, but it will give you a basic idea of what Helix and ZooKeeper are used for.
And from most of the articles I have read online, it seems that ZooKeeper and Helix are used together.
Apache ZooKeeper can be installed on a single machine or on a cluster.
It can be used to keep track of logs, and it can provide various coordination services on a distributed platform.
Storm and Kafka rely on ZooKeeper.
Storm uses ZooKeeper to store all of its state so that it can recover from an outage in any of its (distributed) components.
Kafka queue consumers can use ZooKeeper to store information on what has been consumed from the queue.