I am using an AWS EMR cluster with Alluxio installed on every node. I now want to deploy Alluxio in High Availability mode.
https://docs.alluxio.io/os/user/stable/en/deploy/Running-Alluxio-On-a-HA-Cluster.html#start-an-alluxio-cluster-with-ha
I am following the above documentation, and see that "On all the Alluxio master nodes, list all the worker hostnames in the conf/workers file, and list all the masters in the conf/masters file".
My concern is that since I have an AWS-managed scaling cluster, worker nodes keep getting added and removed based on cluster load. How can I maintain a constant list of masters and workers in conf/masters and conf/workers on a managed scaling cluster?
The conf/workers and conf/masters files are only used for the initial setup through the launch scripts. Once the cluster is running, you don’t need to update them any more.
For example, in an EMR cluster you can add a new slave node as an Alluxio worker, and as long as you specify the correct Alluxio master address, this new Alluxio worker will be able to register itself and serve in the fleet like the other workers.
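As a rough illustration (not taken verbatim from the Alluxio docs): a bootstrap step on the new node could write the master addresses into alluxio-site.properties before the worker starts. The path, hostnames, port, and the use of the alluxio.master.rpc.addresses property are assumptions you should verify against your Alluxio version.

```python
# Hypothetical bootstrap snippet for a freshly added EMR node.
# It appends the HA master addresses to alluxio-site.properties so the
# new worker can register itself; paths, hostnames, and the port are
# placeholders for your own setup.
ALLUXIO_SITE = "/opt/alluxio/conf/alluxio-site.properties"

master_addresses = [
    "master-1.example.internal:19998",
    "master-2.example.internal:19998",
    "master-3.example.internal:19998",
]

properties = {
    # In an HA setup, workers find the masters through this list rather
    # than through conf/masters (property name assumed from the docs).
    "alluxio.master.rpc.addresses": ",".join(master_addresses),
}

with open(ALLUXIO_SITE, "a") as f:
    for key, value in properties.items():
        f.write(f"{key}={value}\n")
```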
Overview
A RabbitMQ broker is a logical grouping of one or several Erlang
nodes, each running the RabbitMQ application and sharing users,
virtual hosts, queues, exchanges, bindings, and runtime parameters.
Sometimes we refer to the collection of nodes as a cluster.
Why would you do this? I understand it increases the durability of messages (if a node goes down, other queues still get the messages). But what about performance? How does clustering improve performance? Won't all consumers/producers connect to the master node's queue anyway? If so, aren't we still getting traffic on a single node regardless? Do we put a load balancer in front so traffic is directed at different nodes each time?
How does a RabbitMQ cluster increase performance?
Well, right after that paragraph, the documentation states the following:
What is Replicated?
All data/state required for the operation of a RabbitMQ broker is
replicated across all nodes. An exception to this are message queues,
which by default reside on one node, though they are visible and
reachable from all nodes. To replicate queues across nodes in a
cluster, see the documentation on high availability (note that you
will need a working cluster first).
So, you would cluster to provide higher capacity in your RabbitMQ broker than a single node can provide alone. Note that clustering by itself is not a high-availability strategy.
Your assertion that message durability is increased is false, as message queues continue to reside on one broker (unless mirroring is used).
By default, contents of a queue within a RabbitMQ cluster are located on a single node (the node on which the queue was declared) [1]
Without mirroring, when that node goes down, messages on it will be lost. The cluster will put the queue onto a different node. RabbitMQ does not handle network partitions well, so this can be a bit of a problem.
"Aren't we still getting traffic on a single node regardless?" - if you only have one queue, then yes. However, a bigger question is "why would you run a message broker with only one queue?" Similarly, if you only create queues on one node, then you will still have one point of failure in the system.
I have a Spark setup on Amazon EC2 machines with 2 worker machines running. It reads data from Cassandra, does some processing, and writes to SQL Server. I have heard about Amazon EMR and read about it. I want a managed system where worker machines are automatically added to my cluster if my job is taking more time and shut down when my job gets completed.
Can I achieve this through Amazon EMR?
The requirements are:
My worker machines are automatically added to my cluster if my job is taking more time.
Shutdown when my job gets completed.
No. 2 is definitely possible if your job is launched as EMR steps. There is an option that auto-terminates the cluster after the last step is completed. Alternatively, this could also be done programmatically with the SDK.
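For example, with boto3 you could launch the cluster with your Spark job as a step and let EMR tear it down once the step finishes; the names, roles, instance types, and the S3 path below are placeholders:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# KeepJobFlowAliveWhenNoSteps=False makes the cluster shut down
# automatically after the last step completes.
response = emr.run_job_flow(
    Name="spark-batch-job",                      # placeholder name
    ReleaseLabel="emr-6.9.0",                    # pick your EMR release
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "run-spark-job",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/my_job.py"],  # placeholder script
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```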
No. 1 is a little more difficult, but EMR has three classes of nodes: master, core, and task. Task nodes can be added after cluster creation. The trigger for that would probably have to be done programmatically or by utilizing another Amazon service, like Lambda.
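A rough sketch of the programmatic route (for instance from a Lambda function reacting to a CloudWatch alarm) would be to add an extra task instance group to the running cluster; the cluster id and instance type are placeholders:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Add task nodes to an already-running cluster; "j-XXXXXXXXXXXXX" is a
# placeholder for the real cluster id. The trigger (CloudWatch alarm,
# Lambda schedule, etc.) is up to you.
emr.add_instance_groups(
    JobFlowId="j-XXXXXXXXXXXXX",
    InstanceGroups=[{
        "Name": "extra-task-nodes",
        "InstanceRole": "TASK",
        "InstanceType": "m5.xlarge",
        "InstanceCount": 2,
    }],
)
```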
Apache Spark was recently updated to version 0.8.1, in which yarn-client mode is available. My question is: what does yarn-client mode really mean? The documentation says:
With yarn-client mode, the application will be launched locally. Just like running application or spark-shell on Local / Mesos / Standalone mode. The launch method is also the similar with them, just make sure that when you need to specify a master url, use “yarn-client” instead
What does it mean "launched locally"? Locally where? On the Spark cluster?
What is the specific difference from the yarn-standalone mode?
So in Spark you have two different components: the driver and the workers. In yarn-cluster mode the driver runs remotely on a data node and the workers run on separate data nodes. In yarn-client mode the driver is on the machine that started the job and the workers are on the data nodes. In local mode the driver and workers are on the machine that started the job.
When you run .collect(), the data from the worker nodes gets pulled into the driver. That's basically where the final bit of processing happens.
For myself, I have found yarn-cluster mode to be better when I'm at home on the VPN, but yarn-client mode is better when I'm running code from within the data center.
Yarn-client mode also means you tie up one less worker node for the driver.
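As a small illustration, here is roughly what a client-mode application looks like with today's PySpark API (master "yarn" plus deploy mode "client" replaces the old "yarn-client" master URL; this assumes HADOOP_CONF_DIR points at your cluster configuration):

```python
from pyspark.sql import SparkSession

# Equivalent of the old "yarn-client" master URL: master "yarn" plus
# deploy mode "client", so the driver runs in this very process.
spark = (
    SparkSession.builder
    .appName("yarn-client-demo")
    .master("yarn")
    .config("spark.submit.deployMode", "client")
    .getOrCreate()
)

# map() runs on the executors (YARN node managers); collect() pulls the
# results back to the driver, i.e. the machine that launched this script.
squares = spark.sparkContext.parallelize(range(10)).map(lambda x: x * x).collect()
print(squares)

spark.stop()
```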
A Spark application consists of a driver and one or many executors. The driver program is the main program (where you instantiate SparkContext), which coordinates the executors to run the Spark application. The executors run tasks assigned by the driver.
A YARN application has the following roles: the YARN client, the YARN application master, and a list of containers running on the node managers.
When a Spark application runs on YARN, it has its own implementations of the YARN client and YARN application master.
With that background, the major difference is where the driver program runs.
Yarn Standalone Mode: your driver program runs as a thread of the YARN application master, which itself runs on one of the node managers in the cluster. The YARN client just pulls status from the application master. This mode is the same as for a MapReduce job, where the MR application master coordinates the containers to run the map/reduce tasks.
Yarn client mode: your driver program runs on the YARN client where you type the command to submit the Spark application (which may not be a machine in the YARN cluster). In this mode, although the driver program runs on the client machine, the tasks are executed on the executors in the node managers of the YARN cluster.
Reference: http://spark.incubator.apache.org/docs/latest/cluster-overview.html
A Spark application running in yarn-client mode:
the driver program runs on the client machine or local machine where the application has been launched.
Resource allocation is done by the YARN resource manager based on data locality on the data nodes, and the driver program on the local machine controls the executors on the Spark cluster (node managers).
Please refer to this Cloudera article for more info.
The differences between standalone mode and YARN deployment mode:
Resource optimization won't be efficient in standalone mode.
In standalone mode, the driver program launches an executor on every node of the cluster, irrespective of data locality.
Standalone mode is good for the use case where only your Spark application is being executed and the cluster does not need to allocate resources for other jobs in an efficient manner.
Both Spark and YARN are distributed frameworks, but their roles are different:
YARN is a resource management framework; for each application, it has the following roles:
ApplicationMaster: resource management of a single application, including asking for and releasing resources from YARN for the application, and monitoring it.
Attempt: an attempt is just a normal process which does part of the whole job of the application. For example, for a MapReduce job which consists of multiple mappers and reducers, each mapper and reducer is an attempt.
A common process of submitting an application to YARN is:
The client submits the application request to YARN. In the request, YARN is told the ApplicationMaster class; for a Spark application, it is org.apache.spark.deploy.yarn.ApplicationMaster, and for a MapReduce job, it is org.apache.hadoop.mapreduce.v2.app.MRAppMaster.
YARN allocates some resources for the ApplicationMaster process and starts the ApplicationMaster process on one of the cluster nodes.
After the ApplicationMaster starts, it requests resources from YARN for this application and starts up the workers.
For Spark, the distributed computing framework, a computing job is divided into many small tasks and each executor is responsible for a task; the driver collects the results of all the executor tasks and combines them into a global result. A Spark application has only one driver with multiple executors.
So, then, the question is what happens when Spark uses YARN as a resource management tool in a cluster:
In yarn-cluster mode, the Spark client submits the Spark application to YARN, and both the Spark driver and the Spark executors are under the supervision of YARN. From YARN's perspective, the Spark driver and the Spark executors are no different from normal Java processes, namely application worker processes. So when the client process is gone, e.g. the client process is terminated or killed, the Spark application on YARN is still running.
In yarn-client mode, only the Spark executors are under the supervision of YARN. The YARN ApplicationMaster requests resources just for the Spark executors. The driver program runs in the client process, which has nothing to do with YARN; it is just a process that submits the application to YARN. So when the client leaves, e.g. the client process exits, the driver is down and the computation terminates.
First of all, let's make clear what the difference is between running Spark in standalone mode and running Spark on a cluster manager (Mesos or YARN).
When running Spark in standalone mode, you have:
a Spark master node
some Spark slave nodes, which have been "registered" with the Spark master
So:
the master node will execute the Spark driver sending tasks to the executors & will also perform any resource negotiation, which is quite basic. For example, by default each job will consume all the existing resources.
the slave nodes will run the Spark executors, running the tasks submitted to them from the driver.
When using a cluster manager (I will describe for YARN, which is the most common case), you have:
A YARN Resource Manager (running constantly), which accepts requests for new applications and new resources (YARN containers)
Multiple YARN Node Managers (running constantly), which constitute the pool of workers, where the Resource Manager will allocate containers.
An Application Master (running for the duration of a YARN application), which is responsible for requesting containers from the Resource Manager and sending commands to the allocated containers.
Note that there are 2 modes in that case: cluster-mode and client-mode. In the client mode, which is the one you mentioned:
the Spark driver will be run on the machine where the command is executed.
The Application Master will be run in an allocated Container in the cluster.
The Spark executors will be run in allocated containers.
The Spark driver will be responsible for instructing the Application Master to request resources & sending commands to the allocated containers, receiving their results and providing the results.
So, back to your questions:
What does it mean "launched locally"? Locally where? On the Spark
cluster?
Locally means on the server on which you are executing the command (which could be a spark-submit or a spark-shell). That means you could possibly run it on the cluster's master node, or you could also run it on a server outside the cluster (e.g. your laptop), as long as the appropriate configuration is in place so that this server can communicate with the cluster and vice versa.
What is the specific difference from the yarn-standalone mode?
As described above, the difference is that in the standalone mode, there is no cluster manager at all. A more elaborate analysis and categorisation of all the differences concretely for each mode is available in this article.
With yarn-client mode, your Spark driver runs on your local machine. With yarn-standalone mode, your Spark application is submitted to YARN's ResourceManager as a YARN ApplicationMaster, and your application runs on a YARN node where the ApplicationMaster is running.
In both cases, YARN serves as Spark's cluster manager. Your application (SparkContext) sends tasks to YARN.
From reading the documentation around YARN, I couldn't find any relevant information about HA of the resource manager, node manager, and application master in YARN. Are they single points of failure? If so, are there any plans to improve this?
A YARN cluster is comprised of a potentially large number of machines ("nodes"). To be part of the cluster, each node runs at least one service daemon. The service daemon's type determines the task this node plays in the cluster.
Almost all nodes run a "node manager" service daemon, which makes them "regular" YARN nodes. The node manager takes care of executing a certain part of a YARN job on this very machine, while other parts are executed on other nodes. It only makes sense to run a single node manager on each node. For a 1000-node YARN cluster, there are probably around 999 node managers running. So node managers are indeed redundantly distributed in the cluster. If one node manager fails, others are assigned to take over its tasks.
Every YARN job is an application of its own, and a dedicated application master daemon is started for the job on one of the nodes. For another application, another application master is started on a different node. The application's actual work is executed on yet other nodes in the cluster. The application master only controls the overall execution of the application. If an application master dies, the whole application has failed, but other applications will continue. The failed application has to be restarted.
The resource manager daemon runs on one dedicated YARN node, tasked only with starting applications (by starting the related application master), collecting information about all nodes in the cluster, and assigning computing resources to applications. The resource manager currently isn't built to be HA, but this normally isn't a problem. If the resource manager dies, all applications need to be restarted.