What is yarn-client mode in Spark? - hadoop-yarn

Apache Spark has recently been updated to version 0.8.1, in which yarn-client mode is available. My question is: what does yarn-client mode really mean? The documentation says:
With yarn-client mode, the application will be launched locally, just like running an application or spark-shell in Local / Mesos / Standalone mode. The launch method is also similar; just make sure that when you need to specify a master URL, use "yarn-client" instead.
What does it mean "launched locally"? Locally where? On the Spark cluster?
What is the specific difference from the yarn-standalone mode?

So in Spark you have two different components: the driver and the workers. In yarn-cluster mode the driver runs remotely on a data node and the workers run on separate data nodes. In yarn-client mode the driver is on the machine that started the job and the workers are on the data nodes. In local mode the driver and the workers are on the machine that started the job.
When you run .collect(), the data from the worker nodes gets pulled into the driver. It's basically where the final bit of processing happens.
For myself, I have found yarn-cluster mode to be better when I'm at home on the VPN, but yarn-client mode is better when I'm running code from within the data center.
Yarn-client mode also means you tie up one less worker node for the driver.
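The modes described above are chosen at submit time. A minimal sketch (the class and jar names are hypothetical; yarn-client / yarn-cluster are the old pre-1.0 master URLs this question uses, later replaced by --master yarn --deploy-mode client|cluster):

```shell
# Local mode: driver and workers all run in this one JVM.
spark-submit --master local[4] --class com.example.MyApp myapp.jar

# yarn-client mode: the driver stays on this machine; executors run on the data nodes.
spark-submit --master yarn-client --class com.example.MyApp myapp.jar

# yarn-cluster (a.k.a. yarn-standalone) mode: the driver also runs inside the cluster.
spark-submit --master yarn-cluster --class com.example.MyApp myapp.jar
```

These commands are only illustrative CLI fragments; they need a configured Spark installation (and, for the last two, a reachable YARN cluster) to actually run.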

A Spark application consists of a driver and one or many executors. The driver program is the main program (where you instantiate SparkContext), which coordinates the executors to run the Spark application. The executors run tasks assigned by the driver.
A YARN application has the following roles: the YARN client, the YARN ApplicationMaster, and a set of containers running on the node managers.
When a Spark application runs on YARN, it has its own implementation of the YARN client and the YARN ApplicationMaster.
With that background, the major difference is where the driver program runs.
Yarn standalone mode: your driver program runs as a thread of the YARN ApplicationMaster, which itself runs on one of the node managers in the cluster. The YARN client just pulls status from the ApplicationMaster. This mode is the same as for a MapReduce job, where the MR ApplicationMaster coordinates the containers that run the map/reduce tasks.
Yarn client mode: your driver program runs on the YARN client machine where you type the command to submit the Spark application (which may not be a machine in the YARN cluster). In this mode, although the driver program runs on the client machine, the tasks are executed on the executors in the node managers of the YARN cluster.
Reference: http://spark.incubator.apache.org/docs/latest/cluster-overview.html

A Spark application running in yarn-client mode:
The driver program runs on the client/local machine where the application was launched.
Resource allocation is done by the YARN ResourceManager based on data locality on the data nodes, and the driver program on the local machine controls the executors on the Spark cluster (the node managers).
Please refer to this Cloudera article for more info.
The differences between standalone mode and YARN deployment mode:
Resource optimization is less efficient in standalone mode.
In standalone mode, the driver program launches an executor on every node of the cluster, irrespective of data locality.
Standalone mode is good for the use case where only your Spark application is being executed and the cluster does not need to allocate resources to other jobs efficiently.
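For contrast with YARN, standing up the standalone cluster manager described above is done with Spark's own scripts. A sketch (host names are placeholders; note that by default a standalone app claims all available cores unless capped):

```shell
# On the master node: start the standalone Spark master (port 7077 by default).
./sbin/start-master.sh

# On each worker node: register a worker with that master.
./sbin/start-slave.sh spark://master-host:7077

# Submit against the standalone master; --total-executor-cores caps the
# resources this app takes, since standalone apps grab all cores by default.
spark-submit --master spark://master-host:7077 \
  --total-executor-cores 4 \
  --class com.example.MyApp myapp.jar
```

Again a CLI fragment rather than something runnable as-is; it assumes a Spark distribution unpacked on each node.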

Both Spark and YARN are distributed frameworks, but their roles are different:
YARN is a resource-management framework; for each application it has the following roles:
ApplicationMaster: resource management for a single application, including asking YARN for resources, releasing them, and monitoring.
Attempt: an attempt is just a normal process which does part of the whole job of the application. For example, in a MapReduce job which consists of multiple mappers and reducers, each mapper and reducer is an attempt.
A common process of submitting an application to YARN is:
The client submits the application request to YARN. In the request, YARN needs to know the ApplicationMaster class; for a Spark application it is org.apache.spark.deploy.yarn.ApplicationMaster, and for a MapReduce job it is org.apache.hadoop.mapreduce.v2.app.MRAppMaster.
YARN allocates some resources for the ApplicationMaster process and starts that process on one of the cluster nodes.
After the ApplicationMaster starts, it requests resources from YARN for this application and starts up the workers.
For Spark, the distributed computing framework, a computing job is divided into many small tasks; each executor is responsible for its tasks, and the driver collects the results of all the executors' tasks and combines them into a global result. A Spark application has only one driver and multiple executors.
So the question becomes: what happens when Spark uses YARN as a resource-management tool in a cluster?
In yarn-cluster mode, the Spark client submits the Spark application to YARN, and both the Spark driver and the Spark executors are under the supervision of YARN. From YARN's perspective, the Spark driver and the Spark executors are no different from normal Java processes, namely application worker processes. So when the client process goes away, e.g. it is terminated or killed, the Spark application on YARN keeps running.
In yarn-client mode, only the Spark executors are under the supervision of YARN. The YARN ApplicationMaster requests resources for just the Spark executors. The driver program runs in the client process, which has nothing to do with YARN; it is just a process submitting the application to YARN. So when the client goes away, e.g. the client process exits, the driver goes down and the computation terminates.
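A practical consequence of the cluster-mode behaviour above: since YARN, not the client, owns the driver, you stop such a job through YARN rather than by killing the client. The application ID below is a made-up example:

```shell
# List the applications YARN is currently supervising.
yarn application -list

# In yarn-cluster mode, closing the client terminal does NOT stop the job;
# kill it explicitly via YARN instead.
yarn application -kill application_1409421698529_0012
```

Both subcommands are part of the standard yarn CLI shipped with Hadoop.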

First of all, let's make clear what's the difference between running Spark in standalone mode and running Spark on a cluster manager (Mesos or YARN).
When running Spark in standalone mode, you have:
a Spark master node
some Spark slave nodes, which have been "registered" with the Spark master
So:
the master node will execute the Spark driver, sending tasks to the executors, and will also perform any resource negotiation, which is quite basic. For example, by default each job will consume all the existing resources.
the slave nodes will run the Spark executors, running the tasks submitted to them from the driver.
When using a cluster manager (I will describe YARN, which is the most common case), you have:
A YARN Resource Manager (running constantly), which accepts requests for new applications and new resources (YARN containers)
Multiple YARN Node Managers (running constantly), which constitute the pool of workers where the Resource Manager will allocate containers.
An Application Master (running for the duration of a YARN application), which is responsible for requesting containers from the Resource Manager and sending commands to the allocated containers.
Note that there are 2 modes in that case: cluster-mode and client-mode. In the client mode, which is the one you mentioned:
the Spark driver will run on the machine where the command is executed.
The Application Master will be run in an allocated Container in the cluster.
The Spark executors will be run in allocated containers.
The Spark driver will be responsible for instructing the Application Master to request resources, for sending commands to the allocated containers, and for receiving their results and providing the final results.
So, back to your questions:
What does it mean "launched locally"? Locally where? On the Spark cluster?
Locally means in the server in which you are executing the command (which could be a spark-submit or a spark-shell). That means that you could possibly run it in the cluster's master node or you could also run it in a server outside the cluster (e.g. your laptop) as long as the appropriate configuration is in place, so that this server can communicate with the cluster and vice-versa.
What is the specific difference from the yarn-standalone mode?
As described above, the difference is that in the standalone mode, there is no cluster manager at all. A more elaborate analysis and categorisation of all the differences concretely for each mode is available in this article.

With yarn-client mode, your Spark application runs on your local machine. With yarn-standalone mode, your Spark application is submitted to YARN's ResourceManager as a YARN ApplicationMaster, and your application runs on the YARN node where the ApplicationMaster is running.
In both cases, YARN serves as Spark's cluster manager. Your application (SparkContext) sends tasks to YARN.

Related

AWS EMR: Run Job Flow where is the driver and Application Master located?

Where does the driver and application master run on EMR 6.9 with boto3.client('emr').run_job_flow(...) in regards to MASTER/CORE/TASK nodes?
This question is not in regards to ssh'ing into the master node and executing spark-submit as described in this blog by aws. I think that is clear which process runs where.
AWS documentation, probably for good reason, says the same thing that Spark says about where the driver and application master run in both client and cluster mode. EMR's default master is YARN, so this answer is accurate about how it works:
Client mode: the driver runs on the machine where the application was submitted, and that machine has to be available on the network until the application completes.
Cluster mode: the driver runs on the application master node (one per Spark application), and the machine submitting the application need not stay on the network after submission.
Okay, but I am submitting via the boto3 API, so which node hosts the driver and the AM? I would have thought the MASTER node, but this documentation by AWS makes it sound like the AM could run on the CORE or TASK nodes in 6.x+.
What I am trying to understand with this question: I have an on-demand MASTER node that is a decent size and Spot TASK nodes that are really small. If either the driver or the AM runs on a TASK node, I would upgrade that instance.
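For the record, a step submitted through the EMR API (whether via boto3's run_job_flow / add_job_flow_steps or the CLI below) is just a spark-submit invocation, so the deploy mode is what decides where the driver lands. A sketch, with a placeholder cluster ID, class name, and S3 path:

```shell
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
  --steps 'Type=Spark,Name=MyJob,ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--class,com.example.MyApp,s3://my-bucket/jars/myapp.jar]'
```

With --deploy-mode cluster, the driver runs inside the YARN ApplicationMaster container on the cluster; with client mode it runs in the step's launcher process.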

Dynatrace one agent in ecs fargate containers stops but application container is running

I am trying to install OneAgent in my ECS Fargate task. Along with the application container, I have added another container definition for OneAgent with the image alpine:latest, using runtime injection.
While running the task, the OneAgent container is initially in the running state, and after a minute it goes to the stopped state while the application container is still running.
In Dynatrace the same host is available and keeps being recreated every 5-10 minutes.
Actually, the issue I had was that the task was in draining status because of an application problem, which is why Dynatrace kept recreating the host. And since I used runtime injection for ECS Fargate, once the binaries are downloaded and injected into the volume, the OneAgent container definition stops while the application container keeps running and sending logs to Dynatrace.
I had the same problem, and after connecting to the cluster via SSH I saw that the agent needs to be privileged. The only thing that worked for me was sending traces and metrics through OpenTelemetry.
https://aws-otel.github.io/docs/components/otlp-exporter
Alternative: use sleep infinity in the command field of your OneAgent container.
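The sleep infinity workaround amounts to a container definition fragment roughly like this sketch (the name and image are illustrative; marking the container non-essential keeps the task alive if it ever does exit):

```json
{
  "name": "oneagent",
  "image": "alpine:latest",
  "essential": false,
  "command": ["sleep", "infinity"]
}
```

This keeps the helper container running indefinitely after it has done its injection work, so ECS no longer reports it as stopped.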

Scheduler not queuing jobs

I'm trying to test out Airflow on Kubernetes. The Scheduler, Worker, Queue, and Webserver are all on different deployments and I am using a Celery Executor to run my tasks.
Everything is working fine except for the fact that the Scheduler is not able to queue up jobs. Airflow is able to run my tasks fine when I manually execute it from the Web UI or CLI but I am trying to test the scheduler to make it work.
My configuration is almost the same as it is on a single server:
sql_alchemy_conn = postgresql+psycopg2://username:password@localhost/db
broker_url = amqp://user:password@$RABBITMQ_SERVICE_HOST:5672/vhost
celery_result_backend = amqp://user:password@$RABBITMQ_SERVICE_HOST:5672/vhost
I believe that with these configurations, I should be able to make it run but for some reason, only the workers are able to see the DAGs and their state, but not the scheduler, even though the scheduler is able to log their heartbeats just fine. Is there anything else I should debug or look at?
First, you use Postgres as the database for Airflow, don't you? Do you deploy a pod and a service for Postgres? If so, verify that your config file has:
sql_alchemy_conn = postgresql+psycopg2://username:password@serviceNamePostgres/db
You can use this GitHub repo. I used it 3 weeks ago for a first test and it worked pretty well.
The entrypoint is useful to verify that RabbitMQ and Postgres are well configured.

DC/OS running a service on each agent

Is there any way of running a service (a single instance) on each deployed agent node? I need that because each agent needs to mount storage from S3 using s3fs.
The name of the feature you're looking for is "daemon tasks", but unfortunately, it's still in the planning phase for Mesos itself.
Due to the fact that schedulers don't know the entire state of the cluster, Mesos needs to add a feature to enable this functionality. Once in Mesos it can be integrated with DC/OS.
The primary workaround is to use Marathon to deploy an app with the UNIQUE constraint ("constraints": [["hostname", "UNIQUE"]]) and set the app's instance count to the number of agent nodes. Unfortunately this means you have to adjust the instance count whenever you add new nodes.
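The workaround above would look roughly like this Marathon app definition (the app id, command, bucket, and instance count are placeholders; instances has to be kept equal to the number of agent nodes by hand):

```json
{
  "id": "/s3fs-mounter",
  "cmd": "s3fs my-bucket /mnt/s3 -f",
  "instances": 5,
  "constraints": [["hostname", "UNIQUE"]]
}
```

The UNIQUE hostname constraint is what prevents Marathon from ever placing two instances on the same agent.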

High Availability of Resource Manager, Node Manager and Application Master in YARN

From reading the documentation around YARN, I couldn't find any relevant information about HA of the resource manager, node manager, and application master in YARN. Are they single points of failure? If so, are there any plans to improve this?
A YARN cluster comprises a potentially large number of machines ("nodes"). To be part of the cluster, each node runs at least one service daemon. The service daemon's type determines the role this node plays in the cluster.
Almost all nodes run a "node manager" service daemon, which makes them "regular" YARN nodes. The node manager takes care of executing a certain part of a YARN job on this very machine, while other parts are executed on other nodes. It only makes sense to run a single node manager on each node. For a 1000-node YARN cluster, there are probably around 999 node managers running, so node managers are indeed redundantly distributed in the cluster. If one node manager fails, others are assigned to take over its tasks.
Every YARN job is an application of its own, and a dedicated application master daemon is started for the job on one of the nodes. For another application, another application master is started on a different node. The application's actual work is executed on yet other nodes in the cluster. The application master only controls the overall execution of the application. If an application master dies, the whole application has failed, but other applications will continue; the failed application has to be restarted.
The resource manager daemon runs on one dedicated YARN node, tasked only with starting applications (by starting the related application master), collecting information about all nodes in the cluster, and assigning computing resources to applications. The resource manager currently isn't built to be HA, but this normally isn't a problem: if the resource manager dies, all applications need to be restarted.
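On the question's "are there any plans to improve?": later Hadoop releases (2.4+) did add an active/standby ResourceManager pair. A sketch of the relevant yarn-site.xml fragment (host names are placeholders):

```xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm-host-1</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm-host-2</value>
</property>
```

With this enabled, a standby ResourceManager takes over on failure, so a single RM crash no longer forces every application to restart.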