Airflow Scheduler shutting down - ssh

I am running an Airflow (v2.3.3) cluster on on-premise virtual machines and starting the Airflow scheduler, webserver, and workers through SSH. However, some time after I leave the SSH session, the Airflow scheduler unexpectedly shuts down. I have the following two questions:
Why does an Airflow scheduler shut down?
How do I make sure that the Airflow processes (including the scheduler) keep running after I leave the SSH session?
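For context, the usual workaround is to detach the processes from the SSH session so they survive logout, or to run them as system services. A minimal sketch, assuming the CeleryExecutor and placeholder log paths:
# Detach the Airflow processes from the SSH session (log paths are placeholders)
nohup airflow scheduler > /var/log/airflow/scheduler.log 2>&1 &
nohup airflow webserver > /var/log/airflow/webserver.log 2>&1 &
nohup airflow celery worker > /var/log/airflow/worker.log 2>&1 &
disown
A more robust variant of the same idea is to wrap each command in a systemd (or supervisord) service so the processes are restarted automatically if they die.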

Related

Script for Graceful Shutdown for Gridgain Ignite Webconsole

As mentioned in the Ignite documentation, I am starting the Ignite Web Console and Web Agent using the scripts below. We have 3 Ignite clusters to monitor (3 web agents):
./gridgain-web-console-linux --server:port 3000
./ignite-web-agent.sh
What is the script to gracefully shut down the Ignite Web Console and Web Agent components?
At present, I am manually killing the respective process IDs.
I need to schedule the startup and shutdown of the Web Console on the server.
If the Web Console is started from a console, you can just press Ctrl-C to terminate it.
If it is detached or needs to be killed automatically, you can use kill <pid>.
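To avoid hunting for the process IDs by hand, one sketch (the pidfile paths are assumptions, not from the answer above) is to record the PIDs at startup and let the scheduled shutdown script read them back:
# Startup: record the PIDs (paths are placeholders)
./gridgain-web-console-linux --server:port 3000 & echo $! > /var/run/webconsole.pid
./ignite-web-agent.sh & echo $! > /var/run/webagent.pid
# Scheduled shutdown (e.g. from cron): kill the recorded PIDs
kill "$(cat /var/run/webconsole.pid)" "$(cat /var/run/webagent.pid)"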

Restarting managed servers by clusters without outage

I want to write a script for restarting WebLogic managed servers, which would do the following:
It would contain a loop which restarts the first nodes of all clusters at one time:
a.) FORCE_SHUTDOWN
b.) wait for status: SHUTDOWN
c.) START managed servers
d.) wait for status: RUNNING
e.) move to the next node of each cluster and repeat until all managed servers are restarted.
So in the first iteration it would restart the first node of each cluster, in the second iteration the second node of each cluster, and so on until all managed servers are restarted.
I have not started writing the script yet; I am a newbie with WebLogic and this is just a concept. Do you have any suggestions on how to achieve that goal?
Why reinvent the wheel?
rollingRestart
Category: Control Commands
Use with WLST: Online
Description: Initiates a rolling restart of all servers in a domain, or all servers in a specific cluster or clusters, without interrupting the service. This command provides the ability to sequentially restart servers.
This operation involves the graceful shutdown of the servers, and the servers being restarted without interrupting the service for the user.
Syntax
rollingRestart(target, [options])
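For example, a minimal WLST session could look like the sketch below (the admin URL, credentials, and cluster name are placeholders, not values from the question):
# Hypothetical WLST sketch: connect to the admin server and roll-restart one cluster
connect('weblogic', 'password', 't3://adminhost:7001')
rollingRestart('Cluster1')   # gracefully restarts the servers of Cluster1 one at a time
disconnect()
Repeating the call per cluster covers the loop described in the question.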

Celery workers missing heartbeats and getting substantial drift over Ec2

I am testing my Celery implementation across 3 EC2 machines right now. I am pretty confident in my implementation now, but I am having problems with the actual worker execution. My test structure is as follows:
1 EC2 machine is designated as the broker and also runs a Celery worker
1 EC2 machine is designated as the client (it runs the client Celery script that enqueues all the tasks using .delay()) and also runs a Celery worker
1 EC2 machine is purely a worker.
All the machines have 1 Celery worker running. Before, I was immediately getting the message:
"Substantial drift from celery#[other ec2 ip] may mean clocks are out of sync."
A drift amount in seconds would then be printed, which would increase over time.
I would also get messages: "missed heartbeat from celery#[other ec2 ip]".
The machine would be doing very little work at this point, so my Auto Scaling config in EC2 would shut down the instance automatically once CPU utilization dropped very low (<5%).
So to try to solve this problem, I attempted to sync all my machines' clocks (although I thought Celery handled this) with these commands, which were run on startup on all machines:
apt-get -qy install ntp
service ntp start
With this, they all performed well for about 10 minutes with no hitches, after which I started getting missed heartbeats and my EC2 instances stalled and shut down. The weird thing is, the drift sometimes increased and then decreased.
Any idea why this is happening?
I am using the newest version of Celery (3.1) and RabbitMQ.
EDIT: It should be noted that I am using the us-west-1a and us-west-1c availability zones on EC2.
EDIT2: I am starting to think memory problems might be an issue. I am using a t2.micro instance, and running 3 Celery workers on the same machine (only 1 instance), which is also the broker, still causes heartbeat misses and stalls.
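As an aside (an assumption on my part, not something stated in this post): in Celery 3.1 the drift and missed-heartbeat warnings come from the worker-to-worker gossip/mingle events, so one commonly suggested mitigation is to start the workers with those events disabled, roughly like this:
# Sketch: disable inter-worker gossip and mingle ("proj" is a placeholder app name)
celery -A proj worker --without-gossip --without-mingle --loglevel=info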

How can I launch a slave agent via SSH on Jenkins programmatically?

How can I launch a slave agent via SSH on Jenkins programmatically?
Or enable auto refresh such that Jenkins checks automatically to see if a slave is online.
Basically I have a job which reboots one of the slaves. I need some jobs to run on the same slave after it boots up (by chaining another job using the Startup Trigger plugin) without any manual intervention in between these steps.
Jenkins will automatically reconnect to the slave after it's rebooted; the master checks the slave connection every minute or so (I'm not sure of the exact interval without digging into the source code).
As long as the slave configuration is still defined in the Jenkins master, you shouldn't need to do anything on the slave machine.
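If you do need to force the reconnect from a job rather than waiting for the periodic check, one option (a sketch with a placeholder host and node name, not something the answer above prescribes) is the Jenkins CLI:
# Sketch: trigger a reconnect of a specific node via the Jenkins CLI (names are placeholders)
java -jar jenkins-cli.jar -s http://jenkins.example.com:8080/ connect-node my-slave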

What is yarn-client mode in Spark?

Apache Spark recently released version 0.8.1, in which yarn-client mode is available. My question is, what does yarn-client mode really mean? The documentation says:
With yarn-client mode, the application will be launched locally. Just like running application or spark-shell on Local / Mesos / Standalone mode. The launch method is also the similar with them, just make sure that when you need to specify a master url, use “yarn-client” instead
What does "launched locally" mean? Locally where? On the Spark cluster?
What is the specific difference from the yarn-standalone mode?
So in Spark you have two different components: the driver and the workers. In yarn-cluster mode the driver runs remotely on a data node and the workers run on separate data nodes. In yarn-client mode the driver is on the machine that started the job and the workers are on the data nodes. In local mode the driver and workers are on the machine that started the job.
When you run .collect(), the data from the worker nodes gets pulled into the driver. It's basically where the final bit of processing happens.
For myself, I have found yarn-cluster mode to be better when I'm at home on the VPN, but yarn-client mode is better when I'm running code from within the data center.
Yarn-client mode also means you tie up one less worker node for the driver.
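To make the distinction concrete, here is a sketch using the newer spark-submit syntax (an assumption, since the question is about Spark 0.8.1; the jar and class names are placeholders):
# Driver runs on the machine you submit from (yarn-client)
spark-submit --master yarn --deploy-mode client --class com.example.App app.jar
# Driver runs inside the cluster, in the ApplicationMaster (the old "yarn-standalone"/yarn-cluster)
spark-submit --master yarn --deploy-mode cluster --class com.example.App app.jar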
A Spark application consists of a driver and one or many executors. The driver program is the main program (where you instantiate SparkContext), which coordinates the executors to run the Spark application. The executors run tasks assigned by the driver.
A YARN application has the following roles: the YARN client, the YARN application master, and a list of containers running on the node managers.
When a Spark application runs on YARN, it has its own implementations of the YARN client and the YARN application master.
With that background, the major difference is where the driver program runs.
Yarn Standalone Mode: your driver program runs as a thread of the YARN application master, which itself runs on one of the node managers in the cluster. The YARN client just pulls status from the application master. This mode is the same as a MapReduce job, where the MR application master coordinates the containers that run the map/reduce tasks.
Yarn client mode: your driver program runs on the YARN client where you type the command to submit the Spark application (it may not be a machine in the YARN cluster). In this mode, although the driver program runs on the client machine, the tasks are executed on the executors in the node managers of the YARN cluster.
Reference: http://spark.incubator.apache.org/docs/latest/cluster-overview.html
A Spark application running in yarn-client mode:
The driver program runs on the client (local) machine where the application has been launched.
Resource allocation is done by the YARN resource manager based on data locality on the data nodes, and the driver program on the local machine controls the executors on the Spark cluster (node managers).
Please refer to this Cloudera article for more info.
The difference between standalone mode and YARN deployment mode:
Resource optimization won't be efficient in standalone mode.
In standalone mode, the driver program launches an executor on every node of the cluster irrespective of data locality.
Standalone mode is good for the use case where only your Spark application is being executed and the cluster does not need to allocate resources for other jobs in an efficient manner.
Both Spark and YARN are distributed frameworks, but their roles are different:
YARN is a resource management framework; for each application, it has the following roles:
ApplicationMaster: resource management of a single application, including asking for/releasing resources from YARN for the application, and monitoring.
Attempt: an attempt is just a normal process which does part of the whole job of the application. For example, in a MapReduce job that consists of multiple mappers and reducers, each mapper and reducer is an attempt.
A common process of submitting an application to YARN is:
The client submits the application request to YARN. In the request, YARN needs to know the ApplicationMaster class; for a Spark application it is org.apache.spark.deploy.yarn.ApplicationMaster, and for a MapReduce job it is org.apache.hadoop.mapreduce.v2.app.MRAppMaster.
YARN allocates some resources for the ApplicationMaster process and starts that process on one of the cluster nodes.
After the ApplicationMaster starts, it requests resources from YARN for this application and starts up the workers.
For Spark, the distributed computing framework, a computing job is divided into many small tasks, each executor is responsible for a task, and the driver collects the results of all executor tasks and produces a global result. A Spark application has only one driver and multiple executors.
So, then, the question is what happens when Spark uses YARN as its resource management tool in a cluster:
In yarn-cluster mode, the Spark client submits the Spark application to YARN, and both the Spark driver and the Spark executors are under the supervision of YARN. From YARN's perspective, the Spark driver and the Spark executors are no different from normal Java processes, namely application worker processes. So when the client process is gone, e.g. the client process is terminated or killed, the Spark application on YARN is still running.
In yarn-client mode, only the Spark executors are under the supervision of YARN. The YARN ApplicationMaster requests resources for just the Spark executors. The driver program runs in the client process, which has nothing to do with YARN; it is just a process submitting the application to YARN. So when the client leaves, e.g. the client process exits, the driver goes down and the computation terminates.
First of all, let's clarify the difference between running Spark in standalone mode and running Spark on a cluster manager (Mesos or YARN).
When running Spark in standalone mode, you have:
a Spark master node
some Spark slave nodes, which have been "registered" with the Spark master
So:
the master node will execute the Spark driver, sending tasks to the executors, and will also perform any resource negotiation, which is quite basic. For example, by default each job will consume all the existing resources.
the slave nodes will run the Spark executors, running the tasks submitted to them from the driver.
When using a cluster manager (I will describe YARN, which is the most common case), you have:
A YARN Resource Manager (running constantly), which accepts requests for new applications and new resources (YARN containers)
Multiple YARN Node Managers (running constantly), which constitute the pool of workers where the Resource Manager allocates containers.
An Application Master (running for the duration of a YARN application), which is responsible for requesting containers from the Resource Manager and sending commands to the allocated containers.
Note that there are 2 modes in that case: cluster-mode and client-mode. In the client mode, which is the one you mentioned:
the Spark driver will be run on the machine where the command is executed.
The Application Master will be run in an allocated Container in the cluster.
The Spark executors will be run in allocated containers.
The Spark driver will be responsible for instructing the Application Master to request resources & sending commands to the allocated containers, receiving their results and providing the results.
So, back to your questions:
What does "launched locally" mean? Locally where? On the Spark cluster?
Locally means on the server on which you are executing the command (which could be a spark-submit or a spark-shell). That means that you could run it on the cluster's master node, or you could also run it on a server outside the cluster (e.g. your laptop), as long as the appropriate configuration is in place so that this server can communicate with the cluster and vice versa.
What is the specific difference from the yarn-standalone mode?
As described above, the difference is that in standalone mode there is no cluster manager at all. A more elaborate analysis and categorisation of all the differences for each mode is available in this article.
With yarn-client mode, your Spark application runs on your local machine. With yarn-standalone mode, your Spark application is submitted to YARN's ResourceManager as the YARN ApplicationMaster, and your application runs on a YARN node where the ApplicationMaster is running.
In both cases, YARN serves as Spark's cluster manager. Your application (SparkContext) sends tasks to YARN.