Killing sidecar container once main container is terminated in Jobs/CronJobs

We are facing an issue with sidecars in Jobs/CronJobs. We use the EFK stack for logging, with Filebeat running as a sidecar container to ship logs from
the app to Elasticsearch. When we apply this pattern to batch Jobs, however, the sidecar container is not killed once the main container (the main Job script) terminates, so the Job never
reaches the Completed/Terminated state. Any pointers on how to kill the sidecar container once the main container has terminated?
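One widely used workaround (before Kubernetes 1.28 added native sidecar support via restartable init containers) is to share an emptyDir volume between the containers: the main container drops a marker file when it finishes, and the sidecar polls for that file and exits. A minimal sketch, assuming a placeholder job script (run-job.sh) and a Filebeat sidecar:

```sh
# Hypothetical Job manifest; images, run-job.sh and paths are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-sidecar
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
        - name: lifecycle
          emptyDir: {}
      containers:
        - name: main
          image: my-registry/my-job:latest
          command: ["sh", "-c"]
          # Run the job, then leave a marker file for the sidecar.
          args: ["./run-job.sh; touch /lifecycle/main-terminated"]
          volumeMounts:
            - name: lifecycle
              mountPath: /lifecycle
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.17.0
          command: ["sh", "-c"]
          # Ship logs in the background; exit once the marker appears.
          args: ["filebeat -e & while [ ! -f /lifecycle/main-terminated ]; do sleep 5; done"]
          volumeMounts:
            - name: lifecycle
              mountPath: /lifecycle
EOF
```

Once both containers have exited successfully, the pod completes and the Job moves to the Completed state.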

Related

Dynatrace OneAgent in ECS Fargate containers stops but application container is running

I am trying to install OneAgent in my ECS Fargate task. Along with the application container, I added another container definition for OneAgent with the image alpine:latest and used runtime injection.
When the task runs, the OneAgent container is initially in the running state, but after about a minute it goes to the stopped state while the application container stays running.
In Dynatrace the same host is available and keeps being recreated every 5-10 minutes.
It turned out the task was in draining status because of an application issue, which is why the host in Dynatrace kept being recreated. Also, since I used runtime injection for ECS Fargate, the OneAgent container is expected to stop once the binaries are downloaded and injected into the volume, while the application container keeps running and sending data to Dynatrace.
I have the same problem. Connecting via SSH to the cluster, I saw that the agent needs to run privileged. The only thing that worked for me was sending traces and metrics through OpenTelemetry.
https://aws-otel.github.io/docs/components/otlp-exporter
Alternative: use sleep infinity in the command field of your OneAgent container.
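A sketch of what that looks like in a Fargate task definition registered via the AWS CLI; the family name, images and sizes are placeholders, and marking the agent container as non-essential keeps the task alive even if the agent does stop. If the image's sleep does not understand "infinity", a while true; do sleep 3600; done loop works too.

```sh
aws ecs register-task-definition --cli-input-json '{
  "family": "app-with-oneagent",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-registry/my-app:latest",
      "essential": true
    },
    {
      "name": "oneagent",
      "image": "alpine:latest",
      "essential": false,
      "command": ["sleep", "infinity"]
    }
  ]
}'
```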

Tekton sidecar: docker daemon failing to start

I have a Tekton pipeline that builds and pushes a Docker image to a private repository. The task that handles this uses a DinD sidecar. Originally, it worked just fine, but it's started failing with the error Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?. This was an intermittent error at first, but now it seems to be happening every time I try to run the pipeline. I tried making it wait until it can connect to the daemon, in case it was a timing issue, but it ends up just waiting forever. What might be preventing the Docker daemon from starting, or preventing the task from connecting to it?
Older Docker DinD images used to create that socket file, but nowadays you have to use a TCP socket.
See TektonCD samples to patch your Tasks: https://github.com/tektoncd/catalog/blob/main/task/docker-build/0.1/docker-build.yaml
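A condensed sketch of that pattern, following the linked catalog task: the daemon listens on TCP with TLS, writes its client certificates into a shared volume, and a readiness probe keeps the step from starting before the daemon is up. Image tags and the step script are placeholders.

```sh
kubectl apply -f - <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: docker-build-over-tcp
spec:
  steps:
    - name: build
      image: docker:stable
      env:
        # Talk to the sidecar daemon over TCP, not /var/run/docker.sock.
        - name: DOCKER_HOST
          value: tcp://localhost:2376
        - name: DOCKER_TLS_VERIFY
          value: "1"
        - name: DOCKER_CERT_PATH
          value: /certs/client
      # Placeholder for the real build-and-push commands.
      script: docker version
      volumeMounts:
        - name: dind-certs
          mountPath: /certs/client
  sidecars:
    - name: server
      image: docker:dind
      securityContext:
        privileged: true
      env:
        # dind generates TLS certs here; the client certs land in the shared volume.
        - name: DOCKER_TLS_CERTDIR
          value: /certs
      volumeMounts:
        - name: dind-certs
          mountPath: /certs/client
      # Block the step until the daemon has written its client certs.
      readinessProbe:
        periodSeconds: 1
        exec:
          command: ["ls", "/certs/client/ca.pem"]
  volumes:
    - name: dind-certs
      emptyDir: {}
EOF
```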

Two Logstash instances on same Docker container

I am wondering whether two Logstash processes with separate configurations can be run in a single Docker container.
My setup has one Logstash process using a file input and sending events to Redis, from where a second Logstash process picks them up and forwards them to a custom HTTP endpoint. So: Logstash --> Redis --> Logstash --> HTTP. I was hoping to keep the two Logstash instances and Redis in the same Docker container. I am still new to Docker and would highly appreciate any input or feedback.
This would be more complicated than it needs to be. It is much simpler in the Docker world to run three containers to do three things than to run one container that does them all. It is possible, though:
You need to run an init process in your container to control multiple processes, and launch that as your container's entry point. The init will have to know how to launch the processes you are interested in, both the Logstash instances and Redis. Phusion's baseimage-docker provides an image with a good init system, but its launch scripts are based on runit and can be hard to pick up.
If you would rather run a single process per container, you can use a docker-compose file to launch all three containers and link them together, as sketched below.
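A minimal compose sketch of that three-container layout; the image tags and the two pipeline config files (shipper.conf, indexer.conf) are placeholders you would supply:

```sh
cat > docker-compose.yml <<'EOF'
services:
  shipper:
    image: docker.elastic.co/logstash/logstash:7.17.0
    volumes:
      - ./shipper.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./logs:/logs          # directory watched by the file input
    depends_on:
      - redis
  redis:
    image: redis:6
  indexer:
    image: docker.elastic.co/logstash/logstash:7.17.0
    volumes:
      - ./indexer.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - redis
EOF
docker-compose up -d
```

Inside the compose network, the Logstash configs can reach Redis simply via the hostname redis.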

Docker Redis container orderly shutdown

I am running redis-server in a Docker container on Ubuntu 14.10 x64. If I access the Redis database via phpRedisAdmin, make a few edits and then explicitly save them to disk, shut down the container and restart it, everything is fine: the edited Redis keys are present and correct. However, if I edit keys and then shut down the container and restart it, the edits do not stick.
Clearly, the dump.rdb file is not being saved automatically when the container is shut down. I imagine I could fix this by putting in an /etc/init.d script symlinked from /etc/rc6.d. However, I am wondering: why does shutting down a Redis container not perform an orderly shutdown of the running process(es) in the container? After all, when I reboot my server (both the server and the container run Ubuntu 14.10) I do not have to explicitly commit the Redis DB changes to disk.
The main process in a Docker container will be sent a SIGTERM signal when you run docker stop -t N CONTAINER. The process should then begin to shut itself down cleanly. If after N seconds (10 by default) this still hasn't happened, Docker will use a SIGKILL signal, which kills the process without giving it a chance to clean up. The reason you were having problems is probably that you simply weren't giving Redis long enough to shut down cleanly.
It's important to note that only the main process in the container (PID 1) will be sent signals. This means that the main process must be responsible for shutting down any child processes in the container, or you can end up with zombie processes.
If you still have problems with Redis not doing what you want on shutdown, you could wrap it in a script which acts as PID 1, catches the SIGTERM signal and does whatever tidying up you want, as sketched below (just make sure you do shut down Redis and any other processes you've started).
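A sketch of such a wrapper, assuming it is the image's ENTRYPOINT and that redis-cli is available; the config path is a placeholder:

```sh
#!/bin/sh
# Start redis in the background and remember its PID.
redis-server /etc/redis/redis.conf &
REDIS_PID=$!

# On SIGTERM (what `docker stop` sends to PID 1), persist and stop redis.
term_handler() {
  redis-cli shutdown save    # writes dump.rdb before the server exits
  wait "$REDIS_PID"
  exit 0
}
trap term_handler TERM INT

# wait returns when redis exits on its own or a trapped signal arrives.
wait "$REDIS_PID"
```

Pair this with a generous timeout, e.g. docker stop -t 30 CONTAINER, so a large dataset has time to be written out.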

What is yarn-client mode in Spark?

Apache Spark recently released version 0.8.1, in which yarn-client mode is available. My question is, what does yarn-client mode really mean? The documentation says:
With yarn-client mode, the application will be launched locally. Just like running application or spark-shell on Local / Mesos / Standalone mode. The launch method is also the similar with them, just make sure that when you need to specify a master url, use “yarn-client” instead
What does "launched locally" mean? Locally where? On the Spark cluster?
What is the specific difference from the yarn-standalone mode?
In Spark there are two kinds of component: the driver and the workers. In yarn-cluster mode the driver runs remotely on a data node and the workers run on separate data nodes. In yarn-client mode the driver runs on the machine that started the job and the workers run on the data nodes. In local mode the driver and the workers run on the machine that started the job.
When you run .collect(), the data from the worker nodes is pulled into the driver; that is basically where the final bit of processing happens.
For myself, I have found yarn-cluster mode to be better when I'm at home on the VPN, and yarn-client mode better when I'm running code from within the data center.
yarn-client mode also means you tie up one less worker node for the driver.
A Spark application consists of a driver and one or more executors. The driver program is the main program (where you instantiate SparkContext) and coordinates the executors to run the Spark application. The executors run the tasks assigned by the driver.
A YARN application has the following roles: a YARN client, a YARN ApplicationMaster, and a list of containers running on the node managers.
When a Spark application runs on YARN, it has its own implementations of the YARN client and the YARN ApplicationMaster.
With that background, the major difference is where the driver program runs.
Yarn standalone mode: your driver program runs as a thread of the YARN ApplicationMaster, which itself runs on one of the node managers in the cluster. The YARN client just pulls status from the ApplicationMaster. This mode is the same as for a MapReduce job, where the MR ApplicationMaster coordinates the containers that run the map/reduce tasks.
Yarn client mode: your driver program runs on the YARN client where you type the command to submit the Spark application (which may not be a machine in the YARN cluster). In this mode, although the driver program runs on the client machine, the tasks are executed on the executors in the node managers of the YARN cluster.
Reference: http://spark.incubator.apache.org/docs/latest/cluster-overview.html
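The question predates spark-submit, but on current Spark versions the same distinction is spelled with --deploy-mode rather than a special master URL; the class and jar names below are placeholders:

```sh
# Client mode (the old "yarn-client"): the driver runs on the machine
# where you type this command.
spark-submit --master yarn --deploy-mode client \
  --class com.example.MyApp my-app.jar

# Cluster mode (the old "yarn-standalone"): the driver runs inside the
# YARN ApplicationMaster on one of the cluster's nodes.
spark-submit --master yarn --deploy-mode cluster \
  --class com.example.MyApp my-app.jar
```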
A Spark application running in yarn-client mode:
the driver program runs on the client (local) machine from which the application was launched.
Resource allocation is done by the YARN ResourceManager based on data locality on the data nodes, and the driver program on the local machine controls the executors on the Spark cluster (the node managers).
Please refer to this Cloudera article for more info.
The differences between standalone mode and YARN deployment mode:
Resource optimization won't be efficient in standalone mode.
In standalone mode, the driver program launches an executor on every node of the cluster, irrespective of data locality.
Standalone mode is good for the use case where only your Spark application runs on the cluster and the cluster does not need to allocate resources to other jobs efficiently.
Both Spark and YARN are distributed frameworks, but their roles are different:
YARN is a resource management framework; for each application it has the following roles:
ApplicationMaster: resource management for a single application, including asking YARN for resources, releasing them, and monitoring.
Attempt: an attempt is just a normal process which does part of the whole job of the application. For example, in a MapReduce job that consists of multiple mappers and reducers, each mapper and reducer is an attempt.
A common process for submitting an application to YARN is:
The client submits the application request to YARN. In the request, YARN is told the ApplicationMaster class; for a Spark application it is org.apache.spark.deploy.yarn.ApplicationMaster, and for a MapReduce job it is org.apache.hadoop.mapreduce.v2.app.MRAppMaster.
YARN allocates some resources for the ApplicationMaster process and starts it on one of the cluster nodes.
After the ApplicationMaster starts, it requests resources from YARN for the application and starts up the workers.
For Spark, the distributed computing framework, a computing job is divided into many small tasks; each executor is responsible for a task, and the driver collects the results of all the executors' tasks to produce a global result. A Spark application has only one driver, with multiple executors.
So the question is what happens when Spark uses YARN as its resource management tool in a cluster:
In yarn-cluster mode, the Spark client submits the Spark application to YARN, and both the Spark driver and the Spark executors are under the supervision of YARN. From YARN's perspective, the Spark driver and the Spark executors are no different from normal Java processes, i.e. application worker processes. So when the client process goes away, e.g. because it is terminated or killed, the Spark application on YARN is still running.
In yarn-client mode, only the Spark executors are under the supervision of YARN; the YARN ApplicationMaster requests resources just for the Spark executors. The driver program runs in the client process, which has nothing to do with YARN; it is just a process submitting the application to YARN. So when the client goes away, e.g. the client process exits, the driver goes down and the computation terminates.
First of all, let's clarify the difference between running Spark in standalone mode and running Spark on a cluster manager (Mesos or YARN).
When running Spark in standalone mode, you have:
a Spark master node
some Spark slave nodes, which have been "registered" with the Spark master
So:
the master node will execute the Spark driver, sending tasks to the executors, and will also perform any resource negotiation, which is quite basic. For example, by default each job will consume all the existing resources.
the slave nodes will run the Spark executors, running the tasks submitted to them by the driver.
When using a cluster manager (I will describe YARN, which is the most common case), you have:
A YARN ResourceManager (running constantly), which accepts requests for new applications and new resources (YARN containers).
Multiple YARN NodeManagers (running constantly), which constitute the pool of workers, where the ResourceManager will allocate containers.
An ApplicationMaster (running for the duration of a YARN application), which is responsible for requesting containers from the ResourceManager and sending commands to the allocated containers.
Note that there are 2 modes in that case: cluster-mode and client-mode. In the client mode, which is the one you mentioned:
the Spark driver will run on the machine where the command is executed.
The Application Master will be run in an allocated Container in the cluster.
The Spark executors will be run in allocated containers.
The Spark driver will be responsible for instructing the ApplicationMaster to request resources and for sending commands to the allocated containers, receiving their results and providing the final results.
So, back to your questions:
What does "launched locally" mean? Locally where? On the Spark cluster?
Locally means on the server where you execute the command (which could be a spark-submit or a spark-shell). That means you could run it on the cluster's master node, or on a server outside the cluster (e.g. your laptop), as long as the appropriate configuration is in place so that this server can communicate with the cluster and vice versa.
What is the specific difference from the yarn-standalone mode?
As described above, the difference is that in standalone mode there is no cluster manager at all. A more elaborate analysis and categorisation of all the differences for each mode is available in this article.
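To make the contrast concrete, a sketch of submitting against each manager (host name, class and jar are placeholders):

```sh
# Standalone mode: no external cluster manager; point straight at the
# Spark master's URL.
spark-submit --master spark://master-host:7077 \
  --class com.example.MyApp my-app.jar

# YARN mode: the ResourceManager is discovered via HADOOP_CONF_DIR /
# YARN_CONF_DIR rather than a URL on the command line.
spark-submit --master yarn \
  --class com.example.MyApp my-app.jar
```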
With yarn-client mode, your Spark application runs on your local machine. With yarn-standalone mode, your Spark application is submitted to YARN's ResourceManager as a YARN ApplicationMaster, and your application runs on the YARN node where the ApplicationMaster is running.
In both cases, YARN serves as Spark's cluster manager. Your application (SparkContext) sends tasks to YARN.