How do I kill a YARN container to test failure scenarios - hadoop-yarn

I'm building an application on AWS EMR (Hadoop 2.7.3-amzn-1) using YARN and Dask. I'm testing various failure scenarios and want to simulate a container failure. I can't seem to find an easy way to kill a single YARN container, only the whole application. Is there a command-line utility for this?

[root@node1 lillcol]# yarn container -help
20/04/24 15:04:14 INFO client.AHSProxy: Connecting to Application History server at node1/127.0.0.1:10200
usage: container
 -help                                     Displays help for all commands.
 -list <Application Attempt ID>            List containers for application attempt.
 -signal <container ID [signal command]>   Signal the container. The available signal commands are [OUTPUT_THREAD_DUMP, GRACEFUL_SHUTDOWN, FORCEFUL_SHUTDOWN]. Default command is OUTPUT_THREAD_DUMP.
 -status <Container ID>                    Prints the status of the container.
You can achieve this with the command yarn container -signal [container-ID] GRACEFUL_SHUTDOWN. I've tried it and it works; I hope that helps.
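For example, assuming placeholder IDs taken from yarn container -list (substitute your own application attempt and container IDs), the invocation might look like this:
# list the containers of a running application attempt (the ID is a placeholder)
yarn container -list appattempt_1587712345678_0001_000001
# ask the chosen container to shut down gracefully
yarn container -signal container_1587712345678_0001_01_000002 GRACEFUL_SHUTDOWN
# or, for a harder failure, force it down
yarn container -signal container_1587712345678_0001_01_000002 FORCEFUL_SHUTDOWN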

YARN has no CLI or REST API that kills a container.
The simplest way to create a container failure is to log in to a NodeManager host and kill the process (which is a container) spawned by the NodeManager.
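As a rough sketch (the container ID is a placeholder and the ps output format can differ between distributions), you could locate the container's process by its ID on the NodeManager host and kill it:
# on the NodeManager host: find the process whose command line mentions the container ID
CONTAINER_ID=container_1587712345678_0001_01_000002
PID=$(ps -ef | grep "$CONTAINER_ID" | grep -v grep | awk '{print $2}' | head -n 1)
# kill it hard to simulate a container failure
kill -9 "$PID"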

It seems this is exposed in the API starting from version 2.8.0:
https://hadoop.apache.org/docs/r2.8.0/api/org/apache/hadoop/yarn/client/api/YarnClient.html#signalToContainer(org.apache.hadoop.yarn.api.records.ContainerId,%20org.apache.hadoop.yarn.api.records.SignalContainerCommand)

Related

Azure Container Instance behaves differently than a local container

I have a strange situation that I would like to share with you.
I started with containers recently and want to have an Azure DevOps agent running in a container.
On my Windows 10 laptop, I can instantiate a Linux container and everything runs and executes well (using WSL).
On an Ubuntu VM running in Azure, the same container runs and executes well.
However, the same container in an Azure Container Instance fails, and for an unknown reason I get the following error:
Generating browser application bundles (phase: setup)...
/bin/sh: 1: wslpath: not found
01 12 2022 14:55:59.933:ERROR [config]: Error in config file!
Error: Command failed: wslpath -w "/usr/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin"
/bin/sh: 1: wslpath: not found
When I see wslpath I think of the Windows Subsystem for Linux, but I am not sure. I really do not understand why ACI behaves like that with this error.
I always thought a container should behave the same wherever it runs. Have you experienced this? Any ideas are welcome.
Regards

When I run a command with yarn, how do I get the applicationId?

I'm submitting a job with the yarn jar command to run the distributed shell. How do I get the applicationId programmatically?
To get the application ID, you can go to the ResourceManager web UI, which is reachable at the IP address of the node where the ResourceManager runs, on port 8088. There you can see the application ID, container ID, and your job status.
You can also check the job status from the CLI: list all running jobs with yarn application -list and check a specific one with yarn application -status <Application ID>. The output is not as detailed as the web UI, but it gives you the status and the running jobs.
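If you need the ID from a script rather than the UI, one rough approach (a sketch; the application name used as the filter is an assumption about your job) is to parse the yarn application -list output:
# grab the ID of a running application by name ("DistributedShell" is a placeholder)
APP_ID=$(yarn application -list 2>/dev/null | awk '/DistributedShell/ {print $1; exit}')
echo "applicationId: $APP_ID"
yarn application -status "$APP_ID"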

Running Jenkins tests in Docker containers built from a Dockerfile in the codebase

I want to deploy a continuous integration platform based on Jenkins. As I have various kinds of projects (PHP / Symfony, node, angular, …) and as I want these tests to run both locally and on Jenkins, I was thinking about using Docker containers.
The process I’m aiming for is :
A merge request is opened on Github / Gitlab
A webhook notifies Jenkins of the merge request
Jenkins pulls the repo, builds the containers and runs a shell script to execute the tests
Once the tests are finished, Jenkins retrieves the results from one of the containers (through a shared volume) and processes the results.
I do not want Jenkins to be in a container.
With this kind of process, I'm hoping to be able to easily run the tests on each developer's machine with something like docker-compose up and then, in one of the containers, ./tests all.
I'm not very familiar with Jenkins. I've read a lot of documentation, but most of it suggests defining Jenkins slaves for each kind of project beforehand. I would like everything to be as dynamic as possible and to require as little configuration on Jenkins as possible.
I would appreciate a description of your test process if you have ever implemented something similar. If you think what I’m aiming for is impossible, I would also appreciate if you could explain to me why.
A setup I suggest is Docker in Docker.
The base is a derived Docker image, which extends the jenkins:2.x image by adding a Docker commandline client.
Jenkins is started as a container with its home folder (e.g. /var/jenkins_home, mounted from the Docker host) and the Docker socket file mounted, so that it is able to start Docker containers from Jenkins build jobs.
docker run -d --name jenkins -v /var/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock ... <yourDerivedJenkinsImage>
To check whether this setup is working, just execute the following command after starting the Jenkins container:
docker exec jenkins docker version
If the "docker version" output does NOT show:
Is the docker daemon running on this host?
Everythin is fine.
In your build jobs, you could configure the process you mentioned above. Let Jenkins simply check out the repository. The repository should contain your build and test scripts.
Use a freestyle build job with a shell execution. A shell execution could look like this:
docker run --rm --volumes-from jenkins <yourImageToBuildAndTestTheProject> bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh
This command simply starts a new container (to build and/or test your project) with the volumes from jenkins, which means the cloned repository will be available under $WORKSPACE. So if you run bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh, your project will be built within a container of yourImageToBuildAndTestTheProject. After running this, you could start other containers for integration tests, or combine this with docker-compose by installing it on the derived Jenkins image.
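For completeness, a build.sh along these lines (purely illustrative; the commands and the results file are assumptions about your project) would build the project and leave its results in the workspace, where Jenkins can pick them up through the shared volume:
#!/usr/bin/env bash
# illustrative build/test script checked into the repository
set -euo pipefail
cd "$(dirname "$0")"
npm install                        # install the project dependencies
npm test > test-results.log 2>&1   # run the test suite and capture the output
# because the container runs with --volumes-from jenkins, test-results.log
# ends up in the Jenkins workspace and can be processed after the job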
The advantage is the minimal configuration effort within Jenkins: only the SCM configuration for cloning the Git repository is required. Since each Jenkins job uses the Docker client directly, you can use one or more Docker images per project to build and/or test, WITHOUT further Jenkins configuration.
If you need additional configuration e.g. SSH keys or Maven settings, just put them on the Docker host and start the Jenkins container with the additional volumes, which contain those configuration files.
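For example (the host paths are illustrative), the Jenkins container from above could be started with extra read-only volumes for SSH keys and Maven settings:
docker run -d --name jenkins \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/jenkins/.ssh:/var/jenkins_home/.ssh:ro \
  -v /opt/jenkins/.m2/settings.xml:/var/jenkins_home/.m2/settings.xml:ro \
  <yourDerivedJenkinsImage>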
Using this Docker option within the shell execution of your build jobs:
--volumes-from jenkins
Automatically adds workspace and configuration files to each of your build jobs.

Fuse ESB admin command not found

I use JBoss Fuse 6.0.0 on Windows and start the container using bin/fuse.bat. The etc/users.properties file is modified to add the line admin=admin,admin.
At first the admin commands act as normal: I have admin:list showing all the containers and admin:create to create child containers.
Then I followed the instructions of
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/6.0/html/Getting_Started/files/Deploy-Fabric-Create.html
and created a fabric using the command fabric:create --clean. After that the admin command is gone! I get Command not found: admin:list, and I can no longer list the child containers created by admin:create. The fabric:container-list command only enumerates the containers created by the fabric:container-create-child command.
Has anyone experienced this problem before? Is it normal? How can I get the admin commands back?
This is expected: once you create a fabric, fabric manages the containers, so you should use the fabric commands to create and manage your containers.
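For example, the rough fabric equivalents of the old admin commands look like this (the container name is a placeholder):
fabric:container-create-child root child1   # instead of admin:create
fabric:container-list                       # instead of admin:list
fabric:container-stop child1
fabric:container-delete child1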

How do I run a puppet agent inside a Docker container to build it out?

If I run a Docker container with CMD ["/usr/sbin/sshd", "-D"], I can have it running daemonized, which is good.
Then, I want to run a puppet agent too, to build out said container as, say, an Apache server.
Is it possible to do this and then expose the apache server?
Here is another solution: use the ENTRYPOINT Dockerfile instruction, as described here: https://docs.docker.com/articles/dockerfile_best-practices/#entrypoint. With it you can run the puppet agent and other services in the background before the instruction from CMD, or the command passed via docker run, is executed.
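A minimal sketch of that approach (the puppet flags and paths are assumptions) is an entrypoint script that starts the puppet agent in the background and then executes whatever CMD or docker run passes in:
#!/bin/sh
# entrypoint.sh: start the puppet agent in the background, then hand control
# to the arguments coming from CMD or docker run
puppet agent --verbose --no-daemonize &
exec "$@"
In the Dockerfile this would be wired up with something like ENTRYPOINT ["/entrypoint.sh"] and CMD ["/usr/sbin/sshd", "-D"], so the main service still runs in the foreground while puppet configures the container.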