How do I run puppet agent inside a Docker container to build it out?

If I run a Docker container with CMD ["/usr/sbin/sshd", "-D"], I can have it running daemonized, which is good.
Then I want to run puppet agent too, to build out said container as, say, an Apache server.
Is it possible to do this and then expose the Apache server?

Here is another solution. Use the ENTRYPOINT Dockerfile instruction as described here: https://docs.docker.com/articles/dockerfile_best-practices/#entrypoint. With it you can run puppet agent and other services before the instruction from CMD, or the command passed via docker run, takes over.
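A minimal sketch of that approach, assuming a Debian-based image (the package name may be puppet or puppet-agent depending on the release) and that the container can reach your puppet master:

# Dockerfile
FROM debian:stable
RUN apt-get update && apt-get install -y puppet openssh-server && rm -rf /var/lib/apt/lists/* && mkdir -p /run/sshd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 80
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]

# entrypoint.sh
#!/bin/sh
# run puppet once to build out the container (e.g. install and configure Apache);
# append & instead if you want the agent to keep running in the background
puppet agent --onetime --no-daemonize --verbose
# then hand control to whatever CMD or `docker run` supplied
exec "$@"

Start it with docker run -d -p 80:80 <image> to expose the Apache port.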

Related

Airflow variables and connections

I have an Airflow 1.10.11 instance running in EC2, set up by my predecessors with docker-compose.
I wish to learn how to set it up. I have the docker-compose file, but I need all the configuration.
I know the config file can be found in the scheduler container under /opt/bitnami/airflow/airflow.cfg.
There are many connections, variables and XComs in the UI. Where can I find them, and in which container?
Or how could I export them? Some variables are encrypted and I can only see ***, so I cannot recreate them one by one in the UI. Thanks.
I saw the documentation on exporting connections using the command: airflow connections export connections.json
Where do I execute this command in the CLI, and in which container?
You can run it in any running airflow-* container:
docker exec -it airflow-worker /bin/bash
Or you can run it with docker-compose from your machine. For more details see here:
docker-compose run airflow-worker airflow [command]
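A rough sketch of how the export could look, assuming the running container is named airflow-worker (check docker ps for the real name; CLI flags differ between Airflow 1.10 and 2.x):

# export all variables to JSON (Airflow 1.10 syntax) and copy the file to the host
docker exec airflow-worker airflow variables --export /tmp/variables.json
docker cp airflow-worker:/tmp/variables.json .
# connections export as in the documentation you found (Airflow 2.x syntax;
# on 1.10.11 you may only have `airflow connections --list`)
docker exec airflow-worker airflow connections export /tmp/connections.json
docker cp airflow-worker:/tmp/connections.json .

Note that encrypted values are decrypted using the fernet_key in airflow.cfg, so you will need that same key to reuse them in a new deployment.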

How do I kill a YARN container to test failure scenarios

I'm building an application on AWS EMR using YARN (and Dask), Hadoop version 2.7.3-amzn-1. I'm trying to test various failure scenarios and I want to simulate a container failure. I can't seem to find an easy way to kill a YARN container - only the whole application. Is there a command-line utility for this?
[root@node1 lillcol]# yarn container -help
20/04/24 15:04:14 INFO client.AHSProxy: Connecting to Application History server at node1/127.0.0.1:10200
usage: container
 -help                                      Displays help for all commands.
 -list <Application Attempt ID>             List containers for application attempt.
 -signal <container ID [signal command]>    Signal the container. The available signal
                                            commands are [OUTPUT_THREAD_DUMP,
                                            GRACEFUL_SHUTDOWN, FORCEFUL_SHUTDOWN].
                                            Default command is OUTPUT_THREAD_DUMP.
 -status <Container ID>                     Prints the status of the container.
You can achieve this with the command yarn container -signal [container-ID] GRACEFUL_SHUTDOWN.
I've tried it and it works; I hope that will be helpful.
YARN has no CLI or REST API that kills a container.
The simplest way to create a container failure is to log in to a NodeManager host and kill the process (which would be a container) spawned by the NodeManager.
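A rough sketch of that approach, assuming you can SSH to the NodeManager host (the host name and container ID are illustrative; take a real ID from yarn container -list <application attempt ID>):

# on the NodeManager host, find the JVM spawned for the container and kill it
ssh hadoop@<nodemanager-host>
ps aux | grep container_1587712345678_0001_01_000002
kill -9 <pid>   # simulates a hard container failure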
It seems this is exposed in the API starting from version 2.8.0:
https://hadoop.apache.org/docs/r2.8.0/api/org/apache/hadoop/yarn/client/api/YarnClient.html#signalToContainer(org.apache.hadoop.yarn.api.records.ContainerId,%20org.apache.hadoop.yarn.api.records.SignalContainerCommand)

OpenWhisk serverless setup on-premises

I want to set up Apache OpenWhisk on-premises in my organization so that we can use it internally within the org. I am not able to find much on the net about this. I tried cloning the code from Git and building it on Windows, but it doesn't work. Kindly help.
Follow these steps to start the platform in a virtual machine in your local environment.
# Clone openwhisk
git clone --depth=1 https://github.com/openwhisk/openwhisk.git
# Change directory to tools/vagrant
cd openwhisk/tools/vagrant
# Run script to create vm and run hello action
./hello
If this works, you should see the following output from the command:
wsk action invoke /whisk.system/utils/echo -p message hello --result
{
    "message": "hello"
}
If you encounter problems with these steps, please open an issue in the GitHub repository for the project.
Deploy OpenWhisk using the Ansible playbooks in high-availability mode.
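A rough sketch of that route, with playbook names taken from the ansible directory of the openwhisk repository (verify them against your checkout; the environment name is illustrative):

cd openwhisk/ansible
ansible-playbook -i environments/local setup.yml
ansible-playbook -i environments/local couchdb.yml
ansible-playbook -i environments/local initdb.yml
ansible-playbook -i environments/local wipe.yml
ansible-playbook -i environments/local openwhisk.yml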

Running Jenkins tests in Docker containers build from dockerfile in codebase

I want to deploy a continuous integration platform based on Jenkins. As I have various kinds of projects (PHP / Symfony, node, angular, …) and as I want these tests to run both locally and on Jenkins, I was thinking about using Docker containers.
The process I’m aiming for is :
A merge request is opened on Github / Gitlab
A webhook notifies Jenkins of the merge request
Jenkins pulls the repo, builds the containers and runs a shell script to execute the tests
Once the tests are finished, Jenkins retrieves the results from one of the containers (through a shared volume) and processes them.
I do not want Jenkins to be in a container.
With this kind of process, I'm hoping to be able to very easily run the tests on each developer machine with something like a docker-compose up and then, in one of the containers, ./tests all.
I'm not very familiar with Jenkins. I've read a lot of documentation, but most of it suggests defining Jenkins slaves for each kind of project beforehand. I would like everything to be as dynamic as possible and to require as little configuration on Jenkins as possible.
I would appreciate a description of your test process if you have ever implemented something similar. If you think what I'm aiming for is impossible, I would also appreciate it if you could explain why.
A setup I suggest is Docker in Docker.
The base is a derived Docker image, which extends the jenkins:2.x image by adding a Docker command-line client.
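A minimal sketch of such a derived image (the base tag and the way the CLI is installed are assumptions; newer official images are published as jenkins/jenkins):

# Dockerfile for the derived Jenkins image
FROM jenkins/jenkins:lts
USER root
# install a Docker command-line client so build jobs can talk to the host daemon
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins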
Jenkins is started as a container with its home folder (a folder such as /var/jenkins_home, mounted from the Docker host) and the Docker socket file mounted, so that it is able to start Docker containers from Jenkins build jobs.
docker run -d --name jenkins -v /var/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock ... <yourDerivedJenkinsImage>
To check whether this setup is working, just execute the following command after starting the Jenkins container:
docker exec jenkins docker version
If the docker version output does NOT show
Is the docker daemon running on this host?
everything is fine.
In your build jobs, you could configure the process you mentioned above. Let Jenkins simply check out the repository. The repository should contain your build and test scripts.
Use a freestyle build job with a shell execution. A shell execution could look like this:
docker run --rm --volumes-from jenkins <yourImageToBuildAndTestTheProject> bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh
This command simply starts a new container (to build and/or test your project) with the volumes from Jenkins, which means the cloned repository will be available under $WORKSPACE. So if you run bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh, your project will be built within a container of yourImageToBuildAndTestTheProject. After that, you could start other containers for integration tests, or combine this with docker-compose by installing it on the derived Jenkins image.
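A hypothetical build.sh living in the repository could look like this (the commands inside are purely illustrative and depend on the project type):

#!/bin/bash
set -e
cd "$(dirname "$0")"
# install project dependencies (composer, npm, ... depending on the project)
composer install
# run the project's own test entry point, e.g. the ./tests all mentioned in the question
./tests all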
The advantage is the minimal configuration effort you have within Jenkins - only the SCM configuration for cloning the Git repository is required. Since each Jenkins job uses the Docker client directly, you can use one or more Docker images per project to build and/or test, WITHOUT further Jenkins configuration.
If you need additional configuration, e.g. SSH keys or Maven settings, just put them on the Docker host and start the Jenkins container with additional volumes containing those configuration files.
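For example, the Jenkins container could be started with those extra volumes like this (the host paths are illustrative):

docker run -d --name jenkins \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/jenkins/.ssh:/var/jenkins_home/.ssh \
  -v /opt/jenkins/.m2:/var/jenkins_home/.m2 \
  <yourDerivedJenkinsImage>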
Using this Docker option within the shell execution of your build jobs:
--volumes-from jenkins
automatically makes the workspace and those configuration files available in each of your build jobs.

Jenkins SSH remote process is getting killed as soon as the Jenkins SSH plugin returns back

Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
I see the following behavior: when I run the Jenkins job and call the restart script (which takes the application name as parameter $1), everything appears to work, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive indefinitely. But when the same script is run from Jenkins, although I don't see any errors and everything seems to work as expected, the new process dies as soon as the SSH step above completes and control comes back to the next BUILD step in the Jenkins job, or the job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject environment variables...). When the job's particular build # is complete, I can see that the Environment Variables for that build do show BUILD_ID=dontKillMe as the value (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server. (I'm not worried about that, since restart_tomcat.sh creates its own LOG file, which I cat after the script completes; the cat is done in another "Execute shell script on remote host using SSH" build step and it successfully shows the log file created by the restart script.)
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
The BUILD_ID=dontKillMe should be passed to the shell, and therefore it should be in your command line, not in Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
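A commonly suggested variant combines both and also redirects the standard streams, so the SSH session does not keep a handle on the process (the script path is illustrative):

BUILD_ID=dontKillMe nohup /path/to/restart_tomcat.sh "${app}" > /dev/null 2>&1 &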
My solution (it worked after trying everything else) on Ubuntu 14.04 (Trusty Tahr) (Amazon AWS EC2), Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one.
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the export BUILD_ID=dontKillMe line at the start of my script and the issue was resolved.