Running Jenkins tests in Docker containers built from a Dockerfile in the codebase

I want to deploy a continuous integration platform based on Jenkins. As I have various kinds of projects (PHP / Symfony, Node, Angular, …) and as I want these tests to run both locally and on Jenkins, I was thinking about using Docker containers.
The process I'm aiming for is:
A merge request is opened on GitHub / GitLab
A webhook notifies Jenkins of the merge request
Jenkins pulls the repo, builds the containers, and runs a shell script to execute the tests
Once the tests are finished, Jenkins retrieves the results from one of the containers (through a shared volume) and processes the results.
I do not want Jenkins to be in a container.
With this kind of process, I'm hoping to be able to run the tests very easily on each developer machine with something like a docker-compose up and then, in one of the containers, ./tests all.
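A minimal docker-compose.yml sketch of the kind of setup I have in mind (the service names, paths, and the ./tests entry point are placeholders, not an existing setup):

version: "3"
services:
  app:
    build: .
  tests:
    build:
      context: .
      dockerfile: Dockerfile.tests
    volumes:
      # shared volume so the test results can be retrieved afterwards
      - ./test-results:/results

With something like this, docker-compose up would build and start the containers, and docker-compose run tests ./tests all would run the test suite.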
I’m not very familiar with Jenkins. I’ve read a lot of documentation, but most of them suggested to define Jenkins slaves for each kind of projects beforehand. I would like everything to be as dynamic as possible and require as less configuration on Jenkins as possible.
I would appreciate a description of your test process if you have ever implemented something similar. If you think what I'm aiming for is impossible, I would also appreciate it if you could explain why.

A setup I suggest is Docker in Docker.
The base is a derived Docker image, which extends the jenkins:2.x image by adding a Docker command-line client.
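A minimal sketch of such a derived image (the base tag and the pinned Docker version are arbitrary examples, and this assumes curl is available in the base image):

FROM jenkins:2.60.3
USER root
# Install only the Docker command-line client; the daemon itself runs on the
# Docker host and is reached through the mounted /var/run/docker.sock
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar -xz --strip-components=1 -C /usr/local/bin docker/docker
USER jenkins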
Jenkins is started as a container with its home folder (e.g. /var/jenkins_home, mounted from the Docker host) and the Docker socket file mounted, so that it is able to start Docker containers from Jenkins build jobs.
docker run -d --name jenkins -v /var/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock ... <yourDerivedJenkinsImage>
To check whether this setup is working, just execute the following command after starting the Jenkins container:
docker exec jenkins docker version
If the "docker version" output does NOT show:
Is the docker daemon running on this host?
Everythin is fine.
In your build jobs, you could configure the process you mentioned above. Let Jenkins simply check out the repository. The repository should contain your build and test scripts.
Use a freestyle build job with a shell execution. A shell execution could look like this:
docker run --rm --volumes-from jenkins <yourImageToBuildAndTestTheProject> bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh
This command simply starts a new container (to build and/or test your project) with the volumes from jenkins, which means the cloned repository will be available under $WORKSPACE. So if you run "bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh", your project will be built within a container of <yourImageToBuildAndTestTheProject>. After this has run, you could start other containers for integration tests, or combine this with docker-compose by installing it on the derived Jenkins image.
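If you want docker-compose available as well, installing it on the derived Jenkins image could look like this (the version is pinned arbitrarily here):

# Add to the derived Jenkins Dockerfile, in the USER root section
RUN curl -fsSL https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64 \
    -o /usr/local/bin/docker-compose \
 && chmod +x /usr/local/bin/docker-compose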
The advantage is the minimal configuration effort you have within Jenkins: only the SCM configuration for cloning the Git repository is required. Since each Jenkins job uses the Docker client directly, you can use one or more Docker images per project to build and/or test, WITHOUT further Jenkins configuration.
If you need additional configuration, e.g. SSH keys or Maven settings, just put it on the Docker host and start the Jenkins container with additional volumes that contain those configuration files.
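For example (the host paths under /opt/jenkins are hypothetical):

docker run -d --name jenkins \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/jenkins/.ssh:/var/jenkins_home/.ssh:ro \
  -v /opt/jenkins/.m2:/var/jenkins_home/.m2:ro \
  <yourDerivedJenkinsImage>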
Using this Docker option within the shell execution of your build jobs:
--volumes-from jenkins
automatically makes the workspace and those configuration files available in each of your build containers.

How to Use Docker Build Secrets with Kaniko

Context
Our current build system builds docker images inside of a docker container (Docker in Docker). Many of our docker builds need credentials to be able to pull from private artifact repositories.
We've handled this with Docker build secrets: passing the secret in to the docker build command and, in the Dockerfile, referencing the secret in the RUN command where it's needed. This means we're using Docker BuildKit. This article explains it.
We are moving to a different build system (GitLab), and the admins have disabled Docker in Docker (for security reasons), so we are moving to Kaniko for Docker builds.
Problem
Kaniko doesn't appear to support secrets the way Docker does (there are no command-line options to pass a secret through to the Kaniko executor).
The credentials the Docker build needs are stored in GitLab variables. For DinD, you simply add those variables to the docker build as secrets:
DOCKER_BUILDKIT=1 docker build . \
--secret=type=env,id=USERNAME \
--secret=type=env,id=PASSWORD \
And then in the Dockerfile, use the secret:
RUN --mount=type=secret,id=USERNAME --mount=type=secret,id=PASSWORD \
USER=$(cat /run/secrets/USERNAME) \
PASS=$(cat /run/secrets/PASSWORD) \
./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
Without the --secret flag to the Kaniko executor, I'm not sure how to take advantage of Docker secrets, nor do I understand the alternatives. I also want to continue to support developer builds. We have a build.sh script that takes care of gathering credentials and adding them to the docker build command.
Current Solution
I found this article and was able to sort out a working solution. I want to ask the experts if this is valid or what the alternatives might be.
I discovered that when the Kaniko executor runs, it appears to mount a volume into the image being built at /kaniko. That directory does not exist when the build is complete and does not appear to be cached in the Docker layers.
I also found out that if the Dockerfile secret is not passed in via the docker build command, the build still executes.
So my gitlab-ci.yml file has this excerpt (the REPO_USER/REPO_PWD variables are GitLab CI variables):
- echo "${REPO_USER}" > /kaniko/repo-credentials.txt
- echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
- /kaniko/executor
--context "${CI_PROJECT_DIR}/docker/target"
--dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
--destination "${IMAGE_NAME}:${BUILD_TAG}"
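For context, that excerpt sits inside a job along these lines (the job name, stage, and the use of the debug image tag are assumptions on my part):

build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # write the credentials into the /kaniko directory before the build
    - echo "${REPO_USER}" > /kaniko/repo-credentials.txt
    - echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}/docker/target"
      --dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
      --destination "${IMAGE_NAME}:${BUILD_TAG}"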
The key piece here is echoing the credentials to a file in the /kaniko directory before calling the executor. That directory is (temporarily) mounted into the image which the executor is building. And since all of this happens inside the Kaniko image, that file will disappear when the Kaniko (GitLab) job completes.
The developer build script (snip):
# To keep it simple, this assumes that the developer has their credentials
# cached in a file (ignored by git) called dev-credentials.txt
DOCKER_BUILDKIT=1 docker build . \
--secret id=repo-creds,src=dev-credentials.txt
Basically the same as before; I had to put the credentials in a file instead of environment variables.
The Dockerfile (snip):
RUN --mount=type=secret,id=repo-creds,target=/kaniko/repo-credentials.txt \
    USER=$(sed '1q;d' /kaniko/repo-credentials.txt) \
    PASS=$(sed '2q;d' /kaniko/repo-credentials.txt) \
    ./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
This Works!
In the Dockerfile, by mounting the secret in the /kaniko subfolder, it works with both the DinD developer build and the CI Kaniko executor.
For dev builds, the DinD secret works as always (though I had to change it to a file rather than environment variables, which I didn't love).
When the build is run by Kaniko, I suppose that since the secret in the RUN command is not found, it doesn't even try to write the temporary credentials file (which I expected would fail the build). Instead, because I wrote the variables directly to the temporarily mounted /kaniko directory, the rest of the RUN command was happy.
Advice
To me this seems kludgier than expected. I want to find out about other/alternative solutions. Finding out that the /kaniko folder is mounted into the image at build time seems to open up a lot of possibilities.

How to run a script on the build host (not the Docker container) during a GitLab CI pipeline

I'm new to GitLab (and YML files, and just about everything else in this project ...), and want to add some script to the build pipeline to automate release file generation.
I added a few script lines to the branch's release section in gitlab-ci.yml, and can see from the console output in the GitLab UI that they're generally doing what I want. But after a bit of head scratching I now realise the build itself is executed within a Docker instance, which explains why I can't see the release file on the GitLab host machine.
Is there a way of executing script from the YML file on the GitLab host, as opposed to within the build container?
It depends on the executor type of the GitLab Runner where the scripts are executed.
If the jobs are executed on a runner with a Docker executor, each job will run in a new container.
If the jobs are executed on a runner with a Shell executor, the jobs will be executed directly on the runner machine.
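For example, registering a runner with a Shell executor on the machine where the script should run might look like this (the URL, token, and description are placeholders):

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <PROJECT_TOKEN> \
  --executor shell \
  --description "host-shell-runner"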
BTW, it is not advisable to have your runners on the same machine where you host your GitLab server, especially runners with a Shell executor. It compromises resiliency and scalability: a CI/CD job could break the entire host and make your GitLab server unavailable.

Gitlab-Runner Deploy to multiple servers

I have the following environment and setup:
3 load-balanced production servers and
one development server
a working auto-deployment to my development server with
one gitlab-runner which connects by SSH to my dev server and pulls the dev branch, and
a gitlab-ci.yml which is limited to my dev branch
How can I achieve:
continued auto-deployment to my development server when I push to my dev branch
auto-deployment to my 3 production servers when I push to my master branch
The question is based on my current setup, where the following questions came up:
Can I let a gitlab-runner run just locally (also not in a Docker container, because I haven't installed one), so that it is just up to the gitlab-ci.yml to differentiate between branches and deploy to the specific servers?
Can I install multiple gitlab-runners which act only on specific branches?
Or is there another solution which lets me achieve my plans?
"Can I let a gitlab-runner run just locally (also not in a docker container because I havn't installed one) so that its just up to the
gitlab-ci.yml to differ on branches and deploy to the specific
servers"
Yes: register local runners as specific runners with Shell executors on the machines you want to deploy to. They can then run local commands (just as you do via SSH now), and only your specific projects can use them. Then take a look at the next sub-answer regarding tags.
"Can I install multiple gitlab-runners which are taking aktion on just specific branches?"
Use either tags to pin certain jobs to runners (e.g. your deploy job), or use only or except to pin jobs to branches or tags (e.g. your deploy-prod job runs only on the master branch).
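For example, registering the dev runner with a matching tag might look like this (the URL, token, and names are placeholders):

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <PROJECT_TOKEN> \
  --executor shell \
  --tag-list dev-runner \
  --description "dev-deploy-runner"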
Example .gitlab-ci.yml file (abstract):
deploy-dev:
  tags:
    - dev-runner
  only:
    - dev-branch
  script:
    - cd mydir
    - git pull

deploy-prod:
  tags:
    - prod-runner
  only:
    - master
  script:
    - cd mydir
    - git pull

One Gitlab CI runner per project?

We upgraded from GitLab 7.11.4 to 9 in one fell swoop (by accident). Now we are trying to get CI set up the way it used to run for us before. I understand that CI is an integrated thing now.
One of my coworkers got a multi-runner setup going. The run command looks like this:
/usr/bin/gitlab-ci-multi-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner
But previously we had one runner for each project, and we had a user associated with each project. So, if we have two projects called "portal" and "engine", we would have users created thusly:
gitlab-runner-fps-portal
gitlab-runner-fps-engine
And being users, they would have home folders like:
/home/gitlab-runner-fps-portal
/home/gitlab-runner-fps-engine
In the older version of CI, you'd have a config.yml with the URL of CI and the runner's token. Now you have config.toml.
I want to "divorce" the engine runner from this multi setup which runs under user "gitlab-runner" and have its own runner that runs under "gitlab-runner-fps-engine".
Easy to do? Right now since all of this docker business is new to us, we're continuing on to use "shell" as our executor in gitlab, if that information is useful.
There are at least two ways you can do it:
Register a specific runner in each of the projects and disable the shared runners.
Use tags to specify the job must be run on a specific runner. This way you can have some CI jobs run on your defined environment while others (like lint for example) can be run on tagged shared runners.
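If you want the engine runner to run under its own user with its own configuration, one possible approach (the service name, user, and config path are examples, untested against your setup) is to install a second runner service alongside the existing one:

# Install and start a second runner service under its own user,
# with a separate working directory and config file
sudo gitlab-ci-multi-runner install \
  --service gitlab-runner-fps-engine \
  --user gitlab-runner-fps-engine \
  --working-directory /home/gitlab-runner-fps-engine \
  --config /etc/gitlab-runner/config-fps-engine.toml
sudo gitlab-ci-multi-runner start --service gitlab-runner-fps-engine

You would then register a specific runner into that second config (gitlab-ci-multi-runner register --config /etc/gitlab-runner/config-fps-engine.toml) and disable the shared runners for that project.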

How do I run a Puppet agent inside a Docker container to build it out?

If I run a Docker container with CMD ["/usr/sbin/sshd", "-D"], I can have it running daemonized, which is good.
Then I want to run the Puppet agent too, to build out said container as, say, an Apache server.
Is it possible to do this and then expose the Apache server?
Here is another solution. We use the ENTRYPOINT Dockerfile instruction, as described here: https://docs.docker.com/articles/dockerfile_best-practices/#entrypoint. Using it, you can run the Puppet agent and other services in the background before the instruction from CMD (or the command passed via docker run) is executed.
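A minimal sketch of that pattern (the file names, Puppet flags, and Apache CMD are illustrative assumptions):

# Dockerfile (excerpt)
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

# entrypoint.sh
#!/bin/bash
# Run the Puppet agent once to configure the container (e.g. install Apache),
# then hand control to CMD or to whatever command was passed to docker run
puppet agent --onetime --no-daemonize --verbose
exec "$@"

Because the entrypoint ends with exec "$@", the Apache process from CMD becomes PID 1, and you can expose the Apache server as usual, e.g. with docker run -p 80:80.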