Global environment variables for gitlab CI runner - gitlab-ci

I am working to set up a GitLab runner for multiple projects, and we want to be able to set environment variables for all of the projects. I tried setting global variables in the .bashrc for both the gitlab-runner and root users, but they were not visible during the CI script. What is the correct location to declare global environment variables?

You can also inject environment variables into your gitlab-runner directly on the command line, as gitlab-runner exec docker --help states:
OPTIONS: ..
--env value Custom environment variables injected to build environment [$RUNNER_ENV] ..
Here is a small example of how I use it in a script:
Change the declarations as needed:
declare jobname="your_jobname"
declare runnerdir="/path/to/your/repository"
Get the env file into a bash array.
[ -f "$runnerdir/env" ] \
&& declare -a envlines=($(cat "$runnerdir/env"))
declare -a envs=()
for env in "${envlines[@]}"; do
envs+=(--env "$env")
done
And finally pass it to the gitlab-runner.
[ -d "$runnerdir" ] && cd "$runnerdir" \
&& gitlab-runner exec docker "${envs[@]}" $jobname \
&& cd -
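The env file the script reads is assumed to look like this (one VAR=value per line; because the script splits on whitespace, values must not contain spaces):
# /path/to/your/repository/env
CI_DEBUG=true
ARTIFACT_SERVER=artifacts.example.com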

You can define environment variables to inject in the runner's config.toml file. See the advanced runner configuration documentation in the [[runners]] section.
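For example, the environment option in that section injects variables into every job the runner picks up; a minimal sketch (runner name and values are placeholders, the rest of your existing runner entry stays as it is):
# /etc/gitlab-runner/config.toml
[[runners]]
  name = "my-shell-runner"
  executor = "shell"
  # url, token, etc. unchanged
  environment = ["GLOBAL_VAR=value1", "OTHER_VAR=value2"]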
There doesn't seem to be a way to specify environment variables in the GitLab UI just for a specific runner.

With GitLab 13.1 (June 2020), you now have:
Instance-level CI/CD variables
GitLab now supports instance-level variables.
With this ability to set global variables, you no longer need to manually enter the same credentials repeatedly for all your projects.
This MVC introduces access to this feature by API, and the next iteration of this feature will provide the ability to configure instance-level variables directly in the UI.
See Documentation and issue.
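Since this first iteration is API-only, a minimal sketch of creating such a variable via the instance-level CI/CD variables API (instance URL, token, and values are placeholders):
curl --request POST --header "PRIVATE-TOKEN: <admin_access_token>" \
     "https://gitlab.example.com/api/v4/admin/ci/variables" \
     --form "key=GLOBAL_VAR" --form "value=some-value"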

Consider using an external persistent secret storage service like Vault or Keywhiz
Disclaimer: I am not associated with, nor have I used, either of the above services.

I have added export MY_VAR="FOO" to gitlab-runner's .bashrc, and it works.
echo export MY_VAR=\"FOO\" >> /home/gitlab-runner/.bashrc
Check which type of executor you use (shell, kubernetes, docker-ssh, parallels...). I use the shell executor.
Check which type of shell gitlab-runner uses (see: How to determine the current shell I'm working on) and edit the proper rc file for that shell.
Check the GitLab CI Runner user.
For further debugging, I suggest dumping all environment variables by adding env to the .gitlab-ci.yml script:
# .gitlab-ci.yml
job:
  script: env

Make the changes in ~/.bash_profile, NOT ~/.bashrc.
See my answer

You can easily set up variables in the GitLab settings:
Project-level variables can be added by going to your project's Settings > CI/CD, then finding the section called Variables.
To make sure your variables are only used in pipelines on protected branches or tags, mark them as Protected.
See:
https://docs.gitlab.com/ee/ci/variables/#variables
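If you prefer to script this, the same project-level variables can also be created through the project variables API; a minimal sketch (project ID, token, and values are placeholders):
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
     "https://gitlab.example.com/api/v4/projects/<project_id>/variables" \
     --form "key=MY_VAR" --form "value=some-value"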

Related

How to Use Docker Build Secrets with Kaniko

Context
Our current build system builds docker images inside of a docker container (Docker in Docker). Many of our docker builds need credentials to be able to pull from private artifact repositories.
We've handled this with Docker secrets: passing the secret to the docker build command and, in the Dockerfile, referencing the secret in the RUN command where it's needed. This means we're using Docker BuildKit. This article explains it.
We are moving to a different build system (GitLab), and the admins have disabled Docker in Docker (security reasons), so we are moving to Kaniko for docker builds.
Problem
Kaniko doesn't appear to support secrets the way docker does. (there are no command line options to pass a secret through the Kaniko executor).
The credentials the docker build needs are stored in GitLab variables. For DinD, you simply add those variables to the docker build as a secret:
DOCKER_BUILDKIT=1 docker build . \
  --secret=type=env,id=USERNAME \
  --secret=type=env,id=PASSWORD
And then in docker, use the secret:
RUN --mount=type=secret,id=USERNAME --mount=type=secret,id=PASSWORD \
USER=$(cat /run/secrets/USERNAME) \
PASS=$(cat /run/secrets/PASSWORD) \
./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
Without the --secret flag to the kaniko executor, I'm not sure how to take advantage of docker secrets... nor do I understand the alternatives. I also want to continue to support developer builds. We have a 'build.sh' script that takes care of gathering credentials and adding them to the docker build command.
Current Solution
I found this article and was able to sort out a working solution. I want to ask the experts if this is valid or what the alternatives might be.
I discovered that when the kaniko executor runs, it appears to mount a volume into the image that's being built at: /kaniko. That directory does not exist when the build is complete and does not appear to be cached in the docker layers.
I also found out that if the Dockerfile secret is not passed in via the docker build command, the build still executes.
So my gitlab-ci.yml file has this excerpt (the REPO_USER/REPO_PWD variables are GitLab CI variables):
- echo "${REPO_USER}" > /kaniko/repo-credentials.txt
- echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
- /kaniko/executor
--context "${CI_PROJECT_DIR}/docker/target"
--dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
--destination "${IMAGE_NAME}:${BUILD_TAG}"
The key piece here is echoing the credentials to a file in the /kaniko directory before calling the executor. That directory is (temporarily) mounted into the image the executor is building, and since all of this happens inside the Kaniko image, the file disappears when the Kaniko (GitLab) job completes.
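For context, the full job around that excerpt typically follows the pattern from GitLab's Kaniko documentation; the job name and stage below are assumptions, only the script lines come from the original excerpt:
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "${REPO_USER}" > /kaniko/repo-credentials.txt
    - echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}/docker/target"
      --dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
      --destination "${IMAGE_NAME}:${BUILD_TAG}"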
The developer build script (snip):
# To keep it simple, this assumes that the developer has their credentials
# cached in a file (ignored by git) called dev-credentials.txt
DOCKER_BUILDKIT=1 docker build . \
--secret id=repo-creds,src=dev-credentials.txt
Basically same as before. Had to put it in a file instead of environment variables.
The dockerfile (snip):
RUN --mount=type=secret,id=repo-creds,target=/kaniko/repo-credentials.txt \
    USER=$(sed '1q;d' /kaniko/repo-credentials.txt) \
    PASS=$(sed '2q;d' /kaniko/repo-credentials.txt) \
    ./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
This Works!
In the Dockerfile, by mounting the secret in the /kaniko subfolder, it works with both the DinD developer build and the CI Kaniko executor.
For dev builds, the DinD secret works as always (I had to change it to a file rather than env variables, which I didn't love).
When the build is run by Kaniko, I suppose that since the secret in the RUN command is not found, it doesn't even try to write the temporary credentials file (which I expected would fail the build). Instead, because I wrote the variables directly to the temporarily mounted /kaniko directory, the rest of the RUN command was happy.
Advice
To me this does seem more kludgy than expected. I want to find other/alternative solutions. Discovering that the /kaniko folder is mounted into the image at build time seems to open a lot of possibilities.

How to import the environment variables from the parent branch in a forked repo (GitLab)?

I'm setting up a GitLab runner to SSH into a remote server so I can run tests on physical hardware; however, the jobs fail when launched from my forked branch. I save the SSH keys as environment variables in the parent project, and they are not picked up by the jobs running on the forked runners. How can I import the environment variables from the parent?
The jobs are successful when I manually add the SSH key as an environment variable to my forked repo, but this is not scalable. I have tried adding the project and all people involved to a common group and setting the same variables there, as well as initiating group runners. It seems that if you kick off a runner from your personal account, then you cannot access the necessary variables.
In the .gitlab-ci.yml file I added some print-out statements to help debug. I set SSH_PRIVATE_KEY and RUNNER_ID to their required values in the parent repo and left them unassigned in my forked branch. I got blank outputs when run from my personal account.
gitlab-ci.yml
hardware-1:
  image: ubuntu
  before_script:
    - echo "$SSH_PRIVATE_KEY"
    - echo "$RUNNER_ID"
  tags:
    - hardware
  script:
    - ssh pi@raspberry "./test-hardware.sh"
Runner console output on the forked repo:
$ ...
$ Updating certificates in /etc/ssl/certs...
$ 0 added, 0 removed; done.
$ Running hooks in /etc/ca-certificates/update.d...
$ echo "$SSH_PRIVATE_KEY"
$ echo "$RUNNER_ID"
On the parent branch, the console outputs the actual SSH_PRIVATE_KEY and RUNNER_ID. How do I force the runner to always run from the parent repo?
It might be because of this:
Variables can be protected. Whenever a variable is protected, it would only be securely passed to pipelines running on the protected branches or protected tags. The other pipelines would not get any protected variables.
Protected variables can be added by going to your project’s Settings > CI/CD, then finding the section called Variables, and check “Protected”.
Once you set them, they will be available for all subsequent pipelines.
To protect a branch or a tag:
Settings -> Repository -> Protected branches/tags

How to dynamically set an ENV variable using a Dockerfile

I have a Dockerfile that has access to a variable indicating the environment it is being targeted to. Our CI/CD pipeline makes this environment variable available to the Dockerfile, and I can test for a particular environment using "RUN if $Environment =".
When I detect a "test" environment, I need to create another environment variable on-the-fly. However, code like this doesn't seem to work:
RUN if [ "$Environment" = "test" ] ; then \
      ENV NewEnvironmentVariable="test" ; \
    fi
The get "ENV" not found when it runs. So obviously, you can't use ENV this way within a RUN .. if.
I CAN however, use bash commands to export the variable, but it's probably creating this export in a different context, so, the Dockerfile doesn't have access to it. I would have thought that exporting it would make the new environment variable to the Docker file (when it returns from the "if" block.
In short, I simply need to evaluate and existing environment variable and if it contains the value I'm looking for it will create a new ENV variable just as if I have done "ENV MyNewVar=1".
Is this possible?
Thanks
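(A commonly used workaround, sketched below under the assumption that the conditional can run in the CI step or build script rather than in the Dockerfile itself: decide the value outside the Dockerfile, pass it in as a build argument, and promote it to an ENV, i.e. the ARG/ENV pattern discussed in the question further below. The variable names match this question; everything else is illustrative.)
# Dockerfile
ARG NewEnvironmentVariable=""
ENV NewEnvironmentVariable=${NewEnvironmentVariable}

# CI step or build.sh
if [ "$Environment" = "test" ]; then
  docker build --build-arg NewEnvironmentVariable=test .
else
  docker build .
fi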

Accessing Global Bamboo variables in inline shell script

I am using Bamboo 5.6.2
I am defining a plan global variable named "cd" and want to access it as part of an inline script task.
I have tried
echo $BAMBOO_CD
echo `$BAMBOO_CD`
echo "$BAMBOO_CD"
echo $BAMBOO_cd
echo `$BAMBOO_cd`
echo "$BAMBOO_cd"
But none of them prints anything.
I have already gone through https://confluence.atlassian.com/display/BAMBOO056/Defining+global+variables
Appreciate pointers or an alternative way to define things globally.
I do not have an option of upgrading Bamboo at this stage to use advanced features.
If you declare dev as a global variable, you can access it in an inline script using $bamboo_dev.
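Applied to the variable in the question: a plan/global variable named cd is exposed to inline scripts with the lowercase bamboo_ prefix, so a minimal check would be:
# Inline shell script task
echo "cd is: ${bamboo_cd}"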

How to define a variable in a Dockerfile?

In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
I am aware of the ENV instruction, but I do not want these variables to be environment variables.
Is there a way to declare variables at Dockerfile scope?
You can use ARG - see https://docs.docker.com/engine/reference/builder/#arg
The ARG instruction defines a variable that users can pass at
build-time to the builder with the docker build command using the
--build-arg <varname>=<value> flag. If a user specifies a build
argument that was not defined in the Dockerfile, the build outputs an
error.
It can be useful with COPY during build time (e.g. copying tag-specific content like specific folders).
For example:
ARG MODEL_TO_COPY
COPY application ./application
COPY $MODEL_TO_COPY ./application/$MODEL_TO_COPY
While building the container:
docker build --build-arg MODEL_TO_COPY=model_name -t <container>:<model_name specific tag> .
To answer your question:
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
You can define a variable with:
ARG myvalue=3
Spaces around the equal character are not allowed.
And use it later with:
RUN echo $myvalue > /test
To my knowledge, only ENV allows that, as mentioned in "Environment replacement"
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
They have to be environment variables in order to be redeclared in each new container created for each line of the Dockerfile by docker build.
In other words, those variables aren't interpreted directly in a Dockerfile, but in a container created for a Dockerfile line, hence the use of environment variable.
These days, I use both ARG (docker 1.10+, with docker build --build-arg var=value) and ENV.
Using ARG alone means your variable is visible at build time, not at runtime.
My Dockerfile usually has:
ARG var
ENV var=${var}
In your case, ARG is enough: I typically use it for setting the http_proxy variable, which docker build needs for accessing the internet at build time.
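A minimal sketch of that ARG + ENV combination, with http_proxy as the example variable and a placeholder proxy URL:
ARG http_proxy
ENV http_proxy=${http_proxy}
# the proxy is now visible both during the build and in the running container
RUN echo "building behind proxy: ${http_proxy:-none}"

Built with:
docker build --build-arg http_proxy=http://proxy.example.com:3128 .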
Christopher King adds in the comments:
Watch out!
The ARG variable is only in scope for the "stage that it is used" and needs to be redeclared for each stage.
He points to Dockerfile / scope:
An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile not from the argument’s use on the command-line or elsewhere.
For example, consider this Dockerfile:
FROM busybox
USER ${user:-some_user}
ARG user
USER $user
# ...
A user builds this file by calling:
docker build --build-arg user=what_user .
The USER at line 2 evaluates to some_user as the user variable is defined on the subsequent line 3.
The USER at line 4 evaluates to what_user as user is defined and the what_user value was passed on the command line.
Prior to its definition by an ARG instruction, any use of a variable results in an empty string.
An ARG instruction goes out of scope at the end of the build stage where it was defined.
To use an arg in multiple stages, each stage must include the ARG instruction.
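A short sketch of that multi-stage behaviour (stage names and commands are just for illustration):
ARG VERSION=1.0

FROM busybox AS build
ARG VERSION        # redeclared: the arg (and its default) is now visible in this stage
RUN echo "building ${VERSION}"

FROM busybox
ARG VERSION        # must be redeclared again for this stage
RUN echo "packaging ${VERSION}"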
If the variable is re-used within the same RUN instruction, one could simply set a shell variable. I really like how they approached this with the official Ruby Dockerfile.
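That pattern is just an ordinary shell variable scoped to a single RUN instruction; a minimal sketch (the value is made up for illustration):
RUN set -eux; \
    version="1.2.3"; \
    echo "building with version ${version}" > /version.txt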
You can use ARG variable=defaultValue, and during the build you can override this value with --build-arg variable=value. To use these variables in the Dockerfile, refer to them as $variable in RUN instructions.
Note: these variables are available to Linux commands, like RUN echo $variable, but they don't persist in the image.
Late to the party, but if you don't want to expose environment variables, I guess it's easier to do something like this:
RUN echo 1 > /tmp/__var_1
RUN echo `cat /tmp/__var_1`
RUN rm -f /tmp/__var_1
I ended up doing it because we host private npm packages in AWS CodeArtifact:
RUN aws codeartifact get-authorization-token --output text > /tmp/codeartifact.token
RUN npm config set //company-123456.d.codeartifact.us-east-2.amazonaws.com/npm/internal/:_authToken=`cat /tmp/codeartifact.token`
RUN rm -f /tmp/codeartifact.token
And here ARG cannot work, and I don't want to use ENV because I don't want to expose this token to anything else.