I tried to use an environment variable in my .travis.yml, but all I get is an empty string. I have added a repository variable DEPLOY_KEY with some value to my repository settings, and now I try to access it like this:
after_success:
- "curl -H 'Content-Type: application/json;' -X POST -d '{\"api-key\": $DEPLOY_KEY, \"branch\": $TRAVIS_BRANCH}' https://some.where/deploy"
I expected $DEPLOY_KEY to expand to my key, but instead it expands to an empty string, even though Travis does export DEPLOY_KEY=[secure] when running the build.
I think I need to add something like this to my .travis.yml:
env:
- secret: "..."
But my problem is, what is "..." exactly? Is it my repository's public key? I can't find any information in the docs on how to use repository variables inside my .travis.yml.
Some solutions suggested that encrypted variables could be used instead, but then, why give me the ability to set repository variables in the first place?
The documentation at https://docs.travis-ci.com/user/encryption-keys/ shows how to use encrypted environment variables.
According to that document, the "..." contains both the name and the value of the environment variable. You will need to use the travis CLI tool to create that secret value. The command to generate the secret looks like: travis encrypt SOMEVAR="secretvalue"
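For example (a sketch; the DEPLOY_KEY value and the encrypted string are placeholders), running the CLI with --add writes the entry into .travis.yml for you:
travis encrypt DEPLOY_KEY="my-secret-value" --add env.global
which results in something like:
env:
  global:
    - secure: "long-encrypted-string-generated-by-the-cli"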
I'm trying to sort the list of artifacts from JFrog Artifactory, but I am getting "The requested URL returned error: 400 Bad Request". The JFrog documentation (https://www.jfrog.com/confluence/display/JFROG/Artifactory+Comparison+Matrix) says sorting won't work for the open source edition. After we get the list of artifacts, we need to delete old artifacts from a subfolder in the Artifactory repo. I tried with the CLI and AQL, but nothing worked.
Our repo url looks like this
http://domainname/artifactory/repo/folder/subfolder/test1.zip
Like test1.zip, we have many artifacts (let's say 50) in that subfolder. Looking for help on this; any pointers would be appreciated. Thanks.
While sorting is not supported in OSS versions, if you would like to delete artifacts older than a certain time period, you can use Relative Time Operators, parse the output, and use a script to delete those artifacts.
You can also specify a specific date. There are several Comparison Operators that you can use.
You can use the below AQL for reference:
curl -uadmin:password -XPOST "http://localhost:8082/artifactory/api/search/aql" -d 'items.find({"repo": "repo"}, {"path": "folder/subfolder"}, {"created" : {"$before" : "2minutes"}})' -H "Content-Type: text/plain"
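As a rough sketch of the "parse and delete" step (assuming jq is installed and the admin:password user has delete permission; the URL, repo, path, and time window are placeholders to adjust):
curl -uadmin:password -XPOST "http://localhost:8082/artifactory/api/search/aql" \
  -H "Content-Type: text/plain" \
  -d 'items.find({"repo": "repo", "path": "folder/subfolder", "created": {"$before": "2minutes"}})' \
  | jq -r '.results[] | "\(.repo)/\(.path)/\(.name)"' \
  | while read -r artifact; do
      # delete each matching artifact through the Artifactory REST API
      curl -uadmin:password -XDELETE "http://localhost:8082/artifactory/$artifact"
    done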
I have a Dockerfile that has access to a variable that indicates the environment it is being targeted at. Our CI/CD pipeline makes this environment variable available to the Dockerfile, and I can test for a particular environment using "RUN if $Environment =".
When I detect a "test" environment, I need to create another environment variable on-the-fly. However, code like this doesn't seem to work:
RUN if $Environment="test" ; then ; /
ENV NewEnvironmentVariable = "test" ; /
fi
The get "ENV" not found when it runs. So obviously, you can't use ENV this way within a RUN .. if.
I CAN, however, use bash commands to export the variable, but it's probably creating this export in a different context, so the Dockerfile doesn't have access to it. I would have thought that exporting it would make the new environment variable available to the Dockerfile when it returns from the "if" block.
In short, I simply need to evaluate an existing environment variable and, if it contains the value I'm looking for, create a new ENV variable just as if I had done "ENV MyNewVar=1".
Is this possible?
Thanks
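One common workaround (a sketch, not from the original thread; the variable names and file path are illustrative) is to persist the decision to a file in one RUN step and source that file in the steps that need it, since ENV itself cannot be set conditionally:
ARG Environment
RUN if [ "$Environment" = "test" ]; then \
      echo 'export NewEnvironmentVariable=test' > /tmp/env.sh; \
    else \
      echo '' > /tmp/env.sh; \
    fi
RUN . /tmp/env.sh && echo "NewEnvironmentVariable is: $NewEnvironmentVariable"
The variable only exists inside RUN steps that source the file; it does not become a real ENV entry in the final image.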
I am running Drone version 0.8.4 as a container in my lab. The CLI is also at version 0.8.4.
I am trying to use a secret in a command one of my containers is trying to run.
Following the documentation, I need to sign the repo to allow the job to consume the secret. The drone CLI does not seem to have a drone sign command for me to run, so I create the secret with the --skip-verify=true flag. This creates the secret, but when I run the job it errors out. The output in the UI shows a blank space where the secret should be injected.
Here is an excerpt of my .drone.yml where I am trying to inject secrets:
-s production -u ${cf_user} -p ${cf_password} --s
I have tried all the following ways to create a secret:
drone secret add <repo_name> --name <key> --value <value> --skip-verify=true
drone secret add <repo_name> --name <key> --value <value>
GUI Creation
I notice that when I create a secret with an all-capital name, the UI represents the value in all lowercase, while the CLI shows it in capitals.
I also notice that if I include hyphens in the name and try to use that in my .drone.yml, the job errors out immediately with a "bad substitution" error.
Any help understanding what I am doing wrong would be much appreciated!
I got lost in the different documentation available. I should have been looking here rather than at secret-guide.
In case I am not alone: I needed to add a secrets block to my pipeline.
I also needed to access them with $SECRET_KEY rather than ${SECRET_KEY}
pipeline:
  publish:
    image: governmentpaas/cf-cli
    secrets: [ cf_user, cf_password ]
Just a little update on this one: I stumbled over it as well because the docs are inconsistent.
In version 0.8.5 the only things I had to do were:
add secrets via CLI or UI
add a secrets array to the step that uses them
There was no need to pass variables via environment (see the sketch below).
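For reference, a minimal 0.8-style step that consumes the secrets as (uppercased) environment variables might look like this (a sketch; the image is the one from above and the API endpoint is a placeholder):
pipeline:
  deploy:
    image: governmentpaas/cf-cli
    secrets: [ cf_user, cf_password ]
    commands:
      - cf login -a https://api.example.com -u $CF_USER -p $CF_PASSWORD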
I am working to set up a GitLab runner for multiple projects, and we want to be able to set environment variables for all of the projects. I tried to set global variables in the .bashrc for both the gitlab-runner and root users, but they were not recognized in the CI script. What is the correct location to declare global environment variables?
You can also inject environment variables into your gitlab-runner directly on the command line, as gitlab-runner exec docker --help states:
OPTIONS: ..
--env value Custom environment variables injected to build environment [$RUNNER_ENV] ..
Here is a small example of how I use it in a script:
Change the declarations as needed:
declare jobname="your_jobname"
declare runnerdir="/path/to/your/repository"
Get the env file into a bash array.
[ -f "$runnerdir/env" ] \
&& declare -a envlines=($(cat "$runnerdir/env"))
declare -a envs=()
for env in "${envlines[@]}"; do
envs+=(--env "$env")
done
And finally pass it to the gitlab-runner.
[ -d "$runnerdir" ] && cd "$runnerdir" \
&& gitlab-runner exec docker "${envs[@]}" $jobname \
&& cd -
You can define environment variables to inject in the runner's config.toml file. See the advanced runner configuration documentation in the [[runners]] section.
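For example, a [[runners]] section with injected variables might look like this (a sketch; the URL, token, and variable values are placeholders):
# /etc/gitlab-runner/config.toml
[[runners]]
  name = "my-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "shell"
  environment = ["MY_GLOBAL_VAR=FOO", "ANOTHER_VAR=bar"]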
There doesn't seem to be a way to specify environment variables in the GitLab UI just for a specific runner.
With GitLab 13.1 (June 2020), you now have:
Instance-level CI/CD variables
GitLab now supports instance-level variables.
With this ability to set global variables, you no longer need to manually enter the same credentials repeatedly for all your projects.
This MVC introduces access to this feature by API, and the next iteration of this feature will provide the ability to configure instance-level variables directly in the UI.
See Documentation and issue.
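A sketch of creating one through that API (the host, token, and variable are placeholders; it requires administrator access):
curl --request POST --header "PRIVATE-TOKEN: <admin_access_token>" \
     "https://gitlab.example.com/api/v4/admin/ci/variables" \
     --form "key=MY_GLOBAL_VAR" --form "value=FOO"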
Consider using an external persistent secret storage service like Vault or Keywhiz
Disclaimer: I am not associated with, nor have I used, any of the above services.
I have added export MY_VAR="FOO" to gitlab-runner's .bashrc, and it works.
echo export MY_VAR=\"FOO\" >> /home/gitlab-runner/.bashrc
Check which type of executor you use (shell, kubernetes, docker-ssh, parallels, ...). I use the shell executor.
Check what type of shell gitlab-runner uses (see "How to determine the current shell I'm working on"), and edit the proper rc file for that.
Check the GitLab CI Runner user.
I suggest dumping all environment variables for further debugging by adding env to the script in .gitlab-ci.yml:
#.gitlab-ci.yml
job:
  script: env
Make changes in ~/.bash_profile, NOT in ~/.bashrc.
See my answer
You can easily set up variables in the GitLab settings:
Project-level variables can be added by going to your project's Settings > CI/CD, then finding the section called Variables.
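Once added there, the variable is available to jobs like any other environment variable, for example (MY_VAR being an illustrative name):
#.gitlab-ci.yml
deploy:
  script:
    - echo "Deploying with $MY_VAR"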
To learn how to control where your variables are used, see:
https://docs.gitlab.com/ee/ci/variables/#variables
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
I am aware of the ENV instruction, but I do not want these variables to be environment variables.
Is there a way to declare variables at Dockerfile scope?
You can use ARG - see https://docs.docker.com/engine/reference/builder/#arg
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag. If a user specifies a build argument that was not defined in the Dockerfile, the build outputs an error.
It can be useful with COPY at build time (e.g. copying tag-specific content like specific folders).
For example:
ARG MODEL_TO_COPY
COPY application ./application
COPY $MODEL_TO_COPY ./application/$MODEL_TO_COPY
While building the container:
docker build --build-arg MODEL_TO_COPY=model_name -t <container>:<model_name specific tag> .
To answer your question:
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
You can define a variable with:
ARG myvalue=3
Spaces around the equal character are not allowed.
And use it later with:
RUN echo $myvalue > /test
To my knowledge, only ENV allows that, as mentioned in "Environment replacement"
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
They have to be environment variables in order to be redeclared in each new container created for each line of the Dockerfile by docker build.
In other words, those variables aren't interpreted directly in a Dockerfile, but in a container created for a Dockerfile line, hence the use of environment variables.
These days, I use both ARG (docker 1.10+, with docker build --build-arg var=value) and ENV.
Using ARG alone means your variable is visible at build time, not at runtime.
My Dockerfile usually has:
ARG var
ENV var=${var}
In your case, ARG is enough: I typically use it for setting the http_proxy variable, which docker build needs for accessing the internet at build time.
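As an illustration of that use case (a sketch; the proxy address is a placeholder):
ARG http_proxy
ENV http_proxy=${http_proxy}
RUN apt-get update
built with:
docker build --build-arg http_proxy=http://proxy.example.com:3128 .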
Christopher King adds in the comments:
Watch out!
The ARG variable is only in scope for the stage in which it is used and needs to be redeclared for each stage.
He points to Dockerfile / scope:
An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile not from the argument’s use on the command-line or elsewhere.
For example, consider this Dockerfile:
FROM busybox
USER ${user:-some_user}
ARG user
USER $user
# ...
A user builds this file by calling:
docker build --build-arg user=what_user .
The USER at line 2 evaluates to some_user as the user variable is defined on the subsequent line 3.
The USER at line 4 evaluates to what_user as user is defined and the what_user value was passed on the command line.
Prior to its definition by an ARG instruction, any use of a variable results in an empty string.
An ARG instruction goes out of scope at the end of the build stage where it was defined.
To use an arg in multiple stages, each stage must include the ARG instruction.
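A minimal multi-stage sketch of that rule (image, variable name, and values are illustrative):
ARG VERSION=1.0

FROM busybox AS build
ARG VERSION
RUN echo "build stage sees $VERSION"

FROM busybox
ARG VERSION
RUN echo "final stage sees $VERSION"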
If the variable is re-used within the same RUN instruction, one could simply set a shell variable. I really like how they approached this with the official Ruby Dockerfile.
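For example (a minimal sketch, not taken from the Ruby Dockerfile itself), a variable that only needs to live within one RUN instruction can be set and used in the same shell:
RUN set -eux; \
    builddir="/tmp/build"; \
    mkdir -p "$builddir"; \
    echo "working in $builddir"; \
    rm -rf "$builddir"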
You can use ARG variable=defaultValue, and at build time you can override this value using --build-arg variable=value. To use these variables in the Dockerfile, you can refer to them as $variable in RUN commands.
Note: these variables are available to shell commands like RUN echo $variable, but they don't persist in the image.
Late to the party, but if you don't want to expose environment variables, I guess it's easier to do something like this:
RUN echo 1 > /tmp/__var_1
RUN echo `cat /tmp/__var_1`
RUN rm -f /tmp/__var_1
I ended up doing this because we host private npm packages in AWS CodeArtifact:
RUN aws codeartifact get-authorization-token --output text > /tmp/codeartifact.token
RUN npm config set //company-123456.d.codeartifact.us-east-2.amazonaws.com/npm/internal/:_authToken=`cat /tmp/codeartifact.token`
RUN rm -f /tmp/codeartifact.token
And here ARG cannot work, and I don't want to use ENV because I don't want to expose this token to anything else.