Drone.io secrets not populating in yml appropriately and documentation seems inaccurate

I am running version 0.8.4 as a container in my lab. The CLI is also at version 0.8.4.
I am trying to use a secret in a command one of my containers is trying to run.
Following the documentation, I apparently need to sign the repo to allow the job to consume the secret, but the drone CLI does not seem to have a drone sign command for me to run. So I created the secret with the --skip-verify=true flag instead. This creates the secret, but when I run the job it errors out, and the output in the UI shows a blank space where the secret should be injected.
Here is an excerpt of my .drone.yml where I am trying to inject the secrets:
-s production -u ${cf_user} -p ${cf_password} --s
I have tried all the following ways to create a secret:
drone secret add <repo_name> --name <key> --value <value> --skip-verify=true
drone secret add <repo_name> --name <key> --value <value>
GUI Creation
I notice that when I create a secret with an all-capital name, the UI displays it in all lowercase while the CLI shows it in capitals.
I also notice that if I include hyphens in the name and try to use that in my drone.yml, the job errors out immediately with a bad substitution error.
Any help understanding what I am doing wrong would be much appreciated!

I got lost in the different documentation available. I should have been looking here rather than at the secret-guide.
In case I am not alone: I needed to add a secrets block in my pipeline.
I also needed to access them with $SECRET_KEY rather than ${SECRET_KEY}
pipeline:
  publish:
    image: governmentpaas/cf-cli
    secrets: [ cf_user, cf_password ]
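For illustration, a minimal sketch of what the publish step might look like with a commands entry added (the cf invocation and API endpoint are assumptions, not from the original pipeline):
pipeline:
  publish:
    image: governmentpaas/cf-cli
    secrets: [ cf_user, cf_password ]
    commands:
      # Drone 0.8 injects the listed secrets as uppercase environment variables
      - cf login -a https://api.example.com -u $CF_USER -p $CF_PASSWORD -s production
      - cf push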

Just a little update on this one: I stumbled over it as well because the docs are inconsistent.
In version 0.8.5 the only things I had to do were:
add the secrets via the CLI or UI
add a secrets array to the step that uses them
no need to pass the variables through environment

How to Use Docker Build Secrets with Kaniko

Context
Our current build system builds Docker images inside of a Docker container (Docker in Docker). Many of our Docker builds need credentials to be able to pull from private artifact repositories.
We've handled this with Docker secrets: passing the secret into the docker build command and, in the Dockerfile, referencing the secret in the RUN command where it's needed. This means we're using Docker BuildKit. This article explains it.
We are moving to a different build system (GitLab) and the admins have disabled Docker in Docker (security reasons) so we are moving to Kaniko for docker builds.
Problem
Kaniko doesn't appear to support secrets the way docker does. (there are no command line options to pass a secret through the Kaniko executor).
The credentials the docker build needs are stored in GitLab variables. For DinD, you simply add those variables to the docker build as a secret:
DOCKER_BUILDKIT=1 docker build . \
  --secret=type=env,id=USERNAME \
  --secret=type=env,id=PASSWORD
And then in the Dockerfile, use the secret:
RUN --mount=type=secret,id=USERNAME --mount=type=secret,id=PASSWORD \
USER=$(cat /run/secrets/USERNAME) \
PASS=$(cat /run/secrets/PASSWORD) \
./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
Without a --secret flag on the Kaniko executor, I'm not sure how to take advantage of Docker secrets, nor do I understand the alternatives. I also want to continue to support developer builds: we have a build.sh script that takes care of gathering credentials and adding them to the docker build command.
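For context, here is a minimal sketch of what such a build.sh might look like under the env-based approach above (the variable names and image tag are assumptions, not the actual script):
#!/usr/bin/env bash
# Developer build wrapper: gather credentials and pass them as BuildKit env secrets.
set -euo pipefail

# Assumed source of credentials; the real script may prompt or read a config file.
export USERNAME="${REPO_USER:?set REPO_USER}"
export PASSWORD="${REPO_PWD:?set REPO_PWD}"

DOCKER_BUILDKIT=1 docker build . \
  --secret id=USERNAME,type=env \
  --secret id=PASSWORD,type=env \
  -t my-image:dev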
Current Solution
I found this article and was able to sort out a working solution. I want to ask the experts if this is valid or what the alternatives might be.
I discovered that when the Kaniko executor runs, it appears to mount a volume into the image that's being built, at /kaniko. That directory does not exist when the build is complete and does not appear to be cached in the Docker layers.
I also found out that if the Dockerfile secret is not passed in via the docker build command, the build still executes.
So my gitlab-ci.yml file has this excerpt (the REPO_USER/REPO_PWD variables are GitLab CI variables):
- echo "${REPO_USER}" > /kaniko/repo-credentials.txt
- echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
- /kaniko/executor
--context "${CI_PROJECT_DIR}/docker/target"
--dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
--destination "${IMAGE_NAME}:${BUILD_TAG}"
The key piece here is echoing the credentials to a file in the /kaniko directory before calling the executor. That directory is (temporarily) mounted into the image which the executor is building, and since all this happens inside of the Kaniko image, the file disappears when the Kaniko (GitLab) job completes.
The developer build script (snip):
# to keep it simple, this assumes that the developer has their credentials
# cached in a file (ignored by git) called dev-credentials.txt
DOCKER_BUILDKIT=1 docker build . \
--secret id=repo-creds,src=dev-credentials.txt
Basically the same as before; I had to put the credentials in a file instead of environment variables.
The Dockerfile (snip):
RUN --mount=type=secret,id=repo-creds,target=/kaniko/repo-credentials.txt \
    USER=$(sed '1q;d' /kaniko/repo-credentials.txt) \
    PASS=$(sed '2q;d' /kaniko/repo-credentials.txt) \
    ./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
This Works!
By mounting the secret at a path under /kaniko in the Dockerfile, it works with both the DinD developer build and the CI Kaniko executor.
For dev builds, the DinD secret works as always (I had to change it to a file rather than environment variables, which I didn't love).
When the build is run by Kaniko, I suppose that since the secret in the RUN command is not found, it doesn't even try to write the temporary credentials file (which I expected would fail the build). Instead, because I directly wrote the variables to the temporarily mounted /kaniko directory, the rest of the RUN command was happy.
Advice
To me this does seem more kludgy than expected, and I'd like to find out about other or alternative solutions. Discovering that the /kaniko folder is mounted into the image at build time seems to open a lot of possibilities.

In what order is the serverless file evaluated?

I have tried to find out in what order the statements of the serverless file are evaluated (maybe it is more common to say that 'variables are resolved').
I haven't been able to find any information about this and to some extent it makes working with serverless feel like a guessing game for me.
As an example, the latest surprise I got was when I tried to run:
$ sls deploy
serverless.yaml
useDotenv: true
provider:
  stage: ${env:stage}
  region: ${env:region}
.env
region=us-west-1
stage=dev
I got an error message stating that env is not available at the time when stage is resolved. This was surprising to me since I have been able to use env to resolve other variables in the provider section, and there is nothing in the syntax to indicate that stage is resolved earlier.
In what order is the serverless file evaluated?
In effect you've created a circular dependency. Stage is special because it is needed to identify which .env file to load: ${env:stage} would be resolved from ${stage}.env, but Serverless needs to know what ${stage} is in order to find ${stage}.env in the first place.
This is why stage is evaluated first.
Stage (and region, actually) are both optional CLI parameters. In your serverless.yml file what you're setting is a default, with the CLI parameter overriding it where different.
Example:
provider:
  stage: staging
  region: ca-central-1
Running serverless deploy --stage prod --region us-west-2 will result in prod and us-west-2 being used for stage and region (respectively) for that deployment.
I'd suggest removing any variable interpolation for stage and instead setting a default, and overriding via CLI when needed.
Then dotenv will know which environment file to use, and complete the rest of the template.
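A minimal sketch of that approach (the provider block is illustrative, not taken from the original config):
useDotenv: true
provider:
  name: aws
  stage: dev            # plain default, overridden with e.g. sls deploy --stage prod
  region: ${env:region} # resolved from .env once the stage is known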

Snakemake - Tibanna config support

I am trying to run snakemake --tibanna to deploy Snakemake on AWS using the "Unicorn" Step Functions Tibanna creates.
I can't seem to find a way to change the different arguments Tibanna accepts, like which subnet, AZ, or security group will be used for the actual EC2 instance that gets deployed.
Argument example (when running Tibanna without Snakemake):
https://github.com/4dn-dcic/tibanna/blob/master/test_json/unicorn/shelltest4.json#L32
Thanks!
Did you notice this option?
snakemake --help
--tibanna-config TIBANNA_CONFIG [TIBANNA_CONFIG ...]
                      Additional tibanna config e.g. --tibanna-config
                      spot_instance=true subnet= security
                      group=
I think it was added recently.
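For example, something along these lines should pass the networking settings through (the key names follow Tibanna's config fields and the IDs are placeholders, so treat this as a sketch rather than a tested command):
snakemake --tibanna \
  --tibanna-config spot_instance=true subnet=subnet-0123456789abcdef0 security_group=sg-0123456789abcdef0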
-jk

Drone Repo Add throwing error - No help topic for 'add'

Getting the following error when using the drone CLI to add/activate a repo:
No help topic for 'add'
I can confirm I am successfully logged in and I am an admin.
{"id":1,"login":"XXXXX","email":"","machine":false,"admin":true,"active":true,"avatar":"https://bitbucket.org/account/XXXX/avatar/32/","syncing":false,"synced":1578888217,"created":1578431775,"updated":1578891320,"last_login":1578891344}
I can also list my repo using 'drone repo ls'
My guess, if you are using the add option, is that you are still interacting with Drone 0.8 or below. In this case the docs have been archived to an alternate location in favor of the latest version (v1.x). The old docs are still available under the following URL, and help for the add option is present there:
https://0-8-0.docs.drone.io/cli-repository-add/
If you are not using 0.8 and are indeed trying to use 1.x, perhaps you are referencing outdated documentation, as this CLI subcommand changed in v1 to enable:
$ drone repo enable <repo/name>
Regardless of the version, however, you will want to ensure you both have admin access to the repository (so that Drone is able to add the appropriate webhooks) and also refresh or sync your repo listing if it is something brand new:
$ drone repo sync
username/hello-world
organization/minio
...
NOTE: This might take a bit depending on how many repos you have access to
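Once the sync finishes and the repository shows up in that listing, activating it on 1.x is just the enable command shown above, e.g. with one of the slugs from the sync output:
$ drone repo enable username/hello-world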

Delete or reset Gitlab CI builds

Is it possible to delete old builds in Gitlab CI?
I tested a few things and have now about 20 builds that are useless (most are failed anyway).
It also shows stages that I don't have anymore, which kind of clutters the Pipelines page, and some of the uploaded artifacts are a bit big.
I wasn't able to find any documentation on this, only that disabling CI in the settings doesn't remove the builds.
Using Gitlab 8.10 Community (hosted by Gitlab.com)
There is currently no option in the GUI to completely get rid of a build, other than expunging the related data from the build (the erase option on the build page).
If you had a local installation you could modify the database directly, but I would advise caution. (I'll put the guide here for completeness' sake.)
Log in to the GitLab database. If you use the default PostgreSQL:
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d gitlabhq_production
Check if there is a table ci_builds. For psql: \dt
Delete the builds with normal SQL. For example: DELETE FROM ci_builds WHERE id = 2
(Optional) If you want to clean up the list of commits which triggered a build, you need to modify the table ci_commits.
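A sketch of that optional cleanup, assuming the GitLab 8.x schema where ci_builds points at its pipeline via a commit_id column (verify against your own schema before running anything destructive):
-- remove ci_commits rows that no longer have any builds referencing them
DELETE FROM ci_commits
WHERE id NOT IN (SELECT DISTINCT commit_id FROM ci_builds WHERE commit_id IS NOT NULL);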