How to view a Drone secret using the CLI? - drone.io

I want to see the value of a Drone secret. Is there any way to view Drone secrets using the drone CLI? I don't want to view it using echo "$secret" in the build.

You can't use the Drone CLI to see a secret's value once it has been set.

There are no public methods (API/CLI) to retrieve secret values.
However, Drone stores its secrets unencrypted in the database, in hex encoding. If you are a server admin, you can retrieve them.
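If you do have admin access to the server, a rough sketch of pulling a secret back out might look like this; the database path, table, and column names below are assumptions and vary by Drone version, so inspect the schema first:
# Sketch only -- database path, table and column names are assumptions; check
# your schema with: sqlite3 /var/lib/drone/database.sqlite '.schema'
sqlite3 /var/lib/drone/database.sqlite \
  "SELECT secret_name, secret_data FROM secrets;" |
while IFS='|' read -r name hex; do
  printf '%s: ' "$name"
  echo "$hex" | xxd -r -p    # decode the hex-encoded value
  echo
done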

Related

Fastlane Match with Gitlab Secure Files - Can't use different private token for code signing repo in CI/CD

We've been having some issues getting Fastlane Match to work in Gitlab CI using access tokens from within an existing CI pipeline.
The setup:
Repo for storing the certs / profiles: set up during fastlane init. We'll call this the "Cert Repo".
Repo for our React Native project: uses fastlane to handle builds/uploading to App Center and Testflight. We'll call this the "Project Repo"
Setup of match via match init went fine. We did the setup via terminal on the build server.
In our Matchfile:
gitlab_project("PATH_TO_CERT_REPO_HERE")
storage_mode("gitlab_secure_files")
app_identifier(["APP_IDENTIFIER_HERE"])
username("APPLE_ACCOUNT_USERNAME_HERE")
keychain_password("KEYCHAIN_PW_HERE")
team_id("TEAM_ID_HERE")
We had to pass some env vars to the command: our GitLab Enterprise API URL (as "CI_API_V4_URL") and the Cert Repo access token (as "PRIVATE_TOKEN").
We ran match for all cert/profile types we needed and they all uploaded to the secure files section of the Cert Repo correctly.
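For illustration, the invocation looked roughly like this (the match type and placeholder values below are assumptions):
# Rough sketch -- match type and values are placeholders.
export CI_API_V4_URL="https://gitlab.example.com/api/v4"
export PRIVATE_TOKEN="<cert-repo-access-token>"
bundle exec fastlane match appstore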
The Problem:
Our branches in the Project Repo use Gitlab CI to run various scripts and call a fastlane lane that will do the versioning, certs/profiles, and then upload the build to App Center or Testflight.
When we run match in readonly mode in our lane this way, match is failing with a 401 error.
Looking into the source for match and its secure-files storage, it seems that if you have a PRIVATE_TOKEN env var set, match will warn that both JOB_TOKEN and PRIVATE_TOKEN are set and will use the JOB_TOKEN.
The JOB_TOKEN is provided via Gitlab CI itself.
My guess is the JOB_TOKEN for this pipeline is not a valid token to authenticate against the CERT_REPO, which match needs to download the certs/profiles.
How is this supposed to work if I can't pass in a token for match to use for the CERT_REPO?
If we were using normal git storage, we could pass in the git_basic_authorization argument with a base64-encoded "username:access_token" string, which I'm assuming would solve the problem.
But using gitlab_secure_files, you can only use tokens.
Before we go and redo everything to use git storage and not gitlab secure files, can someone explain what we're missing here?
How is match supposed to authenticate with the Cert Repo from within the Project Repo in CI if the token it has is for the Project Repo? Doesn't it need the token for the Cert Repo to authenticate?
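For comparison, with plain git storage the git_basic_authorization argument mentioned above would be passed roughly like this (a sketch only, with placeholder credentials; it is not available with gitlab_secure_files):
# Git-storage sketch only -- not applicable to gitlab_secure_files.
bundle exec fastlane match appstore readonly:true \
  git_basic_authorization:$(echo -n "username:access_token" | base64)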

How do I get revision tags from google cloud source repository via curl?

What I want
I have a python backend application, using a service account, running in docker.
I have a cloud build trigger that is connected to a bitbucket repository. This trigger uses a webhook. For revision I use tags.
I want to trigger this webhook with my backend application. I want to provide a specific tag (using a placeholder variable).
I want the backend to give me a list of all available tags (like I get on the console.google.com frontend, see screenshot)
What I tried
I tried this API endpoint using a Bearer token (which works fine), but it doesn't provide me with a tag list: Source Repo API
curl 'https://sourcerepo.googleapis.com/v1/projects/<project>/repos/<repo>' --header "Authorization: Bearer $(gcloud auth print-access-token)" --header 'Accept: application/json'
Because it is possible to retrieve all tags in the cloud console, I used the developer tools to find the endpoint that provides me with all available tags:
https://console.cloud.google.com/m/source/repos/get?project=<project>&repo=<repo>
My issue here is that it requires cookies to authenticate; if I use the Bearer token it does not work.
Is it possible to authenticate my service account automatically against console.google.com to use this endpoint? Or is there another way to get a list of tags?
From what you have explained I understand that your concerns are:
1. If there is a way to get the list of tags from your repository that you are able to see in the GCP console using the endpoint that you have found.
The information that the console displays regarding tags does not come from any REST or gRPC API (the APIs provided by Google), but rather comes directly from the git API. The console frontend runs a command similar to git tag in order to get the tags from your repository. The tags are not stored within the GCP system; the console only queries the git repo for the tags.
2. Can I authenticate with a service account on the console?
No. The APIs used by the web frontends (i.e. APIs starting with https://console.cloud.google.com) only allow cookie authentication, which only user accounts can obtain. There is usually a way to translate a frontend API (https://console.cloud.google.com) to a GCP API (https://*.googleapis.com), where you can use regular authentication to retrieve the information. However, in this case the tag information is not in a GCP API (it lives inside the git repo), so there is no translation available.
3. If there is another way to list the possible tags present in the repository?
I tried to reproduce your situation to find a way to list the tags present in a repository, in this case a Bitbucket repository, and I found that you can get this data using the git tag command. In this documentation you will find all the commands related to repository tags.
Knowing this, after linking the Bitbucket repository, I was able to get the list of tags by running git tag.
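A minimal sketch of that approach, assuming the repository is mirrored into Cloud Source Repositories and the service account is allowed to read it (project and repo names are placeholders):
# Clone with the service account's gcloud credentials, then list tags locally.
gcloud source repos clone <repo> --project=<project>
cd <repo> && git tag
# Or list the tags without a full clone (requires the gcloud git credential
# helper to be configured for source.developers.google.com).
git ls-remote --tags https://source.developers.google.com/p/<project>/r/<repo>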

Can I use git-tfs with extra header?

I need to automate the git-tfs pull command in Azure DevOps.
I have no problem executing this command with my user/pass from a cmd window, but when it runs on an Azure DevOps build agent it doesn't finish; I'm sure it is an authentication problem.
Git commands can pass additional HTTP headers with a request. The Azure DevOps build agent uses this to pass an OAuth token in an Authorization: Bearer header when fetching the files.
Is it possible that git-tfs pull can pass the extra headers with the request?
Is it possible that git-tfs pull can pass the extra headers with the request?
No...
But... I haven't tried this and I'm guessing it won't work, since setting an OAuth credential tends to require a different object type in the TFVC Client Object Model, but this may do the trick:
git config --local tfs-remote.default.username .
git config --local tfs-remote.default.password $(SYSTEM.ACCESSTOKEN)
This will set the OAuth token in pipelines as the password to use for authentication. Make sure the pipeline has access to this special variable.
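Put together as an Azure DevOps script step, it might look roughly like this (a sketch only; the agent expands $(System.AccessToken) before the script runs, and only if the pipeline is allowed to use the token):
# Sketch of a pipeline script step -- $(System.AccessToken) is substituted by
# the agent, so git-tfs ends up authenticating with the pipeline's OAuth token.
git config --local tfs-remote.default.username .
git config --local tfs-remote.default.password "$(System.AccessToken)"
git tfs pull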
If the OAuth Token for the pipeline won't do the trick, you could try using a Personal Access Token instead.

Access Token for Dockerhub

I created a repository on hub.docker.com and now want to push my image to the Dockerhub using my credentials. I am wondering whether I have to use my username and password or whether I can create some kind of access token to push the docker image.
What I want to do is use the docker-image resource from Concourse to push an image to Docker Hub. Therefore I have to configure credentials like:
type: docker-image
source:
  email: {{docker-hub-email}}
  username: {{docker-hub-username}}
  password: {{docker-hub-password}}
  repository: {{docker-hub-image-dummy-resource}}
and I don't want to use my Dockerhub password for that.
In short, you can't. There are some solutions that may appeal to you, but it may ease your mind first to know there's a structural reason for this:
Resources are configured via their source and params, which are defined at the pipeline level (in your yml file). Any authentication information has to be defined there, because there's no way to get information from an earlier step in your build into the get step (it has no inputs).
Since bearer tokens usually time out after "not that long" (i.e. hours or days), which is also true of Docker Hub tokens, the Concourse instance needs to be able to fetch a new token from the authentication service every time the build runs if necessary. This requires some form of persistent auth to be stored in the Concourse server anyway, and currently Docker Hub does not support CI access tokens a la GitHub.
All that is to say, you will need to provide a username and password to Concourse one way or another.
If you're worried about security, there are some steps you can most likely take to reduce risk:
you can use --load-vars-from to keep your credentials from being saved in your pipeline, storing them elsewhere (LastPass, a local file, etc.); see the sketch after this list.
you might be able to create a user on Dockerhub that only has access to the particular repo(s) you want to push, a "CI bot user" if you will.
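A sketch of the first option, assuming the credentials live in a local credentials.yml that stays out of version control (target and pipeline names are placeholders):
# credentials.yml supplies docker-hub-username, docker-hub-password, etc.;
# fly substitutes the {{...}} variables when the pipeline is set.
fly -t <target> set-pipeline -p <pipeline> -c pipeline.yml \
  --load-vars-from credentials.yml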
Docker Hub supports access tokens.
Go to Account Settings > Security.
It's the same as a GitHub personal access token (PAT).
You can use this token instead of your actual password.
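For example, once a token has been generated it can be used anywhere the password would go (username and token below are placeholders):
# The Docker Hub access token simply takes the place of the account password.
echo "<access-token>" | docker login -u <docker-hub-username> --password-stdin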

Back-end access and secret keys required?

Are Docker Registry S3 back-end access and secret keys required? I don't understand why.
I use an IAM role and can't get access and secret keys from that. Previously I didn't have to provide an access or secret key in the S3 settings of the Docker registry, and it worked automatically since the IAM role granted the server access to the S3 resources. Now the keys are required in the YAML settings (I use Docker Compose to spin up the registry) and it won't start without them.
Is there some way to get around this without having to add an IAM user?
I got it working. All I did was take out the following fields and let the Docker registry do its magic (picking up the access/secret key from the IAM role, I assume).
Delete these 2 entries from docker-compose.yml:
REGISTRY_STORAGE_S3_ACCESSKEY: xxx
REGISTRY_STORAGE_S3_SECRETKEY: xxx
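For illustration, the remaining S3 settings can stay as plain environment variables and the registry falls back to the instance's IAM role for credentials; a rough equivalent as a single docker run (region and bucket are placeholders):
# No REGISTRY_STORAGE_S3_ACCESSKEY / SECRETKEY set -- the registry picks up
# credentials from the instance's IAM role instead.
docker run -d -p 5000:5000 \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_REGION=<aws-region> \
  -e REGISTRY_STORAGE_S3_BUCKET=<bucket-name> \
  registry:2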