I want to add an SSH key to my Drone secrets, but I can't get the command to work. I've tried many versions of the command found on various Stack Overflow/documentation pages, but none of them work. The command help also gives an entirely different syntax than the one in the documentation...
From the documentation:
drone secret add \
--repository <registry> \
--image <image> \
--name <name> \
--value <value>
This just doesn't work for me.
Then I found this Stack Overflow question about adding SSH keys to secrets. The answer shows yet another syntax; I tried it that way, and it still doesn't work.
When I try to use the command, I get this:
Incorrect Usage.

NAME:
   drone secret add - adds a secret

USAGE:
   drone secret add [command options] [repo] [key] [value]

OPTIONS:
   --event [--event option --event option]   inject the secret for these event types
   --image [--image option --image option]   inject the secret for these image types
   --input                                    input secret value from a file
   --skip-verify                              skip verification for the secret
   --conceal                                  conceal secret in build logs
Which suggests it should be used like this:
drone secret add user/repo SSH_KEY <my_id_rsa>
But that doesn't work either.
So what's the actual way of using this command?
Turns out there is an option to add secrets in the web interface, on the settings page of a repo. I completely missed it, and the documentation doesn't mention it.
So there's no need to use the command line.
Related
We are using the community version of Tyk. When trying to add a new API via a file-based add and then reloading, it doesn't seem to work! It produces this in the logs.
tyk-gateway_1 | time="Jun 21 04:56:28" level=warning msg="Attempted administrative access with invalid or missing key!" prefix=main
When I execute this…
curl -H "x-tyk-authorization: '352d20ee67be67f6340b4c0605b044b7'" -s http://localhost:8080/tyk/reload/group | python3 -mjson.tool
I am 100% sure docker-compose picked up the secret listed in the tyk.standalone.conf file. I even logged into the created container and checked tyk.conf, and it has this very secret. Not sure what is wrong at this stage. It just doesn't seem to recognize it as an admin key. Any advice?
Please check whether the docker-compose.yaml file has the environment variable TYK_GW_SECRET set. If so, please use that value as the secret, as the environment variable takes precedence over the config values.
Please refer to https://tyk.io/docs/tyk-environment-variables/ for more information.
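A quick way to verify which secret the gateway actually sees is to inspect the container's environment. This is just a sketch: the container name here is guessed from the log prefix and may differ in your setup.

docker exec tyk-gateway_1 env | grep TYK_GW_SECRET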
Separately, please remove the single quotes wrapping the secret value in the header.
Please try the command below and let us know.
curl -H "x-tyk-authorization: 352d20ee67be67f6340b4c0605b044b7" -s http://localhost:8080/tyk/reload/group | python3 -mjson.tool
I’m using the FaunaDB dev container and I need to have a fixed secret key.
Here’s my current script to create the container and the db.
#!/usr/bin/env bash
docker pull fauna/faunadb
docker container stop faunadb || true && docker container rm faunadb || true
docker run --name faunadb -d \
--health-cmd="faunadb-admin status" --health-interval=5s \
-p 8443:8443 \
-p 8084:8084 \
fauna/faunadb
./docker/wait-for-healthy.sh faunadb 30
echo n | fauna add-endpoint http://localhost:8443/ --alias localhost --key secret
fauna create-database generator_dev --endpoint=localhost
fauna create-key generator_dev --endpoint=localhost
curl -u secret: http://localhost:8084/import --data-binary "@functions/schemas/schema.graphql"
I would like this command to always return the same secret key
fauna create-key generator_dev --endpoint=localhost
Is that possible?
I need a fixed secret key because I need to import the schema in the next step, so the easy way is to have a known secret key
Any idea is appreciated
By default, the Fauna Dev Docker image uses secret as the root-level admin key's secret. That would provide the consistency you seek without requiring additional key generation.
When you create a key, the BCrypt algorithm is employed, and the Snowflake-inspired document id is incorporated into the hash. That means that there is no way to "generate" a consistent key multiple times.
For most situations, where you are simulating a production workload, you would have to create a new key, capture the returned secret, and use the secret in subsequent queries. How you do that is up to you.
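For example, a minimal sketch of capturing the returned secret in the script above. This assumes the CLI prints a line like secret: <value>; check the actual output format of your fauna-shell version before relying on the grep pattern.

SECRET=$(fauna create-key generator_dev --endpoint=localhost | grep -oP 'secret:\s*\K\S+')
curl -u "$SECRET": http://localhost:8084/import --data-binary "@functions/schemas/schema.graphql"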
Here's what I've got so far:
I've generated an SSH key pair inside my repo and also added the public key to my ~/.ssh/authorized_keys on the remote host.
My remote host has root user and password login disabled for security. I put the SSH username I use to log in manually inside an environment variable called SSH_USERNAME.
Here's where I'm just not sure what to do. How should I fill out my bitbucket-pipelines.yml?
Here are the raw contents of that file... What should I add?
# This is a sample build configuration for JavaScript.
# Check our guides at https://confluence.atlassian.com/x/14UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: samueldebruyn/debian-git

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - sftp $FTP_USERNAME@192.241.216.482
First of all: you should not add a key pair to your repo. Credentials should never be in a repo.
Defining the username as an environment variable is a good idea. You should do the same with the private key of your key pair. (But you have to Base64-encode it – see the Bitbucket Pipelines documentation – and mark it as secured, so it is not visible in the repo settings.)
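For the encoding step, a one-liner like this on your local machine should produce a value you can paste into the secured variable (the path assumes the default key location; tr strips newlines so the value pastes cleanly):

base64 < ~/.ssh/id_rsa | tr -d '\n'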
Then, before you actually want to connect, you have to make sure the private key (of course, Base64-decoded) is known to your pipeline’s SSH setup.
Basically, what you need to do in your script (either directly or in a shell script) is:
- echo "$SSH_PRIVATE_KEY" | base64 --decode > ~/.ssh/id_rsa
- chmod go-r ~/.ssh/id_rsa
BTW, I’d suggest also putting the host’s IP in an env variable.
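Putting it all together, a rough sketch of the step in bitbucket-pipelines.yml. $SSH_USERNAME comes from your setup; $SSH_PRIVATE_KEY and $SSH_HOST are assumed variable names you would define in the repo settings:

- step:
    script:
      - mkdir -p ~/.ssh
      - echo "$SSH_PRIVATE_KEY" | base64 --decode > ~/.ssh/id_rsa
      - chmod go-r ~/.ssh/id_rsa
      - sftp $SSH_USERNAME@$SSH_HOST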
The following command leads to a series of reasonable prompts for information such as company information, contact info, etc. I'd like to be able to run it and pass that information as either parameters or a config file, but I can't figure out how from the docs (https://certbot.eff.org/docs/using.html#command-line-options). Any ideas?
letsencrypt certonly \
--webroot -w /letsencrypt/challenges/ \
--text --renew-by-default --agree-tos \
$domain_args \
--email=$EMAIL
Note that I am not trying to renew but to generate fresh new certificates.
Thank you
You should pass the --noninteractive flag to letsencrypt. According to the document that you linked to, that will produce an error telling you which other flags are necessary.
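For instance, a hedged variant of the question's command with that flag added (the domain and email values are placeholders):

letsencrypt certonly \
    --webroot -w /letsencrypt/challenges/ \
    --text --renew-by-default --agree-tos \
    --noninteractive \
    --email=admin@example.com \
    -d example.com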
When using ployst/letsencrypt the initial certificate creation can be done using their internal scripts. Those scripts already pass all the right arguments to make this an automated process and not an interactive one. The documentation has the following two steps that both create the certificate and apply it as a secret.
If your environment variables are already set properly, you don't even have to pass the -c 'EMAIL=...' part.
Generate a new set of certs
Once this container is running you can generate new certificates using:
kubectl exec -it <pod> -- bash -c 'EMAIL=fred@fred.com DOMAINS="example.com foo.example.com" ./fetch_certs.sh'
Save the set of certificates as a secret
kubectl exec -it <pod> -- bash -c 'DOMAINS="example.com foo.example.com" ./save_certs.sh'
This question may have been asked before, but I don't understand the concept. Can you please help me here?
Weird issue since this morning: I just pushed my file to Google Compute Engine, and then it shows the error below. I don't know where to start looking for that error.
ri#ri-desktop:~$ gcloud compute --project "project" ssh --zone "europe-west1-b" "instance"
Warning: Permanently added '192.xx.xx.xx' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
This occurs when your compute instance has PermitRootLogin no in its SSHD config and you try to log in as root. You can change the login user by adding username@ before the instance name. Here is a complete example:
gcloud compute instances create my-demo-compute \
--zone us-central1-f \
--machine-type f1-micro \
--image-project debian-cloud \
--image-family debian-8 \
--boot-disk-size=10GB
gcloud --quiet compute ssh user@hostname --zone us-central1-f
In the example above, gcloud will set the correct credentials and make sure you can log in. You can add the --quiet flag to skip the interactive SSH key questions.
One possible cause is that someone else in your project set the per-instance metadata for sshKeys (which overrides the project-wide metadata). When you run gcloud compute instances describe your-instance-name, do you see a key called sshKeys in the metadata items?
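For example (the instance name and zone are placeholders; the --format projection just narrows the output to the metadata section):

gcloud compute instances describe your-instance-name --zone us-central1-f --format="yaml(metadata)"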
It would also be helpful to see the contents of the latest log in ~/.config/gcloud/logs/. However, please make sure to scrub it of sensitive information.
I have a MacBook. After facing the same problem, I re-created my SSH key in this format and it works fine.
Generate your key with:
ssh-keygen -t rsa -C your_username
Copy the public key and paste it under the Compute Engine metadata:
cat ~/.ssh/id_rsa.pub
It should work fine.