As you may know, GitLab Runner currently cannot mask SSH keys due to its pattern requirements, but there are some workarounds.
My question is: do we even need to mask the SSH keys? It seems GitLab Runner won't even log them unless you explicitly make it.
For example, a common usage of an SSH key in .gitlab-ci.yml is something like this:
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
Correct me if I'm wrong, but this command would not actually log or show the plain-text value of $SSH_PRIVATE_KEY in the runner's output or logs, right?
Or, if I'm wrong, where would the runner have logged the plain-text value?
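For what it's worth, and assuming the usual behaviour where the job log echoes script lines as written rather than expanded, the value would normally only appear in the log if a command prints it explicitly, for example:
- echo $SSH_PRIVATE_KEY   # prints the key itself into the job log
- set -x                  # xtrace then echoes expanded commands, key included
- env                     # dumps all variables, including SSH_PRIVATE_KEY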
Here's what I've got so far:
I've generated an SSH key pair inside my repo and also added the public key to my ~/.ssh/authorized_keys on the remote host.
My remote host has root user and password login disabled for security. I put the SSH username I use to log in manually inside an environment variable called SSH_USERNAME.
Here's where I'm just not sure what to do. How should I fill out my bitbucket-pipelines.yml?
Here are the raw contents of that file... What should I add?
# This is a sample build configuration for JavaScript.
# Check our guides at https://confluence.atlassian.com/x/14UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: samueldebruyn/debian-git
pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - sftp $FTP_USERNAME@192.241.216.482
First of all: you should not add a key pair to your repo. Credentials should never be in a repo.
Defining the username as an environment variable is a good idea. You should do the same with the private key of your keypair. (But you have to Base64-encode it – see the Bitbucket Pipelines documentation – and mark it as secure, so it is not visible in the repo settings.)
Then, before you actually want to connect, you have to make sure the private key (of course, Base64-decoded) is known to your pipeline’s SSH setup.
Basically, what you need to do in your script (either directly or in a shell script) is:
- echo "$SSH_PRIVATE_KEY" | base64 --decode > ~/.ssh/id_rsa
- chmod go-r ~/.ssh/id_rsa
BTW, I’d suggest also putting the host’s IP in an env variable.
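Putting it all together, the script section of your bitbucket-pipelines.yml might look roughly like this. This is only a sketch: SSH_PRIVATE_KEY, SSH_USERNAME and SERVER_IP are assumed to be secured repository variables, and the image is assumed to ship the OpenSSH client tools.
- mkdir -p ~/.ssh
- echo "$SSH_PRIVATE_KEY" | base64 --decode > ~/.ssh/id_rsa
- chmod go-r ~/.ssh/id_rsa
- ssh-keyscan -H $SERVER_IP >> ~/.ssh/known_hosts
- sftp $SSH_USERNAME@$SERVER_IP
The ssh-keyscan line pre-populates known_hosts so the non-interactive build is not stopped by the host-key confirmation prompt.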
Everything works fine, but it keeps prompting for the SSH passphrase during provisioning, which is very annoying: when you have 6 VMs it will prompt you something like 12 times (and the whole automation piece kinda loses its point).
I've tried searching the web, but couldn't find an answer to a pretty obvious question.
There are various ways to prevent this.
The first and most obvious (but least preferable) is to remove the passphrase from the key:
ssh-keygen -p -P old_passphrase -N "" -f /path/to/key_file
The other possibility is to use ssh-agent, which holds the decrypted key in memory and performs the required operations on it when asked. You can find many guides and questions about it, but for completeness:
eval $(ssh-agent)
ssh-add /path/to/key_file
do-your-vagrant-stuff
You can use sshpass, which will provide the passphrase to the ssh commands. It can read the passphrase as an argument, from an environment variable, or from a file (which can be insecure):
sshpass -p password your-vagrant-stuff
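Note that sshpass looks for a password prompt by default; for a key passphrase prompt you would likely need its -P option to change the prompt string it matches, along these lines (your-vagrant-stuff stands in for whatever command you run):
sshpass -P "passphrase" -p your_passphrase your-vagrant-stuff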
There are probably other ways, but you should most probably use ssh-agent.
I am trying to automate the process of ordering a Linux instance and also handle the SSH keys at the instance level.
Is it possible to generate an SSH key file for another user with the gcloud command line (without SSHing into the instance, which auto-generates keys)?
For Windows instances it looks like this:
I automate the instance creation
I automate generating the Windows password for Windows instances
I email the newly generated password to the requesting user
For Linux:
I have automated Linux instance creation
But what do I do next to generate an SSH key for another specific username, so that I can attach it to the email to the requesting user? The user does not have access to the GCE dashboard.
With AWS it's simple, because I create the keys before the instance and can attach them, but I don't know how to solve this automation issue with GCE.
Help?!
Thanks
Take a look at the instructions for "Adding and Removing SSH Keys", summarized here:
$ # Create a new SSH key pair with the correct format (`USERNAME` is your Google username)
$ ssh-keygen -t rsa -f ~/.ssh/[KEY_FILE_NAME] -C [USERNAME]
$ # Edit the file. It should look like the following line:
$ # [USERNAME]:ssh-rsa [KEY_VALUE] [USERNAME]
$ vim ~/.ssh/[KEY_FILE_NAME]
$ # Get the existing metadata for the instance:
$ gcloud compute instances describe [INSTANCE]
$ # Look for the "metadata" -> "ssh-keys" entry and merge your new SSH key in.
$ vim all_keys.txt # This is where the merged key list goes
$ gcloud compute instances add-metadata [INSTANCE_NAME] \
--metadata-from-file ssh-keys=all_keys.txt
The link contains advanced instructions for adding an expiration time, adding the key to the entire project, blocking project-wide keys from working on an instance, using the Cloud Console instead of gcloud, doing this on Windows, etc.
That said, I'd urge you to use caution when emailing SSH private keys around.
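To tie it back to the automation question, a rough sketch of scripting those steps might look like the following. It assumes jq is available, INSTANCE_NAME and NEW_USER are placeholders, and the JSON layout of metadata.items in the gcloud output is as assumed here.
#!/bin/bash
set -euo pipefail

INSTANCE_NAME=my-instance          # placeholder instance name
NEW_USER=requesting-user           # placeholder username for the new key

# 1. Generate a key pair for the user (no passphrase here, purely for automation).
ssh-keygen -t rsa -f "./${NEW_USER}_key" -C "${NEW_USER}" -N ""

# 2. Pull the instance's current ssh-keys metadata (empty if none is set yet).
gcloud compute instances describe "${INSTANCE_NAME}" --format=json \
  | jq -r '.metadata.items[]? | select(.key=="ssh-keys") | .value' > all_keys.txt

# 3. Append the new key in the USERNAME:ssh-rsa KEY format shown above.
echo "${NEW_USER}:$(cat "./${NEW_USER}_key.pub")" >> all_keys.txt

# 4. Push the merged list back to the instance metadata.
gcloud compute instances add-metadata "${INSTANCE_NAME}" \
  --metadata-from-file ssh-keys=all_keys.txt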
I am trying to write a script that makes using {ssh,gpg}-agent effortless (like keychain, but I discovered it too late). This script will be run in a sensitive environment, so I set a timeout in order to remove the keys from the agent after some time.
I managed to write the spawn/reuse part, but now I want ssh-add to be called automatically when the user opens an SSH connection if the agent has no proper key.
Is there any way to make ssh-agent call ssh-add on failure, or something better?
What I am doing (assuming the key has a distinctive name):
I have a script in ~/bin/ (which is in my PATH):
#!/bin/bash
if ! ssh-add -l | grep -q nameOfmyKey
then
    ssh-add -t 2h ~/path-to-mykeys/nameOfmyKey.key
fi
ssh myuser@myserver.example.net
ssh-add -l lists all keys currently active in the agent.
The -t parameter ensures that the key is enabled for a restricted time only.
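As an aside, if your OpenSSH is 7.2 or newer, the AddKeysToAgent client option can achieve much the same thing without a wrapper script, for example in ~/.ssh/config:
Host myserver.example.net
    AddKeysToAgent yes
    IdentityFile ~/path-to-mykeys/nameOfmyKey.key
With this, the first ssh to the host prompts for the passphrase and hands the decrypted key to the running agent, so later connections do not prompt again (though plain "yes" does not apply a lifetime the way ssh-add -t does).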
I work with several different servers, and it would be useful to be able to set some environment variables such that they are active on all of them when I SSH in. The problem is, the contents of some of the variables contain sensitive information (hashed passwords), and so I don't want to leave it lying around in a .bashrc file -- I'd like to keep it only in memory.
I know that you can use SSH to forward the DISPLAY variable (via ForwardX11) or an SSH Agent process (via ForwardAgent), so I'm wondering if there's a way to automatically forward the contents of arbitrary environment variables across SSH connections. Ideally, something I could set in a .ssh/config file so that it would run automatically when I need it to. Any ideas?
You can, but it requires changing the server configuration.
Read the entries for AcceptEnv in sshd_config(5) and SendEnv in ssh_config(5).
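For example (MYVAR here is a placeholder for whatever variable you want to forward):
On the server, in /etc/ssh/sshd_config (reload sshd afterwards):
AcceptEnv MYVAR
On the client, in ~/.ssh/config:
Host myserver
    SendEnv MYVAR
Then an exported MYVAR on the client is passed into the remote session's environment when you run ssh myserver.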
Update:
You can also pass them on the command line:
ssh foo@host "FOO=foo BAR=bar doz"
Regarding security, note that anybody with access to the remote machine will be able to see the environment variables passed to any running process.
If you want to keep that information secret it is better to pass it through stdin:
cat secret_info | ssh foo@host remote_program
You can't do it automatically (except for $DISPLAY which you can forward with -X along with your Xauth info so remote programs can actually connect to your display) but you can use a script with a "here document":
ssh ... <<EOF
export FOO="$FOO" BAR="$BAR" PATH="\$HOME/bin:\$PATH"
runRemoteCommand
EOF
The unescaped variables will be expanded locally and the result transmitted to the remote side. So the PATH will be set with the remote value of $HOME.
THIS IS A SECURITY RISK: don't transmit sensitive information like passwords this way, because anyone can see the environment variables of every process on the same computer.
Something like:
ssh user@host bash -c "set -e; $(env); . thescript.sh"
...might work (untested)
Bit of a hack, but if you cannot change the server config for some reason it might work.