With Terraform I generate an SSH key with the TLS provider's tls_private_key resource:
resource "tls_private_key" "cluster_key_private" {
algorithm = "ED25519"
}
Now, to automatically add the key to the ssh-agent keychain, I have tried running a null_resource:
resource "null_resource" "sshkey_add" {
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = <<-EOT
eval "$(ssh-agent -s)"
echo '${tls_private_key.cluster_key_private.private_key_openssh}' | tr -d '\r' | ssh-add -
EOT
}
}
Still, when I check for added keys:
❯ ssh-add -L
The agent has no identities.
Besides the fact that you shouldn't do this, I'm pretty sure it's impossible.
ssh-agent is a daemon that is run in a shell. If you run it from Terraform, then it's only alive as long as the Terraform process.
It's the equivalent of:
(eval "$(ssh-agent -s)"; ssh-add ~/.ssh/your-ssh-key)
Run that in a clean terminal session and you'll see that your-ssh-key will not be available afterwards. Note the parentheses - they run a subshell.
The bottom line is: you can't modify the parent environment from a subprocess.
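You can see the same isolation with any environment variable (FOO here is just an illustrative name):
( export FOO=bar )     # runs in a subshell
echo "FOO is: '$FOO'"  # prints an empty value - the export died with the subshell
The same thing happens to the SSH_AUTH_SOCK and SSH_AGENT_PID variables that eval "$(ssh-agent -s)" exports inside the local-exec shell: your interactive shell never sees them, so it keeps talking to its own agent, which never received the key.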
Related
So, I want to deploy from my GitLab pipelines onto a server over SSH. This is my .gitlab-ci.yml:
test_job:
  stage: test
  variables:
    GIT_STRATEGY: none # Disable Gitlab auto clone
  before_script:
    - 'command -v ssh-agent > /dev/null || ( apk add --update openssh )'
    - eval $(ssh-agent -s)
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "${SSH_PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-add ~/.ssh/id_rsa
    # Add server to known hosts
    - ssh-keyscan ${VM_IPADDRESS} >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    # Verify that key has been registered
    - ls ~/.ssh -al
    # Verify server connection
    - echo "Ping server"
    - ping ${VM_IPADDRESS} -c 5
  script:
    # Pull Git project on remote server
    - echo "Git clone from repository"
    - ssh -o PreferredAuthentications=publickey ${SSH_USER}@${VM_IPADDRESS} "
      rm -rf /tmp/src/${CI_PROJECT_NAME}/ &&
      git clone https://gitlab-ci-token:${CI_BUILD_TOKEN}@gitlab.my-domain.fr/user/project.git /tmp/src/${CI_PROJECT_NAME}/
      "
$SSH_PRIVATE_KEY contains the private SSH key I use daily to connect to that server. It works perfectly when I use it normally. ${SSH_USER} and ${VM_IPADDRESS} contain my username and the server address. I have already checked that all the values in these variables are correct on the worker.
This is the message I get when running this script:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
I'm quite stuck with this actually :(. Any help :) ?
Adding my public key id_rsa.pub to the ssh authorized_keys file on the server solved the problem for me. You also need to make sure your public key is added to the SSH keys in your GitLab profile.
Also, it's good to note that:
"Add the public key to the services that you want to have an access to from within the build environment. If you are accessing a private GitLab repository you must add it as a deploy key."
I have a GitLab pipeline in which I want to open an SSH connection to a server:
stages:
  - connect

connect-ssh:
  stage: connect
  script:
    - mkdir ~/.ssh
    - echo -e "$PROD_SSH_KEY" > file_rsa # SSH PRIVATE KEY
    - echo -e "$PROD_SSH_PASSPHRASE" > passfile # PASSPHRASE
    - chmod 600 file_rsa
    - cat passfile | ssh-add file_rsa # DOESN'T WORK
    - ssh -i $PROD_USER@$PROD_HOST
    - pwd
The job output shows:
$ cat passfile | ssh-add group-5_rsa
Could not open a connection to your authentication agent.
I have seen a few answers, but they weren't applicable to GitLab jobs.
What solution do I have for this situation?
Another approach was illustrated in gitlab-org/gitlab-runner issue 2418:
- putting the commands in a script, including the eval $(ssh-agent -s)
- sourcing said script
Calling a script like you did opens a sub-shell. The ssh-agent environment is therefore not available in the outer shell.
Sourcing the script, however, executes it in the current shell. This also means you should be careful with what you do in that script: you can overwrite environment variables, exit the main shell, etc.
In your case, just adding eval $(ssh-agent -s) might be enough, since sourcing the script would be the same as running those commands line by line in .gitlab-ci.yml itself.
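A minimal sketch of that sourcing idea, assuming a helper script called setup_ssh.sh (the name is illustrative; the passphrase prompt itself still has to be dealt with, e.g. via SSH_ASKPASS as shown in the GitHub Actions answer further down):
# setup_ssh.sh - meant to be sourced, not executed
eval $(ssh-agent -s)
chmod 600 file_rsa
ssh-add file_rsa
and in .gitlab-ci.yml:
script:
  - echo -e "$PROD_SSH_KEY" > file_rsa
  - . ./setup_ssh.sh    # sourced, so SSH_AUTH_SOCK/SSH_AGENT_PID survive in this shell
  - ssh-add -l          # the identity shows up here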
My goal is to store a passphrase-protected private key in GitHub secrets, but I don't know how to enter the passphrase through GitHub Actions.
What I've tried:
I created a private key without a passphrase and stored it in GitHub secrets.
.github/workflows/docker-build.yml
# This is a basic workflow to help you get started with Actions
name: CI

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          eval $(ssh-agent -s)
          echo "${{ secrets.SSH_PRIVATE_KEY }}" | ssh-add -
          ssh -o StrictHostKeyChecking=no root@${{ secrets.HOSTNAME }} "rm -rf be-bankaccount; git clone https://github.com/kidfrom/be-bankaccount.git; cd be-bankaccount; docker build -t be-bankaccount .; docker-compose up -d;"
I finally figured this out because I didn't want to go to the trouble of updating all my servers with a passphrase-less authorized key. Ironically, it probably took me longer to do this but now I can save you the time.
The two magic ingredients are: using SSH_AUTH_SOCK to share the agent between GH Action steps, and using ssh-add with DISPLAY=None and SSH_ASKPASS set to an executable script that prints your passphrase.
For your question specifically, you do not need SSH_AUTH_SOCK because all your commands run within a single job step. However, for more complex workflows, you'll need it set.
Here's an example workflow:
name: ssh with passphrase example

env:
  # Use the same ssh-agent socket value across all jobs
  # Useful when a GH action is using SSH behind-the-scenes
  SSH_AUTH_SOCK: /tmp/ssh_agent.sock

jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Start ssh-agent but set it to use the same ssh_auth_sock value.
      # The agent will be running in all steps after this, so it
      # should be one of the first.
      - name: Setup SSH passphrase
        env:
          SSH_PASSPHRASE: ${{secrets.SSH_PASSPHRASE}}
          SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
        run: |
          ssh-agent -a $SSH_AUTH_SOCK > /dev/null
          echo 'echo $SSH_PASSPHRASE' > ~/.ssh_askpass && chmod +x ~/.ssh_askpass
          echo "$SSH_PRIVATE_KEY" | tr -d '\r' | DISPLAY=None SSH_ASKPASS=~/.ssh_askpass ssh-add - >/dev/null

      # Debug print out the added identities. This will prove SSH_AUTH_SOCK
      # is persisted across job steps
      - name: Print ssh-add identities
        run: ssh-add -l

  job2:
    # NOTE: SSH_AUTH_SOCK will be set, but the agent itself is not
    # shared across jobs, each job is a new container sandbox
    # so you still need to setup the passphrase again
    steps: ...
Resources I referenced:
- SSH_AUTH_SOCK setting: https://www.webfactory.de/blog/use-ssh-key-for-private-repositories-in-github-actions
- GitLab and Ansible using a passphrase: How to run an ansible-playbook with a passphrase-protected-ssh-private-key?
You could try and use the webfactory/ssh-agent action, which comes from the study "Using a SSH deploy key in GitHub Actions to access private repositories" by Matthias Pigulla.
GitHub Actions only have access to the repository they run for. So, in order to access additional private repositories, create an SSH key with sufficient access privileges.
Then, use this action to make the key available with ssh-agent on the Action worker node. Once this has been set up, git clone commands using ssh URLs will just work.
# .github/workflows/my-workflow.yml
jobs:
  my_job:
    ...
    steps:
      - uses: actions/checkout@v1
      # Make sure the @v0.4.1 matches the current version of the
      # action
      - uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - ... other steps
I have Bamboo 6.6.0 and pipelines in YAML format.
The pipeline needs some secrets: passwords, configs, and SSH keys.
Bamboo Global variables are OK for passwords, but how can I store SSH keys and config files in a secure way in Bamboo?
The only decent way I've found so far is to encode the SSH key with base64, put it into a Bamboo Global variable (its name MUST contain the word "secret" so the value is not visible in the GUI and logs) and decode it during pipeline execution.
Example:
---
project:
key: LAMBDA
plan:
key: LAMBDA
name: GaaS Example Lambda Scripted
stages: #TODO: Clean workspace
- jobs:
- dockerImage: kagarlickij/example-lambda
scripts:
- echo "Creating .ssh dir.." && mkdir ~/.ssh
- echo "Exporting SSH key from Bamboo Global vars to env var.." && export BITBUCKET_SSH_KEY=${bamboo.bitbucket_ssh_key_secret}
- echo "Decrypting SSH key.." && if [ -f ~/.ssh/id_rsa ]; then rm -f ~/.ssh/id_rsa ; fi && echo $BITBUCKET_SSH_KEY | base64 --decode > ~/.ssh/id_rsa && chmod 400 ~/.ssh/id_rsa
- echo "Adding BitBucket to known_hosts.." && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
Most of my private projects are hosted in private repositories on gitlab.com (the hosted solution, not a privately hosted GitLab server). The sites are hosted on a DigitalOcean VPS.
I want to use GitLab CI to have every commit on the develop branch automatically deployed to the test server. Since I already have a clone of the repository on this test server, the easiest way to deploy automatically seems to be to have GitLab CI connect to the server over SSH and trigger a git pull.
This is the .gitlab-ci.yml I have now (the SSH before_script is copied from http://docs.gitlab.com/ce/ci/ssh_keys/README.html):
deploy to test:
  environment: test
  only:
    - develop
  before_script:
    # Install ssh-agent if not already installed, it is required by Docker.
    # (change apt-get to yum if you use a CentOS-based image)
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    # Run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
    # WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
    - mkdir -p ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  script:
    # Try and connect to the test server
    - ssh [myname]@[mydomain.com] "cd /var/www/test.[projectname].com/ && git pull"
The result of a commit on develop in the GitLab pipeline:
$ ssh [myname]@[mydomain.com] "cd /var/www/test.[projectname].com/ && git pull"
Warning: Permanently added '[mydomain.com],[255.255.255.255]' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
ERROR: Build failed: exit code 1
I have added the private key of my local user on my laptop to the SSH_PRIVATE_KEY variable on GitLab. The private key should work, since I can connect to the server from my laptop without providing a password.
Does anyone have this working? How can the gitlab.com worker connect to the server over SSH?
AFAIK, you can't do this:
# add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
- ssh-add <(echo "$SSH_PRIVATE_KEY")
The ssh-agent is not getting the key context, nor the FD. You should store the key in some temporary file and then add it to the agent (and potentially remove the file, if it is not needed anymore):
# add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
- echo "$SSH_PRIVATE_KEY" > key
- chmod 600 key
- ssh-add key
- rm key
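If you prefer not to leave a predictably named file lying around, the same idea works with a throwaway file; a sketch (not from the original answer) using mktemp - the script lines all run in the same shell, so the variable carries over:
# add ssh key stored in SSH_PRIVATE_KEY variable via a temporary file
- KEYFILE=$(mktemp)
- echo "$SSH_PRIVATE_KEY" > "$KEYFILE"
- chmod 600 "$KEYFILE"
- ssh-add "$KEYFILE"
- rm -f "$KEYFILE"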