How to send passphrase for ssh-add with GitHub Actions? - ssh

My goal is to store a private key with a passphrase in GitHub secrets, but I don't know how to enter the passphrase through GitHub Actions.
What I've tried:
I created a private key without a passphrase and stored it in GitHub secrets.
.github/workflows/docker-build.yml
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          eval $(ssh-agent -s)
          echo "${{ secrets.SSH_PRIVATE_KEY }}" | ssh-add -
          ssh -o StrictHostKeyChecking=no root@${{ secrets.HOSTNAME }} "rm -rf be-bankaccount; git clone https://github.com/kidfrom/be-bankaccount.git; cd be-bankaccount; docker build -t be-bankaccount .; docker-compose up -d;"

I finally figured this out because I didn't want to go to the trouble of updating all my servers with a passphrase-less authorized key. Ironically, it probably took me longer to do this but now I can save you the time.
The two magic ingredients are: using SSH_AUTH_SOCK to share the agent between GitHub Actions steps, and using ssh-add with DISPLAY=None and SSH_ASKPASS set to an executable script that supplies your passphrase.
For your question specifically, you do not need SSH_AUTH_SOCK because all your commands run within a single job step. However, for more complex workflows, you'll need it set.
Here's an example workflow:
name: ssh with passphrase example
env:
  # Use the same ssh-agent socket value across all jobs
  # Useful when a GH action is using SSH behind-the-scenes
  SSH_AUTH_SOCK: /tmp/ssh_agent.sock
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      # Start ssh-agent but set it to use the same ssh_auth_sock value.
      # The agent will be running in all steps after this, so it
      # should be one of the first.
      - name: Setup SSH passphrase
        env:
          SSH_PASSPHRASE: ${{secrets.SSH_PASSPHRASE}}
          SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
        run: |
          ssh-agent -a $SSH_AUTH_SOCK > /dev/null
          echo 'echo $SSH_PASSPHRASE' > ~/.ssh_askpass && chmod +x ~/.ssh_askpass
          echo "$SSH_PRIVATE_KEY" | tr -d '\r' | DISPLAY=None SSH_ASKPASS=~/.ssh_askpass ssh-add - >/dev/null
      # Debug: print the added identities. This will prove SSH_AUTH_SOCK
      # is persisted across job steps
      - name: Print ssh-add identities
        run: ssh-add -l
  job2:
    # NOTE: SSH_AUTH_SOCK will be set, but the agent itself is not
    # shared across jobs; each job is a new container sandbox,
    # so you still need to set up the passphrase again
    steps: ...
Resources I referenced:
SSH_AUTH_SOCK setting: https://www.webfactory.de/blog/use-ssh-key-for-private-repositories-in-github-actions
GitLab and Ansible using passphrase: How to run an ansible-playbook with a passphrase-protected-ssh-private-key?

You could try and use the webfactory/ssh-agent action, which comes from the study described in "Using a SSH deploy key in GitHub Actions to access private repositories" by Matthias Pigulla.
GitHub Actions only have access to the repository they run for. So, in order to access additional private repositories, create an SSH key with sufficient access privileges.
Then, use this action to make the key available with ssh-agent on the Action worker node. Once this has been set up, git clone commands using ssh URLs will just work.
# .github/workflows/my-workflow.yml
jobs:
  my_job:
    ...
    steps:
      - actions/checkout@v1
      # Make sure the @v0.4.1 matches the current version of the
      # action
      - uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - ... other steps

Related

How to make gitlab CI use ssh to clone a repository?

The company I work for has a private gitlab server that only supports ssh protocol when cloning a repository.
Inside this server, I have a gitlab-ci.yml file that uses the docker executor to run some scripts.
The script's execution fails because the job pulls the repository over https at an early stage. It generates this error message: fatal: unable to access 'https://gitlab.mycompany.com/path/to/the/repository/my_repo.git/': SSL certificate problem: unable to get local issuer certificate.
Where can I configure gitlab runner so that it uses ssh to clone the repository?
Here's the full execution log.
Running with gitlab-runner 12.7.1 (003fe500)
on my Group Runner Yh_yL3A2
Using Docker executor with image www.mycompany.com/path/to/the/image:1.0 ...
Pulling docker image www.mycompany.com/path/to/the/image:1.0 ...
Using docker image sha256:474e110ba44ddfje8ncoz4c44e91f2442547281192d4a82b88capmi9047cd8cb for www.mycompany.com/path/to/the/image:1.0 ...
Running on runner-Yh_yL3A2-project-343-concurrent-0 via b55d8c5ba21f...
Fetching changes...
Initialized empty Git repository in /path/to/the/repository/.git/
Created fresh repository.
fatal: unable to access 'https://gitlab.mycompany.com/path/to/the/repository/my_repo.git/': SSL certificate problem: unable to get local issuer certificate
ERROR: Job failed: exit code 1
Here's my .gitlab-ci.yml
image: www.mycompany.com/path/to/the/image:1.0
before_script:
  - eval $(ssh-agent -s)
  # Reference: https://docs.gitlab.com/ee/ci/ssh_keys/
  # Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
  # We're using tr to fix line endings which makes ed25519 keys work
  # without extra base64 encoding.
  # https://gitlab.com/gitlab-examples/ssh-private-key/issues/1#note_48526556
  #
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  #
  # Create the SSH directory and give it the right permissions
  #
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
stages:
  - deploy
deploy:
  stage: deploy
  tags:
    - infra
  only:
    refs:
      - master
  script:
    - /bin/sh run.sh
I cannot find an option to specify whether the docker executor should use ssh or https to clone the repository.

how to execute git commands in gitlab-ci scripts

I want to change a file and commit the changes inside a gitlab-ci pipeline.
I tried writing normal git commands in the script:
script:
  - git clone git@gitlab.url.to.project.git
  - cd project file
  - touch test.txt
  - git config --global user.name "${GITLAB_USER_NAME}"
  - git config --global user.email "${GITLAB_USER_EMAIL}"
  - git add .
  - git commit -m "testing autocommit"
  - git push
I get cannot find command git or something along those lines. I know it has something to do with tags, but if I try to add a git tag it says no active runner. Does anyone have an idea how to run git commands on gitlab-ci?
First you need to make sure you can actually use git, so either run your jobs on a shell executor located on a system that has git, or use a docker executor with an image that has git installed.
The next problem you will encounter is that you can't push to Git(lab), since you can't enter credentials.
So the solution is to create an SSH key pair and load the SSH private key into your CI environment through CI/CD variables; also add the corresponding public key to your Git(lab) account.
Source: https://about.gitlab.com/2017/11/02/automating-boring-git-operations-gitlab-ci/
Your .gitlab-ci.yml will then look like this:
job-name:
  stage: touch
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$GIT_SSH_PRIV_KEY")
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
    - mkdir -p ~/.ssh
    - cat gitlab-known-hosts >> ~/.ssh/known_hosts
  script:
    - git clone git@gitlab.url.to.project.git
    - cd project file
    - touch test.txt
    - git add .
    - git commit -m "testing autocommit"
    - git push
Gitlab CI/CD will clone the repository inside the running job automatically. What you need is the git command installed. You could use bitnami/git image to run the job in a container having the command installed.
This worked for me (trying to verify if a tag is available):
tag-available:
  stage: .pre
  image: bitnami/git:2.37.1
  script:
    # list all tags
    - git tag -l
    # check existence of tag "v1.0.0"
    - >
      if [ $(git tag -l "v1.0.0") ]; then
        echo "yes"
      else
        echo "no."
      fi
If you need to authorize the job against some GitLab registry (or API), please note that there are some predefined variables for user and password (tokens). For this kind of action you might be most interested in these variables:
$CI_REGISTRY_PASSWORD
$CI_REGISTRY_USER
$CI_REGISTRY
$CI_REPOSITORY_URL
$CI_DEPLOY_PASSWORD
$CI_DEPLOY_USER
For $CI_DEPLOY_USER and $CI_DEPLOY_PASSWORD to be populated, a deploy token named "gitlab-deploy-token" must be created for the project. Read more here.
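As a rough sketch of how those predefined variables are commonly used (the job name and image below are my own assumptions, not part of the answer above):

publish-image:
  image: docker:latest
  services:
    - docker:dind
  script:
    # log in to the project's container registry with the predefined credentials
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # or, if a deploy token named "gitlab-deploy-token" exists:
    # - docker login -u "$CI_DEPLOY_USER" -p "$CI_DEPLOY_PASSWORD" "$CI_REGISTRY"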

BitBucket deployment using SSH keys to remote server

I am trying to write a YAML pipeline script to deploy files that have been altered from my bitbucket repository to my remote server using ssh keys. The document that I have in place at the moment was copied from bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
            USER: $USER
            SERVER: $SERVER
            REMOTE_PATH: $REMOTE_PATH
            LOCAL_PATH: $LOCAL_PATH
I am getting the following error
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My ssh public and private keys are set up in bitbucket along with the fingerprint and host. The variables have also been set up.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to become:
- variables:
  - USER: $USER
  - SERVER: $SERVER
  - REMOTE_PATH: $REMOTE_PATH
  - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your - step directive has to be indented.
I have a bitbucket-pipelines.yml like this (using rsync instead of ssh):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm
pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it checks the formal yml structure and you can't commit an invalid file.
Even if you check the file in some other yaml editor it may look fine, but it is not necessarily valid according to the Bitbucket specification. Their online editor does a fine job.
Also, I suggest visiting their community on Atlassian Community, as it's very active; sometimes their staff members provide answers.
However, I struggle with the many dependencies needed to run tests properly (the actual bitbucket-pipelines.yml is becoming bigger and bigger).
Maybe there is some nicely prepared Docker image for this job.

Use GitLab CI to run tests locally?

If a GitLab project is configured on GitLab CI, is there a way to run the build locally?
I don't want to turn my laptop into a build "runner", I just want to take advantage of Docker and .gitlab-ci.yml to run tests locally (i.e. it's all pre-configured). Another advantage of that is that I'm sure that I'm using the same environment locally and on CI.
Here is an example of how to run Travis builds locally using Docker, I'm looking for something similar with GitLab.
For a few months now this has been possible using gitlab-runner:
gitlab-runner exec docker my-job-name
Note that you need both docker and gitlab-runner installed on your computer to get this working.
You also need the image key defined in your .gitlab-ci.yml file; otherwise it won't work.
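For reference, a minimal .gitlab-ci.yml with the image key might look like this (the job name here is just an example):

image: alpine:latest

my-job-name:
  script:
    - echo "run me locally with: gitlab-runner exec docker my-job-name"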
Here's the line I currently use for testing locally using gitlab-runner:
gitlab-runner exec docker test --docker-volumes "/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"
Note: You can avoid adding a --docker-volumes with your key setting it by default in /etc/gitlab-runner/config.toml. See the official documentation for more details. Also, use gitlab-runner exec docker --help to see all docker-based runner options (like variables, volumes, networks, etc.).
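As a sketch of that config.toml approach (the volume path comes from the command above; the other values are placeholders):

[[runners]]
  name = "local"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    volumes = ["/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"]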
Due to the confusion in the comments, I paste here the gitlab-runner --help result, so you can see that gitlab-runner can run builds locally:
gitlab-runner --help
NAME:
   gitlab-runner - a GitLab Runner
USAGE:
   gitlab-runner [global options] command [command options] [arguments...]
VERSION:
   1.1.0~beta.135.g24365ee (24365ee)
AUTHOR(S):
   Kamil Trzciński <ayufan@ayufan.eu>
COMMANDS:
   exec     execute a build locally
   [...]
GLOBAL OPTIONS:
   --debug      debug mode [$DEBUG]
   [...]
As you can see, the exec command is to execute a build locally.
Even though there was an issue to deprecate the current gitlab-runner exec behavior, it ended up being reconsidered and a new version with greater features will replace the current exec functionality.
Note that this process is to use your own machine to run the tests using docker containers. This is not to define custom runners. To do so, just go to your repo's CI/CD settings and read the documentation there. If you want to ensure your runner is executed instead of one from gitlab.com, add a custom and unique tag to your runner, ensure it only runs tagged jobs, and tag all the jobs you want your runner to be responsible for, as sketched below.
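For illustration, routing a job to your tagged runner might look like this (the tag and job names are hypothetical):

my-job:
  tags:
    - my-unique-runner-tag
  script:
    - ./run-tests.sh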
I use this docker-based approach:
Edit: 2022-10
docker run --entrypoint bash --rm -w $PWD -v $PWD:$PWD -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest -c 'git config --global --add safe.directory "*";gitlab-runner exec docker test'
For all git versions > 2.35.2, you must add safe.directory within the container to avoid fatal: detected dubious ownership in repository at.... This is also true for patched git versions < 2.35.2. The old command will not work anymore.
Details
0. Create a git repo to test this answer
mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."
1. Go to your git directory
cd my-git-project
2. Create a .gitlab-ci.yml
Example .gitlab-ci.yml
image: alpine
test:
  script:
    - echo "Hello Gitlab-Runner"
3. Create a docker container with your project dir mounted
docker run -d \
  --name gitlab-runner \
  --restart always \
  -v $PWD:$PWD \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
(-d) run container in background and print container ID
(--restart always) or not?
(-v $PWD:$PWD) Mount the current directory into the current directory of the container - Note: On Windows you could bind your dir to a fixed location, e.g. -v ${PWD}:/opt/myapp. Also, $PWD will only work in PowerShell, not in cmd.
(-v /var/run/docker.sock:/var/run/docker.sock) This gives the container access to the docker socket of the host so it can start "sibling containers" (e.g. Alpine).
(gitlab/gitlab-runner:latest) Just the latest available image from dockerhub.
4. Execute with
Avoid fatal: detected dubious ownership in repository at... More info
docker exec -it -w $PWD gitlab-runner git config --global --add safe.directory "*"
Actual execution
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test
#                  ^    ^             ^             ^    ^      ^
#                  |    |             |             |    |      |
#                 (a)  (b)           (c)           (d)  (e)    (f)
(a) Working dir within the container. Note: On Windows you could use a fixed location, e.g. /opt/myapp.
(b) Name of the docker container
(c) Execute the command "gitlab-runner" within the docker container
(d)(e)(f) run gitlab-runner with the "docker executor" and run a job named "test"
5. Prints
...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...
Note: The runner will only work on the committed state of your code base. Uncommitted changes will be ignored. Exception: The .gitlab-ci.yml itself does not have to be committed to be taken into account.
Note: There are some limitations running locally. Have a look at limitations of gitlab runner locally.
I'm currently working on making a gitlab runner that works locally.
Still in the early phases, but eventually it will become very relevant.
It doesn't seem like GitLab wants to (or has time to) make this, so here you go.
https://github.com/firecow/gitlab-runner-local
If you are running Gitlab using the docker image there: https://hub.docker.com/r/gitlab/gitlab-ce, it's possible to run pipelines by exposing the local docker.sock with a volume option: -v /var/run/docker.sock:/var/run/docker.sock. Adding this option to the Gitlab container will allow your workers to access the docker instance on the host.
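A sketch of what that could look like when starting the GitLab container (the container name and port mapping here are illustrative only, not from the answer):

docker run -d \
  --name gitlab \
  -p 443:443 -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-ce:latest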
The GitLab runner appears to not work on Windows yet and there is an open issue to resolve this.
So, in the meantime I am moving my script code out to a bash script, which I can easily map to a docker container running locally and execute.
In this case I want to build a docker container in my job, so I create a script 'build':
#!/bin/bash
docker build --pull -t myimage:myversion .
in my .gitlab-ci.yaml I execute the script:
image: docker:latest
services:
  - docker:dind
before_script:
  - apk add bash
build:
  stage: build
  script:
    - chmod 755 build
    - build
To run the script locally using powershell I can start the required image and map the volume with the source files:
$containerId = docker run --privileged -d -v ${PWD}:/src docker:dind
install bash if not present:
docker exec $containerId apk add bash
Set permissions on the bash script:
docker exec -it $containerId chmod 755 /src/build
Execute the script:
docker exec -it --workdir /src $containerId bash -c 'build'
Then stop the container:
docker stop $containerId
And finally clean up the container:
docker container rm $containerId
Another approach is to have a local build tool that is installed on your PC and your server at the same time.
Your .gitlab-ci.yml will then basically call your preferred build tool.
Here is an example .gitlab-ci.yml that I use with nuke.build:
stages:
  - build
  - test
  - pack
variables:
  TERM: "xterm" # Use Unix ASCII color codes on Nuke
before_script:
  - CHCP 65001 # Set correct code page to avoid charset issues
.job_template: &job_definition
  except:
    - tags
build:
  <<: *job_definition
  stage: build
  script:
    - "./build.ps1"
test:
  <<: *job_definition
  stage: test
  script:
    - "./build.ps1 test"
  variables:
    GIT_CHECKOUT: "false"
pack:
  <<: *job_definition
  stage: pack
  script:
    - "./build.ps1 pack"
  variables:
    GIT_CHECKOUT: "false"
  only:
    - master
  artifacts:
    paths:
      - output/
And in nuke.build I've defined 3 targets named like the 3 stages (build, test, pack).
In this way you have a reproducible setup (all other things are configured with your build tool) and you can directly test the different targets of your build tool.
(I can call .\build.ps1, .\build.ps1 test and .\build.ps1 pack whenever I want.)
I am on Windows using VSCode with WSL
I didn't want to register my work PC as a runner, so instead I'm running my yaml stages locally to test them out before I upload them:
$ sudo apt-get install gitlab-runner
$ gitlab-runner exec shell build
.gitlab-ci.yml:
image: node:10.19.0 # https://hub.docker.com/_/node/
# image: node:latest
cache:
  # untracked: true
  key: project-name
  # key: ${CI_COMMIT_REF_SLUG} # per branch
  # key:
  #   files:
  #     - package-lock.json # only update cache when this file changes (not working) #jkr
  paths:
    - .npm/
    - node_modules
    - build
stages:
  - prepare # prepares builds, makes build needed for testing
  - test # uses test:build specifically #jkr
  - build
  - deploy
# before_install:
before_script:
  - npm ci --cache .npm --prefer-offline
prepare:
  stage: prepare
  needs: []
  script:
    - npm install
test:
  stage: test
  needs: [prepare]
  except:
    - schedules
  tags:
    - linux
  script:
    - npm run build:dev
    - npm run test:cicd-deps
    - npm run test:cicd # runs puppeteer tests #jkr
  artifacts:
    reports:
      junit: junit.xml
    paths:
      - coverage/
build-staging:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build:stage
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip
deploy-dev:
  stage: deploy
  needs: [build-staging]
  tags: [linux]
  only:
    - schedules
    # # - branches@gitlab-org/gitlab
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    # temporarily using 'verify-certificate no'
    # for more on verify-certificate #jkr: https://www.versatilewebsolutions.com/blog/2014/04/lftp-ftps-and-certificate-verification.html
    # variables do not work with 'single quotes' unless they are "'surrounded by doubles'"
    - lftp -e "set ssl:verify-certificate no; open mediajackagency.com; user $LFTP_USERNAME $LFTP_PASSWORD; mirror --reverse --verbose build/ /var/www/domains/dev/clients/client/project/build/; bye"
  # environment:
  #   name: staging
  #   url: http://dev.mediajackagency.com/clients/client/build
  #   # url: https://stg2.client.co
  when: manual
  allow_failure: true
build-production:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip
deploy-client:
  stage: deploy
  needs: [build-production]
  tags: [linux]
  only:
    - schedules
    # - master
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    - sh deploy-prod
  environment:
    name: production
    url: http://www.client.co
  when: manual
  allow_failure: true
The idea is to keep check commands outside of .gitlab-ci.yml. I use a Makefile to run something like make check, and my .gitlab-ci.yml runs the same make commands that I use locally to check various things before committing.
This way you'll have one place with all/most of your commands (the Makefile) and .gitlab-ci.yml will have only CI-related stuff.
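As a rough sketch only (the answer doesn't show its actual files, so the check commands and image below are hypothetical):

# Makefile (recipe lines must be indented with a tab)
check:
	./scripts/lint.sh
	./scripts/test.sh

# .gitlab-ci.yml
check:
  image: my-build-image   # hypothetical image that has make and your tooling
  script:
    - make check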
I have written a tool to run GitLab CI jobs locally without having to commit or push, simply with the command ci-toolbox my_job_name.
The URL of the project : https://gitlab.com/mbedsys/citbx4gitlab
Years ago I built this simple solution with a Makefile and docker-compose to run the gitlab runner in docker. You can use it to execute jobs locally as well, and it should work on all systems where docker works:
https://gitlab.com/1oglop1/gitlab-runner-docker
There are a few things to change in the docker-compose.override.yaml:
version: "3"
services:
  runner:
    working_dir: <your project dir>
    environment:
      - REGISTRATION_TOKEN=<token if you want to register>
    volumes:
      - "<your project dir>:<your project dir>"
Then inside your project you can execute it the same way as mentioned in other answers:
docker exec -it -w $PWD runner gitlab-runner exec <commands>..
I recommend using gitlab-ci-local
https://github.com/firecow/gitlab-ci-local
It's able to run specific jobs as well.
It's a very cool project and I have used it to run simple pipelines on my laptop.
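If I remember its CLI correctly (check the project's README to confirm), usage is roughly:

# run every job in .gitlab-ci.yml
gitlab-ci-local
# run a single job (the job name is whatever you defined in your pipeline)
gitlab-ci-local my-job-name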

Getting GitLab CI to clone private repositories

I have GitLab & GitLab CI set up to host and test some of my private repos. For my composer modules under this system, I have Satis set up to resolve my private packages.
Obviously these private packages require an ssh key to clone them, and I have this working in the terminal - I can run composer install and get these packages, so long as I have the key added with ssh-add in the shell.
However, when running my tests in GitLab CI, if a project has any of these dependencies the tests will not complete as my GitLab instance needs authentication to get the deps (obviously), and the test fails saying Host key verification failed.
My question is: how do I set this up so that when the runner runs the test it can authenticate to gitlab without a password? I have tried putting a password-less ssh-key in my runner's ~/.ssh folder, however the build won't even add the key; "eval ssh-agent -s" followed by ssh-add seems to fail, saying the agent isn't running...
See also other solutions:
git submodule permission (see Marco A.'s answer)
job token and override repo in git config (see a544jh's answer)
Here is a full how-to with SSH keys:
General Design
generating a pair of SSH keys
adding the private one as a secure environment variable of your project
making the private one available to your test scripts on GitLab-CI
adding the public one as a deploy key on each of your private dependencies
Generating a pair of public and private SSH keys
Generate a pair of public and private SSH keys without passphrase:
ssh-keygen -b 4096 -C "<name of your project>" -N "" -f /tmp/name_of_your_project.key
Adding the private SSH key to your project
You need to add the key as a secure environment variable to your project as
following:
browse https://<gitlab_host>/<group>/<project_name>/variables
click on "Add a variable"
fill the text field Key with SSH_PRIVATE_KEY
fill the text field Value with the private SSH key itself
click on "Save changes"
Exposing the private SSH key to your test scripts
In order to make your private key available to your test scripts you need to add
the following to your .gitlab-ci.yml file:
before_script:
  # install ssh-agent
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  # run ssh-agent
  - eval $(ssh-agent -s)
  # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  # disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
  # WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
  - mkdir -p ~/.ssh
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
Code Snippet comes from GitLab documentation
Adding the public SSH key as a deploy key to all your private dependencies
You need to register the public SSH key as deploy key to all your private
dependencies as following:
browse https://<gitlab_host>/<group>/<dependency_name>/deploy_keys
click on "New deploy key"
fill the text field Title with the name of your project
fill the text field Key with the public SSH key itself
click on "Create deploy key"
If you don't want to fiddle around with ssh keys or submodules, you can override the repo in git's configuration to authenticate with the job token instead (in gitlab-ci.yml):
before_script:
  - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/repo.git".insteadOf git@gitlab.example.com:group/repo.git
I'm posting this as an answer since others weren't completely clear and/or detailed IMHO
Starting from GitLab 8.12+, assuming the submodule repo is in the same server as the one requesting it, you can now:
Set up the repo with git submodules as usual (git submodule add git@somewhere:folder/mysubmodule.git)
Modify your .gitmodules file as follows
[submodule "mysubmodule"]
	path = mysubmodule
	url = ../../group/mysubmodule.git
where ../../group/mysubmodule.git is a relative path from your repository to the submodule's one.
Add the following lines to gitlab-ci.yml
variables:
  GIT_SUBMODULE_STRATEGY: recursive
to instruct the runner to fetch all submodules before the build.
Caveat: if your runner seems to ignore the GIT_SUBMODULE_STRATEGY directive, you should probably consider updating it.
(source: https://docs.gitlab.com/ce/ci/git_submodules.html)
The currently accepted answer embeds Gitlab-specific requirements into my .gitmodules file. This forces a specific directory layout for local development and would complicate moving to another version control platform.
Instead, I followed the advice in Juddling's answer. Here's a more complete answer.
My .gitmodules files has the following contents:
[submodule "myproject"]
	url = git@git.myhost.com:mygroup/myproject.git
In my gitlab-ci.yml I have the following:
build:
  stage: build
  before_script:
    - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@git.myhost.com/".insteadOf "git@git.myhost.com:"
    - git submodule sync && git submodule update --init
The trailing / and : are critical in the git config line, since we are mapping from SSH authentication to HTTPS. This tripped me up for a while with "Illegal port number" errors.
I like this solution because it embeds the Gitlab-specific requirements in a Gitlab-specific file, which is ignored by everything else.
I used deploy tokens to solve this issue, as setting up SSH keys for a test runner seems a little long winded.
git clone http://<username>:<deploy_token>@gitlab.example.com/tanuki/awesome_project.git
The deploy tokens are per project and are read only.
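For example, if you store the token's username and value as CI/CD variables (the variable names here are my own, not predefined ones), a job could clone the dependency like this:

before_script:
  - git clone "https://${MY_DEPLOY_USER}:${MY_DEPLOY_TOKEN}@gitlab.example.com/tanuki/awesome_project.git"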
One way to solve this without changing the git repository's structure is to perform the following steps:
1. get ssh host keys
Get the ssh host keys of the server that you are running on. For gitlab.com:
run ssh-keyscan gitlab.com > known_hosts
check that ssh-keygen -lf known_hosts agrees with the fingerprints reported here.
copy the content of known_hosts and paste it into a variable called SSH_KNOWN_HOSTS on the repository.
This step is only needed once.
2. configure the job to use ssh
before_script:
  - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com".insteadOf "git@gitlab.com:"
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
The "ssh://git#gitlab.com" bit may be different if you are trying to do git clone git#gitlab.com: or pip install -e git+ssh://git#gitlab.com/...; adjust it accordingly to your needs.
At this point, your CI is able to use ssh to fetch from another (private) repository.
3. [Bonus DRY]
Use this trick to write it generically:
.enable_ssh: &enable_ssh |-
  git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com".insteadOf "ssh://git@gitlab.com"
  mkdir -p ~/.ssh
  chmod 700 ~/.ssh
  echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  chmod 644 ~/.ssh/known_hosts
and enable it on jobs that need it
test:
  stage: test
  before_script:
    - *enable_ssh
  script:
    - ...
If your CI runner is running in a container, you need to use a deploy token. doc: https://docs.gitlab.com/ee/user/project/deploy_tokens/#git-clone-a-repository
git clone https://<username>:<deploy_token>@gitlab.example.com/tanuki/awesome_project.git
Create your deploy token
Add your token as a CI pipeline variable
Make sure your container has git, and change the git URL with insteadOf
image: docker:latest
before_script:
  - apk add --no-cache curl jq python3 py3-pip git
  - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/".insteadOf 'git@gitlab.example.com:'
For more on replacing the URL, see: https://docs.gitlab.com/ee/user/project/working_with_projects.html#authenticate-git-fetches
I had a scenario where I had to use my ssh key in 3 different scripts, so I put the ssh key stuff in a single shell script and called it first, before the other 3 scripts. This ended up not working, I think due to the ssh-agent not persisting between shell scripts, or something to that effect. I ended up actually just outputting the private key into the ~/.ssh/id_rsa file, which will for sure persist to other scripts.
.gitlab-ci.yml
script:
  - ci/init_ssh.sh
  - git push # or whatever you need ssh for
ci/init_ssh.sh
# only run in docker:
[[ ! -e /.dockerenv ]] && exit 0
mkdir -p ~/.ssh
echo "$GITLAB_RUNNER_SSH_KEY" > ~/.ssh/id_rsa
chmod 400 ~/.ssh/id_rsa
echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > /.ssh/config
It works like a charm!
If you are using an alpine-based image (maybe docker:latest or docker:dind), your before_script might look like this:
before_script:
  - apk add --no-cache openssh-client git
  - mkdir -p /.ssh && touch /.ssh/known_hosts
  - ssh-keyscan gitlab.com >> /.ssh/known_hosts
  - echo $SSH_KEY | base64 -d >> /.ssh/id_rsa && chmod 600 /.ssh/id_rsa
  - git clone git@git.myhost.com:mygroup/myproject.git
Adding this to .gitlab-ci.yml did the trick for me.
(as mentioned here: https://docs.gitlab.com/ee/user/project/new_ci_build_permissions_model.html#dependent-repositories)
before_script:
  - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
(I tried setting up SSH_PRIVATE_KEY as mentioned in one of the answers above, it won't work)
Gitlab 15.9.0 introduces an update to the pre-defined variable CI_JOB_TOKEN. Now you can control other projects' access to your private repository, see the release note and documentation.
Once access is granted, you can clone private repositories by adding this line to your job's scripts or before_scripts.
git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>
Unfortunately, this still does not play nicely with the submodule integration with Gitlab CI/CD. Instead, I do this in my projects.
# .gitlab-ci.yml
default:
  before_script:
    - |
      cat << EOF > ~/.gitconfig
      [url "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>.git"]
        insteadOf = git@gitlab.example.com/<namespace>/<project>.git
      EOF
    - git submodule update --init --recursive
And this is what my .gitmodules would look like
[submodule "terraform-eks"]
	path = modules/<project>
	url = git@gitlab.example.com/<namespace>/<project>.git
	branch = main
Hope this help!
Seems there is finally a reasonable solution.
In short, as of GitLab 8.12 all you need to do is use relative paths in .gitmodules, and git submodule update --init will simply work.
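A short sketch of that approach, assuming the submodule lives in a sibling project on the same GitLab instance (paths are illustrative):

# .gitmodules
[submodule "mysubmodule"]
	path = mysubmodule
	url = ../mysubmodule.git

# .gitlab-ci.yml
before_script:
  - git submodule sync
  - git submodule update --init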