GitHub Actions: How to run remote SSH commands using a self-hosted runner - ssh

Since I need to use a self-hosted runner, existing marketplace SSH Actions such as appleboy are not viable for me: they rely on Docker and build an image during the workflow, which fails on my runner because of missing permissions for an unknown user.
Please note: appleboy and other similar actions work great when using GitHub-hosted runners.

Since it's a self-hosted runner, I have access to it all the time, so setting up an SSH profile and then using the native ssh command works pretty well.
To create an SSH profile & use it in GitHub Actions:
1. Create a "config" file in "~/.ssh/" (if it doesn't exist already).
2. Add a profile in "~/.ssh/config". For example-
Host dev
    HostName qa.bluerelay.com
    User ec2-user
    Port 22
    IdentityFile /home/ec2-user/mykey/something.pem
3. Now to test it, run the following on the self-hosted runner:
ssh dev ls
This should run the ls command inside the dev server.
4. Now that the SSH profile is set up, you can easily run remote SSH commands from GitHub Actions. For example-
name: Test Workflow
on:
  push:
    branches: ["main"]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Download Artifacts into Dev
        run: |
          ssh dev 'cd GADeploy && pwd && sudo wget https://somelink.amazon.com/web.zip'
      - name: Deploy Web Artifact to Dev
        run: |
          ssh dev 'cd GADeploy && sudo ./deploy-web.sh web.zip'
This works like a charm!!
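The same ssh-profile pattern also scales to longer deployments. Here is a minimal sketch (not from the original answer) of piping a multi-command script to the remote host from a run step, reusing the illustrative "dev" alias, directory, and URL from above:
# Run a small remote script over the "dev" profile; the directory, URL and
# script name are the illustrative values used earlier in this answer.
ssh dev 'bash -s' <<'EOF'
cd GADeploy
sudo wget https://somelink.amazon.com/web.zip
sudo ./deploy-web.sh web.zip
EOF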

Independently of the runner itself, you would first need to check that SSH works from your server:
curl -v telnet://github.com:22
# Assuming your ~/.ssh/id_rsa.pub is copied to your GitHub profile page
git ls-remote git@github.com:you/YourRepository
Then you can install your runner, without Docker.
Finally, your runner script can execute jobs.<job_id>.steps[*].run commands, using git with SSH URLs.
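As a rough illustration (assuming the runner user's ~/.ssh/id_rsa is the key registered with your GitHub account), such a run step could execute commands like:
# Verify key-based access, then clone over SSH; the repository URL is the
# placeholder from the check above.
ssh -T git@github.com || true   # GitHub prints a greeting but returns non-zero
git clone git@github.com:you/YourRepository.git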

Related

BitBucket deployment using SSH keys to remote server

I am trying to write a YAML pipeline script to deploy files that have been altered in my Bitbucket repository to my remote server using SSH keys. The document that I have in place at the moment was copied from Bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH
I am getting the following error
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My SSH public and private keys are set up in Bitbucket along with the fingerprint and host. The variables have also been set up.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to become:
- variables:
  - USER: $USER
  - SERVER: $SERVER
  - REMOTE_PATH: $REMOTE_PATH
  - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your - step directive has to be indented.
I have a bitbucket-pipelines.yml like this (using rsync instead of ssh):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm
pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it checks the full YAML structure and you can't commit an invalid file.
Even if you check the file in some other YAML editor, it may look fine, but not necessarily according to the Bitbucket specification. Their online editor does a fine job.
Also, I suggest visiting the Atlassian Community, as it's very active and sometimes their staff members provide answers.
However, I struggle with the many dependencies needed to run tests properly (the actual bitbucket-pipelines.yml keeps getting bigger and bigger).
Maybe there is some nicely prepared Docker image for this job.

Git clone from remote server failing in bitbucket pipelines

I'm trying to automatically deploy my app to digital ocean through bitbucket pipelines. Here are the steps my deployment is following:
connect to the remote digital ocean droplet using ssh
clone my repository by running a git clone with ssh
launch my application with docker-compose
I have successfully set up SSH access to my remote. I have also configured SSH access to my repository and can successfully execute git clone from my remote server.
However, in the pipeline, while the connection to the remote server is successful, the git clone command fails with the following error.
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Anybody has an idea of what is going on here?
Here is my bitbucket-pipelines.yml
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: production
        script:
          - cat deploy.sh | ssh $USER_NAME@$HOST
          - echo "Deploy step finished"
And the deployment script deploy.sh
#!/usr/bin/env sh
git clone git@bitbucket.org:<username>/<my_repo>.git
cd my_repo
docker-compose up -d
Logs for the git clone ssh commands within the droplet and from the pipeline
Git uses the default ssh key by default.
You can overwrite the SSH command used by git, by setting the GIT_SSH_COMMAND environment variable. You can add the -i argument to use a different SSH key.
export GIT_SSH_COMMAND="ssh -i ~/.ssh/<key>"
git clone git@bitbucket.org:<username>/<my_repo>.git
From the git documentation:
GIT_SSH
GIT_SSH_COMMAND
If either of these environment variables is set then git fetch and git push will use the specified command instead of ssh when they need to connect to a remote system. The command-line parameters passed to the configured command are determined by the ssh variant. See ssh.variant option in git-config[1] for details.
$GIT_SSH_COMMAND takes precedence over $GIT_SSH, and is interpreted by the shell, which allows additional arguments to be included. $GIT_SSH on the other hand must be just the path to a program (which can be a wrapper shell script, if additional arguments are needed).
Usually it is easier to configure any desired options through your personal .ssh/config file. Please consult your ssh documentation for further details.
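As a hedged sketch of that .ssh/config approach for this Bitbucket case (the key path is the same placeholder as above), something like the following should let a plain git clone pick up the right key without GIT_SSH_COMMAND:
# Append a host entry for bitbucket.org; <key> is the placeholder key name
# from the GIT_SSH_COMMAND example above.
cat >> ~/.ssh/config <<'EOF'
Host bitbucket.org
    IdentityFile ~/.ssh/<key>
    IdentitiesOnly yes
EOF
chmod 600 ~/.ssh/config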

Getting gitlab-runner 10.0.2 cloning repo using ssh

I have a GitLab installation and I am trying to set up a gitlab-runner using a Docker executor. Everything is fine until tests start running; then, since my projects are private and have no HTTP access enabled, they fail at clone time with:
Running with gitlab-runner 10.0.2 (a9a76a50)
on Jupiter-docker (5f4ed288)
Using Docker executor with image fedora:26 ...
Using docker image sha256:1f082f05a7fc20f99a4ccffc0484f45e6227984940f2c57d8617187b44fd5c46 for predefined container...
Pulling docker image fedora:26 ...
Using docker image fedora:26 ID=sha256:b0b140824a486ccc0f7968f3c6ceb6982b4b77e82ef8b4faaf2806049fc266df for build container...
Running on runner-5f4ed288-project-5-concurrent-0 via 2705e39bc3d7...
Cloning repository...
Cloning into '/builds/pmatos/tob'...
remote: Git access over HTTP is not allowed
fatal: unable to access 'https://gitlab.linki.tools/pmatos/tob.git': The requested URL returned error: 403
ERROR: Job failed: exit code 1
I have looked into https://docs.gitlab.com/ee/ci/ssh_keys/README.html
and decided to give it a try so my .gitlab-ci.yml starts with:
image: fedora:26
before_script:
  # Install ssh-agent if not already installed, it is required by Docker.
  # (change apt-get to yum if you use a CentOS-based image)
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  # Run ssh-agent (inside the build environment)
  - eval $(ssh-agent -s)
  # Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  # For Docker builds disable host key checking. Be aware that by adding that
  # you are susceptible to man-in-the-middle attacks.
  # WARNING: Use this only with the Docker executor, if you use it with shell
  # you will overwrite your user's SSH config.
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
... JOBS...
I set up the SSH_PRIVATE_KEY correctly, etc., but the issue is that the cloning of the project happens before before_script. I then tried to start the container with -v /home/pmatos/gitlab-runner_ssh:/root/.ssh, but the clone still tries to use HTTP. How can I force the container to clone through SSH?
Due to the way GitLab CI works, CI requires HTTPS access to the repository. Therefore, if you enable CI, you need to have HTTPS repo access enabled as well.
This is, however, not an issue privacy-wise, as making the repository accessible over HTTPS doesn't stop GitLab from checking whether you're authorized to access it.
I then tried to start the container with -v /home/pmatos/gitlab-runner_ssh:/root/.ssh but still the cloning is trying to use HTTP
If possible, try at least to add, within your container:
git config --global url."ssh://git@".insteadOf https://
(assuming the ssh user is git)
That would make any clone of any https URL use ssh.
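If you prefer to scope the rewrite, a host-specific variant (using the gitlab.linki.tools host from the question purely as an illustration) would look like this, and git ls-remote can confirm that an https URL is now fetched over ssh:
# Rewrite only URLs for this GitLab host, then check that an https clone URL works.
git config --global url."ssh://git@gitlab.linki.tools/".insteadOf "https://gitlab.linki.tools/"
git ls-remote https://gitlab.linki.tools/pmatos/tob.git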

How to publish docker images to docker hub from gitlab-ci

GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, select .gitlab-ci.yml and Docker). The file looks like this and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest
services:
  - docker:dind
before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to gitlab's registry. How can we publish to docker hub instead?
No need to change that .gitlab-ci.yml at all, we only need to add/replace the environment variables in project's pipeline settings.
1. Find the desired registry url
Using hub.docker.com won't work, you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
Default docker hub registry url can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in gitlab settings
I wanted to publish gableroux/unity3d images using gitlab-ci, here's what I used in Gitlab's project > Settings > CI/CD > Variables
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>
The registry URL needs to be updated, so use index.docker.io/<username>/<project>.
Since docker hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer when it's verbose so I kept the full registry url.
This answer should also cover other registries, just update environment variables accordingly. 🙌
To expand on GabLeRoux's answer:
I had issues at the pushing stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index. prefix), I was able to push successfully.
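Before wiring the variables into CI, the same push can be tried locally as a sanity check. This is only a sketch, reusing the gableroux/unity3d image name from the answer above as a placeholder for a locally built image:
# Log in to Docker Hub, tag a locally built image for it and push; replace the
# names with your own user, image and tag.
docker login -u gableroux docker.io
docker tag unity3d:latest index.docker.io/gableroux/unity3d:latest
docker push index.docker.io/gableroux/unity3d:latest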

Subversion export/checkout in Dockerfile without printing the password on screen

I want to write a Dockerfile which exports a directory from a remote Subversion repository into the build context so I can work with these files in subsequent commands. The repository is secured with user/password authentication.
That Dockerfile could look like this:
# base image
FROM ubuntu
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# export my repository
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# further commands, e.g. on container start run a file just downloaded from the repository
CMD ["/bin/bash", "path/to/file.sh"]
However, this has the drawback of printing my username and password on the screen or any logfile where the stdout is directed, as in Step 2 : RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory. In my case, this is a Jenkins build log which is also accessible by other people who are not supposed to see the credentials.
What would be the easiest way to hide the echo of username and password in the output?
Until now, I have not found any way to execute RUN commands in a Dockerfile silently when building the image. Could the password maybe be imported from somewhere else and attached to the command beforehand, so it does not have to be printed anymore? Or are there any methods for password-less authentication in Subversion that would work in the Dockerfile context (in terms of setting them up without interaction)?
The Subversion Server is running remotely in my company and not on my local machine or the Docker host. To my knowledge, I have no access to it except for accessing my repository via username/password authentication, so copying any key files as root to some server folders might be difficult.
The Dockerfile RUN command is always executed and cached when the Docker image is built, so the variables that svn needs to authenticate must be provided at build time. To avoid this kind of problem, you can move the svn export call to when docker run is executed. To do that, create a bash script, declare it as the Docker entrypoint, and pass environment variables for the username and password. Example:
# base image
FROM ubuntu
ENV REPOSITORY_URL http://subversion.myserver.com/path/to/directory
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# make it executable before you add it here, otherwise docker will complain
ADD docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
docker-entrypoint.sh
#!/bin/bash
# maybe add some validation here that the variables $REPO_USER and $REPO_PASSWORD exist.
svn export --username="$REPO_USER" --password="$REPO_PASSWORD" "$REPOSITORY_URL"
# continue execution
path/to/file.sh
Run your image:
docker run -e REPO_USER=jane -e REPO_PASSWORD=secret your/image
Or you can put the variables in a file:
.svn-credentials
REPO_USER=jane
REPO_PASSWORD=secret
Then run:
docker run --env-file .svn-credentials your/image
Remove the .svn-credentials file when you're done.
Maybe using SVN with SSH is a solution for you? You could generate a public/private key pair. The private key could be added to the image whereas the public key gets added to the server.
For more details you could have a look at this stackoverflow question.
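As a hedged sketch of that key-pair idea (assuming the Subversion server actually accepts svn+ssh access and you are able to install the public key there, which the question suggests may not be possible), the client side could look roughly like this:
# Generate a dedicated key pair; the file names and the svn+ssh URL shape are
# purely illustrative.
ssh-keygen -t ed25519 -N '' -f ./svn_deploy_key
# ...after installing svn_deploy_key.pub on the server:
SVN_SSH="ssh -i /path/to/svn_deploy_key" \
    svn export svn+ssh://subversion.myserver.com/path/to/directory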
One solution is to ADD the entire SVN directory that you previously checked out on your builder file-system (or added as an svn:externals if your Dockerfile is itself in an SVN repository, like this: svn propset svn:externals 'external_svn_directory http://subversion.myserver.com/path/to/directory' . followed by svn up).
Then in your Dockerfile you can simply have this:
ADD external_svn_directory /tmp/external_svn_directory
RUN svn export /tmp/external_svn_directory /path/where/to/export/to
RUN rm -rf /tmp/external_svn_directory
Subversion stores authentication details on the client side (unless this is disabled in the configuration) and uses the stored username and password on request for subsequent operations on the same URL.
Thus, you only have to run a (successful) svn export in the Dockerfile with the username and password once, and allow SVN to use the cached credentials (removing the auth options from the command line) later.
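A rough shell sketch of that caching behaviour (assuming the client configuration allows storing passwords, as noted above; whether this actually helps inside a Dockerfile depends on where the first authenticated call runs):
# Authenticate once against the URL so the client caches the credentials...
svn info --username myUserName --password myPassword \
    http://subversion.myserver.com/path/to/directory
# ...then later operations on the same URL can omit the auth options.
svn export http://subversion.myserver.com/path/to/directory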