Inject host's SSH keys into Docker Machine with Docker Compose - ssh

I am using Docker on Mac OS X with Docker Machine (with the default boot2docker machine), and I use docker-compose to set up my development environment.
Let's say that one of the containers is called "stack". Now what I want to do is call:
docker-compose run stack ssh user@stackoverflow.com
My public key (which has been added to stackoverflow.com and which will be used to authenticate me) is located on the host machine. I want this key to be available to the Docker Machine container so that I will be able to authenticate myself against stackoverflow using that key from within the container. Preferably without physically copying my key to Docker Machine.
Is there any way to do this? Also, if my key is password protected, is there any way to unlock it once so that I don't have to enter the passphrase manually every time it is injected?

You can add this to your docker-compose.yml (assuming the user inside the container is root):
volumes:
  - ~/.ssh:/root/.ssh
You can also check the more advanced solution with an SSH agent below (I have not tried it myself).

WARNING: This feature seems to have limited support in Docker Compose and is more designed for Docker Swarm.
(I haven't verified this, but) my current impression is that:
In Docker Compose, secrets are just bind-mounted volumes, so there is no additional security compared to regular volumes
The ability to change secret permissions on the Linux host may be limited
See answer comments for more details.
Docker has a feature called secrets, which can be helpful here. To use it, add the following to your docker-compose.yml:
---
version: '3.1' # Note the minimum file version for this feature to work
services:
  stack:
    ...
    secrets:
      - host_ssh_key

secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
Then the new secret file can be accessed in the Dockerfile like this:
RUN mkdir ~/.ssh && ln -s /run/secrets/host_ssh_key ~/.ssh/id_rsa
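Alternatively (my addition, not part of the original answer), you can point the SSH client directly at the mounted secret at run time instead of symlinking it; note that ssh refuses keys with overly permissive file modes, so the secret's mode may need adjusting:
# Sketch: use the secret path directly; the host and user are placeholders
ssh -i /run/secrets/host_ssh_key -o IdentitiesOnly=yes user@stackoverflow.com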
Secret files won't be copied into the container:
When you grant a newly-created or running service access to a secret, the decrypted secret is mounted into the container in an in-memory filesystem
For more details please refer to:
https://docs.docker.com/engine/swarm/secrets/
https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets

If you're using OS X and encrypted keys, this is going to be a PITA. Here are the steps I went through while figuring this out.
Straightforward approach
One might think that there’s no problem. Just mount your ssh folder:
...
volumes:
  - ~/.ssh:/root/.ssh:ro
...
This should be working, right?
User problem
The next thing we'll notice is that we're using the wrong user ID. Fine, we'll write a script to copy the SSH keys and change their owner. We'll also set the SSH user in the config so that the SSH server knows who's connecting.
...
volumes:
  - ~/.ssh:/root/.ssh-keys:ro
command: sh -c './.ssh-keys.sh && ...'
environment:
  SSH_USER: $USER
...
# ssh-keys.sh
mkdir -p ~/.ssh
cp -r /root/.ssh-keys/* ~/.ssh/
chown -R $(id -u):$(id -g) ~/.ssh
cat <<EOF >> ~/.ssh/config
User $SSH_USER
EOF
SSH key passphrase problem
In our company we protect SSH keys using a passphrase. That wouldn’t work in docker since it’s impractical to enter a passphrase each time we start a container.
We could remove the passphrase (see the example below), but that raises a security concern.
openssl rsa -in id_rsa -out id_rsa2
# enter passphrase
# replace passphrase-encrypted key with plaintext key:
mv id_rsa2 id_rsa
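A side note (my addition): openssl rsa only handles old PEM-format keys; for newer OpenSSH-format keys the passphrase can be stripped in place with ssh-keygen, with the same security caveat:
# Prompts once for the old passphrase, then saves the key without one
ssh-keygen -p -f ~/.ssh/id_rsa -N ""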
SSH agent solution
You may have noticed that locally you don’t need to enter a passphrase each time you need ssh access. Why is that?
That's what the SSH agent is for. The SSH agent is basically a server which listens on a special file, a Unix socket, referred to as the "ssh auth sock" ($SSH_AUTH_SOCK). You can see its location on your system:
echo $SSH_AUTH_SOCK
# /run/user/1000/keyring-AvTfL3/ssh
The SSH client communicates with the SSH agent through this file so that you only have to enter the passphrase once. Once the key is decrypted, the SSH agent keeps it in memory and hands it to the SSH client on request.
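For reference, this is roughly how a key gets into the agent in the first place (standard OpenSSH commands; the path assumes the default key location):
# Start an agent for the current shell if one isn't running yet
eval "$(ssh-agent -s)"
# Add the key; the passphrase is asked for once and cached in the agent's memory
ssh-add ~/.ssh/id_rsa
# List the identities the agent currently holds
ssh-add -l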
Can we use that in Docker? Sure, just mount that special file and specify a corresponding environment variable:
environment:
  SSH_AUTH_SOCK: $SSH_AUTH_SOCK
...
volumes:
  - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
We don’t even need to copy keys in this case.
To confirm that the keys are available, we can use the ssh-add utility:
if [ -z "$SSH_AUTH_SOCK" ]; then
echo "No ssh agent detected"
else
echo $SSH_AUTH_SOCK
ssh-add -l
fi
The problem of unix socket mount support in Docker for Mac
Unfortunately for OS X users, Docker for Mac has a number of shortcomings, one of which is its inability to share Unix sockets between macOS and Linux. There's an open issue in the D4M GitHub repo. As of February 2019 it's still open.
So, is that a dead end? No, there is a hacky workaround.
SSH agent forwarding solution
Luckily, this issue isn't new. Long before Docker there was a way to use local ssh keys within a remote ssh session. This is called ssh agent forwarding. The idea is simple: you connect to a remote server through ssh, and from that session you can connect onward to other servers using your local keys, without copying them anywhere.
With Docker for Mac we can use a smart trick: share the ssh agent with the Docker virtual machine over a TCP ssh connection, and mount the resulting socket file from the virtual machine into the container where we need the SSH connection. Here's how the solution works, step by step:
First, we create an ssh session to an ssh server running in a container inside the Linux VM, over a TCP port. The real ssh auth sock is used on the Mac side here.
Next, the ssh server forwards our ssh keys to the ssh agent running in that container. The ssh agent's Unix socket lives at a location mounted into the Linux VM, i.e. the Unix socket operates entirely inside Linux; the fact that Unix socket sharing doesn't work on the Mac side no longer matters.
After that, we create our actual working container with an SSH client, and share the Unix socket file that the forwarded SSH session uses.
There's a set of scripts that simplifies this process:
https://github.com/avsm/docker-ssh-agent-forward
Conclusion
Getting SSH to work in Docker could have been easier, but it can be done, and it will likely be improved in the future. At least the Docker developers are aware of this issue, and have already solved it for Dockerfiles with build-time secrets. There is also a suggestion for how to support Unix domain sockets.

You can forward the SSH agent:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
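To verify the forwarding from inside the container (assuming the image has an SSH client with ssh-add installed), something like this should list your keys:
docker-compose run --rm something ssh-add -l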

You can use a multi-stage build to build your containers. This is the approach you can take:
Stage 1: build an image with SSH
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
Add the environment attribute in your compose file:
environment:
  - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
Then pass the build argument from your build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
Then remove the intermediate image for security (see the sketch below). Hope this helps, cheers.
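Since the first stage carries LABEL stage=sshImage, one way to clean up the intermediate image afterwards (a sketch; it prunes all dangling images with that label) is:
docker image prune --force --filter label=stage=sshImage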

Docker for Mac now supports mounting the ssh agent socket on macOS.
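Per the Docker Desktop docs, the host's agent is exposed inside the VM at /run/host-services/ssh-auth.sock, so a compose service can use it roughly like this (a sketch; the service name is just an example):
services:
  stack:
    volumes:
      - /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock
    environment:
      SSH_AUTH_SOCK: /run/host-services/ssh-auth.sock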

Related

Cannot connect via SSH from a GitHub Actions workflow

I am trying to connect to a newly created Droplet via SSH from a GitHub Actions runner.
My steps:
ssh-keygen -t rsa -f ~/.ssh/KEY_NAME -P ""
doctl compute ssh-key create KEY --public-key "CONTENT OF KEY_NAME.pub"
doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-1gb --region fra1 DROPLET_NAME --ssh-keys FINGERPRINT --wait
ssh -vvv -i ~/.ssh/KEY_NAME root@DROPLET_IP
✔️ Tested on my local Windows machine using doctl.exe run from cmd - works!
✔️ Tested in Docker (installed on Windows), based on a Linux image, using the doctl script - works!
⚠️ Tested on a GitHub Actions runner based on ubuntu-latest using the digitalocean/action-doctl action - doesn't work!
Received message is: connect to host ADDRESS_IP port 22: Connection refused.
So the steps themselves are correct; why does this not work on GitHub Actions?
If you are using the GitHub Action digitalocean/action-doctl, check issue 14 first:
In order to SSH into a Droplet, doctl needs access to the private half of the SSH key pair whose public half is on the Droplet.
Currently the doctl Action is based on a Docker container.
If you were using the Docker container directly, you could invoke it with:
docker run --rm --interactive --tty \
--env=DIGITALOCEAN_ACCESS_TOKEN=<YOUR-DO-API-TOKEN> \
-v $HOME/.ssh/id_rsa:/root/.ssh/id_rsa \
digitalocean/doctl compute ssh <DROPLET-ID>
in order to mount the SSH key from outside the container.
You might be better off just using doctl to get the Droplet's IP address and then using an Action that is more focused on SSH-related use cases and provides a lot of additional functionality: marketplace/actions/ssh-remote-commands.
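A minimal step using that action might look like this (a sketch; the secret names are placeholders and the action version may differ):
- name: Run commands on the Droplet over SSH
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.DROPLET_IP }}
    username: root
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    script: |
      uname -a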

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key for the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key at playbook execution time. (E.g. ansible-playbook -u username --private-key play.yml) As far as I understand, this isn't for me, as I'm calling the playbook via the Vagrantfile. I've also read articles which mention ssh forwarding. (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need:
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
  ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often setup machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
        IdentityFile ~/.ssh/aptiko_ro_id_rsa
        User aptiko_ro
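With the key and config in place, a follow-up task that actually uses them could look roughly like this (the repository path is hypothetical):
- name: Clone the private repository using the aptiko_ro key
  git:
    repo: git@bitbucket.org:myteam/myrepo.git  # hypothetical repo
    dest: /srv/myrepo
    accept_hostkey: yes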

Cannot ssh into remote machine after rsync

I followed this page on Protecting the Docker daemon Socket with HTTPS to generate ca.pem, server-key.pem, server-cert.pem, key.pem and key-cert.pem
I wanted a remote Docker daemon to use those keys, so I used rsync over ssh to send three of the files (ca.pem, server-key.pem and key.pem) to the remote host's home directory. The identity file for ssh-ing into the remote host is called dl-datatest-internal.pem.
ubuntu@ip-10-3-1-174:~$ rsync -avz --progress -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:~/
sending incremental file list
./
ca.pem
server-cert.pem
server-key.pem
sent 3,410 bytes received 79 bytes 6,978.00 bytes/sec
total size is 4,242 speedup is 1.22
The remote host has stopped recognising the identity file ever since, and started asking for a non-existent password.
ubuntu@ip-10-3-1-174:~$ ssh -i dl-datatest-internal.pem core@10.3.1.151
core@10.3.1.151's password:
Does anyone know why and how to fix it? I still have all the keys if that helps.
There are a couple of things about the rsync command that bother me, but I can't put my finger on the problem (if there is one).
The rsync command and the subsequent ssh command reference different hosts: rsync (core@10.3.1.181:~/) and ssh to the host (core@10.3.1.151). Those are different machines, no?
The ~ in the target of the rsync command, core@10.3.1.181:~/. I am pretty sure that the ~/ references the core home directory, but you could just get rid of the ~/ and replace it with a . (dot).
If you can reproduce the environment you did the copy in, you can add a --dry-run to the rsync command to see what it is going to do. Looking at this command I can't see it erasing the target's .ssh directory.
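For example, a dry run of the command from the question would look like this (nothing is transferred; it only reports what would happen):
rsync -avz --progress --dry-run -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:~/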

Docker: What is the simplest way to secure a private registry?

Our Docker images ship closed sources; we need to store them somewhere safe, using our own private Docker registry.
We are looking for the simplest way to deploy a private Docker registry with a simple authentication layer.
I found:
this manual way http://www.activestate.com/blog/2014/01/deploying-your-own-private-docker-registry
and the shipyard/docker-private-registry docker image based on stackbrew/registry and adding basic auth via Nginx - https://github.com/shipyard/docker-private-registry
I am thinking of using shipyard/docker-private-registry, but is there another, better way?
I'm still learning how to run and use Docker, consider this an idea:
# Run the registry on the server, allow only localhost connection
docker run -p 127.0.0.1:5000:5000 registry
# On the client, setup ssh tunneling
ssh -N -L 5000:localhost:5000 user@server
The registry is then accessible at localhost:5000, authentication is done through ssh that you probably already know and use.
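For example, once the tunnel is up, pushing an image through it could look like this (the image name is a placeholder):
# On the client, with the ssh tunnel from above still running:
docker tag my-image localhost:5000/my-image
docker push localhost:5000/my-image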
Sources:
https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
https://docs.docker.com/userguide/dockerlinks/
You can also use an Nginx front-end with a Basic Auth and an SSL certificate.
Regarding the SSL certificate, I spent a couple of hours trying to get a working self-signed certificate, but Docker wasn't able to work with the registry. To solve this I used a free signed certificate, which works perfectly. (I used StartSSL, but there are others.)
Also be careful when generating the certificate. If you want to have the registry running at the URL registry.damienroch.com, you must generate the certificate for this full URL, including the sub-domain, otherwise it's not going to work.
You can perform all this setup using Docker and my nginx-proxy image (See the README on Github: https://github.com/zedtux/nginx-proxy).
This means that if you have installed nginx using the distribution's package manager, you will replace it with a containerised nginx.
Place your certificate (.crt and .key files) on your server in a folder (I'm using /etc/docker/nginx/ssl/ and the certificate names are private-registry.crt and private-registry.key)
Generate a .htpasswd file and upload it on your server (I'm using /etc/docker/nginx/htpasswd/ and the filename is accounts.htpasswd)
Create a folder where the images will be stored (I'm using /etc/docker/registry/)
Run my nginx-proxy image using docker run
Run the Docker registry with some environment variables that nginx-proxy will use to configure itself.
Here is an example of the commands to run for the previous steps:
sudo docker run -d --name nginx -p 80:80 -p 443:443 -v /etc/docker/nginx/ssl/:/etc/nginx/ssl/ -v /var/run/docker.sock:/tmp/docker.sock -v /etc/docker/nginx/htpasswd/:/etc/nginx/htpasswd/ zedtux/nginx-proxy:latest
sudo docker run -d --name registry -e VIRTUAL_HOST=registry.damienroch.com -e MAX_UPLOAD_SIZE=0 -e SSL_FILENAME=private-registry -e HTPASSWD_FILENAME=accounts -e DOCKER_REGISTRY=true -v /etc/docker/registry/data/:/tmp/registry registry
The first line starts nginx and the second one the registry. It's important to do it in this order.
When both are up and running you should be able to login with:
docker login https://registry.damienroch.com
I have created an almost ready-to-use, but certainly ready-to-function, setup for running a docker-registry: https://github.com/kwk/docker-registry-setup .
Maybe it helps.
Everything (registry, auth server, and LDAP server) runs in containers, which makes parts replaceable as soon as you're ready. The setup is fully configured to make it easy to get started. There are even demo certificates for HTTPS, but they should be replaced at some point.
If you don't want LDAP authentication but simple static authentication instead, you can disable it in auth/config/config.yml and put in your own combination of usernames and hashed passwords (see the sketch below).
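To generate such a hashed password entry, one option (assuming the htpasswd utility from apache2-utils/httpd-tools is available, and that the auth server accepts bcrypt hashes) is:
# Prints "myuser:<bcrypt hash>" to stdout; paste it into auth/config/config.yml
htpasswd -nB myuser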

How do I set up passwordless ssh on AWS

How do I set up passwordless ssh between nodes on an AWS cluster?
The following steps to set up passwordless authentication have been tested thoroughly on CentOS and Ubuntu.
Assumptions:
You already have access to your EC2 machine, either using the .pem key or with credentials for a Unix user that has root permissions.
You have already set up RSA keys on your local machine. The private key and public key are available at "~/.ssh/id_rsa" and "~/.ssh/id_rsa.pub" respectively.
Steps:
Log in to your EC2 machine as the root user.
Create a new user
useradd -m <yourname>
sudo su <yourname>
cd
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
Append the contents of the file ~/.ssh/id_rsa.pub on your local machine to ~/.ssh/authorized_keys on the EC2 machine (see the sketch after these steps).
chmod -R 700 ~/.ssh
chmod 600 ~/.ssh/*
Make sure SSH logins are permitted by the machine. In the file /etc/ssh/sshd_config, make sure that the line containing "PasswordAuthentication yes" is uncommented. Restart the sshd service if you make any change to this file:
service sshd restart # On Centos
service ssh restart # On Ubuntu
Your passwordless login should work now. Try the following on your local machine:
ssh -A <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
To make yourself a superuser, open /etc/sudoers and make sure the following two lines are uncommented:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Add yourself to the wheel group:
usermod -aG wheel <yourname>
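For the "append the public key" step above, one way to do it in a single command from your local machine (a sketch; the .pem file and host name are placeholders) is:
cat ~/.ssh/id_rsa.pub | ssh -i your-key.pem root@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com \
  "cat >> /home/<yourname>/.ssh/authorized_keys"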
This may help someone:
Copy the .pem file onto the machine, then copy the content of the .pem file into the ~/.ssh/id_rsa file. You can use the command below, or your own:
cat my.pem > ~/.ssh/id_rsa
Try ssh localhost; it should work, and the same goes for the other machines in the cluster.
Here is how I made passwordless SSH work between two instances:
Create the EC2 instances – they should be in the same subnet and have the same security group
Open ports between them – make sure the instances can communicate with each other. Use the default security group, which has one rule relevant for this case:
Type: All Traffic
Source: Custom – id of the security group
Log in to the instance from which you want to connect to the other instance
Run:
ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
to generate a new rsa key.
Copy your private AWS key as ~/.ssh/my.key (or whatever name you want to use)
Make sure you change the permission to 600
chmod 600 .ssh/my.key
Copy the public key to the instance you wish to connect to passwordlessly
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"
If you test the passwordless ssh to the other machine, it should work.
ssh 10.0.0.X
You can use SSH keys as described here:
http://pkeck.myweb.uga.edu/ssh/