Mounting user SSH key in container - ssh

I am building a script that will mount some local folders into the container, one of which is the user's ~/.ssh folder. That way, users can still utilize their SSH key for Git commits.
docker run -ti -v $HOME/.ssh/:$HOME/.ssh repo:tag
But that does not mount the SSH folder into the container. Am I doing it incorrectly?

The typical syntax is (from Mount a host directory as a data volume):
docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
(you can skip the command part, here 'app.py', if your image defines an entrypoint and default command)
(-d does not apply in your case; it did in the case of that Python web server)
Try:
docker run -ti -v $HOME/.ssh:$HOME/.ssh repo:tag
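If the user inside the image is root rather than one matching your host $HOME, a read-only mount onto /root/.ssh may be closer to what you want. A minimal sketch, still using the repo:tag image from the question (the :ro flag keeps the container from touching your keys):
docker run -ti -v "$HOME/.ssh:/root/.ssh:ro" repo:tag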

Related

Data directory (/var/www/moodledata) cannot be created by the installer

I'm trying to deploy Moodle into Docker.
Here is the steps I followed:
First, create a new network for the application and the database:
$ docker network create moodle
Then, start a new database process in an isolated container:
$ docker run --name mysql --network moodle -e MYSQL_ROOT_PASSWORD=password -d mysql
Finally, you can run this moodle image and link it to your mysql container:
$ docker run --name my-moodle --network moodle --link mysql:database -p 8080:80 -d aesr/moodle
Access it via http://localhost:8080 or http://host-ip:8080 in a browser.
But while installing moodle I'm getting this error:
Data directory (/var/www/moodledata) cannot be created by the installer.
Maybe Apache doesn't have the proper permissions. I'm running Docker on Windows.
My solution worked on CentOS 7.
Just move moodledata somewhere else, for example:
mkdir /moodledata
chown -R apache:apache /moodledata
Moodle refuses to start the installation when the data directory lives under /var/www, because a folder under the web root can be exposed to the internet.
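If you prefer to keep everything inside Docker, here is a hedged sketch of the same idea using a named volume. The /moodledata path is an assumption, and apache:apache matches the CentOS answer above (Debian-based images typically use www-data instead):
docker volume create moodledata
docker run --name my-moodle --network moodle --link mysql:database -p 8080:80 \
  -v moodledata:/moodledata -d aesr/moodle
docker exec my-moodle chown -R apache:apache /moodledata
Then point the Moodle installer at /moodledata as the data directory.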

Docker container cannot share data with host

I am running an automated test on a docker container that downloads a file as part of the test. The file ends up in the /home/seluser/Downloads folder of the docker container, but I want to be able to access it locally on my macOS host.
However, when I run the following command:
docker run -v /Users/MyUsername/Downloads/MappedFolder:/home/seluser/Downloads -d -P -p 4444:4444 selenium/standalone-chrome:3.7.1-beryllium
The downloaded files don't appear in either the docker container or the host.
As soon as I remove
-v /Users/MyUsername/Downloads/MappedFolder:/home/seluser/Downloads
and end up with
docker run -d -P -p 4444:4444 selenium/standalone-chrome:3.7.1-beryllium
the downloaded file shows up in the docker container
I can't seem to find a way to share that data with my host, so I can access the downloaded file in /Users/MyUsername/Downloads/MappedFolder
You are mounting /Users/MyUsername/Downloads/MappedFolder into the docker container at /home/seluser/Downloads.
The bind mount hides whatever the image already had at that path, so right after starting the container /home/seluser/Downloads only shows the (initially empty) contents of the host directory.
Once the container is running, any file downloaded into /home/seluser/Downloads should appear both inside the container and in the host directory.
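If files still don't show up, a common cause is that the seluser account inside the selenium image cannot write to the bind-mounted host folder. A minimal sketch of one way to rule that out (the uid 1200 for seluser is an assumption about the selenium images, not something stated above):
mkdir -p /Users/MyUsername/Downloads/MappedFolder
chmod 777 /Users/MyUsername/Downloads/MappedFolder   # or, on a Linux host: sudo chown 1200 <folder>  (uid 1200 assumed)
docker run -v /Users/MyUsername/Downloads/MappedFolder:/home/seluser/Downloads -d -P -p 4444:4444 selenium/standalone-chrome:3.7.1-beryllium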

How to securely pass SSH keys to Docker build?

I want to create a Docker image for devs that reproduces our production servers. Those servers are configured by Ansible.
My idea is to run an ansible-pull to apply all the configuration inside the container. The problem is that I need the SSH key to pull the playbook, but I don't want to share the SSH key on the Docker image.
So, there is a way to have the SSH keys on build time without having them on run time?
Nice question. The simple way to do it is by removing the SSH keys after the Ansible stuff in the build - but because Docker stores images as layers, someone could still find the old layer with the keys in it.
If you build this Dockerfile:
FROM ubuntu
COPY ansible-ssh-key.rsa /key.rsa
RUN [ansible stuff]
RUN rm /key.rsa
The final image will have all your Ansible state and the SSH key will be gone but someone could easily run docker history to look at all the image layers, and just start a container from an intermediate layer before the key was deleted, and grab the key.
The trick would be to do something like this and then use Jason Wilder's docker-squash tool to squash the final image. In the squashed image the intermediate layer is gone and there's no way to get at the deleted key.
I'd set up some local file-serving facility available only in your build environment.
E.g. start lighttpd on your build host to serve your pem-files only to local clients.
And in your Dockerfile do add/pull/cleanup in a single run:
RUN curl -sO http://build-host:8888/key.pem && ansible-pull -U myrepo && rm -rf key.pem
In this case it should be done in a single layer, so there should be no trace of key.pem left after layer commit.
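Any static file server works for this. As a hedged sketch, Python's built-in http.server could stand in for lighttpd on the build host, serving the key directory only for the duration of the build (path and port are placeholders):
cd /path/to/keys && python3 -m http.server 8888
Stop the server as soon as the build finishes so the key is not served any longer than needed.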
Another solution is to use the dockito/vault repo, a secret store to be used while building Docker images.
I start a dockito/vault service and an Ubuntu container, attach my private key as a volume, and run:
docker run -it -v ~/.ssh:/vault/.ssh ubuntu /bin/bash -c "echo mysupersecret > /vault/.ssh/key"
docker run -d -p 14242:3000 -v ~/.ssh:/vault/.ssh dockito/vault
And, here is my Dockerfile
FROM ubuntu:14.04
RUN apt-get update -y && \
apt-get install -y curl && \
curl -L $(ip route|awk '/default/{print $3}'):14242/ONVAULT > /usr/local/bin/ONVAULT && \
chmod +x /usr/local/bin/ONVAULT
ENV REV_BREAK_CACHE=1
RUN ONVAULT echo ENV: && env && echo TOKEN ENV && echo $TOKEN
RUN ONVAULT ls -lha ~/.ssh/
RUN ONVAULT cat ~/.ssh/key
You can use Alpine Linux to reduce the final build size, and build the image with:
docker build -f Dockerfile -t mohan08p/VaultTest .
And you are done. You can inspect the image; no secrets are stored inside it, as the directory is empty:
docker run -it mohan08p/VaultTest ls /root/.ssh
This is a good technique for passing the .ssh keys at build time. The only disadvantage is that I need to keep an additional vault service running.
You could mount the SSH keys into the container at runtime:
docker run -v /path/to/ssh/key:/path/to/key/in/container image command

How can I backup a Docker-container with its data-volumes?

I've been using this Docker-image tutum/wordpress to demonstrate a Wordpress website. Recently I found out that the image uses volumes for the MySQL-data.
So the problem is this: If I want to backup and restore the container I can try to commit an image, and then later delete the container, and create a new container from the committed image. But if I do that the volume gets deleted and all my data is gone.
There must be some simple way to backup my container plus its volume-data but I can't find it anywhere.
if I want to revert the container I can try to commit an image, and then later delete the container, and create a new container from the committed image. But if I do that the volume gets deleted and all my data is gone
As the docker user guide explains, data volumes are meant to persist data outside of a container filesystem. This also eases the sharing of data between multiple containers.
While Docker will never delete data in volumes (unless you delete the associated container with docker rm -v), volumes that are not referenced by any docker container are called dangling volumes. Those dangling volumes are difficult to get rid of and difficult to access.
This means that as soon as the last container using a volume is deleted, the data volume becomes dangling and its content difficult to access.
In order to prevent those dangling volumes, the trick is to create an additional docker container using the data volume you want to persist so that there will always be at least that docker container referencing the volume. This way you can delete the docker container running the wordpress app without losing the ease of access to that data volume content.
Such containers are called data volume containers.
There must be some simple way to back up my container plus volume data but I can't find it anywhere.
back up docker images
To back up docker images, use the docker save command that will produce a tar archive that can be used later on to create a new docker image with the docker load command.
back up docker containers
You can back up a docker container by different means
by committing a new docker image based on the docker container current state using the docker commit command
by exporting the docker container file system as a tar archive using the docker export command. You can later on create a new docker image from that tar archive with the docker import command.
Be aware that those commands will only back up the docker container layered file system. This excludes the data volumes.
back up docker data volumes
To back up a data volume you can run a new container using the volume you want to back up and executing the tar command to produce an archive of the volume content as described in the docker user guide.
In your particular case, the data volume is used to store the data for a MySQL server. So if you want to export a tar archive for this volume, you will need to stop the MySQL server first. To do so you will have to stop the wordpress container.
back up the MySQL data
Another way is to connect to the MySQL server remotely and produce a database dump with the mysqldump command. However, for this to work, your MySQL server must be configured to accept remote connections and have a user who is allowed to connect remotely. This might not be the case with the wordpress docker image you are using.
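If MySQL runs inside a container, docker exec side-steps the remote-connection requirement entirely. A minimal sketch, assuming the container has the mysqldump client installed and a MYSQL_ROOT_PASSWORD environment variable set (both are assumptions about your image; the pattern follows the official mysql image documentation):
docker exec <container> sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql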
Edit
Docker recently introduced Docker volume plugins, which allow the handling of volumes to be delegated to plugins implemented by vendors.
The docker run command has a new behavior for the -v option. It is now possible to pass it a volume name. Volumes created in that way are named and easy to reference later on, easing the issues with dangling volumes.
Edit 2
Docker introduced the docker volume prune command to delete all dangling volumes easily.
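For reference, a short hedged sketch of the named-volume workflow mentioned above (volume, container, and password values are just examples):
docker volume create dbdata
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v dbdata:/var/lib/mysql mysql
docker volume ls -f dangling=true    # list dangling (unreferenced) volumes
docker volume prune                  # remove them after confirmation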
UPDATE 2
Raw single volume backup bash script:
#!/bin/bash
# This script allows you to back up a single volume from a container.
# Data in the given volume is saved in the current directory in a tar archive.
CONTAINER_NAME=$1
VOLUME_PATH=$2

usage() {
  echo "Usage: $0 [container name] [volume path]"
  exit 1
}

if [ -z "$CONTAINER_NAME" ]; then
  echo "Error: missing container name parameter."
  usage
fi

if [ -z "$VOLUME_PATH" ]; then
  echo "Error: missing volume path parameter."
  usage
fi

sudo docker run --rm --volumes-from "$CONTAINER_NAME" -v $(pwd):/backup busybox tar cvf /backup/backup.tar "$VOLUME_PATH"
Raw single volume restore bash script:
#!/bin/bash
# This script allows you to restore a single volume to a container.
# Data is restored into the volume at the same path it was backed up from.
NEW_CONTAINER_NAME=$1

usage() {
  echo "Usage: $0 [container name]"
  exit 1
}

if [ -z "$NEW_CONTAINER_NAME" ]; then
  echo "Error: missing container name parameter."
  usage
fi

sudo docker run --rm --volumes-from "$NEW_CONTAINER_NAME" -v $(pwd):/backup busybox tar xvf /backup/backup.tar
Usage can be like this:
$ volume_backup.sh old_container /srv/www
$ sudo docker stop old_container && sudo docker rm old_container
$ sudo docker run -d --name new_container myrepo/new_container
$ volume_restore.sh new_container
Assumptions are: the backup file is named backup.tar, it resides in the same directory as the backup and restore scripts, and the volume path is the same between containers.
UPDATE
It seems to me that backing up volumes from regular containers is no different from backing up volumes from data containers.
Volumes are nothing more than paths linked to a container, so the process is the same.
I don't know whether docker-backup also works for the volumes of one and the same container, but you can use:
sudo docker run --rm --volumes-from yourcontainer -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
and:
sudo docker run --rm --volumes-from yournewcontainer -v $(pwd):/backup busybox tar xvf /backup/backup.tar
END UPDATE
There is this nice tool available which lets you backup and restore docker volumes containers:
https://github.com/discordianfish/docker-backup
if you have a container linked to some container volumes like this:
$ docker run --volumes-from=my-data-container --name my-server ...
you can backup all the volumes like this:
$ docker-backup store my-server-backup.tar my-server
and restore like this:
$ docker-backup restore my-server-backup.tar
Or you can follow the official way:
How to port data-only volumes from one host to another?
If your project uses docker-compose, here is an approach for backing up and restoring your volumes.
docker-compose.yml
Basically you add db-backup and db-restore services to your docker-compose.yml file, and adapt it for the name of your volume. My volume is named dbdata in this example.
version: "3"

services:
  db:
    image: percona:5.7
    volumes:
      - dbdata:/var/lib/mysql

  db-backup:
    image: alpine
    tty: false
    environment:
      - TARGET=dbdata
    volumes:
      - ./backup:/backup
      - dbdata:/volume
    command: sh -c "tar -cjf /backup/$${TARGET}.tar.bz2 -C /volume ./"

  db-restore:
    image: alpine
    environment:
      - SOURCE=dbdata
    volumes:
      - ./backup:/backup
      - dbdata:/volume
    command: sh -c "rm -rf /volume/* /volume/..?* /volume/.[!.]* ; tar -C /volume/ -xjf /backup/$${SOURCE}.tar.bz2"

volumes:
  dbdata:
Avoid corruption
For data consistency, stop your db container before backing up or restoring
docker-compose stop db
Backing up
To back up to the default destination (backup/dbdata.tar.bz2):
docker-compose run --rm db-backup
Or, if you want to specify an alternate target name, do:
docker-compose run --rm -e TARGET=mybackup db-backup
Restoring
To restore from backup/dbdata.tar.bz2, do:
docker-compose run --rm db-restore
Or restore from a specific file using:
docker-compose run --rm -e SOURCE=mybackup db-restore
I adapted commands from https://loomchild.net/2017/03/26/backup-restore-docker-named-volumes/ to create this approach.
If you only need to back up mounted volumes you can just copy the folders from your Docker host.
Note: if you are on Ubuntu, the Docker host is your local machine. If you are on Mac, the Docker host is your virtual machine.
On Ubuntu
You can find all the volume folders here: /var/lib/docker/volumes/, so you can copy them and archive them wherever you want.
On MAC
It's not as easy as on Ubuntu; you need to copy the files out of the VM.
Here is a script that copies all volume folders from the virtual machine (where the Docker server is running) to your local machine. We assume that your docker-machine VM is named default.
docker-machine ssh default sudo cp -v -R /var/lib/docker/volumes/ /home/docker/volumes
docker-machine ssh default sudo chmod -R 777 /home/docker/volumes
docker-machine scp -r default:/home/docker/volumes ./backup_volumes
docker-machine ssh default sudo rm -r /home/docker/volumes
It is going to create a folder ./backup_volumes in your current directory and copy all volumes to this folder.
Here is a script that copies all saved volumes from your local directory (./backup_volumes) back to the Docker host machine:
docker-machine scp -r ./backup_volumes default:/home/docker
docker-machine ssh default sudo mv -f /home/docker/backup_volumes /home/docker/volumes
docker-machine ssh default sudo chmod -R 777 /home/docker/volumes
docker-machine ssh default sudo cp -v -R /home/docker/volumes /var/lib/docker/
docker-machine ssh default sudo rm -r /home/docker/volumes
Now you can check if it works by:
docker volume ls
Let's say your volume name is data_volume. You can use the following commands to backup and restore the volume to and from a docker image named data_image:
To backup:
docker run --rm --mount source=data_volume,destination=/data alpine tar -c -f- data | docker run -i --name data_container alpine tar -x -f-
docker container commit data_container data_image
docker rm data_container
To restore:
docker run --rm data_image tar -c -f- data | docker run -i --rm --mount source=data_volume,destination=/data alpine tar -x -f-
I know this is old, but I realize that there isn't a well-documented solution for pushing a data container (as a backup) to Docker Hub. I just published a short example of how to do so at
https://dzone.com/articles/docker-backup-your-data-volumes-to-docker-hub
Following is the bottom line
The Docker tutorial suggests you can back up and restore the data volume locally. We are going to use this technique and add a few more lines to push the backup to Docker Hub for easy future restoration to any location we desire. So, let's get started. These are the steps to follow:
Back up the data volume from the data container named data-container-backup
docker run --rm --volumes-from data-container-backup --name tmp-backup -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /folderToBackup
Expand this tar file into a new container so we can commit it as part of its image
docker run -d -v $(pwd):/backup --name data-backup ubuntu /bin/sh -c "cd / && tar xvf /backup/backup.tar"
Commit and push the image with a desired tag ($VERSION)
docker commit data-backup repo/data-backup:$VERSION
docker push repo/data-backup:$VERSION
Finally, lets clean up
docker rm data-backup
docker rmi $(docker images -f "dangling=true" -q)
Now we have an image named data-backup in our repo that is simply a filesystem with the backup files and folders. In order to use this image (i.e. restore from backup), we do the following:
Run the data container with the data-backup image
docker run -v /folderToBackup --entrypoint "/bin/sh" --name data-container repo/data-backup:${VERSION}
Run your whatEver image with volumes from the data-container
docker run --volumes-from=data-container repo/whatEver
That's it.
I was surprised there is no documentation for this workaround. I hope someone finds this helpful. I know it took me a while to figure this out.
The following command will run tar in a container with all named data volumes mounted, and redirect the output into a file:
docker run --rm `docker volume list -q | egrep -v '^.{64}$' | awk '{print "-v " $1 ":/mnt/" $1}'` alpine tar -C /mnt -cj . > data-volumes.tar.bz2
Make sure to test the resulting archive in case something went wrong:
tar -tjf data-volumes.tar.bz2
If you just need a simple backup to an archive, you can try my little utility: https://github.com/loomchild/volume-backup
Example
Backup:
docker run -v some_volume:/volume -v /tmp:/backup --rm loomchild/volume-backup backup archive1
will archive volume named some_volume to /tmp/archive1.tar.bz2 archive file
Restore:
docker run -v some_volume:/volume -v /tmp:/backup --rm loomchild/volume-backup restore archive1
will wipe and restore volume named some_volume from /tmp/archive1.tar.bz2 archive file.
More info: https://medium.com/@loomchild/backup-restore-docker-named-volumes-350397b8e362
I have created a tool to orchestrate and launch backup of data and mysql containers, simply called docker-backup. There is even a ready-to-use image on the docker hub.
It's mainly written in Bash as it is mainly orchestration. It uses duplicity for the actual backup engine. You can currently backup to FTP(S) and Amazon S3.
The configuration is quite simple: write a config file in YAML describing what to backup and where, and here you go!
For data containers, it automatically mounts the volumes shared by the container to back up and processes them. For MySQL containers, it links to them, executes a mysqldump bundled with your container, and processes the result.
I wrote it because I use Docker-Cloud which is not up-to-date with recent docker-engine releases and because I wanted to embrace the Docker way by not including any process of backup inside my application containers.
If you want a complete backup, you will need to perform a few steps:
Commit the container to an image
Save the image
Backup the container's volume by creating a tar file of the volume's mount point in the container.
Repeat steps 1-3 for the database container as well.
Note that doing just a Docker commit of the container to an image does NOT include volumes attached to the container (ref: Docker commit documentation).
"The commit operation will not include any data contained in volumes mounted inside the container."
We can use an image to back up all our volumes. I wrote a script to help with backup and restore; furthermore, it saves the data into a compressed tar file on the local disk. I use this script to save my Postgres and Cassandra volumes into the same image. For example, if we have a pg_data volume for Postgres and a cassandra_data volume for Cassandra, we can call the following script twice: once with the pg_data argument and once with the cassandra_data argument.
backup script:
#!/bin/bash
GENERATE_IMAGE="data_image"
TEMPRORY_CONTAINER_NAME="data_container"
VOLUME_TO_BACKUP=${1}
RANDOM=$(head -200 /dev/urandom | cksum | cut -f1 -d " ")

if docker images | grep -q ${GENERATE_IMAGE}; then
  docker run --rm --mount source=${VOLUME_TO_BACKUP},destination=/${VOLUME_TO_BACKUP} ${GENERATE_IMAGE} tar -c -f- ${VOLUME_TO_BACKUP} | docker run -i --name ${TEMPRORY_CONTAINER_NAME} ${GENERATE_IMAGE} tar -x -f-
else
  docker run --rm --mount source=${VOLUME_TO_BACKUP},destination=/${VOLUME_TO_BACKUP} alpine tar -c -f- ${VOLUME_TO_BACKUP} | docker run -i --name ${TEMPRORY_CONTAINER_NAME} alpine tar -x -f-
fi

docker container commit ${TEMPRORY_CONTAINER_NAME} ${GENERATE_IMAGE}
docker rm ${TEMPRORY_CONTAINER_NAME}

if [ -f "$(pwd)/backup/${VOLUME_TO_BACKUP}.tar" ]; then
  docker run --rm -v $(pwd)/backup:/backup ${GENERATE_IMAGE} tar cvf /backup/${VOLUME_TO_BACKUP}_${RANDOM}.tar /${VOLUME_TO_BACKUP}
else
  docker run --rm -v $(pwd)/backup:/backup ${GENERATE_IMAGE} tar cvf /backup/${VOLUME_TO_BACKUP}.tar /${VOLUME_TO_BACKUP}
fi
example:
./backup.sh cassandra_data
./backup.sh pg_data
Restore script:
#!/bin/bash
GENERATE_IMAGE="data_image"
TEMPRORY_CONTAINER_NAME="data_container"
VOLUME_TO_RESTORE=${1}
docker run --rm ${GENERATE_IMAGE} tar -c -f- ${VOLUME_TO_RESTORE} | docker run -i --rm --mount source=${VOLUME_TO_RESTORE},destination=/${VOLUME_TO_RESTORE} alpine tar -x -f-
example:
./restore.sh cassandra_data
./restore.sh pg_data
The problem: you want to back up your container image WITH the data volumes in it, but this option is not available out of the box. The straightforward, trivial way would be to copy the volume paths, back up the Docker image, reload it, and link the two back together, but this solution seems clumsy and is neither sustainable nor maintainable - you would need to create a cron job that runs this flow each time.
Solution: use dockup, a Docker image that backs up your Docker container volumes and uploads them to S3 (Docker + Backup = dockup). dockup will use your AWS credentials to create a new bucket (named per the environment variable), take the configured volumes, and tarball, gzip, time-stamp, and upload them to the S3 bucket.
Steps:
Configure the docker-compose.yml and attach the env.txt configuration file to it. The data should be uploaded to a dedicated, secured S3 bucket and be ready to be reloaded during DRP executions. In order to verify which volume paths to configure, run docker inspect <service-name> and locate the volumes:
"Volumes": {
"/etc/service-example": {},
"/service-example": {}
},
Edit the content of the configuration file env.txt, and place it on the project path:
AWS_ACCESS_KEY_ID=<key_here>
AWS_SECRET_ACCESS_KEY=<secret_here>
AWS_DEFAULT_REGION=us-east-1
BACKUP_NAME=service-backup
PATHS_TO_BACKUP=/etc/service-example /service-example
S3_BUCKET_NAME=docker-backups.example.com
RESTORE=false
Run the dockup container
$ docker run --rm \
--env-file env.txt \
--volumes-from <service-name> \
--name dockup tutum/dockup:latest
Afterwards, verify that your S3 bucket contains the relevant data.
docker container run --rm --volumes-from your_db_container -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /your_named_volume
run creates the new container
--rm removes the container right after the tar cvf /backup/backup.tar /your_named_volume command finishes
--volumes-from mounts all the volumes of your_db_container (including your_named_volume) into the new container
-v $(pwd):/backup creates a bind mount between your current host directory ($(pwd)) and a /backup directory in your new container
tar cvf /backup/backup.tar /your_named_volume creates the archive
source: backup a volume
If you have a case as simple as mine was you can do the following:
Create a Dockerfile that extends the base image of your container
I assume that your volumes are mapped to your filesystem, so you can just add those files/folders to your image using ADD folder destination
Done!
For example, assuming you have the data from the volumes in your home directory, e.g. at /home/mydata, you can run the following:
DOCKERFILE=/home/dockerfile.bk-myimage
docker build --rm --no-cache -t $IMAGENAME:$TAG -f $DOCKERFILE /home/pirate
Where your DOCKERFILE points to a file like this:
FROM user/myimage
MAINTAINER Danielo Rodríguez Rivero <example@gmail.com>
WORKDIR /opt/data
ADD mydata .
The rest of the stuff is inherited from the base image. You can now push that image to Docker Cloud and your users will have the data available directly in their containers.
If you like entering arcane operators from the command line, you’ll love these manual container backup techniques. Keep in mind, there’s a faster and more efficient way to backup containers that’s just as effective. I've written instructions here: https://www.morpheusdata.com/blog/2017-03-02-how-to-create-a-docker-backup-with-morpheus
Step 1: Add a Docker Host to Any Cloud
As explained in a tutorial on the Morpheus support site, you can add a Docker host to the cloud of your choice in a matter of seconds. Start by choosing Infrastructure on the main Morpheus navigation bar. Select Hosts at the top of the Infrastructure window, and click the “+Container Hosts” button at the top right.
To back up a Docker host to a cloud via Morpheus, navigate to the Infrastructure screen and open the “+Container Hosts” menu.
Choose a container host type on the menu, select a group, and then enter data in the five fields: Name, Description, Visibility, Select a Cloud and Enter Tags (optional). Click Next, and then configure the host options by choosing a service plan. Note that the Volume, Memory, and CPU count fields will be visible only if the plan you select has custom options enabled.
Here is where you add and size volumes, set memory size and CPU count, and choose a network. You can also configure the OS username and password, the domain name, and the hostname, which by default is the container name you entered previously. Click Next, and then add any Automation Workflows (optional). Finally, review your settings and click Complete to save them.
Step 2: Add Docker Registry Integration to Public or Private Clouds
Adam Hicks describes in another Morpheus tutorial how simple it is to integrate with a private Docker Registry. (No added configuration is required to use Morpheus to provision images with Docker’s public hub using the public Docker API.)
Select Integrations under the Admin tab of the main navigation bar, and then choose the “+New Integration” button on the right side of the screen. In the Integration window that appears, select Docker Repository in the Type drop-down menu, enter a name and add the private registry API endpoint. Supply a username and password for the registry you’re using, and click the Save Changes button.
Integrate a Docker Registry with a private cloud via the Morpheus “New Integration” dialog box.
To provision the integration you just created, choose Docker under Type in the Create Instance dialog, select the registry in the Docker Registry drop-down menu under the Configure tab, and then continue provisioning as you would any Docker container.
Step 3: Manage Backups
Once you’ve added the Docker host and integrated the registry, a backup will be configured and performed automatically for each instance you provision. Morpheus support provides instructions for viewing backups, creating an instance backup, and creating a server backup.
I would suggest using restic. It's an easy-to-use backup application that can back up to various targets such as local file systems, S3-compatible storage services or a restic REST target server, to mention some of the options. Using resticker, you will have an already prepared container that can be scheduled with cron syntax: https://github.com/djmaze/resticker
For those who want to learn more about restic and its usage, I wrote a blog post series on the topic, including usage examples:
https://remo-hoeppli.medium.com/restic-backup-i-simple-and-beautiful-backups-bdbbc178669d
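As a taste of plain restic without the resticker wrapper, a minimal sketch (repository path and volume directory are placeholders; restic prompts for a repository password):
restic init --repo /srv/restic-repo
restic --repo /srv/restic-repo backup /var/lib/docker/volumes/myvolume
restic --repo /srv/restic-repo snapshots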
I have been using this bash script to back up all my volumes. The script takes the container name as its single argument and automatically finds all of its mounted volumes.
Then it creates one tar archive for each volume.
#!/bin/bash
container=$1
dirname="backup-$container-$(date +"%FT%H%M%z")"

mkdir $dirname
cd $dirname

volume_paths=( $(docker inspect $container | jq '.[] | .Mounts[].Name, .Mounts[].Source') )
volume_count=$(( ${#volume_paths[@]} / 2 ))

for i in $(seq $volume_count); do
  volume_name=${volume_paths[i-1]}
  volume_name=$(echo $volume_name | tr -d '"')
  volume_path=${volume_paths[(i-1)+volume_count]}
  volume_path=$(echo $volume_path | tr -d '"')
  echo "$volume_name : $volume_path"
  # create an archive named after the volume
  tar -zcvf "$volume_name.tar" $volume_path
done
The code is available at Github.
This is a volume-folder backup approach.
If you have Docker registry infrastructure, this method is very helpful.
It uses the Docker registry to move the archive around easily.
#!/bin/bash
# volume folder backup script
# common bash variables. Set these variables before running the script.
REPO=harbor.otcysk.org:20443/levee
VFOLDER=/data/mariadb
TAG=mariadb1
# tar and gzip the local volume folder
tar cvfz volume-backup.tar.gz $VFOLDER
# copy the archive into a volume-backup container
# (the archive must be in the current folder)
docker run -d -v $(pwd):/temp --name volume-backup ubuntu \
bash -c "cd / && cp /temp/volume-backup.tar.gz ."
#commit for pushing into REPO
docker commit volume-backup $REPO/volume-backup:$TAG
# check the archive inside this container (optional):
# docker run --rm -it --entrypoint bash --name check-volume-backup \
#   $REPO/volume-backup:$TAG
#push into REPO
docker push $REPO/volume-backup:$TAG
On another server:
# pull the image on the other server
docker pull $REPO/volume-backup:$TAG
# restore the files into the other server's filesystem
docker run --rm -v $VFOLDER:$VFOLDER --name volume-backup $REPO/volume-backup:$TAG \
bash -c "cd / && tar xvfz volume-backup.tar.gz"
Run your image which uses this volume folder.
You could easily build a single image that contains both the runtime image and the volume archive,
but I do not recommend it for various reasons (image size, entry command, ...).

Using SSH keys inside docker container

I have an app that executes various fun stuff with Git (like running git clone & git push) and I'm trying to docker-ize it.
I'm running into an issue though where I need to be able to add an SSH key to the container for the container 'user' to use.
I tried copying it into /root/.ssh/, changing $HOME, creating a git ssh wrapper, and still no luck.
Here is the Dockerfile for reference:
#DOCKER-VERSION 0.3.4
from ubuntu:12.04
RUN apt-get update
RUN apt-get install python-software-properties python g++ make git-core openssh-server -y
RUN add-apt-repository ppa:chris-lea/node.js
RUN echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install nodejs -y
ADD . /src
ADD ../../home/ubuntu/.ssh/id_rsa /root/.ssh/id_rsa
RUN cd /src; npm install
EXPOSE 808:808
CMD [ "node", "/src/app.js"]
app.js runs the git commands like git pull
It's a harder problem if you need to use SSH at build time. For example if you're using git clone, or in my case pip and npm to download from a private repository.
The solution I found is to add your keys using the --build-arg flag. Then you can use the new experimental --squash command (added 1.13) to merge the layers so that the keys are no longer available after removal. Here's my solution:
Build command
$ docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .
Dockerfile
FROM python:3.6-slim
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
apt-get install -y \
git \
openssh-server \
libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub
# Avoid cache purge by adding requirements first
ADD ./requirements.txt /app/requirements.txt
WORKDIR /app/
RUN pip install -r requirements.txt
# Remove SSH keys
RUN rm -rf /root/.ssh/
# Add the rest of the files
ADD . .
CMD python manage.py runserver
Update: If you're using Docker 1.13 and have experimental features on you can append --squash to the build command which will merge the layers, removing the SSH keys and hiding them from docker history.
Turns out when using Ubuntu, the ssh_config isn't correct. You need to add
RUN echo " IdentityFile ~/.ssh/id_rsa" >> /etc/ssh/ssh_config
to your Dockerfile in order to get it to recognize your ssh key.
Note: only use this approach for images that are private and will always be!
The ssh key remains stored within the image, even if you remove the key in a layer command after adding it (see comments in this post).
In my case this is ok, so this is what I am using:
# Setup for ssh onto github
RUN mkdir -p /root/.ssh
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN echo "Host github.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
If you are using Docker Compose an easy choice is to forward SSH agent like that:
something:
container_name: something
volumes:
- $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
environment:
SSH_AUTH_SOCK: /ssh-agent
or equivalently, if using docker run:
$ docker run --mount type=bind,source=$SSH_AUTH_SOCK,target=/ssh-agent \
--env SSH_AUTH_SOCK=/ssh-agent \
some-image
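On Docker Desktop for Mac, the host's $SSH_AUTH_SOCK cannot be bind-mounted directly; the workaround documented by Docker Desktop, to the best of my knowledge, is the special socket path /run/host-services/ssh-auth.sock, roughly:
docker run --rm -it \
  -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock \
  -e SSH_AUTH_SOCK=/run/host-services/ssh-auth.sock \
  some-image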
Expanding on Peter Grainger's answer, I was able to use a multi-stage build, available since Docker 17.05. The official page states:
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
Keeping this in mind here is my example of Dockerfile including three build stages. It's meant to create a production image of client web application.
# Stage 1: get sources from npm and git over ssh
FROM node:carbon AS sources
ARG SSH_KEY
ARG SSH_KEY_PASSPHRASE
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan bitbucket.org > /root/.ssh/known_hosts && \
echo "${SSH_KEY}" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
WORKDIR /app/
COPY package*.json yarn.lock /app/
RUN eval `ssh-agent -s` && \
printf "${SSH_KEY_PASSPHRASE}\n" | ssh-add $HOME/.ssh/id_rsa && \
yarn --pure-lockfile --mutex file --network-concurrency 1 && \
rm -rf /root/.ssh/
# Stage 2: build minified production code
FROM node:carbon AS production
WORKDIR /app/
COPY --from=sources /app/ /app/
COPY . /app/
RUN yarn build:prod
# Stage 3: include only built production files and host them with Node Express server
FROM node:carbon
WORKDIR /app/
RUN yarn add express
COPY --from=production /app/dist/ /app/dist/
COPY server.js /app/
EXPOSE 33330
CMD ["node", "server.js"]
.dockerignore repeats contents of .gitignore file (it prevents node_modules and resulting dist directories of the project from being copied):
.idea
dist
node_modules
*.log
Command example to build an image:
$ docker build -t ezze/geoport:0.6.0 \
--build-arg SSH_KEY="$(cat ~/.ssh/id_rsa)" \
--build-arg SSH_KEY_PASSPHRASE="my_super_secret" \
./
If your private SSH key doesn't have a passphrase just specify empty SSH_KEY_PASSPHRASE argument.
This is how it works:
1). On the first stage only package.json, yarn.lock files and private SSH key are copied to the first intermediate image named sources. In order to avoid further SSH key passphrase prompts it is automatically added to ssh-agent. Finally yarn command installs all required dependencies from NPM and clones private git repositories from Bitbucket over SSH.
2). The second stage builds and minifies source code of web application and places it in dist directory of the next intermediate image named production. Note that source code of installed node_modules is copied from the image named sources produced on the first stage by this line:
COPY --from=sources /app/ /app/
Probably it also could be the following line:
COPY --from=sources /app/node_modules/ /app/node_modules/
This way we only take the node_modules directory from the first intermediate image; the SSH_KEY and SSH_KEY_PASSPHRASE arguments are no longer present. Everything else required for the build is copied from our project directory.
3). On the third stage we reduce the size of the final image, which will be tagged as ezze/geoport:0.6.0, by including only the dist directory from the second intermediate image named production and installing Node Express to start a web server.
Listing images gives an output like this:
REPOSITORY TAG IMAGE ID CREATED SIZE
ezze/geoport 0.6.0 8e8809c4e996 3 hours ago 717MB
<none> <none> 1f6518644324 3 hours ago 1.1GB
<none> <none> fa00f1182917 4 hours ago 1.63GB
node carbon b87c2ad8344d 4 weeks ago 676MB
where the non-tagged images correspond to the first and second intermediate build stages.
If you run
$ docker history ezze/geoport:0.6.0 --no-trunc
you will not see any mentions of SSH_KEY and SSH_KEY_PASSPHRASE in the final image.
In order to inject your SSH key into a container, you have multiple solutions:
Using a Dockerfile with the ADD instruction, you can inject it during your build process
Simply doing something like cat id_rsa | docker run -i <image> sh -c 'cat > /root/.ssh/id_rsa'
Using the docker cp command which allows you to inject files while a container is running.
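For the docker cp route, a minimal sketch (the container name mycontainer and key path are placeholders):
docker exec mycontainer mkdir -p /root/.ssh
docker cp ~/.ssh/id_rsa mycontainer:/root/.ssh/id_rsa
docker exec mycontainer chmod 600 /root/.ssh/id_rsa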
This is available since the 18.09 release!
According to the documentation:
The docker build has a --ssh option to allow the Docker Engine to
forward SSH agent connections.
Here is an example of Dockerfile using SSH in the container:
# syntax=docker/dockerfile:experimental
FROM alpine
# Install ssh client and git
RUN apk add --no-cache openssh-client git
# Download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
Once the Dockerfile is created, use the --ssh option for connectivity with the SSH agent:
$ docker build --ssh default .
Also, take a look at https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
One cross-platform solution is to use a bind mount to share the host's .ssh folder to the container:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>
Similar to agent forwarding this approach will make the public keys accessible to the container. An additional upside is that it works with a non-root user too and will get you connected to GitHub. One caveat to consider, however, is that all contents (including private keys) from the .ssh folder will be shared so this approach is only desirable for development and only for trusted container images.
Starting from docker API 1.39+ (Check API version with docker version) docker build allows the --ssh option with either an agent socket or keys to allow the Docker Engine to forward SSH agent connections.
Build Command
export DOCKER_BUILDKIT=1
docker build --ssh default=~/.ssh/id_rsa .
Dockerfile
# syntax=docker/dockerfile:experimental
FROM python:3.7
# Install ssh client (if required)
RUN apt-get update -qq
RUN apt-get install openssh-client -y
# Download public key for github.com
RUN --mount=type=ssh mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
More Info:
https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds
https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md#run---mounttypessh
This line is a problem:
ADD ../../home/ubuntu/.ssh/id_rsa /root/.ssh/id_rsa
When specifying the files you want to copy into the image you can only use relative paths - relative to the directory where your Dockerfile is. So you should instead use:
ADD id_rsa /root/.ssh/id_rsa
And put the id_rsa file into the same directory where your Dockerfile is.
Check this out for more details: http://docs.docker.io/reference/builder/#add
Docker containers should be seen as 'services' of their own. To separate concerns you should separate functionalities:
1) Data should be in a data container: use a linked volume to clone the repo into. That data container can then be linked to the service needing it.
2) Use a container to run the git cloning task (i.e. its only job is cloning), linking the data container to it when you run it.
3) Same for the ssh key: put it in a volume (as suggested above) and link it to the git clone service when you need it.
That way, both the cloning task and the key are ephemeral and only active when needed.
Now if your app itself is a git interface, you might want to consider github or bitbucket REST APIs directly to do your work: that's what they were designed for.
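For example, GitHub's REST API can hand you a tarball of a private repo with just a token; a minimal sketch, where owner, repo, branch, and the GITHUB_TOKEN variable are placeholders:
curl -sL -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/repos/<owner>/<repo>/tarball/<branch> -o repo.tar.gz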
We had a similar problem when doing npm install at docker build time.
Inspired by the solution from Daniel van Flymen and combining it with git URL rewriting, we found a somewhat simpler method for authenticating npm install against private github repos - we used oauth2 tokens instead of keys.
In our case, the npm dependencies were specified as "git+https://github.com/..."
For authentication in the container, the URLs need to be rewritten to be suitable either for SSH authentication (ssh://git@github.com/) or for token authentication (https://${GITHUB_TOKEN}@github.com/).
Build command:
docker build -t sometag --build-arg GITHUB_TOKEN=$GITHUB_TOKEN .
Unfortunately, I'm on Docker 1.9, so the --squash option is not there yet; eventually it should be added.
Dockerfile:
FROM node:5.10.0
ARG GITHUB_TOKEN
#Install dependencies
COPY package.json ./
# add rewrite rule to authenticate github user
RUN git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"
RUN npm install
# remove the secret token from the git config file, remember to use --squash option for docker build, when it becomes available in docker 1.13
RUN git config --global --unset url."https://${GITHUB_TOKEN}@github.com/".insteadOf
# Expose the ports that the app uses
EXPOSE 8000
#Copy server and client code
COPY server /server
COPY clients /clients
Forward the ssh authentication socket to the container:
docker run --rm -ti \
-v $SSH_AUTH_SOCK:/tmp/ssh_auth.sock \
-e SSH_AUTH_SOCK=/tmp/ssh_auth.sock \
-w /src \
my_image
Your script will be able to perform a git clone.
Extra: if you want the cloned files to belong to a specific user, you need to use chown, since using a user other than root inside the container will make git fail.
You can do this by publishing some additional variables to the container's environment:
docker run ...
-e OWNER_USER=$(id -u) \
-e OWNER_GROUP=$(id -g) \
...
After you clone you must execute chown $OWNER_USER:$OWNER_GROUP -R <source_folder> to set the proper ownership before you leave the container so the files are accessible by a non-root user outside the container.
You can use a multi-stage build to build containers. This is the approach you can take:
Stage 1: build an image with SSH
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
add env attribute in your compose file:
environment:
- SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
then pass args from build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
And remove the intermediate image afterwards for security (the LABEL stage=sshImage makes it easy to find and prune).
Hope this helps, cheers.
I ran into the same problem today, and a slightly modified version of the previous posts proved more useful to me:
docker run -it -v ~/.ssh/id_rsa:/root/.my-key:ro image /bin/bash
(Note the read-only flag, so the container cannot mess with my SSH key in any case.)
Inside container I can now run:
ssh-agent bash -c "ssh-add ~/.my-key; git clone <gitrepourl> <target>"
So I don't get the Bad owner or permissions on /root/.ssh/.. error which was noted by @kross
This issue is a really annoying one. Since you can't add/copy any file outside the Dockerfile context, it's impossible to just link ~/.ssh/id_rsa into the image's /root/.ssh/id_rsa, even though you definitely need a key during the build of your Docker image for SSH-based things like git clone from a private repo.
Anyway, I found a workaround - not very elegant, but it did work for me.
In your Dockerfile:
add this file as /root/.ssh/id_rsa
do what you want, such as git clone, composer...
rm /root/.ssh/id_rsa at the end
A script to do it in one shot:
copy your key to the folder holding the Dockerfile
docker build
rm the copied key
Any time you have to run a container from this image with some SSH requirement, just add -v to the run command, like:
docker run -v ~/.ssh/id_rsa:/root/.ssh/id_rsa --name container image command
This solution leaves no private key in either your project source or the built Docker image, so there is no security issue to worry about anymore.
As eczajk already commented in Daniel van Flymen's answer it does not seem to be safe to remove the keys and use --squash, as they still will be visible in the history (docker history --no-trunc).
Instead with Docker 18.09, you can now use the "build secrets" feature. In my case I cloned a private git repo using my hosts SSH key with the following in my Dockerfile:
# syntax=docker/dockerfile:experimental
[...]
RUN --mount=type=ssh git clone [...]
[...]
To be able to use this, you need to enable the new BuildKit backend prior to running docker build:
export DOCKER_BUILDKIT=1
And you need to add the --ssh default parameter to docker build.
More info about this here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
At first, some meta noise
There is dangerously wrong advice in two highly upvoted answers here.
I commented, but since I have lost many days to this, please MIND:
Do not echo the private key into a file (meaning: echo "$ssh_prv_key" > /root/.ssh/id_ed25519). This will destroy the needed line format, at least in my case.
Use COPY or ADD instead. See Docker Load key “/root/.ssh/id_rsa”: invalid format for details.
This was also confirmed by another user:
I get Error loading key "/root/.ssh/id_ed25519": invalid format. Echo will
remove newlines/tack on double quotes for me. Is this only for ubuntu
or is there something different for alpine:3.10.3?
1. A working way that keeps the private key in the image (not so good!)
If the private key is stored in the image, you need to pay attention that you delete the public key from the git website, or that you do not publish the image. If you take care of this, this is secure. See below (2.) for a better way where you could also "forget to pay attention".
The Dockerfile looks as follows:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y git
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh
COPY /.ssh/id_ed25519 /root/.ssh/id_ed25519
RUN chmod 600 /root/.ssh/id_ed25519 && \
apt-get -yqq install openssh-client && \
ssh-keyscan -t ed25519 -H gitlab.com >> /root/.ssh/known_hosts
RUN git clone git#gitlab.com:GITLAB_USERNAME/test.git
RUN rm -r /root/.ssh
2. A working way that does not keep the private key in the image (good!)
The following is the more secure way of the same thing, using "multi stage build" instead.
If you need an image that has the git repo directory without the private key stored in one of its layers, you need two images, and you only use the second in the end. That means, you need FROM two times, and you can then copy only the git repo directory from the first to the second image, see the official guide "Use multi-stage builds".
We use "alpine" as the smallest possible base image which uses apk instead of apt-get; you can also use apt-get with the above code instead using FROM ubuntu:latest.
The Dockerfile looks as follows:
# first image only to download the git repo
FROM alpine as MY_TMP_GIT_IMAGE
RUN apk add --no-cache git
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh
COPY /.ssh/id_ed25519 /root/.ssh/id_ed25519
RUN chmod 600 /root/.ssh/id_ed25519
RUN apk add --no-cache openssh-client && ssh-keyscan -t ed25519 -H gitlab.com >> /root/.ssh/known_hosts
RUN git clone git#gitlab.com:GITLAB_USERNAME/test.git
RUN rm -r /root/.ssh
# Start of the second image
FROM MY_BASE_IMAGE
COPY --from=MY_TMP_GIT_IMAGE /MY_GIT_REPO ./MY_GIT_REPO
We see here that FROM is just a namespace, it is like a header for the lines below it and can be addressed with an alias. Without an alias, --from=0 would be the first image (=FROM namespace).
You could now publish or share the second image, as the private key is not in its layers, and you would not necessarily need to remove the public key from the git website after one usage! Thus, you do not need to create a new key pair at every cloning of the repo. Of course, be aware that a passwordless private key is still insecure if someone might get a hand on your data in another way. If you are not sure about this, better remove the public key from the server after usage, and have a new key pair at every run.
A guide how to build the image from the Dockerfile
Install Docker Desktop; or use docker inside WSL2 or Linux in a VirtualBox; or use docker in a standalone Linux partition / hard drive.
Open a command prompt (PowerShell, terminal, ...).
Go to the directory of the Dockerfile.
Create a subfolder ".ssh/".
For security reasons, create a new public and private SSH key pair - even if you already have another one lying around - for each Dockerfile run. In the command prompt, in your Dockerfile's folder, enter (mind, this overwrites without asking):
Write-Output "y" | ssh-keygen -q -t ed25519 -f ./.ssh/id_ed25519 -N '""'
(if you use PowerShell) or
echo "y" | ssh-keygen -q -t ed25519 -f ./.ssh/id_ed25519 -N ''
(if you do not use PowerShell).
Your key pair will now be in the subfolder .ssh/. It is up to you whether you use that subfolder at all, you can also change the code to COPY id_ed25519 /root/.ssh/id_ed25519; then your private key needs to be in the Dockerfile's directory that you are in.
Open the public key in an editor, copy the content and publish it to your server (e.g. GitHub / GitLab --> profile --> SSH keys). You can choose whatever name and end date. The final readable comment of the public key string (normally your computer name if you did not add a -C comment in the parameters of ssh-keygen) is not important, just leave it there.
Start (Do not forget the "." at the end which is the build context):
docker build -t test .
Only for 1.):
After the run, remove the public key from the server (most important, and at best at once). The script removes the private key from the image, and you may also remove the private key from your local computer, since you should never use the key pair again. The reason: someone could get the private key from the image even if it was removed from the image. Quoting a user's comment:
If anyone gets a hold of your
image, they can retrieve the key... even if you delete that file in a
later layer, b/c they can go back to Step 7 when you added it
The attacker could wait with this private key until you use the key pair again.
Only for 2.):
After the run, since the second image is the only image remaining after a build, we do not necessarily need to remove the key pair from client and host. We still have a small risk that the passwordless private key is taken from a local computer somewhere. That is why you may still remove the public key from the git server. You may also remove any stored private keys. But it is probably not needed in many projects where the main aim is rather to automate building the image, and less the security.
At last, some more meta noise
As to the dangerously wrong advice in the two highly upvoted answers here that use the problematic echo-of-the-private-key approach, here are the votes at the time of writing:
https://stackoverflow.com/a/42125241/11154841 176 upvotes (top 1)
https://stackoverflow.com/a/48565025/11154841 55 upvotes (top 5)
While the question at 326k views, got a lot more: 376 upvotes
We see here that something must be wrong in the answers, as the votes on the top answer are not even at the level of the votes on the question.
There was just one small and unvoted comment at the end of the comment list of the top answer naming the same echo-of-the-private-key problem (which is also quoted in this answer). And that critical comment was made three years after the answer.
I have upvoted the top answer myself and only realised later that it would not work for me. Thus, swarm intelligence is working, but on a low flame? If anyone can explain to me why echoing the private key might work for others but not for me, please comment. Otherwise, 326k views (minus two comments) would have overlooked or left aside the error of the top answer. I would not write such a long text here if that echo-of-the-private-key code line had not cost me many working days of frustrating trial and error with code picked from everything on the net.
'you can selectively let remote servers access your local ssh-agent as if it was running on the server'
https://developer.github.com/guides/using-ssh-agent-forwarding/
You can also bind-mount your .ssh directory between the host and the container. I don't know whether this method has any security implications, but it may be the easiest one. Something like this should work:
$ sudo docker run -it -v /root/.ssh:/root/.ssh someimage bash
Remember that docker typically runs with sudo (unless you've set it up otherwise); if that is the case, you'll be using root's SSH keys.
A concise overview of the challenges of SSH inside Docker containers is detailed here. For connecting to trusted remotes from within a container without leaking secrets there are a few ways:
SSH agent forwarding (Linux-only, not straight-forward)
Inbuilt SSH with BuildKit (Experimental, not yet supported by Compose)
Using a bind mount to expose ~/.ssh to container. (Development only, potentially insecure)
Docker Secrets (Cross-platform, adds complexity)
Beyond these there's also the possibility of using a key-store running in a separate docker container accessible at runtime when using Compose. The drawback here is additional complexity due to the machinery required to create and manage a keystore such as Vault by HashiCorp.
For SSH key use in a stand-alone Docker container see the methods linked above and consider the drawbacks of each depending on your specific needs. If, however, you're running inside Compose and want to share a key to an app at runtime (reflecting practicalities of the OP) try this:
Create a docker-compose.env file and add it to your .gitignore file.
Update your docker-compose.yml and add env_file for service requiring the key.
Access the public key from the environment at application runtime, e.g. process.env.DEPLOYER_RSA_PUBKEY in the case of a Node.js application.
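A hedged sketch of what those steps could look like; the service name app and the key material are placeholders, and DEPLOYER_RSA_PUBKEY is the variable named above:
# docker-compose.env  (listed in .gitignore)
DEPLOYER_RSA_PUBKEY=ssh-rsa AAAA...your-public-key... deployer

# docker-compose.yml (excerpt)
services:
  app:
    build: .
    env_file:
      - docker-compose.env
At runtime the application then reads the value from its environment, e.g. process.env.DEPLOYER_RSA_PUBKEY in Node.js.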
The above approach is ideal for development and testing and, while it could satisfy production requirements, in production you're better off using one of the other methods identified above.
Additional resources:
Docker Docs: Use bind mounts
Docker Docs: Manage sensitive data with Docker secrets
Stack Overflow: Using SSH keys inside docker container
Stack Overflow: Using ssh-agent with docker on macOS
If you don't care about the security of your SSH keys, there are many good answers here. If you do, the best answer I found was from a link in a comment above to this GitHub comment by diegocsandrim. So that others are more likely to see it, and just in case that repo ever goes away, here is an edited version of that answer:
Most solutions here end up leaving the private key in the image. This is bad, as anyone with access to the image has access to your private key. Since we don't know enough about the behavior of squash, this may still be the case even if you delete the key and squash that layer.
We generate a pre-signed URL to access the key with the aws s3 cli and limit the access to about 5 minutes; we save this pre-signed URL into a file in the repo directory, then in the Dockerfile we add it to the image.
In the Dockerfile we have a RUN command that does all of these steps: use the pre-signed URL to get the ssh key, run npm install, and remove the ssh key.
By doing this in one single command, the ssh key is never stored in any layer. The pre-signed URL is stored, but that is not a problem because the URL is no longer valid after 5 minutes.
The build script looks like:
# build.sh
aws s3 presign s3://my_bucket/my_key --expires-in 300 > ./pre_sign_url
docker build -t my-service .
Dockerfile looks like this:
FROM node
COPY . .
RUN eval "$(ssh-agent -s)" && \
wget -i ./pre_sign_url -q -O - > ./my_key && \
chmod 700 ./my_key && \
ssh-add ./my_key && \
ssh -o StrictHostKeyChecking=no git@github.com || true && \
npm install --production && \
rm ./my_key && \
rm -rf ~/.ssh/*
ENTRYPOINT ["npm", "run"]
CMD ["start"]
A simple and secure way to achieve this without saving your key in a Docker image layer, or going through ssh_agent gymnastics is:
As one of the steps in your Dockerfile, create a .ssh directory by adding:
RUN mkdir -p /root/.ssh
Below that indicate that you would like to mount the ssh directory as a volume:
VOLUME [ "/root/.ssh" ]
Ensure that your container's ssh_config knows where to find the identity file by adding this line:
RUN echo " IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config
Expose your local user's .ssh directory to the container at runtime:
docker run -v ~/.ssh:/root/.ssh -it image_name
Or in your docker-compose.yml, add this under the service's volumes key:
- "~/.ssh:/root/.ssh"
Your final Dockerfile should contain something like:
FROM node:6.9.1
RUN mkdir -p /root/.ssh
RUN echo " IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config
VOLUME [ "/root/.ssh" ]
EXPOSE 3000
CMD [ "launch" ]
I put together a very simple solution that works for my use case where I use a "builder" docker image to build an executable that gets deployed separately. In other words my "builder" image never leaves my local machine and only needs access to private repos/dependencies during the build phase.
You do not need to change your Dockerfile for this solution.
When you run your container, mount your ~/.ssh directory (this avoids having to bake the keys directly into the image, but rather ensures they're only available to a single container instance for a short period of time during the build phase). In my case I have several build scripts that automate my deployment.
Inside my build-and-package.sh script I run the container like this:
# do some script stuff before
...
docker run --rm \
-v ~/.ssh:/root/.ssh \
-v "$workspace":/workspace \
-w /workspace builder \
bash -cl "./scripts/build-init.sh $executable"
...
# do some script stuff after (i.e. pull the built executable out of the workspace, etc.)
The build-init.sh script looks like this:
#!/bin/bash
set -eu
executable=$1
# start the ssh agent
eval $(ssh-agent) > /dev/null
# add the ssh key (ssh key should not have a passphrase)
ssh-add /root/.ssh/id_rsa
# execute the build command
swift build --product $executable -c release
So instead of executing the swift build command (or whatever build command is relevant to your environment) directly in the docker run command, we execute the build-init.sh script, which starts the ssh-agent, adds our ssh key to the agent, and finally executes our swift build command.
Note 1: For this to work you'll need to make sure your ssh key does not have a passphrase, otherwise the ssh-add /root/.ssh/id_rsa line will ask for a passphrase and interrupt the build script.
Note 2: Make sure you have the proper file permissions set on your script files so that they can be run.
Hopefully this provides a simple solution for others with a similar use case.
In later versions of Docker (17.05 and up) you can use multi-stage builds. This is the safest option, as earlier build stages can only ever be used by the subsequent stages and are then discarded.
See the answer to my Stack Overflow question for more info.
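A minimal multi-stage sketch, assuming a Node.js project and a disposable deploy key named deploy_key in the build context (only the first stage ever sees it, and that stage is discarded):
FROM node AS builder
WORKDIR /app
# The key exists only in this intermediate stage
COPY deploy_key /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package.json ./
RUN npm install --production
FROM node:slim
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
CMD ["npm", "start"]
Because the key is still cached in the builder stage's layers on the build machine, a short-lived deploy key is still advisable.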
I'm trying to work the problem the other way: adding a public ssh key to an image. But in my trials, I discovered that "docker cp" is for copying FROM a container to a host. Item 3 in the answer by creak seems to be saying you can use docker cp to inject files into a container. See https://docs.docker.com/engine/reference/commandline/cp/
excerpt
Copy files/folders from a container's filesystem to the host path.
Paths are relative to the root of the filesystem.
Usage: docker cp CONTAINER:PATH HOSTPATH
Copy files/folders from the PATH to the HOSTPATH
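For what it's worth, current Docker releases support copying in both directions, so injecting a public key into a running container should work along these lines (the container name is a placeholder, and the target directory must already exist in the container):
docker cp ~/.ssh/id_rsa.pub my_container:/root/.ssh/authorized_keys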
You can pass the authorized keys into your container using a shared folder and set permissions using a Dockerfile like this:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir -p /var/run/sshd /root/.ssh
EXPOSE 22
# /root/auth is bind-mounted at run time, so the public key is copied in when
# the container starts rather than at build time
CMD cp /root/auth/id_rsa.pub /root/.ssh/authorized_keys && \
    chmod 700 /root/.ssh && \
    chmod 400 /root/.ssh/authorized_keys && \
    chown root:root /root/.ssh/authorized_keys && \
    /usr/sbin/sshd -D
Your docker run then contains something like the following to share an auth directory on the host (holding the authorized key) with the container and to publish the ssh port, which will be accessible through port 7001 on the host.
-d -v /home/thatsme/dockerfiles/auth:/root/auth --publish=127.0.0.1:7001:22
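Once the container is up, you should then be able to reach it from the host with the matching private key, e.g.:
ssh -p 7001 root@127.0.0.1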
You may want to look at https://github.com/jpetazzo/nsenter which appears to be another way to open a shell on a container and execute commands within a container.
Admittedly late to the party, but how about this, which will make your host operating system's keys available to root inside the container, on the fly:
docker run -v ~/.ssh:/mnt -it my_image /bin/bash -c "ln -s /mnt /root/.ssh; ssh user@10.20.30.40"
I'm not in favour of using Dockerfile to install keys since iterations of your container may leave private keys behind.
You can use secrets to manage any sensitive data which a container
needs at runtime but you don’t want to store in the image or in source
control, such as:
Usernames and passwords
TLS certificates and keys
SSH keys
Other important data such as the name of a database or internal server
Generic strings or binary content (up to 500 kb in size)
https://docs.docker.com/engine/swarm/secrets/
I was trying to figure out how to add signing keys to a container to use during runtime (not build) and came across this question. Docker secrets seem to be the solution for my use case, and since nobody has mentioned it yet I'll add it.
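A minimal sketch, assuming swarm mode and placeholder names for the secret, service and image:
docker swarm init                                  # once, if this node is not already part of a swarm
docker secret create host_ssh_key ~/.ssh/id_rsa    # store the private key as a secret
docker service create --name my_app --secret host_ssh_key my_image
# Inside the container the key appears as a file at /run/secrets/host_ssh_key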
In my case I had a problem with nodejs and 'npm i' from a remote repository. I fixed it by adding the 'node' user to the nodejs container and setting 700 permissions on ~/.ssh in the container.
Dockerfile:
# added: run as the unprivileged node user
USER node
COPY run.sh /usr/local/bin/
CMD ["run.sh"]
run.sh:
#!/bin/bash
chmod 700 -R ~/.ssh/; #added the part
docker-compose.yml:
nodejs:
  build: ./nodejs/10/
  container_name: nodejs
  restart: always
  ports:
    - "3000:3000"
  volumes:
    - ../www/:/var/www/html/:delegated
    - ./ssh:/home/node/.ssh # added the part
  links:
    - mailhog
  networks:
    - work-network
After that, it started working.