Today I started learning Ansible, and the first thing I came across while trying to run the ping module on a remote server was
192.168.1.100 | UNREACHABLE! => {
"changed": false,
"msg": "(u'192.168.1.100', <paramiko.rsakey.RSAKey object at 0x103c8d250>, <paramiko.rsakey.RSAKey object at 0x103c62f50>)",
"unreachable": true
}
So I manually set up the SSH key. I think I ran into this because no write-up or tutorial explains this step: either the authors don't need it, or they set it up manually before writing the tutorial or recording the video.
So I think it would be great if we could automate this step too.
If SSH keys haven't been set up, you can always prompt for an SSH password:
-k, --ask-pass ask for connection password
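For example, a quick ad-hoc ping that prompts for the SSH password could look like this (the inventory path and host pattern are placeholders; the default ssh connection plugin also needs sshpass installed on the control node for password prompts):
# prompt for the connection password instead of relying on a pre-shared key
ansible all -i inventory.ini -u root -m ping -k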
I use these commands for setting up keys on CentOS 6.8 under the root account:
cat ~/.ssh/id_rsa.pub | ssh ${user}@${1} -o StrictHostKeyChecking=no 'mkdir .ssh > /dev/null 2>&1; restorecon -R /root/; cat >> .ssh/authorized_keys'
ansible $1 -u $user -i etc/ansible/${hosts} -m raw -a "yum -y install python-simplejson"
ansible $1 -u $user -i etc/ansible/${hosts} -m yum -a "name=libselinux-python state=latest"
${1} is the first parameter passed to the script and should be the machine name.
I set ${user} elsewhere, but you could make it a parameter also.
${hosts} is my hosts file, and it has a default, but can be overridden with a parameter.
The restorecon command is to appease selinux. I just hardcoded it to run against the /root/ directory, and I can't remember exactly why. If you run this to setup a non-root user, I think that command is nonsense.
I think those installs, python-simplejson and libselinux-python are needed.
This will spam the authorized_keys files with duplicate entries if you run it repeatedly. There are probably better ways, but this is my quick and dirty run once script.
I made some slight variations in the script for CentOS 7 and Ubuntu.
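As a hedged alternative to the cat-over-ssh line above, ssh-copy-id (where available) skips keys that are already installed, which avoids the duplicate authorized_keys entries:
# copies the public key to the remote host, skipping entries that already exist
ssh-copy-id -i ~/.ssh/id_rsa.pub -o StrictHostKeyChecking=no ${user}@${1}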
Not sure what types of servers these are, but nearly all Ansible tutorials cover the fact that Ansible uses SSH and you need SSH access to use it.
Depending on how you are provisioning the server in the first place you may be able to inject an ssh key on first boot, but if you are starting with password-only login you can use the --ask-pass flag when running Playbooks. You could then have your first play use the authorized_key module to set up your key on the server.
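A minimal sketch of such a bootstrap play, assuming a public key at ~/.ssh/id_rsa.pub; the file name, inventory path and target user are placeholders:
cat > bootstrap-ssh.yml <<'EOF'
- hosts: all
  tasks:
    - name: Install the control machine's public key for the connecting user
      authorized_key:
        user: root
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
EOF
# prompt for the SSH password just this once; later runs can rely on the key
ansible-playbook -i inventory.ini --ask-pass bootstrap-ssh.yml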
I want to create a Docker image for devs that reproduces our production servers. Those servers are configured by Ansible.
My idea is to run an ansible-pull to apply all the configuration inside the container. The problem is that I need the SSH key to pull the playbook, but I don't want to share the SSH key on the Docker image.
So, is there a way to have the SSH keys at build time without having them at run time?
Nice question. The simple way to do it is by removing the SSH keys after the Ansible stuff in the build - but because Docker stores images as layers, someone could still find the old layer with the keys in it.
If you build this Dockerfile:
FROM ubuntu
COPY ansible-ssh-key.rsa /key.rsa
RUN [ansible stuff]
RUN rm /key.rsa
The final image will have all your Ansible state and the SSH key will be gone, but someone could easily run docker history to look at all the image layers, start a container from an intermediate layer before the key was deleted, and grab the key.
The trick would be to do something like this and then use Jason Wilder's docker-squash tool to squash the final image. In the squashed image the intermediate layer is gone and there's no way to get at the deleted key.
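To see the leak for yourself, and roughly how the squash step is driven (the docker-squash invocation below follows that tool's README and is an assumption, so check it against the version you install):
# every layer is listed, including the one where the key still existed
docker history --no-trunc my-ansible-image
# squash the layers so the intermediate layer containing the key disappears
docker save my-ansible-image | docker-squash -t my-ansible-image:squashed | docker load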
I'd set up some local file-serving facility available only in your build environment.
E.g. start lighttpd on your build host to serve your pem files only to local clients.
And in your Dockerfile do the add/pull/cleanup in a single RUN:
RUN curl -sO http://build-host:8888/key.pem && ansible-pull -U myrepo && rm -rf key.pem
In this case it should be done in a single layer, so there should be no trace of key.pem left after layer commit.
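If lighttpd isn't handy, any throwaway static file server reachable only from the build network will do; this stand-in uses Python's built-in http.server and is only a sketch (paths and port are placeholders):
# serve the directory holding key.pem for the duration of the build, then stop
cd /path/to/keys && python3 -m http.server 8888 --bind 0.0.0.0 &
SERVER_PID=$!
docker build -t myimage /path/to/build-context
kill $SERVER_PID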
This is another solution, using the dockito/vault repo:
Secret store to be used on Docker image building.
I create a dockito/vault service and an Ubuntu image where I attach my private key to the volume and run it as a process using:
docker run -it -v ~/.ssh:/vault/.ssh ubuntu /bin/bash -c "echo mysupersecret > /vault/.ssh/key"
docker run -d -p 14242:3000 -v ~/.ssh:/vault/.ssh dockito/vault
And, here is my Dockerfile
FROM ubuntu:14.04
RUN apt-get update -y && \
apt-get install -y curl && \
curl -L $(ip route|awk '/default/{print $3}'):14242/ONVAULT > /usr/local/bin/ONVAULT && \
chmod +x /usr/local/bin/ONVAULT
ENV REV_BREAK_CACHE=1
RUN ONVAULT echo ENV: && env && echo TOKEN ENV && echo $TOKEN
RUN ONVAULT ls -lha ~/.ssh/
RUN ONVAULT cat ~/.ssh/key
You can use Alpine Linux to reduce the final build size, and build the image as:
docker build -f Dockerfile -t mohan08p/VaultTest .
And you are done. You can inspect the image; the secrets are not stored inside it, as it is empty:
docker run -it mohan08p/VaultTest ls /root/.ssh
This is a good technique for passing the .ssh contents at build time. The only disadvantage is that I need to keep the additional vault service running.
You could mount the SSH keys into the container at runtime:
docker run -v /path/to/ssh/key:/path/to/key/in/container image command
I have GitLab & GitLab CI set up to host and test some of my private repos. For my composer modules under this system, I have Satis set up to resolve my private packages.
Obviously these private packages require an ssh key to clone them, and I have this working in the terminal - I can run composer install and get these packages, so long as I have the key added with ssh-add in the shell.
However, when running my tests in GitLab CI, if a project has any of these dependencies the tests will not complete as my GitLab instance needs authentication to get the deps (obviously), and the test fails saying Host key verification failed.
My question is: how do I set this up so that when the runner runs the test it can authenticate to GitLab without a password? I have tried putting a password-less SSH key in my runner's ~/.ssh folder, however the build won't even add the key; eval `ssh-agent -s` followed by ssh-add seems to fail, saying the agent isn't running.
See also other solutions:
git submodule permission (see Marco A.'s answer)
job token and override repo in git config (see a544jh's answer)
Here is a full how-to with SSH keys:
General Design
generating a pair of SSH keys
adding the private one as a secure environment variable of your project
making the private one available to your test scripts on GitLab-CI
adding the public one as a deploy key on each of your private dependencies
Generating a pair of public and private SSH keys
Generate a pair of public and private SSH keys without passphrase:
ssh-keygen -b 4096 -C "<name of your project>" -N "" -f /tmp/name_of_your_project.key
Adding the private SSH key to your project
You need to add the key as a secure environment variable to your project as
following:
browse https://<gitlab_host>/<group>/<project_name>/variables
click on "Add a variable"
fill the text field Key with SSH_PRIVATE_KEY
fill the text field Value with the private SSH key itself
click on "Save changes"
Exposing the private SSH key to your test scripts
In order to make your private key available to your test scripts you need to add
the following to your .gitlab-ci.yml file:
before_script:
# install ssh-agent
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
# run ssh-agent
- eval $(ssh-agent -s)
# add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
- ssh-add <(echo "$SSH_PRIVATE_KEY")
# disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
# WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
- mkdir -p ~/.ssh
- echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
Code Snippet comes from GitLab documentation
Adding the public SSH key as a deploy key to all your private dependencies
You need to register the public SSH key as deploy key to all your private
dependencies as following:
browse https://<gitlab_host>/<group>/<dependency_name>/deploy_keys
click on "New deploy key"
fill the text field Title with the name of your project
fill the text field Key with the public SSH key itself
click on "Create deploy key"
If you don't want to fiddle around with ssh keys or submodules, you can override the repo in git's configuration to authenticate with the job token instead (in gitlab-ci.yml):
before_script:
- git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/repo.git".insteadOf git@gitlab.example.com:group/repo.git
I'm posting this as an answer since others weren't completely clear and/or detailed IMHO
Starting from GitLab 8.12+, assuming the submodule repo is in the same server as the one requesting it, you can now:
Set up the repo with git submodules as usual (git submodule add git#somewhere:folder/mysubmodule.git)
Modify your .gitmodules file as follows
[submodule "mysubmodule"]
path = mysubmodule
url = ../../group/mysubmodule.git
where ../../group/mysubmodule.git is a relative path from your repository to the submodule's one.
Add the following lines to gitlab-ci.yml
variables:
  GIT_SUBMODULE_STRATEGY: recursive
to instruct the runner to fetch all submodules before the build.
Caveat: if your runner seems to ignore the GIT_SUBMODULE_STRATEGY directive, you should probably consider updating it.
(source: https://docs.gitlab.com/ce/ci/git_submodules.html)
The currently accepted answer embeds Gitlab-specific requirements into my .gitmodules file. This forces a specific directory layout for local development and would complicate moving to another version control platform.
Instead, I followed the advice in Juddling's answer. Here's a more complete answer.
My .gitmodules file has the following contents:
[submodule "myproject"]
url = git@git.myhost.com:mygroup/myproject.git
In my gitlab-ci.yml I have the following:
build:
  stage: build
  before_script:
    - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@git.myhost.com/".insteadOf "git@git.myhost.com:"
    - git submodule sync && git submodule update --init
The trailing / and : are critical in the git config line, since we are mapping from SSH authentication to HTTPS. This tripped me up for a while with "Illegal port number" errors.
I like this solution because it embeds the Gitlab-specific requirements in a Gitlab-specific file, which is ignored by everything else.
I used deploy tokens to solve this issue, as setting up SSH keys for a test runner seems a little long winded.
git clone http://<username>:<deploy_token>@gitlab.example.com/tanuki/awesome_project.git
The deploy tokens are per project and are read only.
One way to solve this without changing the git repository's structure is to perform the following steps:
1. get ssh host keys
Get the ssh host keys of the server that you are running on. For gitlab.com:
run ssh-keyscan gitlab.com > known_hosts
check that ssh-keygen -lf known_hosts agrees with the fingerprints reported here.
copy the contents of known_hosts and paste them into a variable called SSH_KNOWN_HOSTS on the repository.
This step is only needed once.
2. configure the job to use ssh
before_script:
- git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com".insteadOf "git@gitlab.com:"
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
The "ssh://git#gitlab.com" bit may be different if you are trying to do git clone git#gitlab.com: or pip install -e git+ssh://git#gitlab.com/...; adjust it accordingly to your needs.
At this point, your CI is able to use ssh to fetch from another (private) repository.
3. [Bonus DRY]
Use this trick to write it generically:
.enable_ssh: &enable_ssh |-
git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com".insteadOf "ssh://git@gitlab.com"
mkdir -p ~/.ssh
chmod 700 ~/.ssh
echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
chmod 644 ~/.ssh/known_hosts
and enable it on jobs that need it
test:
  stage: test
  before_script:
    - *enable_ssh
  script:
    - ...
If your CI runner runs in a container, you can use a deploy token. Doc: https://docs.gitlab.com/ee/user/project/deploy_tokens/#git-clone-a-repository
git clone https://<username>:<deploy_token>@gitlab.example.com/tanuki/awesome_project.git
Create your deploy token
Add your token in CI pipeline Variable
Make sure your container has git installed and rewrite the git URL with insteadOf:
image: docker:latest
before_script:
- apk add --no-cache curl jq python3 py3-pip git
- git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/".insteadOf 'git@gitlab.example.com:'
For the URL rewriting, see: https://docs.gitlab.com/ee/user/project/working_with_projects.html#authenticate-git-fetches
I had a scenario where I had to use my ssh key in 3 different scripts, so I put the ssh key stuff in a single shell script and called it first, before the other 3 scripts. This ended up not working, I think due to the ssh-agent not persisting between shell scripts, or something to that effect. I ended up actually just outputting the private key into the ~/.ssh/id_rsa file, which will for sure persist to other scripts.
.gitlab-ci.yml
script:
- ci/init_ssh.sh
- git push # or whatever you need ssh for
ci/init_ssh.sh
# only run in docker:
[[ ! -e /.dockerenv ]] && exit 0
mkdir -p ~/.ssh
echo "$GITLAB_RUNNER_SSH_KEY" > ~/.ssh/id_rsa
chmod 400 ~/.ssh/id_rsa
echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > /.ssh/config
It works like a charm!
If you are using an alpine-based image (maybe docker:latest or docker:dind), your before_script might look like this:
before_script:
- apk add --no-cache openssh-client git
- mkdir -p /.ssh && touch /.ssh/known_hosts
- ssh-keyscan gitlab.com >> /.ssh/known_hosts
- echo $SSH_KEY | base64 -d >> /.ssh/id_rsa && chmod 600 /.ssh/id_rsa
- git clone git@git.myhost.com:mygroup/myproject.git
Adding this to .gitlab-ci.yml did the trick for me.
(as mentioned here: https://docs.gitlab.com/ee/user/project/new_ci_build_permissions_model.html#dependent-repositories)
before_script:
  - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
(I tried setting up SSH_PRIVATE_KEY as mentioned in one of the answers above; it won't work.)
Gitlab 15.9.0 introduces an update to the pre-defined variable CI_JOB_TOKEN. Now you can control other projects' access to your private repository, see the release note and documentation.
Once access is granted, you can clone private repositories by adding this line to your job's scripts or before_scripts.
git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>
Unfortunately, this still does not play nicely with the submodule integration with Gitlab CI/CD. Instead, I do this in my projects.
# .gitlab-ci.yml
default:
  before_script:
    - |
      cat << EOF > ~/.gitconfig
      [url "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>.git"]
        insteadOf = git@gitlab.example.com/<namespace>/<project>.git
      EOF
    - git submodule update --init --recursive
And this is what my .gitmodules would look like
[submodule "terraform-eks"]
path = modules/<project>
url = git@gitlab.example.com/<namespace>/<project>.git
branch = main
Hope this helps!
Seems there is finally a reasonable solution.
In short, as of GitLab 8.12, all you need to do is use relative paths in .gitmodules, and git submodule update --init will simply work.
I have an app that executes various fun stuff with Git (like running git clone & git push) and I'm trying to docker-ize it.
I'm running into an issue though where I need to be able to add an SSH key to the container for the container 'user' to use.
I tried copying it into /root/.ssh/, changing $HOME, creating a git ssh wrapper, and still no luck.
Here is the Dockerfile for reference:
#DOCKER-VERSION 0.3.4
from ubuntu:12.04
RUN apt-get update
RUN apt-get install python-software-properties python g++ make git-core openssh-server -y
RUN add-apt-repository ppa:chris-lea/node.js
RUN echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install nodejs -y
ADD . /src
ADD ../../home/ubuntu/.ssh/id_rsa /root/.ssh/id_rsa
RUN cd /src; npm install
EXPOSE 808:808
CMD [ "node", "/src/app.js"]
app.js runs the git commands like git pull
It's a harder problem if you need to use SSH at build time. For example if you're using git clone, or in my case pip and npm to download from a private repository.
The solution I found is to add your keys using the --build-arg flag. Then you can use the new experimental --squash command (added 1.13) to merge the layers so that the keys are no longer available after removal. Here's my solution:
Build command
$ docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .
Dockerfile
FROM python:3.6-slim
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
apt-get install -y \
git \
openssh-server \
libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub
# Avoid cache purge by adding requirements first
ADD ./requirements.txt /app/requirements.txt
WORKDIR /app/
RUN pip install -r requirements.txt
# Remove SSH keys
RUN rm -rf /root/.ssh/
# Add the rest of the files
ADD . .
CMD python manage.py runserver
Update: If you're using Docker 1.13 and have experimental features on you can append --squash to the build command which will merge the layers, removing the SSH keys and hiding them from docker history.
Turns out when using Ubuntu, the ssh_config isn't correct. You need to add
RUN echo " IdentityFile ~/.ssh/id_rsa" >> /etc/ssh/ssh_config
to your Dockerfile in order to get it to recognize your ssh key.
Note: only use this approach for images that are private and will always be!
The ssh key remains stored within the image, even if you remove the key in a layer command after adding it (see comments in this post).
In my case this is ok, so this is what I am using:
# Setup for ssh onto github
RUN mkdir -p /root/.ssh
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN echo "Host github.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
If you are using Docker Compose, an easy option is to forward the SSH agent like this:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
or equivalently, if using docker run:
$ docker run --mount type=bind,source=$SSH_AUTH_SOCK,target=/ssh-agent \
--env SSH_AUTH_SOCK=/ssh-agent \
some-image
Expanding on Peter Grainger's answer, I was able to use a multi-stage build, available since Docker 17.05. The official page states:
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
Keeping this in mind, here is my example of a Dockerfile with three build stages. It's meant to create a production image of a client web application.
# Stage 1: get sources from npm and git over ssh
FROM node:carbon AS sources
ARG SSH_KEY
ARG SSH_KEY_PASSPHRASE
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan bitbucket.org > /root/.ssh/known_hosts && \
echo "${SSH_KEY}" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
WORKDIR /app/
COPY package*.json yarn.lock /app/
RUN eval `ssh-agent -s` && \
printf "${SSH_KEY_PASSPHRASE}\n" | ssh-add $HOME/.ssh/id_rsa && \
yarn --pure-lockfile --mutex file --network-concurrency 1 && \
rm -rf /root/.ssh/
# Stage 2: build minified production code
FROM node:carbon AS production
WORKDIR /app/
COPY --from=sources /app/ /app/
COPY . /app/
RUN yarn build:prod
# Stage 3: include only built production files and host them with Node Express server
FROM node:carbon
WORKDIR /app/
RUN yarn add express
COPY --from=production /app/dist/ /app/dist/
COPY server.js /app/
EXPOSE 33330
CMD ["node", "server.js"]
.dockerignore repeats the contents of the .gitignore file (it prevents the node_modules and resulting dist directories of the project from being copied):
.idea
dist
node_modules
*.log
Command example to build an image:
$ docker build -t ezze/geoport:0.6.0 \
--build-arg SSH_KEY="$(cat ~/.ssh/id_rsa)" \
--build-arg SSH_KEY_PASSPHRASE="my_super_secret" \
./
If your private SSH key doesn't have a passphrase just specify empty SSH_KEY_PASSPHRASE argument.
This is how it works:
1) In the first stage, only the package.json and yarn.lock files and the private SSH key are copied to the first intermediate image named sources. To avoid further SSH key passphrase prompts, the key is automatically added to ssh-agent. Finally, the yarn command installs all required dependencies from NPM and clones private Git repositories from Bitbucket over SSH.
2) The second stage builds and minifies the source code of the web application and places it in the dist directory of the next intermediate image named production. Note that the source code of the installed node_modules is copied from the image named sources produced in the first stage by this line:
COPY --from=sources /app/ /app/
It could probably also be the following line:
COPY --from=sources /app/node_modules/ /app/node_modules/
Here we only take the node_modules directory from the first intermediate image; the SSH_KEY and SSH_KEY_PASSPHRASE arguments are no longer present. Everything else required for the build is copied from our project directory.
3) In the third stage we reduce the size of the final image, which will be tagged as ezze/geoport:0.6.0, by including only the dist directory from the second intermediate image named production and installing Node Express to start a web server.
Listing images gives an output like this:
REPOSITORY TAG IMAGE ID CREATED SIZE
ezze/geoport 0.6.0 8e8809c4e996 3 hours ago 717MB
<none> <none> 1f6518644324 3 hours ago 1.1GB
<none> <none> fa00f1182917 4 hours ago 1.63GB
node carbon b87c2ad8344d 4 weeks ago 676MB
where the non-tagged images correspond to the first and second intermediate build stages.
If you run
$ docker history ezze/geoport:0.6.0 --no-trunc
you will not see any mentions of SSH_KEY and SSH_KEY_PASSPHRASE in the final image.
In order to inject your SSH key into a container, you have multiple solutions:
Using a Dockerfile with the ADD instruction, you can inject it during your build process
Simply doing something like cat id_rsa | docker run -i <image> sh -c 'cat > /root/.ssh/id_rsa'
Using the docker cp command which allows you to inject files while a container is running.
This has been available since the 18.09 release!
According to the documentation:
The docker build has a --ssh option to allow the Docker Engine to
forward SSH agent connections.
Here is an example of Dockerfile using SSH in the container:
# syntax=docker/dockerfile:experimental
FROM alpine
# Install ssh client and git
RUN apk add --no-cache openssh-client git
# Download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
Once the Dockerfile is created, use the --ssh option for connectivity with the SSH agent:
$ docker build --ssh default .
Also, take a look at https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
One cross-platform solution is to use a bind mount to share the host's .ssh folder to the container:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>
Similar to agent forwarding this approach will make the public keys accessible to the container. An additional upside is that it works with a non-root user too and will get you connected to GitHub. One caveat to consider, however, is that all contents (including private keys) from the .ssh folder will be shared so this approach is only desirable for development and only for trusted container images.
Starting from Docker API 1.39+ (check the API version with docker version), docker build allows the --ssh option with either an agent socket or keys, to let the Docker Engine forward SSH agent connections.
Build Command
export DOCKER_BUILDKIT=1
docker build --ssh default=~/.ssh/id_rsa .
Dockerfile
# syntax=docker/dockerfile:experimental
FROM python:3.7
# Install ssh client (if required)
RUN apt-get update -qq
RUN apt-get install openssh-client -y
# Download public key for github.com
RUN --mount=type=ssh mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
More Info:
https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds
https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md#run---mounttypessh
This line is a problem:
ADD ../../home/ubuntu/.ssh/id_rsa /root/.ssh/id_rsa
When specifying the files you want to copy into the image you can only use relative paths - relative to the directory where your Dockerfile is. So you should instead use:
ADD id_rsa /root/.ssh/id_rsa
And put the id_rsa file into the same directory where your Dockerfile is.
Check this out for more details: http://docs.docker.io/reference/builder/#add
Docker containers should be seen as 'services' of their own. To separate concerns you should separate functionalities:
1) Data should be in a data container: use a linked volume to clone the repo into. That data container can then be linked to the service needing it.
2) Use a container to run the git cloning task (i.e. its only job is cloning), linking the data container to it when you run it.
3) Same for the SSH key: put it in a volume (as suggested above) and link it to the git clone service when you need it.
That way, both the cloning task and the key are ephemeral and only active when needed.
Now if your app itself is a git interface, you might want to consider github or bitbucket REST APIs directly to do your work: that's what they were designed for.
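A rough sketch of that separation using a named volume in place of an old-style data container; the repo URL is a placeholder and the alpine/git helper image is an assumption:
# a named volume to hold the cloned repository
docker volume create repo-data
# a short-lived container whose only job is cloning, with the key mounted read-only
docker run --rm \
  -v repo-data:/srv/repo \
  -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro \
  -e GIT_SSH_COMMAND="ssh -o StrictHostKeyChecking=no" \
  alpine/git clone git@example.com:group/project.git /srv/repo
# the application container only sees the cloned data, never the key
docker run -v repo-data:/srv/repo my-app-image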
We had a similar problem when doing npm install at Docker build time.
Inspired by Daniel van Flymen's solution and combining it with Git URL rewriting, we found a slightly simpler method for authenticating npm install from private GitHub repos: we used OAuth2 tokens instead of keys.
In our case, the npm dependencies were specified as "git+https://github.com/..."
For authentication in the container, the URLs need to be rewritten to be suitable either for SSH authentication (ssh://git@github.com/) or token authentication (https://${GITHUB_TOKEN}@github.com/)
Build command:
docker build -t sometag --build-arg GITHUB_TOKEN=$GITHUB_TOKEN .
Unfortunately, I'm on Docker 1.9, so the --squash option is not there yet; eventually it needs to be added.
Dockerfile:
FROM node:5.10.0
ARG GITHUB_TOKEN
#Install dependencies
COPY package.json ./
# add rewrite rule to authenticate github user
RUN git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"
RUN npm install
# remove the secret token from the git config file, remember to use --squash option for docker build, when it becomes available in docker 1.13
RUN git config --global --unset url."https://${GITHUB_TOKEN}@github.com/".insteadOf
# Expose the ports that the app uses
EXPOSE 8000
#Copy server and client code
COPY server /server
COPY clients /clients
Forward the ssh authentication socket to the container:
docker run --rm -ti \
-v $SSH_AUTH_SOCK:/tmp/ssh_auth.sock \
-e SSH_AUTH_SOCK=/tmp/ssh_auth.sock \
-w /src \
my_image
Your script will be able to perform a git clone.
Extra: if you want the cloned files to belong to a specific user, you need to use chown, since using a user other than root inside the container will make git fail.
You can do this by publishing some additional variables to the container's environment:
docker run ...
-e OWNER_USER=$(id -u) \
-e OWNER_GROUP=$(id -g) \
...
After you clone you must execute chown $OWNER_USER:$OWNER_GROUP -R <source_folder> to set the proper ownership before you leave the container so the files are accessible by a non-root user outside the container.
You can use a multi-stage build to build containers. This is the approach you can take:
Stage 1: build an image with SSH
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
Add the env attribute in your compose file:
environment:
- SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
then pass args from build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
And remove the intermediate container for security.
This will help you. Cheers.
I ran into the same problem today, and with a slightly modified version of the previous posts I found this approach more useful to me:
docker run -it -v ~/.ssh/id_rsa:/root/.my-key:ro image /bin/bash
(Note the read-only flag, so the container will not mess with my SSH key in any case.)
Inside container I can now run:
ssh-agent bash -c "ssh-add ~/.my-key; git clone <gitrepourl> <target>"
So I don't get that Bad owner or permissions on /root/.ssh/.. error which was noted by @kross
This issue is a really annoying one. You can't add or copy any file from outside the Dockerfile context, which means it's impossible to just link ~/.ssh/id_rsa into the image's /root/.ssh/id_rsa, even though you definitely need a key for SSH-based things like git clone from a private repo link during the build of your Docker image.
Anyway, I found a workaround; it's not very elegant, but it did work for me.
in your dockerfile:
add this file as /root/.ssh/id_rsa
do what you want, such as git clone, composer...
rm /root/.ssh/id_rsa at the end
A script to do it in one shot:
cp your key to the folder holding dockerfile
docker build
rm the copied key
Any time you have to run a container from this image with some SSH requirements, just add -v to the run command, like:
docker run -v ~/.ssh/id_rsa:/root/.ssh/id_rsa --name container image command
This solution leaves no private key in either your project source or the built Docker image, so there is no longer any security issue to worry about.
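A minimal sketch of that one-shot script, assuming the Dockerfile ADDs id_rsa from the build context and removes it at the end as described above (image name and key path are placeholders):
#!/bin/bash
set -e
# copy the key next to the Dockerfile so it is inside the build context
cp ~/.ssh/id_rsa ./id_rsa
# build; the Dockerfile adds the key, uses it, and rm's it in its last step
docker build -t myimage .
# remove the copied key from the build context again
rm -f ./id_rsa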
As eczajk already commented on Daniel van Flymen's answer, it does not seem to be safe to remove the keys and use --squash, as they will still be visible in the history (docker history --no-trunc).
Instead, with Docker 18.09, you can now use the "build secrets" feature. In my case I cloned a private Git repo using my host's SSH key with the following in my Dockerfile:
# syntax=docker/dockerfile:experimental
[...]
RUN --mount=type=ssh git clone [...]
[...]
To be able to use this, you need to enable the new BuildKit backend prior to running docker build:
export DOCKER_BUILDKIT=1
And you need to add the --ssh default parameter to docker build.
More info about this here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
At first, some meta noise
There is dangerously wrong advice in two highly upvoted answers here.
I commented, but since I have lost many days with this, please MIND:
Do not echo the private key into a file (meaning: echo "$ssh_prv_key" > /root/.ssh/id_ed25519). This will destroy the needed line format, at least in my case.
Use COPY or ADD instead. See Docker Load key “/root/.ssh/id_rsa”: invalid format for details.
This was also confirmed by another user:
I get Error loading key "/root/.ssh/id_ed25519": invalid format. Echo will
remove newlines/tack on double quotes for me. Is this only for ubuntu
or is there something different for alpine:3.10.3?
1. A working way that keeps the private key in the image (not so good!)
If the private key is stored in the image, you need to pay attention that you delete the public key from the git website, or that you do not publish the image. If you take care of this, this is secure. See below (2.) for a better way where you could also "forget to pay attention".
The Dockerfile looks as follows:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y git
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh
COPY /.ssh/id_ed25519 /root/.ssh/id_ed25519
RUN chmod 600 /root/.ssh/id_ed25519 && \
apt-get -yqq install openssh-client && \
ssh-keyscan -t ed25519 -H gitlab.com >> /root/.ssh/known_hosts
RUN git clone git@gitlab.com:GITLAB_USERNAME/test.git
RUN rm -r /root/.ssh
2. A working way that does not keep the private key in the image (good!)
The following is the more secure way of the same thing, using "multi stage build" instead.
If you need an image that has the git repo directory without the private key stored in one of its layers, you need two images, and you only use the second in the end. That means, you need FROM two times, and you can then copy only the git repo directory from the first to the second image, see the official guide "Use multi-stage builds".
We use "alpine" as the smallest possible base image which uses apk instead of apt-get; you can also use apt-get with the above code instead using FROM ubuntu:latest.
The Dockerfile looks as follows:
# first image only to download the git repo
FROM alpine as MY_TMP_GIT_IMAGE
RUN apk add --no-cache git
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh
COPY /.ssh/id_ed25519 /root/.ssh/id_ed25519
RUN chmod 600 /root/.ssh/id_ed25519
RUN apk add --no-cache openssh-client && ssh-keyscan -t ed25519 -H gitlab.com >> /root/.ssh/known_hosts
RUN git clone git@gitlab.com:GITLAB_USERNAME/test.git
RUN rm -r /root/.ssh
# Start of the second image
FROM MY_BASE_IMAGE
COPY --from=MY_TMP_GIT_IMAGE /MY_GIT_REPO ./MY_GIT_REPO
We see here that FROM is just a namespace, it is like a header for the lines below it and can be addressed with an alias. Without an alias, --from=0 would be the first image (=FROM namespace).
You could now publish or share the second image, as the private key is not in its layers, and you would not necessarily need to remove the public key from the git website after one usage! Thus, you do not need to create a new key pair at every cloning of the repo. Of course, be aware that a passwordless private key is still insecure if someone might get a hand on your data in another way. If you are not sure about this, better remove the public key from the server after usage, and have a new key pair at every run.
A guide how to build the image from the Dockerfile
Install Docker Desktop; or use docker inside WSL2 or Linux in a VirtualBox; or use docker in a standalone Linux partition / hard drive.
Open a command prompt (PowerShell, terminal, ...).
Go to the directory of the Dockerfile.
Create a subfolder ".ssh/".
For security reasons, create a new public and private SSH key pair - even if you already have another one lying around - for each Dockerfile run. In the command prompt, in your Dockerfile's folder, enter (mind, this overwrites without asking):
Write-Output "y" | ssh-keygen -q -t ed25519 -f ./.ssh/id_ed25519 -N '""'
(if you use PowerShell) or
echo "y" | ssh-keygen -q -t ed25519 -f ./.ssh/id_ed25519 -N ''
(if you do not use PowerShell).
Your key pair will now be in the subfolder .ssh/. It is up to you whether you use that subfolder at all, you can also change the code to COPY id_ed25519 /root/.ssh/id_ed25519; then your private key needs to be in the Dockerfile's directory that you are in.
Open the public key in an editor, copy the content and publish it to your server (e.g. GitHub / GitLab --> profile --> SSH keys). You can choose whatever name and end date. The final readable comment of the public key string (normally your computer name if you did not add a -C comment in the parameters of ssh-keygen) is not important, just leave it there.
Start (Do not forget the "." at the end which is the build context):
docker build -t test .
Only for 1.):
After the run, remove the public key from the server (most important, and at best at once). The script removes the private key from the image, and you may also remove the private key from your local computer, since you should never use the key pair again. The reason: someone could get the private key from the image even if it was removed from the image. Quoting a user's comment:
If anyone gets a hold of your
image, they can retrieve the key... even if you delete that file in a
later layer, b/c they can go back to Step 7 when you added it
The attacker could wait with this private key until you use the key pair again.
Only for 2.):
After the run, since the second image is the only image remaining after a build, we do not necessarily need to remove the key pair from client and host. We still have a small risk that the passwordless private key is taken from a local computer somewhere. That is why you may still remove the public key from the git server. You may also remove any stored private keys. But it is probably not needed in many projects where the main aim is rather to automate building the image, and less the security.
At last, some more meta noise
As to the dangerously wrong advice in the two highly upvoted answers here that use the problematic echo-of-the-private-key approach, here are the votes at the time of writing:
https://stackoverflow.com/a/42125241/11154841 176 upvotes (top 1)
https://stackoverflow.com/a/48565025/11154841 55 upvotes (top 5)
While the question at 326k views, got a lot more: 376 upvotes
We see here that something must be wrong in the answers, as the top 1 answer votes are not at least on the level of the question votes.
There was just one small and unvoted comment at the end of the comment list of the top 1 answer naming the same echo-of-the-private-key problem (which is also quoted in this answer). And: that critical comment was made three years after the answer.
I have upvoted the top 1 answer myself. I only realised later that it would not work for me. Thus, swarm intelligence is working, but on a low flame? If anyone can explain to me why echoing the private key might work for others, but not for me, please comment. Otherwise, 326k views (minus 2 comments ;) ) would have overlooked or left aside the error of the top 1 answer. I would not write such a long text here if that echo-of-the-private-key code line had not cost me many working days, with absolutely frustrating code picking from everything on the net.
'you can selectively let remote servers access your local ssh-agent as if it was running on the server'
https://developer.github.com/guides/using-ssh-agent-forwarding/
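In plain SSH terms that is just the -A flag (or ForwardAgent yes in ~/.ssh/config); the host and repo below are placeholders:
# forward the local ssh-agent to the remote host for this session
ssh -A user@remote-host
# on the remote host, git now authenticates against the keys in your local agent
git clone git@github.com:myorg/myproject.git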
You can also link your .ssh directory between the host and the container. I don't know if this method has any security implications, but it may be the easiest method. Something like this should work:
$ sudo docker run -it -v /root/.ssh:/root/.ssh someimage bash
Remember that docker runs with sudo (unless you don't); if this is the case, you'll be using root's SSH keys.
A concise overview of the challenges of SSH inside Docker containers is detailed here. For connecting to trusted remotes from within a container without leaking secrets there are a few ways:
SSH agent forwarding (Linux-only, not straight-forward)
Inbuilt SSH with BuildKit (Experimental, not yet supported by Compose)
Using a bind mount to expose ~/.ssh to container. (Development only, potentially insecure)
Docker Secrets (Cross-platform, adds complexity)
Beyond these there's also the possibility of using a key-store running in a separate docker container accessible at runtime when using Compose. The drawback here is additional complexity due to the machinery required to create and manage a keystore such as Vault by HashiCorp.
For SSH key use in a stand-alone Docker container see the methods linked above and consider the drawbacks of each depending on your specific needs. If, however, you're running inside Compose and want to share a key to an app at runtime (reflecting practicalities of the OP) try this:
Create a docker-compose.env file and add it to your .gitignore file.
Update your docker-compose.yml and add env_file for service requiring the key.
Access the public key from the environment at application runtime, e.g. process.env.DEPLOYER_RSA_PUBKEY in the case of a Node.js application.
The above approach is ideal for development and testing and, while it could satisfy production requirements, in production you're better off using one of the other methods identified above.
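A rough sketch of the three steps above; the file name docker-compose.env and the variable DEPLOYER_RSA_PUBKEY are just the names used in this answer, not anything Compose mandates:
# 1) write the public key into an env file and keep it out of version control
echo "DEPLOYER_RSA_PUBKEY=$(cat ~/.ssh/id_rsa.pub)" > docker-compose.env
echo "docker-compose.env" >> .gitignore
# 2) point the service at the env file in docker-compose.yml, e.g.:
#      app:
#        env_file:
#          - docker-compose.env
# 3) the application then reads process.env.DEPLOYER_RSA_PUBKEY at runtime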
Additional resources:
Docker Docs: Use bind mounts
Docker Docs: Manage sensitive data with Docker secrets
Stack Overflow: Using SSH keys inside docker container
Stack Overflow: Using ssh-agent with docker on macOS
If you don't care about the security of your SSH keys, there are many good answers here. If you do, the best answer I found was from a link in a comment above to this GitHub comment by diegocsandrim. So that others are more likely to see it, and just in case that repo ever goes away, here is an edited version of that answer:
Most solutions here end up leaving the private key in the image. This is bad, as anyone with access to the image has access to your private key. Since we don't know enough about the behavior of squash, this may still be the case even if you delete the key and squash that layer.
We generate a pre-signed URL to access the key with the AWS S3 CLI, and limit the access to about 5 minutes. We save this pre-signed URL into a file in the repo directory, then in the Dockerfile we add it to the image.
In the Dockerfile we have a RUN command that does all these steps: use the pre-signed URL to get the SSH key, run npm install, and remove the SSH key.
By doing this in one single command, the SSH key is not stored in any layer. The pre-signed URL will be stored, but this is not a problem because the URL is no longer valid after 5 minutes.
The build script looks like:
# build.sh
aws s3 presign s3://my_bucket/my_key --expires-in 300 > ./pre_sign_url
docker build -t my-service .
Dockerfile looks like this:
FROM node
COPY . .
RUN eval "$(ssh-agent -s)" && \
wget -i ./pre_sign_url -q -O - > ./my_key && \
chmod 700 ./my_key && \
ssh-add ./my_key && \
ssh -o StrictHostKeyChecking=no git@github.com || true && \
npm install --production && \
rm ./my_key && \
rm -rf ~/.ssh/*
ENTRYPOINT ["npm", "run"]
CMD ["start"]
A simple and secure way to achieve this without saving your key in a Docker image layer, or going through ssh_agent gymnastics is:
As one of the steps in your Dockerfile, create a .ssh directory by adding:
RUN mkdir -p /root/.ssh
Below that indicate that you would like to mount the ssh directory as a volume:
VOLUME [ "/root/.ssh" ]
Ensure that your container's ssh_config knows where to find the public keys by adding this line:
RUN echo " IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config
Expose your local user's .ssh directory to the container at runtime:
docker run -v ~/.ssh:/root/.ssh -it image_name
Or in your docker-compose.yml add this under the service's volume key:
- "~/.ssh:/root/.ssh"
Your final Dockerfile should contain something like:
FROM node:6.9.1
RUN mkdir -p /root/.ssh
RUN echo " IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config
VOLUME [ "/root/.ssh" ]
EXPOSE 3000
CMD [ "launch" ]
I put together a very simple solution that works for my use case where I use a "builder" docker image to build an executable that gets deployed separately. In other words my "builder" image never leaves my local machine and only needs access to private repos/dependencies during the build phase.
You do not need to change your Dockerfile for this solution.
When you run your container, mount your ~/.ssh directory (this avoids having to bake the keys directly into the image, but rather ensures they're only available to a single container instance for a short period of time during the build phase). In my case I have several build scripts that automate my deployment.
Inside my build-and-package.sh script I run the container like this:
# do some script stuff before
...
docker run --rm \
-v ~/.ssh:/root/.ssh \
-v "$workspace":/workspace \
-w /workspace builder \
bash -cl "./scripts/build-init.sh $executable"
...
# do some script stuff after (i.e. pull the built executable out of the workspace, etc.)
The build-init.sh script looks like this:
#!/bin/bash
set -eu
executable=$1
# start the ssh agent
eval $(ssh-agent) > /dev/null
# add the ssh key (ssh key should not have a passphrase)
ssh-add /root/.ssh/id_rsa
# execute the build command
swift build --product $executable -c release
So instead of executing the swift build command (or whatever build command is relevant to your environment) directly in the docker run command, we instead execute the build-init.sh script which starts the ssh-agent, then adds our ssh key to the agent, and finally executes our swift build command.
Note 1: For this to work you'll need to make sure your ssh key does not have a passphrase, otherwise the ssh-add /root/.ssh/id_rsa line will ask for a passphrase and interrupt the build script.
Note 2: Make sure you have the proper file permissions set on your script files so that they can be run.
Hopefully this provides a simple solution for others with a similar use case.
In later versions of Docker (17.05) you can use multi-stage builds, which are the safest option, as the previous builds can only ever be used by the subsequent build and are then destroyed.
See the answer to my Stack Overflow question for more info.
I'm trying to work the problem the other way: adding public ssh key to an image. But in my trials, I discovered that "docker cp" is for copying FROM a container to a host. Item 3 in the answer by creak seems to be saying you can use docker cp to inject files into a container. See https://docs.docker.com/engine/reference/commandline/cp/
excerpt
Copy files/folders from a container's filesystem to the host path.
Paths are relative to the root of the filesystem.
Usage: docker cp CONTAINER:PATH HOSTPATH
Copy files/folders from the PATH to the HOSTPATH
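In current Docker versions docker cp copies in both directions, so the excerpt above reflects an older CLI; injecting a public key into a running container looks roughly like this (the container name and paths are placeholders):
# host -> container is supported by modern docker cp
docker cp ~/.ssh/id_rsa.pub mycontainer:/root/.ssh/authorized_keys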
You can pass the authorized keys into your container using a shared folder, and set permissions using a Dockerfile like this:
FROM ubuntu:16.04
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
EXPOSE 22
RUN cp /root/auth/id_rsa.pub /root/.ssh/authorized_keys
RUN rm -f /root/auth
RUN chmod 700 /root/.ssh
RUN chmod 400 /root/.ssh/authorized_keys
RUN chown root. /root/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
And your docker run contains something like the following to share an auth directory on the host (holding the authorized_keys) with the container, then open up the SSH port, which will be accessible through port 7001 on the host:
-d -v /home/thatsme/dockerfiles/auth:/root/auth --publish=127.0.0.1:7001:22
You may want to look at https://github.com/jpetazzo/nsenter which appears to be another way to open a shell on a container and execute commands within a container.
Late to the party admittedly, how about this which will make your host operating system keys available to root inside the container, on the fly:
docker run -v ~/.ssh:/mnt -it my_image /bin/bash -c "ln -s /mnt /root/.ssh; ssh user@10.20.30.40"
I'm not in favour of using Dockerfile to install keys since iterations of your container may leave private keys behind.
You can use secrets to manage any sensitive data which a container
needs at runtime but you don’t want to store in the image or in source
control, such as:
Usernames and passwords
TLS certificates and keys
SSH keys
Other important data such as the name of a database or internal server
Generic strings or binary content (up to 500 kb in size)
https://docs.docker.com/engine/swarm/secrets/
I was trying to figure out how to add signing keys to a container to use during runtime (not build) and came across this question. Docker secrets seem to be the solution for my use case, and since nobody has mentioned it yet I'll add it.
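A minimal sketch with swarm-mode secrets (docker swarm init is required; the secret, service and image names are placeholders):
# register the key material as a secret in the swarm
docker secret create my_signing_key ~/.ssh/id_rsa
# the secret appears inside the container at /run/secrets/my_signing_key
docker service create --name my-app --secret my_signing_key my-app-image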
In my case I had a problem with Node.js and npm i from a remote repository. I fixed it by adding a node user to the nodejs container and setting mode 700 on ~/.ssh in the container.
Dockerfile:
# added this part
USER node
COPY run.sh /usr/local/bin/
CMD ["run.sh"]
run.sh:
#!/bin/bash
chmod 700 -R ~/.ssh/; #added the part
docker-compose.yml:
nodejs:
  build: ./nodejs/10/
  container_name: nodejs
  restart: always
  ports:
    - "3000:3000"
  volumes:
    - ../www/:/var/www/html/:delegated
    - ./ssh:/home/node/.ssh # added this part
  links:
    - mailhog
  networks:
    - work-network
After that, it started working.
I am running into this error:
$ git push heroku master
Warning: Permanently added the RSA host key for IP address '50.19.85.132' to the list of known hosts.
! Your key with fingerprint b7:fd:15:25:02:8e:5f:06:4f:1c:af:f3:f0:c3:c2:65 is not authorized to access bitstarter.
I tried to add the keys and I get this error below:
$ ssh-add ~/.ssh/id_rsa.pub
Could not open a connection to your authentication agent.
Did You Start ssh-agent?
You might need to start ssh-agent before you run the ssh-add command:
eval `ssh-agent -s`
ssh-add
Note that this will start the agent for msysgit Bash on Windows. If you're using a different shell or operating system, you might need to use a variant of the command, such as those listed in the other answers.
See the following answers:
ssh-add complains: Could not open a connection to your authentication agent
Git push requires username and password (contains detailed instructions on how to use ssh-agent)
How to run (git/ssh) authentication agent?.
Could not open a connection to your authentication agent
To automatically start ssh-agent and allow a single instance to work in multiple console windows, see Start ssh-agent on login.
Why do we need to use eval instead of just ssh-agent?
SSH needs two things in order to use ssh-agent: an ssh-agent instance running in the background, and an environment variable set that tells SSH which socket it should use to connect to the agent (SSH_AUTH_SOCK IIRC). If you just run ssh-agent then the agent will start, but SSH will have no idea where to find it.
from this comment.
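You can see the difference directly in a shell:
ssh-agent                 # only prints the export lines; your shell never applies them
eval "$(ssh-agent -s)"    # runs those export lines in the current shell
echo "$SSH_AUTH_SOCK"     # now set, so ssh-add and ssh can find the agent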
Public vs Private Keys
Also, whenever I use ssh-add, I always add private keys to it. The file ~/.ssh/id_rsa.pub looks like a public key; I'm not sure if that will work. Do you have a ~/.ssh/id_rsa file? If you open it in a text editor, does it say it's a private key?
I tried the other solutions to no avail. I made more research and found that the following command worked. I am using Windows 7 and Git Bash.
eval $(ssh-agent)
More information in: https://coderwall.com/p/rdi_wq (web archive version)
The following command worked for me. I am using CentOS.
exec ssh-agent bash
Could not open a connection to your authentication agent
To resolve this error:
bash:
$ eval `ssh-agent -s`
tcsh:
$ eval `ssh-agent -c`
Then use ssh-add as you normally would.
Hot Tip:
I was always forgetting what to type for the above ssh-agent commands, so I created an alias in my .bashrc file like this:
alias ssh-agent-cyg='eval `ssh-agent -s`'
Now instead of using ssh-agent, I can use ssh-agent-cyg
E.g.
$ ssh-agent-cyg
SSH_AUTH_SOCK=/tmp/ssh-n16KsxjuTMiM/agent.32394; export SSH_AUTH_SOCK;
SSH_AGENT_PID=32395; export SSH_AGENT_PID;
echo Agent pid 32395;
$ ssh-add ~/.ssh/my_pk
Original Source of fix:
http://cygwin.com/ml/cygwin/2011-10/msg00313.html
MsysGit or Cygwin
If you're using Msysgit or Cygwin you can find a good tutorial at SSH-Agent in msysgit and cygwin and bash:
Add a file called .bashrc to your home folder.
Open the file and paste in:
#!/bin/bash
eval `ssh-agent -s`
ssh-add
This assumes that your key is in the conventional ~/.ssh/id_rsa location. If it isn't, include a full path after the ssh-add command.
Add to or create file ~/.ssh/config with the contents
ForwardAgent yes
In the original tutorial the ForwardAgent param is Yes, but it's a typo. Use all lowercase or you'll get errors.
Restart Msysgit. It will ask you to enter your passphrase once, and that's it (until you end the session, or your ssh-agent is killed.)
Mac/OS X
If you don't want to start a new ssh-agent every time you open a terminal, check out Keychain. I'm on a Mac now, so I used the tutorial ssh-agent with zsh & keychain on Mac OS X to set it up, but I'm sure a Google search will have plenty of info for Windows.
Update: A better solution on Mac is to add your key to the Mac OS Keychain:
ssh-add -K ~/.ssh/id_rsa
Simple as that.
Run
ssh-agent bash
ssh-add
To get more details you can search
ssh-agent
or run
man ssh-agent
ssh-add and ssh (assuming you are using the openssh implementations) require an environment variable to know how to talk to the ssh agent. If you started the agent in a different command prompt window to the one you're using now, or if you started it incorrectly, neither ssh-add nor ssh will see that environment variable set (because the environment variable is set locally to the command prompt it's set in).
You don't say which version of ssh you're using, but if you're using cygwin's, you can use this recipe from SSH Agent on Cygwin:
# Add to your Bash config file
SSHAGENT=/usr/bin/ssh-agent
SSHAGENTARGS="-s"
if [ -z "$SSH_AUTH_SOCK" -a -x "$SSHAGENT" ]; then
eval `$SSHAGENT $SSHAGENTARGS`
trap "kill $SSH_AGENT_PID" 0
fi
This will start an agent automatically for each new command prompt window that you open (which is suboptimal if you open multiple command prompts in one session, but at least it should work).
I faced the same problem for Linux, and here is what I did:
Basically, the command ssh-agent starts the agent, but it doesn't really set the environment variables for it to run. It just outputs those variables to the shell.
You need to:
eval `ssh-agent`
and then do ssh-add. See Could not open a connection to your authentication agent.
Instead of using ssh-agent -s, I used eval `ssh-agent -s` to solve this issue.
Here is what I performed step by step (step 2 onwards on Git Bash):
Cleaned up my .ssh folder at C:\user\<username>\.ssh\
Generated a new SSH key:
ssh-keygen -t rsa -b 4096 -C "xyz@abc.com"
Check whether any ssh-agent process is already running:
ps aux | grep ssh
(Optional) If you found any in step 3, kill those:
kill <pids>
Started the SSH agent
$ eval `ssh-agent -s`
Added SSH key generated in step 2 to the SSH agent
ssh-add ~/.ssh/id_rsa
Try to do the following steps:
Open Git Bash and run: cd ~/.ssh
Try to run agent: eval $(ssh-agent)
Right now, you can run the following command: ssh-add -l
In Windows 10 I tried all the answers listed here, but none of them seemed to work. In fact, they give a clue. To solve the problem, you simply need three commands. The idea is that ssh-add needs the SSH_AUTH_SOCK and SSH_AGENT_PID environment variables to be set with the current ssh-agent socket file path and PID number.
ssh-agent -s > temp.txt
This will save the output of ssh-agent in a file. The text file content will be something like this:
SSH_AUTH_SOCK=/tmp/ssh-kjmxRb2764/agent.2764; export SSH_AUTH_SOCK;
SSH_AGENT_PID=3044; export SSH_AGENT_PID;
echo Agent pid 3044;
Copy something like "/tmp/ssh-kjmxRb2764/agent.2764" from the text file and run the following command directly in the console:
set SSH_AUTH_SOCK=/tmp/ssh-kjmxRb2764/agent.2764
Copy something like "3044" from the text file and run the following command directly in the console:
set SSH_AGENT_PID=3044
Now when environment variables (SSH_AUTH_SOCK and SSH_AGENT_PID) are set for the current console session, run your ssh-add command and it will not fail again to connect to ssh agent.
One thing I came across was that eval did not work for me using Cygwin; what worked for me was ssh-agent ssh-add id_rsa.
After that I came across an issue that my private key's permissions were too open; the solution I managed to find for that (from here):
chgrp Users id_rsa
as well as
chmod 600 id_rsa
finally I was able to use:
ssh-agent ssh-add id_rsa
For Windows users, I found cmd eval `ssh-agent -s` didn't work, but using Git Bash worked a treat:
eval `ssh-agent -s`; ssh-add KEY_LOCATION
And making sure the Windows service "OpenSSH Key Management" wasn't disabled.
To amplify on n3o's answer for Windows 7...
My problem was indeed that some required environment variables weren't set, and n3o is correct that ssh-agent tells you how to set those environment variables, but doesn't actually set them.
Since Windows doesn't let you do "eval," here's what to do instead:
Redirect the output of ssh-agent to a batch file with
ssh-agent > temp.bat
Now use a text editor such as Notepad to edit temp.bat. For each of the first two lines:
Insert the word "set" and a space at the beginning of the line.
Delete the first semicolon and everything that follows.
Now delete the third line. Your temp.bat should look something like this:
set SSH_AUTH_SOCK=/tmp/ssh-EorQv10636/agent.10636
set SSH_AGENT_PID=8608
Run temp.bat. This will set the environment variables that are needed for ssh-add to work.
I just got this working. Open your ~/.ssh/config file.
Append the following:
Host github.com
IdentityFile ~/.ssh/github_rsa
The page that gave me the hint, Set up SSH for Git, said that the single-space indentation is important, though I had a configuration in there from Heroku that did not have that space and worked properly.
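For reference, a slightly fuller entry might look like this (github_rsa is only an example file name; any consistent indentation works, not just a single space):
Host github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/github_rsa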
If you follow these instructions, your problem will be solved.
If you’re on a Mac or Linux machine, type:
eval "$(ssh-agent -s)"
If you’re on a Windows machine, type:
ssh-agent -s
I had the same problem on Ubuntu and the other solutions didn't help me.
I finally realized what my problem was. I had created my SSH keys in the /root/.ssh folder, so even when I ran ssh-add as root, it couldn't do its work and kept saying:
Could not open a connection to your authentication agent.
I created my SSH public and private keys in the /home/myUsername/ folder instead, and I used
ssh-agent /bin/sh
Then I ran
ssh-add /home/myUsername/.ssh/id_rsa
And problem was solved this way.
Note: To access your repository in Git, supply your Git password when creating the SSH keys with ssh-keygen -t rsa -C "your Git email here".
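For example, generating the key with an explicit file name in that folder could look like this (the -f path is just an illustration matching the answer above; the passphrase prompt is where that password goes):
ssh-keygen -t rsa -C "your Git email here" -f /home/myUsername/.ssh/id_rsa
# Enter passphrase (empty for no passphrase):  <- this is what ssh-add will ask for later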
Let me offer another solution. If you have just installed Git 1.8.2.2 or thereabouts and you want to enable SSH, follow the well-written directions.
Everything works through to Step 5.6, where you might encounter a slight snag. If an SSH agent is already running, you could get the following error message when you restart Bash:
Could not open a connection to your authentication agent
If you do, use the following command to see if more than one ssh-agent process is running
ps aux | grep ssh
If you see more than one ssh-agent service, you will need to kill all of these processes. Use the kill command as follows (the PID will be unique on your computer)
kill <PID>
Example:
kill 1074
After you have removed all of the ssh-agent processes, run the ps aux | grep ssh command again to be sure they are gone, then restart Bash.
Voila, you should now get something like this:
Initializing new SSH agent...
succeeded
Enter passphrase for /c/Users/username/.ssh/id_rsa:
Now you can continue on Step 5.7 and beyond.
This will run the SSH agent and authenticate only the first time you need it, not every time you open your Bash terminal. It can be used for any program using SSH in general, including ssh itself and scp. Just add this to /etc/profile.d/ssh-helper.sh:
ssh-auth() {
# Start the SSH agent only if not running
[[ -z $(ps | grep ssh-agent) ]] && echo $(ssh-agent) > /tmp/ssh-agent-data.sh
# Identify the running SSH agent
[[ -z $SSH_AGENT_PID ]] && source /tmp/ssh-agent-data.sh > /dev/null
# Authenticate (change key path or make a symlink if needed)
[[ -z $(ssh-add -l | grep "/home/$(whoami)/.ssh/id_rsa") ]] && ssh-add
}
# You can repeat this for other commands using SSH
git() { ssh-auth; command git "$@"; }
Note: this is an answer to this question, which has been merged with this one.
That question was about Windows 7, so my answer is for Cygwin/MSYS/MSYS2. This one seems to be about some Unix, where I wouldn't expect the SSH agent to need to be managed like this.
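The same wrapper pattern can be extended to other SSH-based commands; for instance (a sketch following the git wrapper above):
# Wrap scp and plain ssh the same way as git
scp() { ssh-auth; command scp "$@"; }
ssh() { ssh-auth; command ssh "$@"; }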
The basic solution of running ssh-agent is covered in many answers. However, running ssh-agent many times (once per opened terminal or per remote login) creates many copies of ssh-agent in memory. The scripts suggested to avoid that problem are long and need to be kept in a separate file, or require too many lines in ~/.profile or ~/.cshrc. Let me suggest a simple two-line solution:
For sh, bash, etc:
# ~/.profile
if ! pgrep -q -U `whoami` -x 'ssh-agent'; then ssh-agent -s > ~/.ssh-agent.sh; fi
. ~/.ssh-agent.sh
For csh, tcsh, etc:
# ~/.cshrc
sh -c 'if ! pgrep -q -U `whoami` -x 'ssh-agent'; then ssh-agent -c > ~/.ssh-agent.tcsh; fi'
eval `cat ~/.ssh-agent.tcsh`
What this does:
searches for an ssh-agent process by name, owned by the current user
if no such process is found, creates the appropriate shell script file by calling ssh-agent (which also starts ssh-agent itself)
evaluates the created shell script, which configures the appropriate environment
It is not necessary to protect the created shell scripts ~/.ssh-agent.tcsh and ~/.ssh-agent.sh from access by other users. First, communication with ssh-agent goes through a protected socket that is not accessible to other users; second, other users can find the ssh-agent socket anyway simply by enumerating the files in the /tmp/ directory. The same applies to access to the ssh-agent process itself.
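A quick way to check that only one agent is being reused across terminals (a sketch; the PID will differ on your machine):
pgrep -U `whoami` -x ssh-agent    # should print exactly one PID, however many terminals are open
ssh-add -l                        # keys added in one terminal are visible in the others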
In Windows 10, using the Command Prompt terminal, the following works for me:
ssh-agent cmd
ssh-add
You should then be asked for a passphrase after this:
Enter passphrase for /c/Users/username/.ssh/id_rsa:
Try the following:
ssh-agent sh -c 'ssh-add && git push heroku master'
Use the -A parameter when you connect to the server, for example:
ssh -A root@myhost
From the man page:
-A    Enables forwarding of the authentication agent connection. This can also be specified on a per-host basis in a configuration file.

      Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent; however, they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent.
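As the man page notes, this can also be set per host in ~/.ssh/config instead of passing -A every time (myhost is just the example host from above):
Host myhost
    ForwardAgent yes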
I had this problem when I started ssh-agent while it was already running. The multiple instances seem to conflict with each other.
To see if ssh-agent is already running, check the value of the SSH_AUTH_SOCK environment variable with:
echo $SSH_AUTH_SOCK
If it is set, then the agent is presumably running.
To check if you have more than one ssh-agent running, you can review:
ps -ef | grep ssh
Of course, then you should kill any additional instances that you created.
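One way to clean up and get back to a single agent (a sketch; pkill stops every ssh-agent owned by your user):
pkill ssh-agent        # stop all of your running agents
eval `ssh-agent -s`    # start one fresh agent in this shell
ssh-add                # reload your default key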
Read user456814's answer for explanations. Here I only try to automate the fix.
If you are using a Cygwin terminal with Bash, add the following to the $HOME/.bashrc file. This only starts ssh-agent once, in the first Bash terminal, and adds the keys to ssh-agent. (I am not sure if this is required on Linux.)
###########################
# start ssh-agent for
# ssh authentication with github.com
###########################
SSH_AUTH_SOCK_FILE=/tmp/SSH_AUTH_SOCK.sh
if [ ! -e $SSH_AUTH_SOCK_FILE ]; then
# need to find SSH_AUTH_SOCK again.
# restarting is an easy option
pkill ssh-agent
fi
# check if already running
SSH_AGENT_PID=`pgrep ssh-agent`
if [ "x$SSH_AGENT_PID" == "x" ]; then
# echo "not running. starting"
eval $(ssh-agent -s) > /dev/null
rm -f $SSH_AUTH_SOCK_FILE
echo "export SSH_AUTH_SOCK=$SSH_AUTH_SOCK" > $SSH_AUTH_SOCK_FILE
ssh-add $HOME/.ssh/github.com_id_rsa > /dev/null 2>&1
#else
# echo "already running"
fi
source $SSH_AUTH_SOCK_FILE
Don’t forget to add your correct keys in the "ssh-add" command.
I had a similar problem when I was trying to get this to work on Windows to connect to the stash via SSH.
Here is the solution that worked for me.
It turns out I was running the Pageant SSH agent on my Windows box; I would check what you are running. I suspect it is Pageant, since it comes by default with PuTTY and WinSCP.
ssh-add does not work from the command line with this type of agent.
You need to add the private key via the Pageant UI window, which you can open by double-clicking the Pageant icon in the taskbar (once it is started).
Before you add the key to Pageant, you need to convert it to PPK format. Full instructions are available in How to convert SSH key to ppk format.
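If you prefer the command line to the PuTTYgen GUI, the conversion can also be done with the puttygen tool from the putty-tools package (a sketch, assuming your OpenSSH key is at ~/.ssh/id_rsa):
puttygen ~/.ssh/id_rsa -O private -o ~/.ssh/id_rsa.ppk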
That is it. Once I uploaded my key to stash I was able to use Sourcetree to create a local repository and clone the remote.
For Bash built into Windows 10, I added this to file .bash_profile:
if [ -z $SSH_AUTH_SOCK ]; then
if [ -r ~/.ssh/env ]; then
source ~/.ssh/env
if [ `ps -p $SSH_AGENT_PID | wc -l` = 1 ]; then
rm ~/.ssh/env
unset SSH_AUTH_SOCK
fi
fi
fi
if [ -z $SSH_AUTH_SOCK ]; then
ssh-agent -s | sed 's/^echo/#echo/'> ~/.ssh/env
chmod 600 ~/.ssh/env
source ~/.ssh/env > /dev/null 2>&1
fi
Using Git Bash on Windows 8.1 E, my resolution was as follows:
eval $(ssh-agent) > /dev/null
ssh-add ~/.ssh/id_rsa
I resolved the error by force-stopping (killing) the Git processes (the SSH agent), then uninstalling Git, and then installing Git again.
This worked for me.
In the CMD window, type the following commands:
cd path-to-Git/bin    # (for example, cd C:\Program Files\Git\bin)
bash
exec ssh-agent bash
ssh-add path/to/.ssh/id_rsa