Error message
After adding a user's ssh key to a GitLab server that is hosted as a tor onion service, a test was performed in which that user tried to clone a private repository (to which the test user has access) over tor. The clone was attempted with the command:
torsocks git clone git@some_onion_domain.onion:root/test.git
Which returns the error:
Cloning into 'test'...
1620581859 ERROR torsocks[50856]: Connection refused to Tor SOCKS (in socks5_recv_connect_reply() at socks5.c:543)
ssh: connect to host some_onion_domain.onion port 22: Connection refused
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
GitLab SSH Cloning Verification
However, to verify that the test user does have ssh access, the clone was repeated without tor using the command:
git clone git@127.0.0.1:root/test.git
Which successfully returned:
Cloning into 'test'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.
Server-side hypothesis
My first guess is that it is a server-side issue caused by the lack of https in the following setting in the /etc/gitlab/gitlab.rb file:
external_url 'http://127.0.0.1'
However, setting external_url 'https://127.0.0.1' requires an https certificate, e.g. from Let's Encrypt, and such certificates do not seem to be available for onion domains.
Client-side hypothesis
My second guess is that it is a client-side issue: some SOCKS setting may be incorrect on the side of the test user running the torsocks command, similar to the SOCKS 5 issue that seems to be described here.
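One way to probe that hypothesis is to compare the port torsocks targets with the port tor is actually listening on. A rough sketch, assuming a system-wide tor with a readable /etc/tor/torrc; torsocks can also be pointed at a different port via its TORSOCKS_TOR_PORT environment variable:
grep -i '^socksport' /etc/tor/torrc          # empty output usually means the default, 9050
ss -tln | grep -E ':(9050|9150)'             # confirm which SOCKS port is actually listening
TORSOCKS_TOR_PORT=9050 torsocks git clone git@some_onion_domain.onion:root/test.git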
Question
Hence I would like to ask:
How can I resolve the connect to host some_onion_domain.onion port 22: Connection refused error when users try to clone the repo over tor?
One can set the ssh port of the GitLab instance to 9001, e.g. with:
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:9001 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
gitlab/gitlab-ee:latest
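To double-check which host ports actually ended up published, one can inspect the running container (a quick sanity check, assuming the container name gitlab from the command above):
sudo docker port gitlab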
Next, add port 9001 and port 22 to the ssh configuration in /etc/ssh/sshd_config by adding:
Port 9001
Port 22
then restart the ssh service with: systemctl restart ssh.
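Before relying on the new port, it may be worth confirming that sshd accepted the configuration and is listening on both ports (a sketch using standard OpenSSH and iproute2 tools):
sudo sshd -t                  # validate /etc/ssh/sshd_config; prints nothing if the syntax is fine
sudo ss -tlnp | grep sshd     # should show listeners on ports 22 and 9001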
It is essential to add a public ssh key to the GitLab server for each computer from which you want to clone the repo, even if the repository is public. You can either create a new GitLab account for each computer, or add multiple public ssh keys to a single GitLab account. These instructions explain how to do that; tl;dr:
ssh-keygen -t ed25519
<enter>
<enter>
<enter>
systemctl restart ssh
xclip -sel clip < ~/.ssh/id_ed25519.pub
PS: if xclip does not work, one can copy the ssh key manually with: cat ~/.ssh/id_ed25519.pub.
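Alternatively, the key can be generated non-interactively, which skips the three <enter> prompts (a sketch; it assumes no key exists yet at the default path):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519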
Then open a browser and go to https://gitlab.com/-/profile/keys; for your own tor GitLab server that would be someoniondomain.onion/-/profile/keys, and paste the key in there.
That is it, now one can clone the repository over tor with:
torify -p 22 git clone ssh://git@someoniondomain.onion:9001/root/public.git
Note
As a side note, in the question I happened to test git clone git@127.0.0.1:root/test.git; however, instead of 127.0.0.1 I should have used either the output of hostname -I or the public ip address of the device that hosts the GitLab server. Furthermore, I should have verified whether the GitLab server was accessible through ssh by testing:
ssh -T git@youronionserver.onion
Which should return Congratulations.... Had I tested it, it would not have done so, indicating that the problem was in the ssh access to the GitLab server (or in the ssh connection to the device). I could then have determined whether the ssh problem was with the device or with the GitLab server by testing whether I could log into the device with ssh deviceusername@device_ip; that would have succeeded, pointing to an ssh problem at the GitLab server.
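Put together, those checks could look roughly like this (a sketch; the onion address, username, and device ip are placeholders, and the onion check has to be routed through tor):
torsocks ssh -T git@youronionserver.onion   # should answer with GitLab's welcome/Congratulations message
ssh deviceusername@device_ip                # plain ssh login to the device, to separate device issues from GitLab issues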
Related
Connection to created Droplet via SSH by Github Actions runner.
My steps:
ssh-keygen -t rsa -f ~/.ssh/KEY_NAME -P ""
doctl compute ssh-key create KEY --public-key "CONTENT OF KEY_NAME.pub"
doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-1gb --region fra1 DROPLET_NAME --ssh-keys FINGERPRINT --wait
ssh -vvv -i ~/.ssh/KEY_NAME root@DROPLET_IP
✔️ Tested on a Windows local machine using doctl.exe run from cmd - works!
✔️ Tested on Docker (installed on Windows) with a Linux-based image using the doctl script - works!
⚠️ Tested on a GitHub Actions runner based on ubuntu-latest using the digitalocean/action-doctl script - doesn't work!
The received message is: connect to host ADDRESS_IP port 22: Connection refused.
The steps seem correct, so why does this not work on GitHub Actions?
If you are using the GitHub Action digitalocean/action-doctl, check issue 14 first:
In order to SSH into a Droplet, doctl needs access to the private half of the SSH key pair whose public half is on the Droplet.
Currently the doctl Action is based on a Docker container.
If you were using the Docker container directly, you could invoke it with:
docker run --rm --interactive --tty \
--env=DIGITALOCEAN_ACCESS_TOKEN=<YOUR-DO-API-TOKEN> \
-v $HOME/.ssh/id_rsa:/root/.ssh/id_rsa \
digitalocean/doctl compute ssh <DROPLET-ID>
in order to mount the SSH key from outside the container.
You might be better off just using doctl to grab the Droplet's IP address and then using this Action, which is more focused on SSH-related use cases and provides a lot of additional functionality: marketplace/actions/ssh-remote-commands.
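For the "grab the IP address" part, doctl can print just the Droplet's public IPv4 address, which can then be handed to the SSH action (a sketch; DROPLET_NAME is a placeholder):
DROPLET_IP=$(doctl compute droplet get DROPLET_NAME --format PublicIPv4 --no-header)
echo "$DROPLET_IP"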
We have a RHEL 7 remote server where I created a dummy user called gitlabci.
While SSH'd into the remote server, I generated a public-private key pair (for use when grabbing files from GitLab)
Uploaded the public key as a deploy key for use later when we get our CI set up
Generated another public-private key pair in my local machine (for use when SSH'ing into the remote server from the GitLab Runner)
Added the public key to the remote server's authorized_keys
Added the private key to the project's CI environment variables
The idea is that when the CI runs, the GitLab runner will SSH into the remote server as the gitlabci user I created, then fetch the branch into the web directory using the deploy keys.
I thought I had set up the keys properly, but whenever the runner tries to SSH, the connection gets refused.
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )
...
$ eval $(ssh-agent -s)
Agent pid 457
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
Identity added: (stdin) (GitLab CI)
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ ssh gitlabci@random.server.com
Pseudo-terminal will not be allocated because stdin is not a terminal.
ssh: connect to host random.server.com port 22: Connection refused
ERROR: Job failed: exit code 1
When I tried to SSH into the remote server via Git Bash on my local machine using the key pair I generated, it did work.
$ ssh -i ~/.ssh/gitlabci gitlabci@random.server.com
Last login: Mon Nov 4 13:49:59 2019 from machine01.work.server.com
"Connection refused" means that the ssh client transmitted a connection request to the named host and port, and it received in response a so-called "reset" packet, indicating that the remote server was refusing to accept the connection.
If you can connect to random.server.com from one host but get connection refused from another host, a few possible explanations come to mind:
You might have an entry in your .ssh/config file which substitutes a different name or address for random.server.com. For example, an entry like the following would cause ssh to connect to random2.server.com when you request random.server.com:
Host random.server.com
Hostname random2.server.com
The IP address lookup for "random.server.com" is returning the wrong address somehow, so ssh is trying to connect to the wrong server. For example, someone might have added an entry to /etc/hosts for that hostname.
Some firewall or other packet inspection software is interfering with the connection attempt by responding with a fake reset packet.
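A few quick checks for the first two possibilities (a sketch; substitute your own hostname):
ssh -G random.server.com | grep -iE '^(hostname|port) '   # what ssh will actually connect to, after applying .ssh/config
getent hosts random.server.com                            # what the resolver returns for that name
grep -i random.server.com /etc/hosts                      # any manual /etc/hosts override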
Using Ansible to provision a Vagrant box, Ansible fails when cloning a Git repo with: Host key verification failed. fatal: Could not read from remote repository. Oddly, I can clone from Git with no issues when I SSH into the box and run git clone <GIT_URL>. I have set sudo: no in the Ansible task but it still fails. ssh-agent is running correctly on both the host and the box.
Host key verification failed.
is not related to the agent forwarding. As noted in the comments, it is related to the known_hosts file.
Before the first connection to the server (github.com), you need to manually verify its host key, or use a similar process as noted in the comments, using keyscan:
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
The other (not recommended) possibility is to turn off the host key verification in the ~/.ssh/config:
Host git
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
in the home directory of the user running the git clone.
I am using Docker on Mac OS X with Docker Machine (with the default boot2docker machine), and I use docker-compose to setup my development environment.
Let's say that one of the containers is called "stack". Now what I want to do is call:
docker-compose run stack ssh user@stackoverflow.com
My public key (which has been added to stackoverflow.com and which will be used to authenticate me) is located on the host machine. I want this key to be available to the Docker Machine container so that I will be able to authenticate myself against stackoverflow using that key from within the container. Preferably without physically copying my key to Docker Machine.
Is there any way to do this? Also, if my key is password protected, is there any way to unlock it once so after every injection I will not have to manually enter the password?
You can add this to your docker-compose.yml (assuming your user inside container is root):
volumes:
  - ~/.ssh:/root/.ssh
You can also check a more advanced solution with the ssh agent (I have not tried it myself).
WARNING: This feature seems to have limited support in Docker Compose and is more designed for Docker Swarm.
(I haven't checked to make sure, but) My current impression is that:
In Docker Compose secrets are just bind mount volumes, so there's no additional security compared to volumes
Ability to change secrets permissions with Linux host may be limited
See answer comments for more details.
Docker has a feature called secrets, which can be helpful here. To use it one could add the following code to docker-compose.yml:
---
version: '3.1' # Note the minimum file version for this feature to work
services:
  stack:
    ...
    secrets:
      - host_ssh_key
secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
Then the new secret file can be accessed in Dockerfile like this:
RUN mkdir ~/.ssh && ln -s /run/secrets/host_ssh_key ~/.ssh/id_rsa
Secret files won't be copied into container:
When you grant a newly-created or running service access to a secret, the decrypted secret is mounted into the container in an in-memory filesystem
For more details please refer to:
https://docs.docker.com/engine/swarm/secrets/
https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets
If you're using OS X and encrypted keys this is going to be a PITA. Here are the steps I went through figuring this out.
Straightforward approach
One might think that there’s no problem. Just mount your ssh folder:
...
volumes:
  - ~/.ssh:/root/.ssh:ro
...
This should be working, right?
User problem
The next thing we'll notice is that we're using the wrong user id. Fine, we'll write a script to copy the ssh keys and change their owner. We'll also set the ssh user in the config so that the ssh server knows who's connecting.
...
volumes:
  - ~/.ssh:/root/.ssh-keys:ro
command: sh -c './.ssh-keys.sh && ...'
environment:
  SSH_USER: $USER
...
# ssh-keys.sh
mkdir -p ~/.ssh
cp -r /root/.ssh-keys/* ~/.ssh/
chown -R $(id -u):$(id -g) ~/.ssh
cat <<EOF >> ~/.ssh/config
User $SSH_USER
EOF
SSH key passphrase problem
In our company we protect SSH keys using a passphrase. That wouldn’t work in docker since it’s impractical to enter a passphrase each time we start a container.
We could remove a passphrase (see example below), but there’s a security concern.
openssl rsa -in id_rsa -out id_rsa2
# enter passphrase
# replace passphrase-encrypted key with plaintext key:
mv id_rsa2 id_rsa
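ssh-keygen can do the same thing in place, without going through openssl (the same security concern applies; id_rsa is the key file from the example above):
ssh-keygen -p -f id_rsa -N ""
# prompts once for the old passphrase, then rewrites the key without one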
SSH agent solution
You may have noticed that locally you don’t need to enter a passphrase each time you need ssh access. Why is that?
That's what the SSH agent is for. The SSH agent is basically a server which listens on a special file, a Unix socket, called the "ssh auth sock". You can see its location on your system:
echo $SSH_AUTH_SOCK
# /run/user/1000/keyring-AvTfL3/ssh
The SSH client communicates with the SSH agent through this file, so that you only have to enter the passphrase once. Once the key is decrypted, the SSH agent stores it in memory and provides it to the SSH client on request.
Can we use that in Docker? Sure, just mount that special file and specify a corresponding environment variable:
environment:
  SSH_AUTH_SOCK: $SSH_AUTH_SOCK
...
volumes:
  - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
We don’t even need to copy keys in this case.
To confirm that keys are available we can use ssh-add utility:
if [ -z "$SSH_AUTH_SOCK" ]; then
echo "No ssh agent detected"
else
echo $SSH_AUTH_SOCK
ssh-add -l
fi
The problem of unix socket mount support in Docker for Mac
Unfortunately for OS X users, Docker for Mac has a number of shortcomings, one of which is its inability to share Unix sockets between Mac and Linux. There’s an open issue in D4M Github. As of February 2019 it’s still open.
So, is that a dead end? No, there is a hacky workaround.
SSH agent forwarding solution
Luckily, this issue isn't new. Long before Docker there was a way to use local ssh keys within a remote ssh session. This is called ssh agent forwarding. The idea is simple: you connect to a remote server through ssh, and inside that session you can use the same keys as on your local machine, thus sharing your keys.
With Docker for Mac we can use a smart trick: share the ssh agent to the docker virtual machine using a TCP ssh connection, and mount that socket file from the virtual machine into another container where we need the SSH connection. In outline:
First, we create an ssh session to the ssh server inside a container inside a linux VM through a TCP port. We use a real ssh auth sock here.
Next, the ssh server forwards our ssh keys to the ssh agent on that container. The ssh agent's Unix socket sits at a location that is mounted into the Linux VM, i.e. the Unix socket works inside Linux; the non-working Unix socket file on the Mac side is never used.
After that we create our useful container with an SSH client. We share the Unix socket file which our local SSH session uses.
There's a set of scripts that simplifies that process:
https://github.com/avsm/docker-ssh-agent-forward
Conclusion
Getting SSH to work in Docker could've been easier, but it can be done, and it'll likely be improved in the future. At least the Docker developers are aware of this issue; they have even solved it for Dockerfiles with build-time secrets, and there's a suggestion for how to support Unix domain sockets.
You can forward SSH agent:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
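To confirm from inside the container that the forwarded agent is reachable, the same ssh-add check as above can be run through compose (a sketch; it assumes the service name something from the snippet and an image with the openssh client installed):
docker-compose run something ssh-add -l
# should list the fingerprints of the keys loaded into your local agent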
You can use a multi-stage build to build the containers. This is the approach you can take:
Stage 1 building an image with ssh
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
Add the environment attribute in your compose file:
environment:
  - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
Then pass the build argument from your build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
Finally, remove the intermediate image for security, as sketched below. Hope this helps, cheers.
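A possible cleanup, based on the LABEL stage=sshImage set in stage 1 (a sketch; prune removes dangling images carrying that label):
docker image prune --force --filter label=stage=sshImage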
Docker for Mac now supports mounting the ssh agent socket on macOS.
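A minimal sketch of using that support; it assumes the documented Docker Desktop magic socket path /run/host-services/ssh-auth.sock (worth verifying against the release notes of your Docker Desktop version):
docker run --rm \
  --mount type=bind,src=/run/host-services/ssh-auth.sock,target=/run/host-services/ssh-auth.sock \
  -e SSH_AUTH_SOCK=/run/host-services/ssh-auth.sock \
  alpine sh -c 'apk add --no-cache openssh-client >/dev/null && ssh-add -l'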
I am trying to ssh into a VM from another VM in Google Cloud using the gcloud compute ssh command. It fails with the below message:
/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/compute/lib/base_classes.py:9: DeprecationWarning: the sets module is deprecated
import sets
Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255]. See https://cloud.google.com/compute/docs/troubleshooting#ssherrors for troubleshooting hints.
I made sure the ssh keys are in place but still it doesn't work. What am I missing here?
There is an assumption that you have connected to the externally-visible instance using SSH beforehand with gcloud.
From your local machine, start ssh-agent with the following command to manage your keys for you:
me@local:~$ eval `ssh-agent`
Call ssh-add to load the gcloud compute engine key from your local computer into the agent, and use it for all SSH commands for authentication:
me@local:~$ ssh-add ~/.ssh/google_compute_engine
Log into an instance with an external IP address while supplying the -A argument to enable authentication agent forwarding.
gcloud compute ssh --ssh-flag="-A" INSTANCE
source: https://cloud.google.com/compute/docs/instances/connecting-to-instance#sshbetweeninstances.
I am not sure about the 'flags' because it's not working for me, but maybe I have a different OS or gcloud version and it will work for you.
Here are the steps I ran on my Mac to connect to the Google Dataproc master VM and then hop onto a worker VM from the master VM. I ssh'd to the master VM to get its IP.
$ gcloud compute ssh cluster-for-cameron-m
Warning: Permanently added '104.197.45.35' (ECDSA) to the list of known hosts.
I then exited. I enabled forwarding for that host.
$ nano ~/.ssh/config
Host 104.197.45.35
ForwardAgent yes
I added the gcloud key.
$ ssh-add ~/.ssh/google_compute_engine
I then verified that it was added by listing the key fingerprints with ssh-add -l. I reconnected to the master VM and ran ssh-add -l again to verify that the keys were indeed forwarded. After that, connecting to the worker node worked just fine.
ssh cluster-for-cameron-w-0
About using SSH Agent Forwarding...
Because instances are frequently created and destroyed on the cloud, the (recreated) host fingerprint keeps changing. If the new fingerprint doesn't match with ~/.ssh/known_hosts, SSH automatically disables Agent Forwarding. The solution is:
$ ssh -A -o UserKnownHostsFile=/dev/null ...