Cannot connect via SSH from Github Action workflow - ssh

I'm trying to connect to a newly created Droplet via SSH from a GitHub Actions runner.
My steps:
ssh-keygen -t rsa -f ~/.ssh/KEY_NAME -P ""
doctl compute ssh-key create KEY --public-key "CONTENT OF KEY_NAME.pub"
doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-1gb --region fra1 DROPLET_NAME --ssh-keys FINGERPRINT --wait
ssh -vvv -i ~/.ssh/KEY_NAME root@DROPLET_IP
✔️ Tested on a local Windows machine using doctl.exe run from cmd - works!
✔️ Tested in Docker (installed on Windows), based on a Linux image, using the doctl script - works!
⚠️ Tested on a GitHub Actions runner based on ubuntu-latest using the digitalocean/action-doctl action - doesn't work!
The message received is: connect to host ADDRESS_IP port 22: Connection refused.
The steps seem correct, so why does this not work on GitHub Actions?

If you are using the GitHub Action digitalocean/action-doctl, check issue 14 first:
In order to SSH into a Droplet, doctl needs access to the private half of the SSH key pair whose public half is on the Droplet.
Currently the doctl Action is based on a Docker container.
If you were using the Docker container directly, you could invoke it with:
docker run --rm --interactive --tty \
--env=DIGITALOCEAN_ACCESS_TOKEN=<YOUR-DO-API-TOKEN> \
-v $HOME/.ssh/id_rsa:/root/.ssh/id_rsa \
digitalocean/doctl compute ssh <DROPLET-ID>
in order to mount the SSH key from outside the container.
You might be better off just using doctl to grab the Droplet's IP address and using an Action that is more focused on SSH-related use cases and provides a lot of additional functionality: marketplace/actions/ssh-remote-commands.
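For example, a workflow could fetch the Droplet's IP with doctl and hand the SSH step to appleboy/ssh-action (the action behind marketplace/actions/ssh-remote-commands). This is only a sketch: the step names, secret names, and DROPLET_NAME are placeholders, not values from the question.
- name: Install doctl
  uses: digitalocean/action-doctl@v2
  with:
    token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
- name: Get Droplet IP
  id: droplet
  run: |
    IP=$(doctl compute droplet get DROPLET_NAME --format PublicIPv4 --no-header)
    echo "ip=$IP" >> "$GITHUB_OUTPUT"
- name: Run commands over SSH
  uses: appleboy/ssh-action@master
  with:
    host: ${{ steps.droplet.outputs.ip }}
    username: root
    key: ${{ secrets.DROPLET_SSH_PRIVATE_KEY }}
    script: uname -a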

Related

How to clone gitlab repo over tor using ssh?

Error message
After adding the SSH key of a user to a GitLab server and repository that is hosted over Tor, a test was performed that tried to clone a private repository (to which the testing user is added) over Tor. The cloning was attempted with the command:
torsocks git clone git@some_onion_domain.onion:root/test.git
Which returns the error:
Cloning into 'test'...
1620581859 ERROR torsocks[50856]: Connection refused to Tor SOCKS (in socks5_recv_connect_reply() at socks5.c:543)
ssh: connect to host some_onion_domain.onion port 22: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
GitLab SSH Cloning Verification
However, to verify that SSH access is available to the test user, cloning was verified without Tor using the command:
git clone git@127.0.0.1:root/test.git
Which successfully returned:
Cloning into 'test'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.
Server-side hypothesis
My first guess is that it is a server-side issue that has to do with the lack of HTTPS, in the following setting in the /etc/gitlab/gitlab.rb file:
external_url 'http://127.0.0.1'
However, setting external_url 'https://127.0.0.1' requires an HTTPS certificate, e.g. from Let's Encrypt, which does not seem to be available for onion domains.
Client-side hypothesis
My second guess is that it is a client-side issue related to an incorrect SOCKS setting on the side of the test user running the torsocks command, similar to an issue with the SOCKS 5 protocol that seems to be described here.
Question
Hence I would like to ask:
How can I resolve the connect to host some_onion_domain.onion port 22: Connection refused error when users try to clone the repo over tor?
One can set the ssh port of the GitLab instance to 9001, e.g. with:
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:9001 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
gitlab/gitlab-ee:latest
Next, add port 9001 and port 22 to the ssh configuration in /etc/ssh/sshd_config by adding:
Port 9001
Port 22
then restart the ssh service with: systemctl restart ssh.
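To confirm that sshd is actually listening on both ports after the restart, a quick check (a sketch; the exact output will vary by system) could be:
sudo sshd -T | grep -i '^port'   # should print both: port 9001 and port 22
sudo ss -tlnp | grep sshd        # shows the listening sockets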
It is essential to add a public SSH key to the GitLab server for each computer you want to download the repo from, even if you only want to clone a public repository. You can make a new GitLab account for each computer, or add multiple public SSH keys to a single GitLab account. These instructions explain how to do that; tl;dr:
ssh-keygen -t ed25519
<enter>
<enter>
<enter>
systemctl restart ssh
xclip -sel clip < ~/.ssh/id_ed25519.pub
P.S. If xclip does not work, one can manually copy the SSH key shown by: cat ~/.ssh/id_ed25519.pub.
Then open a browser and go to https://gitlab.com/-/profile/keys (for your own Tor GitLab server that would be someoniondomain.onion/-/profile/keys) and paste that key in there.
That is it; now one can clone the repository over Tor with:
torify -p 22 git clone ssh://git@someoniondomain.onion:9001/root/public.git
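Cloning against port 9001 on the onion address implies that the hidden service forwards that port to the host; a minimal torrc sketch under that assumption (the HiddenServiceDir path and the web port are purely illustrative) would be:
# /etc/tor/torrc on the machine hosting GitLab
HiddenServiceDir /var/lib/tor/gitlab/
HiddenServicePort 9001 127.0.0.1:9001   # onion port 9001 -> local sshd listening on 9001
HiddenServicePort 80 127.0.0.1:80       # web UI, if it is exposed over the onion as well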
Note
As a side note, in the question I happened to have tested git clone git@127.0.0.1:root/test.git; however, instead of using 127.0.0.1 I should have used either the output of hostname -I or the public IP address of the device that hosts the GitLab server. Furthermore, I should have verified whether the GitLab server was accessible through SSH by testing:
ssh -T git@youronionserver.onion
Which should return Congratulations.... It would not have done so if I had tested it, indicating that the problem was in the SSH access to the GitLab server (or the SSH connection to the device). I could then have determined whether the SSH problem was with the device or with the GitLab server by testing whether I could log into the device with ssh deviceusername@device_ip; a successful login would have indicated that the SSH problem was with the GitLab server.

Inject host's SSH keys into Docker Machine with Docker Compose

I am using Docker on Mac OS X with Docker Machine (with the default boot2docker machine), and I use docker-compose to set up my development environment.
Let's say that one of the containers is called "stack". Now what I want to do is call:
docker-compose run stack ssh user@stackoverflow.com
My public key (which has been added to stackoverflow.com and which will be used to authenticate me) is located on the host machine. I want this key to be available to the Docker Machine container so that I will be able to authenticate myself against stackoverflow using that key from within the container. Preferably without physically copying my key to Docker Machine.
Is there any way to do this? Also, if my key is passphrase-protected, is there any way to unlock it once, so that I will not have to enter the passphrase manually after every injection?
You can add this to your docker-compose.yml (assuming your user inside the container is root):
volumes:
  - ~/.ssh:/root/.ssh
You can also check out a more advanced solution with an SSH agent (I have not tried it myself).
WARNING: This feature seems to have limited support in Docker Compose and is more designed for Docker Swarm.
(I haven't checked to make sure, but) My current impression is that:
In Docker Compose secrets are just bind mount volumes, so there's no additional security compared to volumes
The ability to change secret permissions on a Linux host may be limited
See answer comments for more details.
Docker has a feature called secrets, which can be helpful here. To use it one could add the following code to docker-compose.yml:
---
version: '3.1' # Note the minimum file version for this feature to work
services:
  stack:
    ...
    secrets:
      - host_ssh_key
secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
Then the new secret file can be accessed in Dockerfile like this:
RUN mkdir ~/.ssh && ln -s /run/secrets/host_ssh_key ~/.ssh/id_rsa
Secret files won't be copied into container:
When you grant a newly-created or running service access to a secret, the decrypted secret is mounted into the container in an in-memory filesystem
For more details please refer to:
https://docs.docker.com/engine/swarm/secrets/
https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets
If you're using OS X and encrypted keys, this is going to be a PITA. Here are the steps I went through while figuring this out.
Straightforward approach
One might think that there’s no problem. Just mount your ssh folder:
...
volumes:
  - ~/.ssh:/root/.ssh:ro
...
This should be working, right?
User problem
The next thing we'll notice is that we're using the wrong user id. Fine, we'll write a script to copy the ssh keys and change their owner. We'll also set the ssh user in the config so that the ssh server knows who's connecting.
...
volumes:
  - ~/.ssh:/root/.ssh-keys:ro
command: sh -c './.ssh-keys.sh && ...'
environment:
  SSH_USER: $USER
...
# ssh-keys.sh
mkdir -p ~/.ssh
cp -r /root/.ssh-keys/* ~/.ssh/
chown -R $(id -u):$(id -g) ~/.ssh
cat <<EOF >> ~/.ssh/config
User $SSH_USER
EOF
SSH key passphrase problem
In our company we protect SSH keys using a passphrase. That wouldn’t work in docker since it’s impractical to enter a passphrase each time we start a container.
We could remove a passphrase (see example below), but there’s a security concern.
openssl rsa -in id_rsa -out id_rsa2
# enter passphrase
# replace passphrase-encrypted key with plaintext key:
mv id_rsa2 id_rsa
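An equivalent way to strip the passphrase using the OpenSSH tooling itself (a sketch; adjust the key path, and it will prompt for the old passphrase) would be:
# rewrite the key in place with an empty passphrase
ssh-keygen -p -f ~/.ssh/id_rsa -N ""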
SSH agent solution
You may have noticed that locally you don’t need to enter a passphrase each time you need ssh access. Why is that?
That’s what SSH agent is for. SSH agent is basically a server which listens to a special file, unix socket, called “ssh auth sock”. You can see its location on your system:
echo $SSH_AUTH_SOCK
# /run/user/1000/keyring-AvTfL3/ssh
The SSH client communicates with the SSH agent through this file, so you only enter the passphrase once. Once the key is decrypted, the SSH agent stores it in memory and sends it to the SSH client on request.
Can we use that in Docker? Sure, just mount that special file and specify a corresponding environment variable:
environment:
  SSH_AUTH_SOCK: $SSH_AUTH_SOCK
...
volumes:
  - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
We don’t even need to copy keys in this case.
To confirm that keys are available we can use ssh-add utility:
if [ -z "$SSH_AUTH_SOCK" ]; then
  echo "No ssh agent detected"
else
  echo $SSH_AUTH_SOCK
  ssh-add -l
fi
The problem of unix socket mount support in Docker for Mac
Unfortunately for OS X users, Docker for Mac has a number of shortcomings, one of which is its inability to share Unix sockets between Mac and Linux. There’s an open issue in D4M Github. As of February 2019 it’s still open.
So, is that a dead end? No, there is a hacky workaround.
SSH agent forwarding solution
Luckily, this issue isn't new. Long before Docker there was a way to use local ssh keys within a remote ssh session. This is called ssh agent forwarding. The idea is simple: you connect to a remote server through ssh, your local keys are forwarded into that session, and from there you can reach other servers with them.
With Docker for Mac we can use a smart trick: share the ssh agent with the Docker virtual machine over a TCP ssh connection, and mount that socket file from the virtual machine into another container where we need the SSH connection. The solution works as follows:
First, we create an ssh session to the ssh server inside a container inside the Linux VM, through a TCP port. We use the real ssh auth sock here.
Next, the ssh server forwards our ssh keys to the ssh agent in that container. The ssh agent's Unix socket lives in a location mounted into the Linux VM; i.e. the Unix socket works in Linux, and the non-working Unix socket file on the Mac has no effect.
After that, we create our useful container with an SSH client and share the Unix socket file which our local SSH session uses.
There’s a bunch of scripts that simplifies that process:
https://github.com/avsm/docker-ssh-agent-forward
Conclusion
Getting SSH to work in Docker could've been easier, but it can be done, and it'll likely be improved in the future. At least the Docker developers are aware of this issue; they have even solved it for Dockerfiles with build-time secrets, and there's a suggestion for how to support Unix domain sockets.
You can forward the SSH agent:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
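Assuming the container has an ssh client installed, usage could then look roughly like this (service name taken from the snippet above):
ssh-add ~/.ssh/id_rsa                      # unlock the key once on the host
docker-compose up -d something
docker-compose exec something ssh-add -l   # should list the forwarded key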
You can use a multi-stage build to build your containers. This is the approach you can take:
Stage 1: build an image with ssh
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
Add the environment attribute in your compose file:
environment:
  - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
then pass the build arg from your build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
Then remove the intermediate image for security. Hope this helps, cheers.
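Since the first stage is tagged with LABEL stage=sshImage, one way to clean it up afterwards (a sketch using that label) is:
docker image prune --force --filter label=stage=sshImage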
Docker for Mac now supports mounting the ssh agent socket on macOS.
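A minimal compose sketch using that support; the socket path /run/host-services/ssh-auth.sock is the one Docker Desktop documents (verify against your version), and the service and image names are placeholders:
services:
  stack:
    image: my-image   # placeholder
    environment:
      SSH_AUTH_SOCK: /run/host-services/ssh-auth.sock
    volumes:
      - /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock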

gcloud compute ssh from one VM to another VM on Google Cloud

I am trying to ssh into a VM from another VM in Google Cloud using the gcloud compute ssh command. It fails with the below message:
/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/compute/lib/base_classes.py:9: DeprecationWarning: the sets module is deprecated
import sets
Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255]. See https://cloud.google.com/compute/docs/troubleshooting#ssherrors for troubleshooting hints.
I made sure the ssh keys are in place but still it doesn't work. What am I missing here?
There is an assumption that you have connected to the externally-visible instance using SSH beforehand with gcloud.
From your local machine, start ssh-agent with the following command to manage your keys for you:
me@local:~$ eval `ssh-agent`
Call ssh-add to load the gcloud compute public keys from your local computer into the agent, and use them for all SSH commands for authentication:
me@local:~$ ssh-add ~/.ssh/google_compute_engine
Log into an instance with an external IP address while supplying the -A argument to enable authentication agent forwarding.
gcloud compute ssh --ssh-flag="-A" INSTANCE
source: https://cloud.google.com/compute/docs/instances/connecting-to-instance#sshbetweeninstances.
I am not sure about the 'flags' because it's not working for me, but maybe I have a different OS or gcloud version and it will work for you.
Here are the steps I ran on my Mac to connect to the Google Dataproc master VM and then hop onto a worker VM from the master VM. I ssh'd to the master VM to get the IP.
$ gcloud compute ssh cluster-for-cameron-m
Warning: Permanently added '104.197.45.35' (ECDSA) to the list of known hosts.
I then exited. I enabled forwarding for that host.
$ nano ~/.ssh/config
Host 104.197.45.35
ForwardAgent yes
I added the gcloud key.
$ ssh-add ~/.ssh/google_compute_engine
I then verified that it was added by listing the key fingerprints with ssh-add -l. I reconnected to the master VM and ran ssh-add -l again to verify that the keys were indeed forwarded. After that, connecting to the worker node worked just fine.
ssh cluster-for-cameron-w-0
About using SSH Agent Forwarding...
Because instances are frequently created and destroyed on the cloud, the (recreated) host fingerprint keeps changing. If the new fingerprint doesn't match with ~/.ssh/known_hosts, SSH automatically disables Agent Forwarding. The solution is:
$ ssh -A -o UserKnownHostsFile=/dev/null ...
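When connecting through gcloud rather than plain ssh, the same options can be passed along with repeated --ssh-flag arguments (a sketch; verify the behaviour on your gcloud version):
gcloud compute ssh INSTANCE \
  --ssh-flag="-A" \
  --ssh-flag="-o UserKnownHostsFile=/dev/null" \
  --ssh-flag="-o StrictHostKeyChecking=no"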

SSH directly into a docker container

I've got some Docker containers and now I want to access one of them via ssh. That part works; I get an ssh connection to the Docker container.
But now I have the problem that I don't know which user I can use to access this container.
I've tried both users I have on the host machine (web and root), but they don't work.
What to do now?
You can drop directly into a running container with:
$ docker exec -it myContainer /bin/bash
You can get a shell in a new container started from an image with:
$ docker run -it myImage /bin/bash
This is the preferred method of getting a shell on a container. Running an SSH server is considered not a good practice and, although there are some use cases out there, should be avoided when possible.
If you want to connect directly into a Docker container, without connecting to the Docker host, your Dockerfile should include the following (an SSH server has to be installed in the image for this to work):
# install the SSH server (assumes a Debian/Ubuntu base image)
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:pass' | chpasswd
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then use docker run with -p and -d flags. Example:
docker run -p 8022:22 -d your-docker-image
You can connect with:
ssh root@your-host -p 8022
1. Issue the command docker inspect <containerId or name>.
You will get a result like this:
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"my_bridge": {
"IPAMConfig": {
"IPv4Address": "172.17.0.20"
},
"Links": null,
"Aliases": [
"3784372432",
"xxx",
"xxx2"
],
"NetworkID": "ff7ea463ae3e6e6a099e0e044610cdcdc45b21f7e8c77a814aebfd3b2becd306",
"EndpointID": "6be4ea138f546b030bb08cf2c8af0f637e8e4ba81959c33fb5125ea0d93af967",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.20",
"IPPrefixLen": 24,
...
2. Read out and copy the IP address from there and connect to it via the command ssh existingUser@IpAddress, e.g. ssh someExistingUser@172.17.0.20. If the user doesn't exist, create one in the guest image, preferably with sudo privileges. Probably don't use the root user directly since, as far as I know, that user is preset for connecting to the image via ssh keys, or has a preset password, and changing it would probably end up in not being able to ssh-connect to the image's terminal the regular way, i.e. docker exec -it containerName /bin/bash or docker-compose exec containerName /bin/bash.
For some cases, enabling SSH in a Docker container is useful, especially when we want to test some scripts.
The link below gives a good example of how to create an image with ssh enabled, and how to get its IP and connect to it.
Here
If a true SSH connection into the container is needed (i.e. to allow isolated access over the internet), this image from the linuxserver.io guys could be a great solution: https://hub.docker.com/r/linuxserver/openssh-server
A much more robust solution is pulling nsenter down to your server, then sshing in and running docker-enter from there. That way you don't need to run multiple processes in the container (an ssh server plus whatever the container is for), or worry about all the extra overhead of ssh users and such (not to mention the security concerns).
The idea behind containers is that a container runs a single process so that it can be monitored by the daemon. If this process stops or fails for some reason, it can be restarted depending on your preference in your config. An ssh server is a running process. Therefore, if you need ssh access to your setup, make an ssh server service, which can share volumes with the other containers that are running alongside it in the setup.
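A rough compose sketch of that idea, using the linuxserver/openssh-server image mentioned above (service names, ports, and volumes are illustrative; consult the image's documentation for how to configure keys):
services:
  app:
    image: my-app-image          # placeholder for the actual application container
    volumes:
      - shared-data:/data
  ssh:
    image: linuxserver/openssh-server
    ports:
      - "2222:2222"              # the image's sshd listens on 2222 by default
    volumes:
      - shared-data:/data        # same volume, so the app's files are reachable over ssh
volumes:
  shared-data: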
To open a shell on a container in a host directly:
Imagine you are on your PC at home and you have a remote machine that runs docker and has running containers, and you want to open a shell on the container directly without "stopping by" on the remote host:
(The -t flag exposes tty)
ssh -t user#remote.host 'docker exec -it running_container_name /bin/bash'
If you are already on the host, like the accepted answer:
(The -i interactive -t tty)
docker exec -it running_container_name /bin/bash

Connect from one Docker container to another

I want to run rabbitmq-server in one docker container and connect to it from another container using celery (http://celeryproject.org/)
I have rabbitmq running using the below command...
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and running the celery via
sudo docker run -i -t markellul/celery /bin/bash
When I am trying to do the very basic tutorial to validate the connection on http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
I am getting a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111] Connection refused.
When I install rabbitmq on the same container as celery it works fine.
What do I need to do to have container interacting with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
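Alternatively, both containers can be started directly on a user-defined network, which also gives them DNS by container name, so the broker can be addressed by name instead of IP (a sketch; the guest user and default port 5672 are assumptions carried over from the question):
docker network create mynetwork
docker run --name rabbitmq --network mynetwork -d markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery --network mynetwork -it markellul/celery /bin/bash
# inside the celery container the broker is then reachable as:
#   amqp://guest@rabbitmq:5672//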
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help the communication between docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here, the id would be ok too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition Docker adds a host entry for the source container to the /etc/hosts file. In this example amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
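Putting that together, inside the linked celery container the broker could be addressed like this (a sketch; guest is the default RabbitMQ user assumed from the question):
# via the /etc/hosts entry that Docker created for the link alias:
amqp://guest@amq:5672//
# or via the injected environment variables:
amqp://guest@${AMQ_PORT_5672_TCP_ADDR}:${AMQ_PORT_5672_TCP_PORT}//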
Just get your container ip, and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what Docker does is open up a new port, such as 49xxx, on the host and forward it to port 5672 of the container.
You should be able to see which port is forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
Create Image:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run Docker image:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create Network:
docker network create -d bridge "networkname"
Connect the network with the containers (named after the images) created by running the images:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network.
Inspect the network to verify the connections:
docker network inspect "networkname"
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using Volumes is given here http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit. Note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 7.0 or 8.0 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143