How do I set up passwordless ssh on AWS - ssh

How do I set up passwordless ssh between nodes on an AWS cluster

The following steps to set up passwordless authentication have been tested thoroughly on CentOS and Ubuntu.
Assumptions:
You already have access to your EC2 machine, either via the pem key or with credentials for a unix user that has root permissions.
You have already set up RSA keys on your local machine. The private key and public key are available at "~/.ssh/id_rsa" and "~/.ssh/id_rsa.pub" respectively.
Steps:
Log in to your EC2 machine as the root user.
Create a new user
useradd -m <yourname>
sudo su <yourname>
cd
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
Append the contents of ~/.ssh/id_rsa.pub on your local machine to ~/.ssh/authorized_keys on the EC2 machine.
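One way to do that from your local machine is a single pipe over ssh (a sketch; the pem file, hostname and <yourname> are placeholders, and it assumes you still log in the same way as in the first step):
cat ~/.ssh/id_rsa.pub | ssh -i your-key.pem root@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com "cat >> /home/<yourname>/.ssh/authorized_keys"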
chmod -R 700 ~/.ssh
chmod 600 ~/.ssh/*
Make sure SSH logins are permitted by the machine. In /etc/ssh/sshd_config, make sure the line containing "PasswordAuthentication yes" is uncommented. Restart the sshd service if you make any change to this file:
service sshd restart # On Centos
service ssh restart # On Ubuntu
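To double-check the relevant settings without opening an editor, a quick grep works (optional; it shows both commented and uncommented values):
sudo grep -E '^#?(PasswordAuthentication|PubkeyAuthentication)' /etc/ssh/sshd_config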
Your passwordless login should work now. Try the following on your local machine:
ssh -A <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
Making yourself a superuser: open /etc/sudoers and make sure the following two lines are uncommented:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Add yourself to the wheel group.
usermod -aG wheel <yourname>
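As an optional sanity check, you can confirm the group membership and sudo rights while still logged in as root:
groups <yourname>
sudo -l -U <yourname>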

This may help someone
Copy the pem file onto the machine, then copy the contents of the pem file into the ~/.ssh/id_rsa file. You can use the command below or your own:
cat my.pem > ~/.ssh/id_rsa
Try ssh localhost; it should work, and the same goes for the other machines in the cluster.
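A slightly fuller version of the same idea (a sketch; my.pem stands for whatever your key file is called):
cat my.pem > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh localhost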

How I made passwordless ssh work between two instances is the following:
Create EC2 instances – they should be in the same subnet and have the same security group
Open ports between them – make sure the instances can communicate with each other. The default security group has one rule relevant for this case:
Type: All Traffic
Source: Custom – id of the security group
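If you prefer the CLI to the console, roughly the same rule can be added like this (a sketch; the group id is a placeholder and the exact flags can vary between AWS CLI versions):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol -1 --source-group sg-0123456789abcdef0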
Log in to the instance from which you want to connect to the other instance
Run:
ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
to generate a new rsa key.
Copy your private AWS key as ~/.ssh/my.key (or whatever name you want to use)
Make sure you change the permissions to 600
chmod 600 ~/.ssh/my.key
Copy the public key to the instance you wish to connect to without a password
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"
If you test the passwordless ssh to the other machine, it should work.
ssh 10.0.0.X

You can use ssh keys as described here:
http://pkeck.myweb.uga.edu/ssh/


Permission denied (publickey). when disabling PasswordAuthentication

I have 2 machines:
A Windows machine with WSL installed, which serves as the client.
An Ubuntu machine, with a test-user account, which serves as the server.
Both computers are on the same network.
On the Ubuntu computer, what I did:
I used ssh-keygen to generate a key pair, and copied the id_rsa file to the WSL machine.
Made sure the ssh service is up, with systemctl status ssh.
On the WSL, what I did:
Copied the id_rsa file as key.
Changed the permission of the key file with chmod 600 key.
Connected to the server machine:
ssh -i key test-user@XXX.XXX.XXX.XXX
This works well, but it also asks me for the user's password.
hamuto@DESKTOP-HLSFHPR:~$ ssh -i key test-user@XXX.XXX.XXX.XXX
test-user@XXX.XXX.XXX.XXX's password:
The problem with this is that with GitHub Actions I can't enter the password.
So I changed the file /etc/ssh/sshd_config in the server:
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no <-- I changed this to no and uncommented the line
#PermitEmptyPasswords no
When I retry to connect with ssh:
hamuto@DESKTOP-HLSFHPR:~$ ssh -i key test-user@XXX.XXX.XXX.XXX
test-user@XXX.XXX.XXX.XXX: Permission denied (publickey).
Why is that?
After days of research, I found the solution:
First things first, I needed to understand that you only need one key pair, generated on the Ubuntu server.
On the server, you have to copy id_rsa.pub into ~/.ssh/authorized_keys.
Set the permission correctly:
chown -R username:username /home/username/.ssh
chmod 700 /home/username/.ssh
chmod 600 /home/username/.ssh/authorized_keys
Change the value of PubkeyAuthentication in /etc/ssh/sshd_config to yes and uncomment the line.
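After editing /etc/ssh/sshd_config, restart the service so the change takes effect (Ubuntu with systemd):
sudo systemctl restart ssh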
Copy the private id_rsa key to the client and set its permissions to 600.
You can connect to the server:
ssh -i ~/.ssh/id_rsa test-user@XXX.XXX.XX.XX
Now it works.
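If it ever stops working, the client's verbose output and the server's auth log usually show which key was offered and why it was rejected (paths assume a default Ubuntu setup):
ssh -v -i ~/.ssh/id_rsa test-user@XXX.XXX.XX.XX
sudo tail -n 50 /var/log/auth.log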

Inject host's SSH keys into Docker Machine with Docker Compose

I am using Docker on Mac OS X with Docker Machine (with the default boot2docker machine), and I use docker-compose to setup my development environment.
Let's say that one of the containers is called "stack". Now what I want to do is call:
docker-compose run stack ssh user@stackoverflow.com
My public key (which has been added to stackoverflow.com and which will be used to authenticate me) is located on the host machine. I want this key to be available to the Docker Machine container so that I will be able to authenticate myself against stackoverflow using that key from within the container. Preferably without physically copying my key to Docker Machine.
Is there any way to do this? Also, if my key is password protected, is there any way to unlock it once so after every injection I will not have to manually enter the password?
You can add this to your docker-compose.yml (assuming the user inside the container is root):
volumes:
- ~/.ssh:/root/.ssh
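To check that the mounted keys are actually visible inside the container, a one-off run is enough (assuming the service is called stack):
docker-compose run stack ls -la /root/.ssh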
You can also look into a more advanced solution with an ssh agent (I have not tried it myself).
WARNING: This feature seems to have limited support in Docker Compose and is more designed for Docker Swarm.
(I haven't checked to make sure, but) My current impression is that:
In Docker Compose secrets are just bind mount volumes, so there's no additional security compared to volumes
The ability to change secret permissions on a Linux host may be limited
See answer comments for more details.
Docker has a feature called secrets, which can be helpful here. To use it one could add the following code to docker-compose.yml:
---
version: '3.1' # Note the minimum file version for this feature to work
services:
  stack:
    ...
    secrets:
      - host_ssh_key
secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
Then the new secret file can be accessed in the Dockerfile like this:
RUN mkdir ~/.ssh && ln -s /run/secrets/host_ssh_key ~/.ssh/id_rsa
Secret files won't be copied into the container:
When you grant a newly-created or running service access to a secret, the decrypted secret is mounted into the container in an in-memory filesystem
For more details please refer to:
https://docs.docker.com/engine/swarm/secrets/
https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets
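To see what actually ends up inside the container, you can list the mounted secrets in a one-off run (assuming the service is called stack; the secret should appear under /run/secrets):
docker-compose run stack ls -l /run/secrets/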
If you're using OS X and encrypted keys, this is going to be a PITA. Here are the steps I went through figuring this out.
Straightforward approach
One might think that there’s no problem. Just mount your ssh folder:
...
volumes:
- ~/.ssh:/root/.ssh:ro
...
This should be working, right?
User problem
The next thing we'll notice is that we're using the wrong user id. Fine, we'll write a script to copy the ssh keys and change their owner. We'll also set the ssh user in the config so that the ssh server knows who's connecting.
...
volumes:
  - ~/.ssh:/root/.ssh-keys:ro
command: sh -c './.ssh-keys.sh && ...'
environment:
  SSH_USER: $USER
...
# ssh-keys.sh
mkdir -p ~/.ssh
cp -r /root/.ssh-keys/* ~/.ssh/
chown -R $(id -u):$(id -g) ~/.ssh
cat <<EOF >> ~/.ssh/config
User $SSH_USER
EOF
SSH key passphrase problem
In our company we protect SSH keys using a passphrase. That wouldn’t work in docker since it’s impractical to enter a passphrase each time we start a container.
We could remove a passphrase (see example below), but there’s a security concern.
openssl rsa -in id_rsa -out id_rsa2
# enter passphrase
# replace passphrase-encrypted key with plaintext key:
mv id_rsa2 id_rsa
SSH agent solution
You may have noticed that locally you don’t need to enter a passphrase each time you need ssh access. Why is that?
That's what the SSH agent is for. The SSH agent is basically a server which listens on a special file, a unix socket, called the "ssh auth sock". You can see its location on your system:
echo $SSH_AUTH_SOCK
# /run/user/1000/keyring-AvTfL3/ssh
The SSH client communicates with the SSH agent through this file so that you enter the passphrase only once. Once the key is decrypted, the SSH agent keeps it in memory and sends it to the SSH client on request.
Can we use that in Docker? Sure, just mount that special file and specify a corresponding environment variable:
environment:
  SSH_AUTH_SOCK: $SSH_AUTH_SOCK
...
volumes:
  - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
We don’t even need to copy keys in this case.
To confirm that keys are available we can use ssh-add utility:
if [ -z "$SSH_AUTH_SOCK" ]; then
echo "No ssh agent detected"
else
echo $SSH_AUTH_SOCK
ssh-add -l
fi
The problem of unix socket mount support in Docker for Mac
Unfortunately for OS X users, Docker for Mac has a number of shortcomings, one of which is its inability to share Unix sockets between Mac and Linux. There’s an open issue in D4M Github. As of February 2019 it’s still open.
So, is that a dead end? No, there is a hacky workaround.
SSH agent forwarding solution
Luckily, this issue isn't new. Long before Docker there was a way to use local ssh keys within a remote ssh session. This is called ssh agent forwarding. The idea is simple: you connect to a remote server through ssh, and from that session you can reach further servers using your local keys, without copying the keys anywhere.
With Docker for Mac we can use a smart trick: share the ssh agent with the docker virtual machine over a TCP ssh connection, and mount the resulting socket file from the virtual machine into the container where we need the SSH connection. Step by step:
First, we create an ssh session to the ssh server inside a container inside a linux VM through a TCP port. We use a real ssh auth sock here.
Next, the ssh server forwards our ssh keys to the ssh agent on that container. The SSH agent's Unix socket lives at a location mounted into the Linux VM, i.e. the Unix socket works in Linux, so the non-working Unix socket file on the Mac side has no effect.
After that we create our useful container with an SSH client. We share the Unix socket file which our local SSH session uses.
There's a bunch of scripts that simplify this process:
https://github.com/avsm/docker-ssh-agent-forward
Conclusion
Getting SSH to work in Docker could've been easier, but it can be done, and it'll likely be improved in the future. At least the Docker developers are aware of this issue; they have even solved it for Dockerfiles with build-time secrets, and there's a suggestion for how to support Unix domain sockets.
You can forward SSH agent:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
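You can verify the agent is reachable from inside the container with ssh-add (assuming the image has an ssh client installed):
docker-compose run something ssh-add -l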
You can use a multi-stage build to build your containers. This is the approach you can take:
Stage 1: build an image with ssh
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
Add the environment attribute in your compose file:
environment:
- SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
Then pass the build arg from your build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
And remove the intermediate image afterwards for security. Hope this helps, cheers.
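The first stage was labelled stage=sshImage above precisely so it can be cleaned up afterwards; one way to do that (a sketch):
docker image prune --force --filter label=stage=sshImage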
Docker for Mac now supports mounting the ssh agent socket on macOS.
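The host-side path Docker Desktop exposes for this is /run/host-services/ssh-auth.sock; a minimal sketch with plain docker run (assuming a reasonably recent Docker Desktop for Mac):
docker run --rm -it \
-v /run/host-services/ssh-auth.sock:/ssh-agent \
-e SSH_AUTH_SOCK=/ssh-agent \
alpine sh -c "apk add --no-cache openssh-client && ssh-add -l"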

Avoid specifying the SSH key path when connecting through passwordless login

I've set up a passwordless connection through ssh using an SSH key pair.
So if I run the command:
ssh -i /root/.ssh/root_master master@ip
I'm able to connect to master@ip without typing the password.
However I would like to connect without typing
-i /root/.ssh/root_master
but just typing
ssh master#ip
Can anyone help me?
localHost $ ssh remoteUser@remoteHostname
If you want to connect to the remote server just by typing the above command, you must create ssh trust between your local host and the remote host.
Step 1: Create the ssh setup on both hosts (usually the .ssh directory is in the ~ directory).
Step 2: Generate an RSA key pair on both hosts. To generate the RSA key pair:
cd ~; mkdir -p .ssh; cd .ssh
ssh-keygen -t rsa -f "id_rsa" -N "" -P ""; chmod 400 id_rsa
touch authorized_keys; touch known_hosts
Step 3: Append the id_rsa.pub file of the local host to the authorized_keys file of the remote host, and vice versa in case you want to build trust in both directions (a one-liner for this is sketched after the steps).
Step 4: Also make an entry in the known_hosts file, or it will be created automatically when you connect for the first time.
This way you can create ssh trust between the hosts and so make the connection passwordless.
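For step 3, ssh-copy-id does the appending and permission handling for you (a sketch; run it in each direction if you want two-way trust):
ssh-copy-id remoteUser@remoteHostname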
Another way to do this is to use the newer SSH module of Perl.

oneadmin opennebula ssh localhost

We've been trying to use opennebula to simulate a cluster but ssh is driving us crazy.
For some still unknown reason, it is necessary that the user oneadmin (created by opennebula) is able to ssh to localhost. The "home" directory of opennebula (created by it) is /var/lib/one, and inside "one" we can find the .ssh directory. So here's what I've done up to now:
sudo -su oneadmin
oneadmin@pc:$ cd /var/lib/one/.ssh
oneadmin@pc:/var/lib/one/.ssh$ ssh-keygen -t rsa
oneadmin@pc:/var/lib/one/.ssh$ cat id_rsa.pub >> authorized_keys
Moreover, I've changed all permissions: all files and the directory have oneadmin as owner and mode 600 (as I read in the opennebula guide),
and finally, by root, I do
service ssh restart
Then I login from one terminal as oneadmin again but when I perform:
ssh oneadmin#localhost
here's what I get
Permission denied (publickey).
Where am I making this damned mistake? We've lost more than a day on all these permissions!
I've just run into a similar problem - turns out Open Nebula didn't get on with selinux.
Finally found the solution over here - http://n40lab.wordpress.com/2012/11/26/69/ - we need to restore the SELinux context of ~/.ssh/authorized_keys:
$ chcon -v --type=ssh_home_t /var/lib/one/.ssh/authorized_keys
$ semanage fcontext -a -t ssh_home_t /var/lib/one/.ssh/authorized_keys
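You can confirm the context was restored with ls -Z; the ssh_home_t type should show up on the file:
$ ls -Z /var/lib/one/.ssh/authorized_keys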

Existing virtualbox machine exported using vagrant but I can't use it

I had an existing openSUSE 64-bit machine which I exported using
vagrant package --base opensuse64 --output opensuse.box
After creating the box I created another folder 'package-test' and copied the created box file there. Then I used
vagrant init opensuse opensuse.box
and then
vagrant up
but I am unable to connect to it via ssh.
Am I doing something wrong?
Thanks
To make vagrant ssh work, your OpenSUSE VM has to be configured for Public Key Authentication using Vagrant's key pair.
If you want to use password authentication, you'll have to specify the ssh port and use username/password known to you.
NOTE: If this is a vagrant base box, by default you can login as vagrant/vagrant with sudo privilege, as per the packaging guide.
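vagrant ssh-config prints the exact host, port and key file Vagrant expects, which is handy for connecting manually (the port 2222 below is only the usual default, not guaranteed):
vagrant ssh-config
ssh -p 2222 vagrant@127.0.0.1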
If you want to use your own key pair, you can copy the public key and add it to the VM's ~/.ssh/authorized_keys.
Examples
Manual (one-liner)
cat /path/to/vagrant.pub | ssh user@host 'cat >> ~/.ssh/authorized_keys'
Use ssh-copy-id
# -i defaults to ~/.ssh/id_rsa.pub
ssh-copy-id user@host
# custom pub key
ssh-copy-id -i vagrant.pub user@host
NOTE: make sure ~/.ssh and ~/.ssh/authorized_keys in the VM have proper permissions.
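If in doubt, a standard permission reset inside the VM looks like this:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys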