I've set up two Google Compute Engine instances and I can easily SSH into both of them using the key created by the gcloud compute ssh command. But when I try the following...
myself@try-master ~] ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
myself@try-master ~] cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
myself@try-master ~] chmod 0600 ~/.ssh/authorized_keys
myself@try-master ~] ssh-copy-id -i ~/.ssh/id_rsa.pub myself@try-slave-1
... it does not work, and ssh-copy-id shows the message below:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
If I copy the google_compute_engine private and public keys onto try-master, I can use them to log in to both instances, but I find it unsatisfactory to move a private key over the network. I guess this is somewhat related to this topic [1].
How can this be solved?
[1] https://cloud.google.com/compute/docs/instances#sshbetweeninstances
Using CentOS 7 images, and CentOS 7 as the local host:
gcloud compute instances create try-master --image centos-7
gcloud compute instances create try-slave-1 --image centos-7
This can be solved by using SSH agent forwarding during the initial SSH key setup:
Set up agent forwarding once on the local machine (note the "-A" flag below). First you need to run:
eval `ssh-agent -s`
And then
ssh-add ~/.ssh/google_compute_engine
Then connect with forwarding enabled:
gcloud compute ssh --ssh-flag="-A" try-master
Perform the steps above (from keygen to ssh-copy-id)
myself@try-master ~] ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
myself@try-master ~] cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
myself@try-master ~] chmod 0600 ~/.ssh/authorized_keys
myself@try-master ~] ssh-copy-id -i ~/.ssh/id_rsa.pub myself@try-slave-1
myself@try-master ~] exit
Log in again to try-master without SSH agent forwarding:
gcloud compute ssh try-master
myself@try-master ~] ssh myself@try-slave-1
myself@try-slave-1 ~]
The initial approach didn't work because GCE instances only allow public-key authentication by default. ssh-copy-id is therefore unable to authenticate against try-slave-1 to copy the new public key, because no key available on try-master is authorized on try-slave-1 yet.
With agent forwarding, authentication requests made on try-master are relayed back to the agent on your local machine, so the google_compute_engine key can be used when connecting from try-master to try-slave-1. The GCE account manager on try-slave-1 fetches the matching public key from your project metadata, and thus ssh-copy-id is able to authenticate and copy the new key.
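If you want to check that forwarding is active before running ssh-copy-id, a quick sanity check (not part of the original steps) is to list the identities the forwarded agent exposes on try-master:
# Should print the fingerprint of your local google_compute_engine key;
# "The agent has no identities." means forwarding is not in effect.
myself@try-master ~] ssh-add -l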
Related
For security reasons and compliance, we're required to set up 2FA on our hosts. We implement it by forcing authentication with passwords AND a public key via the AuthenticationMethods setting in sshd_config. The private key is required to have a passphrase as well.
So in order to run playbooks on these hosts, we need to be able to enter both the login password and the passphrase of the private key. I've used the -k flag together with the ansible_ssh_private_key_file option in the hosts file (or with the --private-key flag). It asks for the SSH login password but then it just hangs and never asks me for the private key passphrase. When I use the -vvvv flag I see that the key is passed correctly, but the SSH login password isn't passed with the command:
<10.1.2.2> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=22022 -o 'IdentityFile="/home/me/.ssh/id_ed25519"' -o 'User="me"' -o ConnectTimeout=10 -o ControlPath=/home/me/.ansible/cp/db574551ae 10.1.2.2 '/bin/sh -c '"'"'echo ~me && sleep 0'"'"''
How can I make Ansible work with both passwords and public keys?
As stated in the Ansible Documentation:
Ansible does not expose a channel to allow communication between the user and the ssh process to accept a password manually to decrypt an ssh key when using the ssh connection plugin (which is the default). The use of ssh-agent is highly recommended.
This is why you don't get prompted to type in your private key passphrase. As said in the comments, set up an ssh agent; you will be prompted for the passphrase when you add the key:
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
Then, after the playbook execution, clear out the identities so that you will be asked for the passphrase again next time:
# Deletes all identities from the agent:
ssh-add -D
# or, selectively remove a single identity from the agent:
ssh-add -d <file>
You may pack the key addition, playbook execution and cleanup into one wrapper shell script, as sketched below.
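A minimal sketch of such a wrapper, assuming the key lives at ~/.ssh/id_rsa and the playbook is called site.yml (both are placeholders for your own paths):
#!/bin/bash
# Start a throwaway agent, load the passphrase-protected key,
# run the playbook with -k for the SSH login password, then clean up.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa          # prompts for the key passphrase
ansible-playbook -k site.yml   # prompts for the SSH login password
ssh-add -D                     # drop all identities again
ssh-agent -k                   # stop the agent started above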
I'm trying to access localhost:6006 on my remote Ubuntu machine using public keys, and forward it to localhost:6006 on my local machine.
The command is something similar to:
ssh -N -L 127.0.0.1:6006:127.0.0.1:6006 ubuntu@XXX.XX.XX.XXX
but I keep getting "Permission denied (publickey)", even though I can access the machine with my keys via normal ssh.
You should specify your private key with the -i option:
ssh -i [path_of_your_private_key] -N -L 127.0.0.1:6006:127.0.0.1:6006 ubuntu@XXX.XX.XX.XXX
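Alternatively, to avoid typing -i every time, you can point ssh at the key via ~/.ssh/config (a sketch; the host alias, address and key path are placeholders):
Host tunnel-box
    HostName XXX.XX.XX.XXX
    User ubuntu
    IdentityFile ~/.ssh/your_private_key
After that, ssh -N -L 127.0.0.1:6006:127.0.0.1:6006 tunnel-box picks up the key automatically.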
I want to copy big files from one Linux server (SLES 11) to another (SunOS) via bash scripting. I don't want a password prompt, so I used ssh-keygen to generate a key for this connection. These are the steps I followed:
ssh-keygen -t rsa -b 2048
ssh-copy-id -i /home/username/.ssh/id_rsa.pub swtrans@111.111.111.111
ssh -i id_rsa.pub swtrans@111.111.111.111
After this, the scp command still requests a password.
I am not the 'root' user on either server.
I changed the permissions of the .ssh directory to 700 and of the authorized_keys file to 640 on the remote server.
ssh -i id_rsa.pub swtrans@111.111.111.111
The -i argument expects the private key, not the public one. You should use:
ssh -i id_rsa swtrans@111.111.111.111
If that doesn't help, please provide the errors you see in the server log and on the client.
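Once the key-based login works, scp accepts the same -i option, so the copy itself should no longer prompt for a password (a sketch; the file and destination path are placeholders):
scp -i /home/username/.ssh/id_rsa /path/to/bigfile swtrans@111.111.111.111:/target/dir/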
I have a Vagrant box with CentOS 7 where I am creating LXC containers. Ansible runs inside the Vagrant box. I create the container with Ansible like this:
- name: Create containers
  lxc_container:
    name: localdev_nginx
    container_log: true
    template: centos
    container_config:
      - 'lxc.network.ipv4 = 192.168.42.110/24'
      - 'lxc.network.ipv4.gateway = 192.168.42.1'
    container_command: |
      yum -y install openssh-server
      echo "Som*th1ng" | passwd root --stdin
      ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
    state: started
This creates the container for me, but afterwards I can't access the container from Ansible, unless I add the container's SSH host key to the Vagrant box's known_hosts like this:
- name: Tell the host about our servers it might want to ssh to
  shell: ssh-keyscan -t rsa 192.168.42.110 >> /root/.ssh/known_hosts
And if I add the container root password in the Ansible hosts file like this:
[dev-webservers]
loc-dev-www1.internavenue.com hostname=loc-dev-www1.internavenue.com ansible_ssh_host=192.168.42.110 ansible_connection=ssh ansible_user=root ansible_ssh_pass=Som*th1ng
I hope there is a better solution, because this one is really bad. How can I do it properly?
I copied the Vagrant box's public key into the container's authorized_keys and used this option in the hosts file:
ansible_ssh_extra_args="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
This is only supported from Ansible 2.0 onwards.
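For reference, once the Vagrant box's public key is in the container's authorized_keys, the hosts entry could drop the plain-text password and point at the private key instead (a sketch, assuming the key sits at /root/.ssh/id_rsa on the Vagrant box):
[dev-webservers]
loc-dev-www1.internavenue.com ansible_ssh_host=192.168.42.110 ansible_connection=ssh ansible_user=root ansible_ssh_private_key_file=/root/.ssh/id_rsa ansible_ssh_extra_args="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"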
How do I set up passwordless SSH between nodes on an AWS cluster?
The following steps to set up passwordless authentication have been tested thoroughly on CentOS and Ubuntu.
Assumptions:
You already have access to your EC2 machine, either using the .pem key or with credentials for a Unix user that has root permissions.
You have already set up RSA keys on your local machine. The private and public keys are available at "~/.ssh/id_rsa" and "~/.ssh/id_rsa.pub" respectively.
Steps:
Log in to your EC2 machine as the root user.
Create a new user:
useradd -m <yourname>
sudo su <yourname>
cd
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
Append the contents of ~/.ssh/id_rsa.pub from your local machine to ~/.ssh/authorized_keys on the EC2 machine (see the example after the chmod commands below).
chmod -R 700 ~/.ssh
chmod 600 ~/.ssh/*
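One way to carry out the append step above is to paste the key while still logged in as <yourname> on the EC2 machine (the key string below is a placeholder for the contents of your local ~/.ssh/id_rsa.pub):
echo 'ssh-rsa AAAA...your-public-key... you@your-laptop' >> ~/.ssh/authorized_keys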
Make sure SSH logins are permitted by the machine. In /etc/ssh/sshd_config, make sure the line containing "PasswordAuthentication yes" is uncommented. Restart the sshd service if you make any change to this file:
service sshd restart # On Centos
service ssh restart # On Ubuntu
Your passwordless login should work now. Try the following from your local machine:
ssh -A <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
To make yourself a superuser, open /etc/sudoers and make sure the following two lines are uncommented:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Add yourself to the wheel group:
usermod -aG wheel <yourname>
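The new group membership only takes effect on your next login; after reconnecting you can verify it with:
groups <yourname>   # should list "wheel"
sudo whoami         # should print "root"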
This may help someone
Copy the .pem file to the machine, then copy the contents of the .pem file into ~/.ssh/id_rsa. You can use the command below or your own:
cat my.pem > ~/.ssh/id_rsa
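ssh refuses to use a private key file that other users can read, so after copying the .pem contents you will likely also need to tighten the permissions:
chmod 600 ~/.ssh/id_rsa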
Try ssh localhost; it should work, and the same goes for the other machines in the cluster.
Here is how I made passwordless SSH work between two instances:
Create the EC2 instances – they should be in the same subnet and have the same security group.
Open ports between them – make sure the instances can communicate with each other. Use the default security group, which has one rule relevant for this case:
Type: All Traffic
Source: Custom – id of the security group
Log in to the instance from which you want to connect to the other instance.
Run:
ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
to generate a new rsa key.
Copy your private AWS key to ~/.ssh/my.key (or whatever name you want to use).
Make sure you change the permissions to 600:
chmod 600 .ssh/my.key
Copy the public key to the instance you wish to connect to without a password:
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"
If you now test passwordless SSH to the other machine, it should work:
ssh 10.0.0.X
You can use SSH keys as described here:
http://pkeck.myweb.uga.edu/ssh/