SSH authorization in Ansible between local and remote host - ssh

I have a Vagrant box with CentOS 7 where I am creating LXC containers. Ansible runs inside the Vagrant box. I create the container with Ansible like this:
- name: Create containers
  lxc_container:
    name: localdev_nginx
    container_log: true
    template: centos
    container_config:
      - 'lxc.network.ipv4 = 192.168.42.110/24'
      - 'lxc.network.ipv4.gateway = 192.168.42.1'
    container_command: |
      yum -y install openssh-server
      echo "Som*th1ng" | passwd root --stdin
      ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
    state: started
This creates the container for me, but afterwards I can't access the container from Ansible. It only works if I add the container's SSH host key to the Vagrant box's known_hosts like this:
- name: Tell the host about our servers it might want to ssh to
  shell: ssh-keyscan -t rsa 192.168.42.110 >> /root/.ssh/known_hosts
And if I add the container root password in the Ansible hosts file like this:
[dev-webservers]
loc-dev-www1.internavenue.com hostname=loc-dev-www1.internavenue.com ansible_ssh_host=192.168.42.110 ansible_connection=ssh ansible_user=root ansible_ssh_pass=Som*th1ng
I hope there is a better solution, because this one is really bad. What is the proper way to do it?

I copy the Vagrant box's public key into the container's authorized_keys and set this variable in the hosts file:
ansible_ssh_extra_args="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
This is only supported from Ansible 2.0 onwards.
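For illustration, here is a minimal sketch of how that key copy could be automated from the Vagrant box right after the container is created. It assumes the default LXC rootfs path /var/lib/lxc/<name>/rootfs and that the box's key pair lives at /root/.ssh/id_rsa; both paths are assumptions, adjust them to your setup:

- name: Authorize the Vagrant box key inside the container (paths are illustrative)
  shell: |
    mkdir -p /var/lib/lxc/localdev_nginx/rootfs/root/.ssh
    cat /root/.ssh/id_rsa.pub >> /var/lib/lxc/localdev_nginx/rootfs/root/.ssh/authorized_keys
    chmod 700 /var/lib/lxc/localdev_nginx/rootfs/root/.ssh
    chmod 600 /var/lib/lxc/localdev_nginx/rootfs/root/.ssh/authorized_keys

With the key in place, the inventory entry no longer needs ansible_ssh_pass:

[dev-webservers]
loc-dev-www1.internavenue.com ansible_ssh_host=192.168.42.110 ansible_connection=ssh ansible_user=root ansible_ssh_extra_args="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"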

Related

Use ansible vault passwords for ask-become-pass and ssh password

I would like to use Ansible Vault passwords for the SSH and become passwords when running ansible-playbook, so that I don't need to type them in via --ask-become-pass or the SSH password prompt.
Problem:
Every time I run my ansible-playbook command I am prompted for a ssh and become password.
My original command where I need to type the SSH and become password:
ansible-playbook playbook.yaml --ask-become-pass -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k --ask-vault-pass -T 40
Command I have tried to make ansible-playbook use my vault passwords instead of typing them in:
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars @group_vars/all/main.yaml
I tried creating the directory structure group_vars/all/main.yaml relative to where the command is run, where main.yaml holds my vault-encrypted values for "ansible_ssh_user", "ansible_ssh_pass", and "ansible_become_pass".
I even tried putting my password in the command:
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars ansible_ssh_pass=$'"MyP455word"'
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars ansible_ssh_pass='MyP455word'
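For comparison, the file form of --extra-vars is normally written with a quoted @ prefix; a minimal sketch of that invocation using the same paths as above (vault-encrypted values still need the vault password, hence --ask-vault-pass or --vault-password-file):

ansible-playbook playbook.yaml -i inventory --ask-vault-pass -e ansible_python_interpreter='/usr/bin/python3' --extra-vars "@group_vars/all/main.yaml"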
Every time I run my playbook command, I am still prompted for an SSH password and a become password. What am I missing here?
I have already read these two posts, both of which were not clear to me on the exact process, so neither helped:
https://serverfault.com/questions/686347/ansible-command-line-retriving-ssh-password-from-vault
Ansible vault password in group_vars not detected
Any recommendations?
EDIT: Including my playbook, role, settings.yaml, and inventory file as well.
Here is my playbook:
- name: Enable NFS server
  hosts: nfs_server
  gather_facts: False
  become: yes
  roles:
    - { role: nfs_enable }
Here is the role located in roles/nfs_enable/tasks/main.yaml
- name: Include vars
  include_vars:
    file: ../../../settings.yaml
    name: settings

- name: Start NFS service on server
  systemd:
    state: restarted
    name: nfs-kernel-server.service
Here is my settings file
#nfs share directory
nfs_ssh_user: admin
nfs_share_dir: "/nfs-share/logs/"
ansible_become_pass: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  55543131373731393764333932626261383765326432613239356638616234643335643438326165
  3332363366623937386635653463656537353663326139360a316436356634386135653038643238
  61313123656332663232633833366133373630396434346165336337623364383261356234653461
  3335386135553835610a303666346561376161366330353935363937663233353064653938646263
  6539
ansible_ssh_pass: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  55543131373731393764333932626261383765326432613239356638616234643335643438326165
  3332363366623937386635653463656537353663326139360a316436356634386135653038643238
  61313123656332663232633833366133373630396434346165336337623364383261356234653461
  3335386135553835610a303666346561376161366330353935363937663233353064653938646263
  6539
Here is my inventory
[nfs_server]
10.10.10.10 ansible_ssh_user=admin ansible_ssh_private_key_file=~/.ssh/id_ed25519

How to run Ansible playbook to multiple servers in a right way?

Ansible uses SSH to set up software on remote hosts.
If some machines have just been freshly installed, running an Ansible playbook from one host will not be able to connect to them, because there is no authorized_keys on the remote hosts yet.
To copy the Ansible host's public key to those target hosts like this:
$ ssh user@server "echo \"`cat .ssh/id_rsa.pub`\" >> .ssh/authorized_keys"
I first have to log in over SSH and create the file on every remote host:
$ mkdir .ssh
$ touch .ssh/authorized_keys
Is this the common way to run an Ansible playbook against remote servers, or is there a better way?
I think it's better to do that using Ansible as well, with the authorized_key module. For example, to authorize your key for user root:
ansible <hosts> -m authorized_key -a "user=root state=present key=\"$(cat ~/.ssh/id_rsa.pub)\"" --ask-pass
This can be done in a playbook also, with the target user as a variable that defaults to root:
- hosts: <NEW_HOSTS>
  vars:
    - username: root
  tasks:
    - name: Add authorized key
      authorized_key:
        user: "{{ username }}"
        state: present
        key: "{{ lookup('file', '/home/<YOUR_USER>/.ssh/id_rsa.pub') }}"
And executed with:
ansible-playbook auth.yml --ask-pass -e username=<TARGET_USER>
The target user should have the necessary privileges; if not, use become.
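A sketch of that invocation with privilege escalation enabled (the target user name is a placeholder):

ansible-playbook auth.yml --ask-pass --ask-become-pass -b -e username=<TARGET_USER>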

ansible ssh permission denied

I generated an SSH key and copied it to the remote server. When I try to SSH to that server, everything works fine:
ssh user@ip_address
The user is not root. If I try to SSH through Ansible:
ansible-playbook -i hosts playbook.yml
with ansible playbook:
---
- hosts: web
  remote_user: user
  tasks:
    - name: test connection
      ping:
and hosts file:
[web]
192.168.0.103
I get this error:
...
Permission denied (publickey,password)
What's the problem?
Ansible is using a different key from the one you use to connect to that 'web' machine.
You can explicitly configure Ansible to use a specific private key with
private_key_file=/path/to/key_rsa
as mentioned in the docs. Make sure the key Ansible uses is authorized for the remote user on the remote machine, with ssh-copy-id -i /path/to/key_rsa.pub user@webmachine_ip_address
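A minimal sketch of where that setting can live, either globally in ansible.cfg or per host in the inventory (paths are illustrative):

[defaults]
private_key_file = /path/to/key_rsa

# or, per host in the inventory file:
[web]
192.168.0.103 ansible_ssh_private_key_file=/path/to/key_rsa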
In my case I got a similar error while running an Ansible playbook after the host changed its fingerprint. I discovered this by trying to establish an SSH connection from the command line, and after running ssh-keygen -f "/root/.ssh/known_hosts" -R my_ip the problem was solved.
Run the play as below, passing the remote user explicitly; otherwise Ansible connects with its default remote user.
ansible-playbook -i hosts playbook.yml -u user
If you still get the error, run the command below and paste the output here.
ansible-playbook -i hosts playbook.yml -u user -vvv

Google cloud engine, ssh between two centos7 instances fails

I've set up 2 Google Compute Engine instances, and I can easily SSH into both of them using the key created by the gcloud compute ssh command. But when I try the following...
myself@try-master ~] ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
myself@try-master ~] cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
myself@try-master ~] chmod 0600 ~/.ssh/authorized_keys
myself@try-master ~] ssh-copy-id -i ~/.ssh/id_rsa.pub myself@try-slave-1
... it does not work, and ssh-copy-id shows the message below:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
If I copy the google_compute_engine private and public key to try-master, I can use it to log in to both instances, but I find it unsatisfactory to move a private key over the network. I guess this is somewhat related to this topic [1].
How can this be solved?
[1] https://cloud.google.com/compute/docs/instances#sshbetweeninstances
Using CentOS 7 images, and CentOS 7 as the local host:
gcloud compute instances create try-master --image centos-7
gcloud compute instances create try-slave-1 --image centos-7
This can be solved by using SSH agent forwarding during the initial key setup:
Set up agent forwarding once on the local machine (note the "-A" flag). First run:
eval `ssh-agent -s`
and then:
ssh-add ~/.ssh/google_compute_engine
gcloud compute ssh --ssh-flag="-A" try-master
Perform the steps above (from ssh-keygen to ssh-copy-id):
myself@try-master ~] ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
myself@try-master ~] cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
myself@try-master ~] chmod 0600 ~/.ssh/authorized_keys
myself@try-master ~] ssh-copy-id -i ~/.ssh/id_rsa.pub myself@try-slave-1
myself@try-master ~] exit
Login again into try-master without SSH agent forwarding:
gcloud compute ssh try-master
myself@try-master ~] ssh myself@try-slave-1
myself@try-slave-1 ~]
The initial approach didn't work because GCE instances only allow public key authentication by default. So ssh-copy-id is unable to authenticate against try-slave-1 to copy the new public key, because no key already available on try-master is authorized on try-slave-1 yet.
With agent forwarding, authentication requests from try-master to try-slave-1 are answered by the agent on your local machine, so your local google_compute_engine key is used without leaving your machine. The GCE account manager on try-slave-1 fetches the corresponding public key from your project metadata, so ssh-copy-id can authenticate and copy the new key.
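As a sanity check (not part of the original steps), you can confirm on try-master that the forwarding actually took effect before running ssh-copy-id:

myself@try-master ~] echo "$SSH_AUTH_SOCK"   # non-empty when the agent was forwarded
myself@try-master ~] ssh-add -l              # should list the google_compute_engine key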

How do I setup passwordless ssh on AWS

How do I set up passwordless SSH between nodes on an AWS cluster?
The following steps for setting up passwordless authentication have been tested thoroughly on CentOS and Ubuntu.
Assumptions:
You already have access to your EC2 machine, either using the .pem key or with credentials for a Unix user that has root permissions.
You have already set up RSA keys on your local machine. The private and public keys are available at "~/.ssh/id_rsa" and "~/.ssh/id_rsa.pub" respectively.
Steps:
Log in to your EC2 machine as the root user.
Create a new user
useradd -m <yourname>
sudo su <yourname>
cd
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
Append the contents of ~/.ssh/id_rsa.pub from your local machine to ~/.ssh/authorized_keys on the EC2 machine.
chmod -R 700 ~/.ssh
chmod 600 ~/.ssh/*
Make sure SSH logins are permitted by the machine. In /etc/ssh/sshd_config, make sure the line containing "PasswordAuthentication yes" is uncommented. Restart the sshd service if you make any change to this file:
service sshd restart # On Centos
service ssh restart # On Ubuntu
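If you prefer to make that edit non-interactively, here is a sketch of a one-liner (review your sshd_config first; this simply forces the directive on):

sudo sed -i -E 's/^#?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config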
Your passwordless login should work now. Try the following from your local machine:
ssh -A <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
Make yourself a superuser. Open /etc/sudoers (preferably with visudo) and make sure the following two lines are uncommented:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Add yourself to the wheel group:
usermod -aG wheel <yourname>
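A quick check that the sudo setup worked (a sanity check only):

su - <yourname>
sudo whoami    # should print "root" without prompting, thanks to the NOPASSWD rule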
This may help someone.
Copy the .pem file to the machine, then copy the contents of the .pem file into ~/.ssh/id_rsa. You can use the command below, or your own:
cat my.pem > ~/.ssh/id_rsa
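One pitfall worth noting: SSH refuses a private key with loose permissions, so lock the copied file down before testing:

chmod 600 ~/.ssh/id_rsa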
Try ssh localhost; it should work, and the same goes for the other machines in the cluster.
How I made passwordless SSH work between two instances is the following:
Create the EC2 instances; they should be in the same subnet and have the same security group.
Open ports between them and make sure the instances can communicate with each other. Use the default security group, which has one rule relevant for this case:
Type: All Traffic
Source: Custom – id of the security group
Log in to the instance from which you want to connect to the other instance.
Run:
ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
to generate a new rsa key.
Copy your private AWS key as ~/.ssh/my.key (or whatever name you want to use)
Make sure you change the permissions to 600:
chmod 600 .ssh/my.key
Copy the public key to the instance you wish to connect to without a password:
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"
Now if you test passwordless SSH to the other machine, it should work:
ssh 10.0.0.X
You can use SSH keys as described here:
http://pkeck.myweb.uga.edu/ssh/