Ansible `authorized_key` copies the key to the remote user, but ssh login still fails

I have the following task in my ansible playbook that adds my ssh public key for a remote user pranjal that was already created by a previous task.
- authorized_key:
    user: pranjal
    key: "{{ lookup('file', 'pranjal.pub') }}"
When I run the ansible playbook, it runs successfully. However, when I try logging in to the server using ssh pranjal@<server_ip>
I get a Permission denied (publickey) error.
To be sure, I logged into the server as another user and double-checked that the key listed in /home/pranjal/.ssh/authorized_keys matches the local public key I am using to log in.
My guess is that this is a permissions issue, and I understood the general solution from a related question.
But how do we change the permissions of authorized_keys from within the Ansible task itself? (So that I don't have to separately log into the instance to modify the permissions of .ssh/authorized_keys.)

- file: path=/home/pranjal/.ssh state=directory owner=pranjal mode=0700
- file: path=/home/pranjal/.ssh/authorized_keys state=file owner=pranjal mode=0600
You may also want to check/verify /etc/ssh/sshd_config has the following:
PubkeyAuthentication yes
You can debug further with ssh -vvv pranjal@<server_ip>
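Putting the permission fixes together with the original task, a minimal sketch of the relevant playbook section could look like this (user name and key file taken from the question; the explicit permission tasks mirror the ones above):
- name: ensure the .ssh directory exists with strict permissions
  file:
    path: /home/pranjal/.ssh
    state: directory
    owner: pranjal
    mode: '0700'

- name: add the public key for pranjal
  authorized_key:
    user: pranjal
    key: "{{ lookup('file', 'pranjal.pub') }}"

- name: make sure authorized_keys itself is only readable by the user
  file:
    path: /home/pranjal/.ssh/authorized_keys
    state: file
    owner: pranjal
    mode: '0600'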

Related

Ansible Inventory Specifying the Same Host with Different Users and Keys for Initial SSH User Setup and Disabling Root Access

I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that accesses with the root user, used to create a new user, set up ssh settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:
---
# ./hosts.yaml
---
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is there a cleaner way to do this?
Is this an anti-pattern in any way? As it stands it is not idempotent. It would be nice if running the same playbook twice always had the same output: either "success" or "no change".
I am using DigitalOcean and they have a functionality to have this done via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the users & ssh settings and disabling root access
---
# ./initial-host-setup.yaml
---
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
---
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash
    - name: add authorized keys for the infraops user
      authorized_key: 'user=infraops key="{{item}}"'
      with_file:
        '{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'
    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s
    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd
    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd
    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
Everything after this would use the masters inventory.
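For illustration, a subsequent playbook would then simply target the masters group and pick up infraops and its key from the inventory; a bare sketch (the task shown is only a placeholder):
---
# ./subsequent-host-setup.yaml
- hosts: masters
  become: 'yes'
  tasks:
    - name: placeholder task, connecting as infraops and escalating with sudo
      ping: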
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses; I didn't research the others), which is cross-provider enough for me to just stick with a bash init script solution.
I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without a cloud-init script you can't have this: either the server starts with root (or a similar user) that can perform these actions, or it starts without a user with those powers, in which case you can't perform these actions at all.
Further, I have seen Ansible playbooks and bash scripts that try to achieve the desired "idempotence" (completing with no errors even if root is already disabled) by testing root ssh access and then falling back to another user, but "I can't ssh as root" is a poor test for "the root user is disabled", because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh.
EDIT 2: placing this here, since I can't use newlines in my response to a comment.
β.εηοιτ.βε responded to my assertion:
"but "I can't ssh with root" is a poor test for "is the root user disabled" because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh"
with
"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
  - if success, we know the user/ssh setup tasks have not completed, so run those tasks
  - if failure, attempt ssh with infraops
    - if success, go ahead and run everything except the user creation again to ensure the ssh config is as desired
    - if failure ... something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible playbook.
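For what it's worth, one rough sketch of that fallback in Ansible (untested; it assumes that a failed wait_for_connection means the inventory user cannot log in yet, and that root access still works for the first run):
- hosts: masters
  gather_facts: false
  tasks:
    - name: probe whether the inventory user (infraops) can already connect
      wait_for_connection:
        timeout: 10
      ignore_errors: true
      register: infraops_conn

    - name: fall back to root for the rest of the play if infraops cannot log in yet
      set_fact:
        ansible_user: root
      when: infraops_conn is failed

    # ... the user creation and sshd hardening tasks from above would follow here ...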
You can override host variables for a given play by using vars.
- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:
You could define only the masters group (with the demo_master host) and override ansible_user and ansible_ssh_private_key_file at run time, using the command-line flags --user and --private-key.
So with a hosts.yaml containing
all:
  children:
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
And with the play running on - hosts: masters, the first run would, for example, be
ansible-playbook initial-host-setup.yaml \
--user root \
--private-key ~/.ssh/id_rsa_root
while the subsequent runs would simply be
ansible-playbook subsequent-host-setup.yaml
Since all the required values are in the inventory already.

Ansible sudo run ("as root") on Cygwin

I need to run a bash script as a sudo user on remote hosts using Ansible. My working machine is Win10 + Cygwin (sorry, it wasn't my fault).
So I tested it with non-sudo scripts (which don't need root access), and it works.
Well, at first it didn't work at all: Failed to connect to the host via ssh: my_user@server1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)
So I used this: ssh-keygen -t rsa, then ssh-copy-id my_user@server1 and ssh-copy-id my_user@server2 under my_user: I created an ssh key and shared it to the remote hosts. After that I could run scripts as my_user on server1, server2 and so on...
Now I need to run sudo scripts, but I can't understand how that should work.
On Cygwin there is no root user, and I don't know how to generate an ssh key for a nonexistent user.
How do I run an Ansible playbook as root? remote_user: root fails with the error: Failed to connect to the host via ssh: my_user@server1: Permission denied. Look, it's my_user, not root. Does it run as my_user or as root?
Maybe I'm doing this wrong altogether; is there any "best practice" way to run sudo scripts?
Please give me some help to solve my problem.
It seems like authentication as root is disabled on the remote server.
In /etc/ssh/sshd_config, find PermitRootLogin and set it to yes, but I don't recommend doing that.
Actually, using the root user directly is bad practice.
Check the permissions of your my_user. Maybe you can grant it sudo rights without a password.
To do that, edit /etc/sudoers as root and find this line:
# Allow members of group sudo to execute any command
And after it add this:
my_user ALL=(ALL) NOPASSWD: ALL
After that you'll be able to execute any sudo command without a password on the remote machine.
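Since the question is about Ansible anyway, the same sudoers change can also be made from a playbook task instead of by hand; a sketch mirroring the lineinfile approach from the earlier playbook (my_user taken from the question):
- name: allow my_user to run any sudo command without a password
  become: true
  lineinfile:
    dest: /etc/sudoers
    line: 'my_user ALL=(ALL) NOPASSWD: ALL'
    validate: visudo -cf %s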
I got it working; here's what I did.
So, the steps of the solution:
Set become: true in the playbook, about here:
- hosts: test_hosts
  become: true
  vars:
Next, run the playbook with the -K flag: ansible-playbook ./your_playbook.yml -K
So it works: it ran and even executed scripts under sudo.
But I can't understand how I can set which user is used as the "executing user".
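For reference, the login account and the account the tasks run as are set separately: remote_user (or ansible_user) is who Ansible connects as over ssh, while become_user is who the escalated tasks run as (root by default). A minimal sketch, with the script path being only an example:
- hosts: test_hosts
  remote_user: my_user     # the account Ansible logs in with over ssh
  become: true             # escalate privileges with sudo (password prompted by -K)
  become_user: root        # the account the escalated tasks actually run as (the default)
  tasks:
    - name: run the script with root privileges
      script: ./your_script.sh   # example path, not from the question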

Permission denied with Vagrant

When I do vagrant ssh in my project on a Windows 10 laptop I get this error:
vagrant@127.0.0.1: Permission denied (publickey).
When I then delete .vagrant/machines/default/virtualbox/private_key and do vagrant ssh again, I get access to the VM.
But when I then exit the VM and do vagrant halt, I get this error:
==> default: Attempting graceful shutdown of VM...
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
translation missing: en.vagrant_ps.errors.powershell_error.powershell_error
It seems to me that it tries to add my SSH key, but something goes wrong. Any idea how I can solve this?
You can simply run the following commands in your cmd:
set VAGRANT_PREFER_SYSTEM_BIN=0
vagrant ssh
Successfully tested under Windows 10 with Vagrant 2.1.5.
You can also see: https://www.vagrantup.com/docs/other/environmental-variables.html#vagrant_prefer_system_bin
I solved the error
vagrant@127.0.0.1: Permission denied (publickey)
by editing my Vagrantfile.
It seems Vagrant didn't like this configuration:
config.vm.synced_folder "app", "/home/vagrant"
Edited it to:
config.vm.synced_folder "app", "/vagrant"
The solution provided by @rekinz works, but I want to add some further explanation.
set VAGRANT_PREFER_SYSTEM_BIN=0
Vagrant defaults to using a system-provided SSH on Windows. This environment variable can be used to disable that behavior and force Vagrant to use the embedded SSH executable, by setting it to 0.
I had also used vagrant halt to clean up a previous installation, and then, when I provisioned it again, I got the same error as the OP.
I think the SSH provided by Windows was not working, and setting VAGRANT_PREFER_SYSTEM_BIN fixed that.
The problem can be that the Windows OpenSSH Client feature is intercepting the operation. Try opening PowerShell as admin and running the following:
Remove-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
If that doesn't solve it, then install the SSH client again:
Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'
You can also check the permissions of the file
.vagrant/machines/default/virtualbox/private_key
In my case the permissions on this file were set for an unknown user (likely from a previous OS installation); setting the permissions on this file to my own user fixed the issue.
It works for me when I point to the private_key (check its permissions first):
ssh -i ${vagrant_home}/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 -p 2222
On Windows 10, when we try to log in to the virtual machine node (e.g. node01) using
vagrant ssh node01
and get the error
vagrant@127.0.0.1: Permission denied (publickey)
try to follow the steps below:
In PowerShell, set the environment variable VAGRANT_PREFER_SYSTEM_BIN to 0 so that Vagrant uses its bundled ssh instead of the system one (read more about the variable here):
$Env:VAGRANT_PREFER_SYSTEM_BIN = 0
As per the issue listed in the Vagrant GitHub:
vagrant@127.0.0.1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Once done, do vagrant ssh to the VM which was not accessible earlier.

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key for the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key on playbook execution (e.g. ansible-playbook -u username --private-key play.yml). As far as I understand, this isn't for me, as I'm calling the playbook via the Vagrantfile. I've also read articles which mention ssh forwarding. (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need:
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
  ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often setup machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
        IdentityFile ~/.ssh/aptiko_ro_id_rsa
        User aptiko_ro
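One more piece the question mentioned is the known_hosts entry for the third-party server. Rather than copying a prepared file, it can be managed with the known_hosts module; a sketch (bitbucket.org as in the answer; note that running ssh-keyscan at provision time trusts the network at that moment, so pinning a verified key is safer):
- name: ensure bitbucket.org's host key is in root's known_hosts
  known_hosts:
    path: /root/.ssh/known_hosts
    name: bitbucket.org
    key: "{{ lookup('pipe', 'ssh-keyscan -t rsa bitbucket.org') }}"
    state: present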

Normal gitlab user with working keys cannot use PubkeyAuthentication to login to bash shell prompt

On an Ubuntu server, 'foo.com', that serves gitlab, a gitlab user, 'bar', can clone, push, and pull without having to use a password, with no problem (public key is set up on the gitlab server for user 'bar').
User 'bar' wants to use the command line on the server 'foo', and does ssh bar@foo.com. When user 'bar''s ssh keys are not in 'foo''s authorized_keys, 'bar' is momentarily logged into GitLab:
debug2: shell request accepted on channel 0
Welcome to GitLab, bar
and then that session promptly exits.
When user 'bar''s ssh key (even one that is not registered with GitLab) is in 'foo.com''s authorized_keys, that user gets the expected result when doing ssh bar@foo.com. However, user 'bar' (on their local computer) is then unable to push, pull, clone, etc. from their GitLab-managed repository, with the error message being that "'some-group/some-project.git' does not appear to be a git repository".
It appears that there is a misconfiguration such that shell access is mixed up with gitlab project access.
How can user 'bar' be able both to log in via ssh to a regular shell prompt and also to use git normally (interacting with the remote git server from their local box)?
After a lot of searching I figured out why this was happening on my end. I had the same issue: I wanted to use the same SSH key for both SSH login and GitLab access.
I found this thread helpful:
https://gist.github.com/hanseartic/368a63933afb7c9f7e6b
In the authorized_keys file, gitlab-shell adds specific options to limit access. It adds the limitation, using the command option, once the user enters the public key through the web interface.
We need to modify the command option to allow access to bash, and remember to remove the no-pty option if it is listed in the comma-separated section. For example, in my case the line contained no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty and I had to remove no-pty from the list.
A sample modified command should look like this:
command="if [ -t 0 ]; then bash; else /home/ec2-user/gitlab_service/gitlab-shell/bin/gitlab-shell key-11; fi",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAA...
Be mindful to edit the correct entry by checking the key number, or the public key and username associated with the command.
This did not require any service restart.