Ansible unable to create folder on localhost with different user - ssh

I'm executing an Ansible playbook as appuser, but I want to create a folder as webuser on localhost.
ssh keys are set up for webuser on my localhost, so after logging in as appuser I can simply run ssh webuser@localhost to switch to webuser.
Note: I do not have sudo privileges, so I cannot sudo to switch from appuser to webuser.
Below is my playbook. It runs as appuser but needs to create a folder 04May2020 on localhost as webuser:
- name: "Play 1"
  hosts: localhost
  remote_user: "webuser"
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
  tasks:
    - name: create folder for today's print
      file:
        path: "/webWeb/htdocs/print/04May2020"
        state: directory
      remote_user: webuser
However, the output shows that the folder is created as appuser instead of webuser. See the output below, showing the connection established as appuser instead of webuser:
ansible-playbook /app/Ansible/playbook/print_oracle/print.yml -i /app/Ansible/playbook/print_oracle/allhosts.hosts -vvv
TASK [create folder for today] ***********************************
task path: /app/Ansible/playbook/print_oracle/print.yml:33
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
Can you please suggest if it is possible without sudo?

Putting all my comments together in a comprehensive answer.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
This indicates that you are connecting to localhost through the local connection plugin, either because you explicitly re-declared the host as such or because you are using the implicit localhost. From the discussion, you are in the second situation.
When using the local connection plugin, as indicated in the documentation, remote_user is ignored. Trying to change the user has no effect, as you can see in the test run below (user (u)ids changed):
# Check we are locally running as user1
$ id -a
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Running the same command through ansible returns the same result
$ ansible localhost -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Trying to change the remote user has no effect
$ ansible localhost -u whatever -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
Without changing your playbook and/or inventory, the only solution is to launch the playbook as the user who needs to create the directory.
Since you have ssh available, another solution is to declare a new host that you will use only for this purpose, targeting the local IP through ssh. (Note: you can explicitly declare localhost like this, but then all connections will go through ssh, which might not be what you want.)
Somewhere at the top of your inventory, add the line:
localssh ansible_host=127.0.0.1
And in your playbook, change the hosts line to:
hosts: localssh
Now the connection to your local machine will go through ssh and the remote_user will be obeyed correctly.
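Putting both pieces together, a minimal sketch of the adjusted inventory and play might look like this (paths and key file taken from the question above; the localssh alias is arbitrary, so adjust to your environment):

```yaml
# allhosts.hosts (inventory) -- an alias that forces an ssh connection to the local machine:
#   localssh ansible_host=127.0.0.1

# print.yml -- sketch, assuming the webuser key path from the question
- name: "Play 1"
  hosts: localssh
  remote_user: webuser
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
  tasks:
    - name: create folder for today's print
      file:
        path: /webWeb/htdocs/print/04May2020
        state: directory
```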

One way you can try is overriding the connection settings for localhost via host_vars. To do this, in the directory from which you are running ansible commands, create a host_vars directory. In that sub-directory, create a file named localhost, containing the line ansible_connection: smart
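As a concrete sketch of that layout (run from the directory that holds your playbooks and inventory; the file content is exactly the one line described above):

```shell
# create the per-host variable file for the implicit localhost
mkdir -p host_vars
printf 'ansible_connection: smart\n' > host_vars/localhost
cat host_vars/localhost   # → ansible_connection: smart
```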

Related

Access to jumpbox as normal user and change to root user in ansible

Here is my situation. I want to access a server through a jumpbox/bastion host.
So, I log in as a normal user on the jumpbox, then change user to root, and after that log in to the remote server as root. I don't have direct root access on the jumpbox.
$ ssh user@jumpbox
user@jumpbox:~$ su - root
Enter Password:
root@jumpbox:~# ssh root@remoteserver
Enter Password:
root@remoteserver:~#
Above is the manual workflow. I want to achieve this in ansible.
I have seen something like this.
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@jumpbox"'
This does not work when we need to switch to root before logging in to the remote server.
There are a few things to unpack here:
General Design / Issue:
This isn't an Ansible issue, it's an ssh issue/proxy misconfiguration.
A bastion host/ssh proxy isn't meant to be logged into to have commands run on it interactively (like su - root, enter password, then ssh...). That's not really a bastion; that's just a server you're logging into and running commands on. It isn't filling an actual ssh proxy/bastion/jump role. At that point you might as well just run Ansible on that host.
That's why things like ProxyJump and ProxyCommand aren't working. They are designed to work with ssh proxies that are configured as ssh proxies (bastions).
Running Ansible Tasks as Root:
Ansible can run tasks with sudo during execution (it's called "become" in Ansible lingo), so you should never need to ssh as the literal root user with Ansible (you really shouldn't ever ssh as root).
Answering the question:
There are a lot of workarounds for this, but the straightforward answer here is to configure the jump host as a proper bastion and your issue will go away. An example...
As the bastion user, create an ssh key pair, or use an existing one.
On the bastion, edit the user's ~/.ssh/config file so that it accesses the target server with the private key and desired user.
EXAMPLE user@bastion's ~/.ssh/config (I cringe seeing root here)...
Host remote-server
    User root
    IdentityFile ~/.ssh/my-private-key
Add the public key created in step 1 to the target server's ~/.ssh/authorized_keys file for the user you're logging in as.
After that type of config, your jump host is working as a regular ssh proxy. You can then use ProxyCommand or ProxyJump as you had tried to originally without issue.
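Once the bastion works as a regular proxy, the control machine's side could look like this (a sketch only; host and user names are placeholders, not from the original question):

```
# ~/.ssh/config on the machine running Ansible
Host remote-server
    ProxyJump user@bastion

# or, equivalently, per host in the Ansible inventory:
# remote-server ansible_ssh_common_args='-o ProxyJump=user@bastion'
```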

Ansible multi hop design

I would like to run an ansible playbook on a target host, passing through multiple intermediate hosts. The scenario looks similar to the one depicted in the picture:
I partially solved the issue by creating an ssh_config file in the Ansible project directory:
Host IP_HostN
    HostName IP_HOST_N
    ProxyJump Username1@IP_HOST_1:22,Username2@IP_HOST_2:22
    User UsernameN
and defining in the ansible.cfg in the Ansible project directory:
[ssh_connection]
ssh_args= -F "ssh_config"
The problem is that I need to supply the ssh username and password automatically for each transient host and for the target host, and I don't know how to automate this task. Moreover, python may not be installed on every transient node.
I found a reasonably good workaround. According to the scenario below, we create an ssh tunnel up to the transient host that can directly reach the target host. We also create a local port binding with the -L flag:
ssh -J user_1@transient_host1:port_1 -p port_2 user_2@transient_host2 -L LOCAL_PORT:TARGET_HOST_IP:TARGET_HOST_PORT
Then we can directly enter into Target Host using the local binding:
ssh user_target_host#localhost -p LOCAL_PORT
In this way, we can run ansible playbooks on the local host configuring ansible variables accordingly:
ansible_host: localhost
ansible_user: user_target_host
ansible_port: LOCAL_PORT
ansible_password: password_target_host
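For convenience, those variables can live in the inventory so the playbook itself stays unchanged; a sketch in YAML inventory format, using the same placeholders as the answer above (note that ansible_password also requires sshpass on the control node):

```yaml
# inventory sketch, assuming the ssh tunnel described above is already established
all:
  hosts:
    target_host:
      ansible_host: localhost
      ansible_port: LOCAL_PORT
      ansible_user: user_target_host
      ansible_password: password_target_host
```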

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key to the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key on playbook execution. (E.g. ansible-playbook -u username --private-key play.yml) As far as I understand, this isn't for me, as I'm calling the playbook via a Vagrantfile. I've also read articles which mention ssh agent forwarding (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need.
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
    ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
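A side note on that sudoers task: lineinfile supports a validate parameter, and running visudo over the candidate file prevents a typo from corrupting /etc/sudoers. A hedged variant of the same task (the visudo path may differ on your distribution):

```yaml
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
    validate: /usr/sbin/visudo -cf %s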
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often setup machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine's ssh: "When you connect to the third-party server, use key X instead of the default key, and log on as user Y". You tell it this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
        IdentityFile ~/.ssh/aptiko_ro_id_rsa
        User aptiko_ro
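One detail worth noting about the copy tasks above: ssh refuses to use a private key file that is group- or world-readable, which is why the first task sets mode: 0600. A quick illustration (the file name here is just a placeholder, not a real key):

```shell
# ssh requires strict permissions on private key files; 0600 is the usual choice
touch aptiko_ro_id_rsa_demo
chmod 600 aptiko_ro_id_rsa_demo
stat -c '%a' aptiko_ro_id_rsa_demo   # → 600
```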

How to use a public keypair .pem file for ansible playbooks?

I want to use a public aws keypair .pem file for running ansible playbooks. I want to do this without changing my ~/.ssh/id_rsa.pub and I can't create a new keypair from my current ~/.ssh/id_rsa.pub and apply it to the ec2 instances I am trying to change.
$ ansible --version
ansible 1.9.6
configured module search path = None
Here is my hosts file (note that my actual ip is replaced with 1.2.3.4). This is probably the issue since I need a way to set a public key variable and use that:
[all_servers:vars]
ansible_ssh_private_key_file = ./mykeypair.pem
[dashboard]
1.2.3.4 dashboard_domain=my.domain.info
Here is my playbook:
---
- hosts: dashboard
gather_facts: False
remote_user: ubuntu
tasks:
- name: ping
ping:
This is the command I am using to run it:
ansible-playbook -i ./hosts test.yml
It results in the following error:
fatal: [1.2.3.4] => SSH Error: Permission denied (publickey).
while connecting to 1.2.3.4:22
There is no problem with my keypair:
$ ssh -i mykeypair.pem ubuntu@1.2.3.4 'whoami'
ubuntu
What am I doing wrong?
Ok, little mistakes: it turns out you can't have spaces around = in host-file variables, and you need to define the group you are applying the vars to. This hosts file works:
[dashboard:vars]
ansible_ssh_private_key_file=./mykeypair.pem
[dashboard]
1.2.3.4 dashboard_domain=my.domain.info
I have come across this, and all I had to do was run the below:
$ ssh-agent bash
$ ssh-add ~/.ssh/keypair.pem

ansible ssh permission denied

I generated an ssh key and copied it to the remote server. When I try to ssh to that server, everything works fine:
ssh user@ip_address
The user is not root. If I try to connect through ansible:
ansible-playbook -i hosts playbook.yml
with ansible playbook:
---
- hosts: web
remote_user: user
tasks:
- name: test connection
ping:
and hosts file:
[web]
192.168.0.103
I got error:
...
Permission denied (publickey,password)
What's the problem?
Ansible is using a different key compared to the one you are using to connect to that 'web' machine.
You can explicitly configure ansible to use a specific private key with
private_key_file=/path/to/key_rsa
as mentioned in the docs. Make sure you authorize that key for the remote user on the remote machine with ssh-copy-id -i /path/to/key_rsa.pub user@webmachine_ip_address
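If you prefer not to touch the inventory, the same default can be set in ansible.cfg; a minimal sketch (the path is a placeholder):

```ini
[defaults]
private_key_file = /path/to/key_rsa
```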
In my case I got a similar error while running an ansible playbook after a host had changed its fingerprint. I found this out trying to establish an ssh connection from the command line. After running ssh-keygen -f "/root/.ssh/known_hosts" -R my_ip, the problem was solved.
Hi, run the play as below. By default, ansible connects with the current user, which may not be the remote user you need.
ansible-playbook -i hosts playbook.yml -u user
If you still get the error, run the below and paste the output here.
ansible-playbook -i hosts playbook.yml -u user -vvv