Quick question.
I have set up an Ubuntu server with a user named test. I copied the authorized_keys file to it, and I can SSH in with no problem.
If I do $ ansible -m ping ubu1, no problem, I get a response:
ubu1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
What I don't get is this. If I do
$ ansible-playbook -vvvv Playbooks/htopInstall.yml
fatal: [ubu1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "setup"}, "module_stderr": "OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g-fips 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 6109\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.1.112 closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "parsed": false}
If I do $ ansible-playbook --ask-sudo-pass Playbooks/htopInstall.yml, it asks for my user password and the play succeeds.
If I rename the authorized_keys file, it tells me "Failed to connect to the host via ssh.", which is expected.
What I don't understand is why it is asking for a sudo password. I definitely missed something along the way.
My ansible.cfg file looks like this:
[defaults]
nocows = 1
inventory = ./Playbooks/hosts
remote_user = test
private_key_file = /home/test/.ssh/id_ubu
host_key_checking = false
My hosts file looks like this:
[servers]
ubu1 ansible_ssh_host=192.168.1.112 ansible_ssh_user=test
What I don't understand is why it is asking for a sudo password.
We can't say for certain without seeing your playbook, but it's almost certainly because a) your playbook asks Ansible to run a particular command with sudo (via the sudo or become directives) and b) the test user does not have password-less sudo enabled.
It sounds like you are aware of (a) but are confused about (b); specifically, what I'm picking up is that you don't understand the difference between ssh authentication and sudo authentication. Again, without more information I can't confirm if this is the case, but I'll take a stab at explaining it in case I guessed correctly.
When you connect to a machine via ssh, there are two primary ways in which sshd authenticates you and allows you to log in as a particular user. The first is to ask for the account's password, which it hands off to the system, allowing the login if the password is correct. The second is public-key cryptography, in which you prove that you hold a private key corresponding to a public key listed in ~/.ssh/authorized_keys. Passing sshd's authentication checks gives you a shell on the machine.
When you invoke a command with sudo, you're asking sudo to elevate your privileges beyond what the account normally gets. This is an entirely different system, with rules defined in /etc/sudoers (which you should edit using sudo visudo) that control which users are allowed to use sudo, what commands they should be able to run, whether they need to re-enter their password or not when using the command, and a variety of other configuration options.
When you run the playbook normally, Ansible is presented with a sudo prompt and doesn't know how to continue - it doesn't know the account password. That's why --ask-sudo-pass exists: you're giving the password to Ansible so that it can pass it on to sudo when prompted. If you don't want to have to type this every time and you've decided it's within your security parameters to allow anyone logged in as the test user to perform any action as root, then you can consult man sudoers on how to set passwordless sudo for that account.
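For example, here is a minimal sketch of such a sudoers entry for the test user from the question (the drop-in file name is my choice; edit it with visudo so a syntax error can't lock you out):

# Edit with: sudo visudo -f /etc/sudoers.d/test
# Lets the test account run any command as root without a password.
test ALL=(ALL) NOPASSWD: ALL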
I solved this exact error (sudo: a password is required\n), which I got when running my playbook with become: true but with one task delegating to localhost, something like this:
uri:
  url: "{{ some_url }}"
  return_content: yes
  status_code: 200
delegate_to: 127.0.0.1
If I understood correctly, become: true causes Ansible to log into the remote host as my user and then use sudo to execute all commands on the remote host as root. When delegating to 127.0.0.1, sudo is also executed there, and it so happens that on my localhost a password is expected when using sudo.
For me the solution was simply to remove the delegate_to, which was not actually needed in that particular use case.
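If the delegation had actually been needed, my understanding is that privilege escalation can also be switched off for just that task, something like:

uri:
  url: "{{ some_url }}"
  return_content: yes
  status_code: 200
delegate_to: 127.0.0.1
become: false    # don't run sudo on localhost for this task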
Here is my situation. I want to access a server through a jumpbox/bastion host.
I log in as a normal user on the jumpbox, change to root, and then log in to the remote server as root. I don't have direct root access on the jumpbox.
$ ssh user@jumpbox
user@jumpbox:~$ su - root
Enter Password:
root@jumpbox:~# ssh root@remoteserver
Enter Password:
root@remoteserver:~#
Above is the manual workflow. I want to achieve this in ansible.
I have seen something like this.
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user#jumpbox"'
This does not work when we need to switch to root before logging in to the remote server.
There are a few things to unpack here:
General Design / Issue:
This isn't an Ansible issue, it's an ssh issue/proxy misconfiguration.
A bastion host/ssh proxy isn't meant to be logged into interactively to have commands run directly on it (like su - root, entering a password, then ssh ...). That's not really a bastion; that's just a server you're logging into and running commands on. It's not acting in an actual ssh proxy/bastion/jump role. At that point you might as well just run Ansible on the host.
That's why things like ProxyJump and ProxyCommand aren't working. They are designed to work with ssh proxies that are configured as ssh proxies (bastions).
Running Ansible Tasks as Root:
Ansible can run with sudo during task execution (it's called "become" in Ansible lingo), so you should never need to SSH as the literal root user with Ansible (and really, you shouldn't SSH as root at all).
Answering the question:
There are a lot of workarounds for this, but the straightforward answer here is to configure the jump host as a proper bastion and your issue will go away. An example...
As the bastion "user", create an ssh key pair, or use an existing one.
On the bastion, edit that user's ~/.ssh/config file so it accesses the target server with the private key and the desired user.
EXAMPLE user@bastion's ~/.ssh/config (I cringe seeing root here)...
Host remote-server
    User root
    IdentityFile ~/.ssh/my-private-key
Add the public key created in step 1 to the target server's ~/.ssh/authorized_keys file for the user you're logging in as.
After that type of config, your jump host is working as a regular ssh proxy. You can then use ProxyCommand or ProxyJump as you had tried to originally without issue.
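As a sketch of what that could look like from the Ansible side (host names and the user are placeholders from the question, and ProxyJump needs OpenSSH 7.3 or later on the control machine):

[servers]
remote-server ansible_host=<remote-server-ip>

[servers:vars]
ansible_ssh_common_args='-o ProxyJump=user@jumpbox'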
I need to run a bash script as a sudo user on remote hosts using Ansible. My working machine is Win10 + Cygwin (sorry, it wasn't my fault).
So, I tested it with non-sudo scripts (which don't need root access), and it works.
Well, at first it didn't work at all: Failed to connect to the host via ssh: my_user@server1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)
So I used "ssh-keygen -t rsa" and then "ssh-copy-id my_user@server1" || "ssh-copy-id my_user@server2" as my_user: I created an ssh key and copied it to the remote hosts. After that I could run scripts as my_user on server1, server2 and so on.
Now I need to run sudo scripts, but I can't understand how that should work.
On Cygwin there is no ROOT user, and I don't know how to generate an ssh key for a nonexistent user.
How do I run an ansible playbook as root? remote_user: root fails with the error: Failed to connect to the host via ssh: my_user@server1: Permission denied. Look, it's my_user, not root. Does it run as my_user or as root?
Maybe I'm doing this wrong altogether; is there a "best practice" way to run sudo scripts?
Oh, please, give me some help solving my problem.
It seems that authentication as root is disabled on the remote server.
In /etc/ssh/sshd_config, find PermitRootLogin and set it to yes, but I won't recommend you do that.
In fact, using the root user directly is bad practice.
Instead, check the permissions for your my_user. You can grant it sudo rights without a password.
To do that, edit /etc/sudoers as root (preferably with visudo), and find this line:
# Allow members of group sudo to execute any command
And after it add this:
my_user ALL=(ALL) NOPASSWD: ALL
After that you'll be able to execute any sudo command without a password on the remote machine.
I did it, but what did I do?
So, the steps of the solution:
Set become: true in the playbook, about here:
- hosts: test_hosts
  become: true
  vars:
Next, run the playbook with the -K option: ansible-playbook ./your_playbook.yml -K
So it works: it ran and even executed scripts under sudo.
But I can't figure out how to set which user is used as the "become" user.
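For what it's worth, the escalated user is controlled by become_user, which defaults to root; a minimal sketch:

- hosts: test_hosts
  become: true
  become_user: root   # the account tasks run as after escalation; root is the default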
I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key for the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which suggest specifying the key at playbook execution (e.g. ansible-playbook -u username --private-key <keyfile> play.yml). As far as I understand, this isn't for me, as I'm calling the playbook via the Vagrantfile. I've also read articles which mention ssh agent forwarding (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need:
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
    ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (this is where I suspect I'm going wrong).
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often setup machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
        IdentityFile ~/.ssh/aptiko_ro_id_rsa
        User aptiko_ro
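One detail this leaves out is host key checking: the first clone from bitbucket.org will stop at an interactive host-key prompt. A sketch of one way to pre-seed it, assuming that trusting ssh-keyscan output fits your threat model:

- name: Add bitbucket.org to root's known_hosts
  known_hosts:
    path: /root/.ssh/known_hosts
    name: bitbucket.org
    key: "{{ lookup('pipe', 'ssh-keyscan -t rsa bitbucket.org') }}"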
On an Ubuntu server, 'foo.com', that serves gitlab, a gitlab user, 'bar', can clone, push, and pull without having to use a password, with no problem (the public key is set up on the gitlab server for user 'bar').
User 'bar' wants to use the command line on the server 'foo', and does ssh bar@foo.com. When user 'bar's ssh keys are not in 'foo's authorized_keys, 'bar' is logged momentarily into Gitlab:
debug2: shell request accepted on channel 0
Welcome to GitLab, bar
and then that session promptly exits.
When user 'bar's ssh key - even one that is not registered with GitLab - is in 'foo.com's authorized_keys, that user gets the expected result when doing ssh bar@foo.com. However, user 'bar' (on their local computer) is then unable to push, pull, clone, etc. from their gitlab-managed repository, with the error message being that "'some-group/some-project.git' does not appear to be a git repository".
It appears that there is a misconfiguration such that shell access is mixed up with gitlab project access.
How can user 'bar' both log in via ssh to a regular shell prompt and also use git normally (interacting with the remote git server from their local box)?
After a lot of searching I got to know why this was happening on my end. I had the same issue. I wanted to use the same SSH key for both SSH login as well as GitLab access.
I found this thread helpful:
https://gist.github.com/hanseartic/368a63933afb7c9f7e6b
In the authorized_keys file, gitlab-shell adds specific commands to limit access. It adds the limitation once the user enters the public key through the web interface, using the command option to do so.
We need to modify the command option to allow access to bash, and remember to remove the no-pty option if it is listed in the comma-separated section. For example, in my case the line contained no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty and I had to remove no-pty from the list.
A sample modified command should look like this:
command="if [ -t 0 ]; then bash; else /home/ec2-user/gitlab_service/gitlab-shell/bin/gitlab-shell key-11; fi",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAA...
Be mindful to edit the correct command entry, by checking the key number or the public key and username associated with it.
This did not require any service restart.
After following instructions both online and in a couple of books, I am unsure of why this is happening. I have a feeling there is a missing setting, but here is the setup:
I am attempting to use the command:
ansible all -u $USER -m ping -vvvv
Obviously I'm using -vvvv for debugging, but there isn't much output aside from the fact that it says it's attempting to connect. I get the following error:
S4 | FAILED => FAILED: Authentication failed.
S4 stands for switch 4, a Cisco switch on which I am attempting to automate configuration and show commands. I know 100% that the password I set in the host_vars file is correct, as it works when I use it from a standard SSH client.
Here are my non-default config settings in the ansible.cfg file:
[defaults]
transport=paramiko
hostfile = ./myhosts
host_key_checking=False
timeout = 5
My myhosts file:
[cisco-switches]
S4
And my host_vars file for S4:
ansible_ssh_host: 192.168.1.12
ansible_ssh_pass: password
My current version is 1.9.1, running on a Centos VM. I do have an ACL applied on the management interface of the switch, but it allows remote connections from this particular IP.
Please advise.
Since you are using ansible to automate commands on a Cisco switch, I guess you want to perform the SSH connection to the switch without being prompted for a password or being asked to press [Y/N] to confirm the connection.
To do that, I recommend configuring the Cisco IOS SSH server on the switch to perform RSA-based user authentication.
First of all, you need to generate an RSA key pair on your Linux box:
ssh-keygen -t rsa -b 1024
Note: You can use 2048 instead of 1024, but consider that some IOS versions will accept a maximum of 254 characters for an ssh public key.
On the switch side:
conf t
ip ssh pubkey-chain
username test
key-string
(Paste the entire public key as it appears in cat id_rsa.pub,
including the leading ssh-rsa and the trailing username@hostname.
Please note that some IOS versions will accept a maximum of 254
characters. You can paste multiple lines.)
exit
exit
If you need the 'test' user to be able to execute privileged IOS commands:
username test privilege 15 secret _TEXT_CLEAR_PASSWORD_
Then test your connection from your Linux box, in order to add the switch to the known_hosts file. This will only happen once for each switch/host not found in the known_hosts file:
ssh test@10.0.0.1
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:d6:4b:d1:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.1' (RSA) to the list of known hosts.
ciscoswitch#
ciscoswitch#exit
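Once key-based authentication works, the host_vars file for S4 can drop the password; a minimal sketch, assuming the key pair lives at the default path (I believe ansible_ssh_private_key_file is honored by the paramiko transport as well):

ansible_ssh_host: 192.168.1.12
ansible_ssh_user: test
ansible_ssh_private_key_file: ~/.ssh/id_rsa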
Finally test the connection using ansible over SSH and raw module, for example:
ansible inventory -m raw -a "show env all" -u test
I hope you find it useful.