Ansible sudo run ("as root") on Cygwin - ssh

I need to run a bash script as a sudo user on remote hosts using Ansible. My working machine is Win10 + Cygwin (sorry, it wasn't my fault).
I tested it with non-sudo scripts (ones that don't need root access), and it works.
Well, no: the first time it didn't work at all: Failed to connect to the host via ssh: my_user@server1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)
So I used this: ssh-keygen -t rsa, then ssh-copy-id my_user@server1 and ssh-copy-id my_user@server2, all under my_user: I created an SSH key and shared it with the remote hosts. After that I could run scripts as my_user on server1, server2 and so on.
Now I need to run sudo scripts, but I can't figure out how that should work.
On Cygwin there is no root user, and I don't know how to generate an SSH key for a nonexistent user.
How do I run an Ansible playbook as root? remote_user: root fails with the error: Failed to connect to the host via ssh: my_user@server1: Permission denied. Look, it says my_user, not root. Does it run as my_user or as root?
Maybe I'm doing this wrong altogether; is there a "best practice" way to run sudo scripts?
Please help me solve this problem.

It seems that authentication as root is disabled on the remote server.
In /etc/ssh/sshd_config, find PermitRootLogin and set it to yes, but I don't recommend doing that.
In fact, using the root user directly is bad practice.
Instead, check the permissions of your my_user: you can grant it sudo rights without a password.
To do that, edit /etc/sudoers as root (preferably with visudo) and find this line:
# Allow members of group sudo to execute any command
After it, add this:
my_user ALL=(ALL) NOPASSWD: ALL
After that you'll be able to execute any sudo command without a password on the remote machine.
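To verify the grant from your control machine, a quick check (a sketch; the host and user names follow the question):

ssh my_user@server1 sudo whoami
root

If sudo still prompts for a password here, the sudoers entry isn't being matched.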

I did it, but what did I do?
So, the steps of the solution:
Set become: true in the playbook, about here:

- hosts: test_hosts
  become: true
  vars:

Next, run the playbook with the "-K" flag (--ask-become-pass): ansible-playbook ./your_playbook.yml -K
And it works: it ran and even executed scripts under sudo.
But I can't understand how to set which user is used as the "executing user".
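For the record, the become_user keyword appears to be what controls that; a minimal sketch, reusing the test_hosts group and my_user from above:

- hosts: test_hosts
  remote_user: my_user   # the user Ansible connects as over ssh
  become: true           # escalate privileges on the remote host
  become_user: root      # the "executing user" for the tasks (defaults to root)
  tasks:
    - name: show which user actually runs the task
      command: whoami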

Related

Access to jumpbox as normal user and change to root user in ansible

Here is my situation: I want to access a server through a jumpbox/bastion host.
I log in as a normal user on the jumpbox, then change to the root user, and after that log in to the remote server as root. I don't have direct access to root on the jumpbox.
$ ssh user@jumpbox
user@jumpbox:~$ su - root
Enter Password:
root@jumpbox:~# ssh root@remoteserver
Enter Password:
root@remoteserver:~#
Above is the manual workflow. I want to achieve this in Ansible.
I have seen something like this:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@jumpbox"'
This does not work when we need to switch to root and then log in to the remote server.
There are a few things to unpack here:
General Design / Issue:
This isn't an Ansible issue; it's an ssh issue/proxy misconfiguration.
A bastion host/ssh proxy isn't meant to be logged into and have commands run on it interactively (like su - root, enter password, then ssh ...). That's not really a bastion; that's just a server you're logging into and running commands on, not an actual ssh proxy/bastion/jump role. At that point you might as well just run Ansible on that host.
That's why things like ProxyJump and ProxyCommand aren't working: they are designed to work with ssh proxies that are configured as ssh proxies (bastions).
Running Ansible Tasks as Root:
Ansible can run with sudo during task execution (it's called "become" in Ansible lingo), so you should never need to SSH as the literal root user with Ansible (and you really shouldn't ssh as root at all).
Answering the question:
There are a lot of workarounds for this, but the straightforward answer here is to configure the jump host as a proper bastion and your issue will go away. An example...
As the bastion "user", create an ssh key pair, or use an existing one.
On the bastion, edit the user's ~/.ssh/config file to access the target server with the private key and the desired user.
EXAMPLE user@bastion's ~/.ssh/config (I cringe seeing root here)...

Host remote-server
    User root
    IdentityFile ~/.ssh/my-private-key
Add the public key created in step 1 to the target server's ~/.ssh/authorized_keys file for the user you're logging in as.
After that type of config, your jump host works as a regular ssh proxy. You can then use ProxyCommand or ProxyJump as you originally tried, without issue.
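On the Ansible side you can then return to the original variable; ProxyJump is simply the shorter, modern spelling of that ProxyCommand (host and user names reuse the question's):

ansible_ssh_common_args: '-o ProxyJump=user@jumpbox'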

ssh and sudo: pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]

I'm facing weird behavior trying to run rsync as sudo through ssh with passwordless login.
This is something I do with dozens of servers, but I'm having this frustrating problem connecting to a couple of Ubuntu 18.04.4 servers.
PREMISE:
The passwordless SSH from CLIENT to SERVER with account USER works nicely.
When I'm logged in to SERVER, I can sudo everything with account USER.
On SERVER I've added the following to /etc/sudoers:
user ALL=NOPASSWD:/usr/bin/rsync
Now, if I launch this simple test from machine CLIENT as user USER, I receive the following sudo error message:
$ ssh utente@192.168.200.135 -p 2310 sudo rsync
sudo: no tty present and no askpass program specified
Moreover, looking in the SERVER's /var/log/auth.log I found these errors:
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [user]
I am not a PAM expert, but I tested the following solution and it works on Ubuntu 16.04.5 and 20.04.1.
NOTE: the configuration in /etc/ssh/sshd_config is left at its defaults.
$ sudo visudo -f /etc/sudoers.d/my_config_file
Add the line below:
my_username ALL=(ALL) NOPASSWD:ALL
And don't forget to restart sshd:
$ sudo systemctl restart sshd
I found a solution thanks to CentOS. In fact, because of the more complex configuration of /etc/sudoers on CentOS (compared to Ubuntu or Debian), I was forced to put my additional configuration in an external file under /etc/sudoers.d/ instead of putting it directly into /etc/sudoers.
SOLUTION:
Putting additional configurations directly into /etc/sudoers wouldn't work
Putting the needed additional settings in a file within the directory /etc/sudoers.d/ will work
E.g., these are the config lines put in a file named /etc/sudoers.d/my_config_file:
Host_Alias MYSERVERHOST=192.168.1.135,localhost
# User that will execute Rsync with Sudo from a remote client
rsyncuser MYSERVERHOST=NOPASSWD:/usr/bin/rsync
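To confirm the rule is picked up, you can list the user's sudo rights over the same kind of connection; a sketch reusing the answer's names (output abridged):

$ ssh rsyncuser@192.168.1.135 sudo -l
...
(root) NOPASSWD: /usr/bin/rsync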
Why didn't /etc/sudoers work? It's unknown to me even after two days' worth of Internet searching. I find this very obscure and awful.
What follows is a quote from this useful article: https://askubuntu.com/a/931207
Unlike /etc/sudoers, the contents of /etc/sudoers.d survive system upgrades, so it's preferable to create a file there rather than to modify /etc/sudoers.
For editing any configuration file used by sudo, the visudo command is preferable, i.e.:
$ sudo visudo -f /etc/sudoers.d/my_config_file
I had a similar problem on a custom Linux server, and the solution was similar to the answers above: as soon as I removed the line your_user ALL=(ALL) NOPASSWD:ALL from /etc/sudoers, the errors were gone.

Running playbook returns: Failed to connect to the host via ssh, solved by running ping all

I have an ansible playbook and I run it:
sudo ansible-playbook -i hosts startelk.yml -vvv
Every time, after I change the hosts file, running the same playbook results in "Failed to connect to the host via ssh". If I run
ansible all -m ping
first and then the playbook command, the playbook gets successfully started.
Does anyone know why I have to run ping each time after changing the hosts (or some other) file before the ssh connection for the playbook works? I don't want to have to run ping every time I change something in Ansible.
Thanks!
It's not a good idea to run sudo ansible-playbook .... That way the controller connects to the hosts as root, and best practice is not to allow root ssh connections.
Best practice is to:
run ansible-playbook as a normal user,
configure remote_user, and
escalate the privilege with become and become_user (see the sketch below).
Read more at Understanding Privilege Escalation.
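A minimal sketch of that setup, here as inventory variables (group, host, and user names are placeholders):

[elk]
server1
server2

[elk:vars]
ansible_user=my_user

and at the top of the play:

- hosts: elk
  become: true
  become_user: root

Then run the playbook as your normal user, without sudo:
ansible-playbook -i hosts startelk.yml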

Run ssh on Apache -> Failed to get a pseudo terminal: Permission denied

I'm using Flask with Apache (mod_wsgi).
When I run ssh as an external command with subprocess.call("ssh ...", shell=True) (my Python Flask code itself is not the problem):
ssh = "sshpass -p \""+password+"\" ssh -p 6001 "+username+"@"+servername+" \"mkdir ~/MY_SERVER\""
subprocess.call(ssh, shell=True)
I get this error in the Apache error_log: Failed to get a pseudo terminal: Permission denied
How can I fix this?
I've had this problem under RHEL 7. It's due to SELinux blocking the apache user from accessing a pty. To solve it:
Disable SELinux or set it to permissive (check your security needs): edit /etc/selinux/config and reboot.
Allow apache to control its directory for storing SSH keys:
chown apache /etc/share/httpd
Then ssh to the desired host as the apache user and accept the host key:
sudo -u apache ssh ...
I think apache's login shell is /sbin/nologin.
If you want to allow apache to run shell commands, modify /etc/passwd and change its login shell to something like /bin/bash.
However, this method is a security risk. Many Python SSH modules are available on the Internet; use one of them.
What you are doing seems frightfully insecure. If you cannot use a Python library for your SSH connections, then you should at least plug the hole that is shell=True. Very little here is done by the shell anyway; doing it in Python affords you more control and removes a large number of moving parts:
subprocess.call(['/usr/bin/sshpass', '-p', password,
                 '/usr/bin/ssh', '-T', '-p', '6001', '{0}@{1}'.format(username, servername),
                 'mkdir ~/MY_SERVER'])
If you cannot hard-code the paths to sshpass and ssh, you should at least make sure you have a limited, controlled PATH variable in your environment before doing any of this.
The fix for Failed to get a pseudo terminal is usually to add a -T flag to the ssh command line, as I did above. If your real code actually requires a tty (which mkdir obviously does not), experiment with -t instead, and/or with redirecting standard input and standard output.
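For completeness, a minimal sketch with one such library, paramiko (the library choice and the AutoAddPolicy are my assumptions; the host, port, and the servername/username/password variables mirror the question):

import paramiko

client = paramiko.SSHClient()
# Auto-accept unknown host keys for brevity; use a stricter policy in production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(servername, port=6001, username=username, password=password)
stdin, stdout, stderr = client.exec_command('mkdir ~/MY_SERVER')  # no pty required
print(stderr.read().decode())  # surface any remote error text
client.close()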

SSHing into EC2 server gives error "Please login as the ec2-user user rather than root user"

Question as in the title.
Why is this? I have used the ssh command:
ssh -i mykey.pem root@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
But I get that error and find nothing on Google. What am I doing wrong?
You log in as ec2-user, as Klaus suggested:
ssh -i key.pem ec2-user@host
... and then you use sudo to run commands. E.g., to edit the /etc/hosts file, which is owned by root and requires root privileges: sudo nano /etc/hosts.
Or you run sudo su to become the root user.
By default the root user is not allowed to log in, but you can use ec2-user as indicated by others.
Once you log in as ec2-user, you can switch to root and change the SSH configuration.
To become the root user you run:
sudo su -
Edit the SSH daemon configuration file /etc/ssh/sshd_config, e.g. by using vi, and replace the PermitRootLogin entry with the following:
PermitRootLogin without-password
Reload the SSH daemon configuration by running:
/etc/init.d/sshd reload
The message Please login as the ec2-user user rather than the root user. is displayed because a command is executed when you log in with the private key. To remove that command, edit the ~/.ssh/authorized_keys file and remove the command option; the line should then start with the key type (e.g. ssh-rsa).
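For illustration, the stock line in root's authorized_keys on such an instance looks roughly like this (abridged; the exact options may vary by AMI), and everything before ssh-rsa is what you remove:

no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"ec2-user\" rather than the user \"root\".';echo;sleep 10" ssh-rsa AAAA... mykey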
(*) Do this at your own risk. I recommend always leaving a console open, just in case you're not able to log in after making the configuration changes.
For reference you can read the man pages:
man sshd_config
man sshd
I encountered a similar problem when setting up a Hadoop cluster on Amazon EC2.
My head node needs root ssh access to each worker/slave node. I aliased the connections by adding each slave node's IP address, private address, and alias name to the /etc/hosts file. (I get that data by running echo -e "`hostname -i`\t`hostname -f`\talias-name", where alias-name is what I call each node: head or n1, for example. Then I put that output for each node into every node's /etc/hosts file.)
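The resulting /etc/hosts entries look something like this (addresses and internal hostnames are made-up placeholders):

172.31.10.11   ip-172-31-10-11.ec2.internal   head
172.31.10.12   ip-172-31-10-12.ec2.internal   n1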
The problem I kept encountering is that when I typed ssh n1 on my head node to ssh into my first slave node, I got that same error message: Please login as the user "ec2-user" rather than the user "root".
So after doing some research, I figured out how to fix it.
First:
ssh into your server; non-root (ec2-user) access is fine here.
Then su - your way into root, run vi /etc/ssh/sshd_config, and un-comment the line PermitRootLogin yes.
Exit the vi editor.
Now restart the ssh daemon by typing service sshd stop, then service sshd start.
Second:
Now, here is the part I had to dig for:
Run vi /root/.ssh/authorized_keys.
Comment out everything up to ssh-rsa: just put a # at the beginning of the file's content, before no-port-forwarding..., and hit Enter on ssh-rsa to move it to the next line (this way you don't have to delete anything in case you want to backtrack).
Exit the vi editor.
Now you should be able to log in as root without that error message popping up.
Also, if you are using aliases for a cluster setup, repeat the same steps on each node: first ssh in using ec2-user, then follow the steps.
After adding the IP address, private address, and alias name info to your /etc/hosts file, you should be able to ssh into each node's root using the alias name, for example ssh n1.
The tutorial I followed is here: https://www.youtube.com/watch?v=xrxQXfE7t9A
But it didn't discuss the problem with root login.
Hope that helps! It worked for me.
Keep in mind that I haven't taken any security concerns into account; this is simply a practice/dev setup.
I think it's just asking you to log in with another username. Do you happen to have a user called ec2-user? If so, try this instead:
ssh -i mykey.pem ec2-user@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
I faced the same problem when I tried to access my EC2 instance as root through the Windows PuTTY client; this is how I solved it.
Access and edit the SSH configuration file to allow root login and password authentication:
Log in as ec2-user (by default it is allowed).
Enter the command below to open the ssh config:
sudo vi /etc/ssh/sshd_config
Edit the SSH configuration file as below using vi (see how to use the vi editor):
PermitRootLogin yes (remove the # at the beginning if present)
PasswordAuthentication yes
Restart SSH:
sudo /etc/init.d/sshd restart
Change/set the root password:
sudo passwd root
Type a new password and re-enter it (at least 8 characters).
Exit the current session and close PuTTY:
exit
Try logging in again as root with the previously set password.
Solved!
Try comparing the root key file and the user key file:
diff /root/.ssh/authorized_keys /home/user/.ssh/authorized_keys
... and see.
For anyone like me who created a new user, copied root's .ssh dir to the new user, set ownership, and STILL got this error: look at the new user's ~/.ssh/authorized_keys file. It has SSH options specified that force the prompt. Delete everything from that line up to ssh-rsa and you'll be good to go.
Or copy /home/ec2-user/.ssh to the new user's home dir instead of /root/.ssh.
Edit /etc/ssh/sshd_config, and make sure this is set:
PasswordAuthentication yes
Then reload SSH:
systemctl reload sshd.service
You can now log in as users other than ec2-user.
ssh -i mykey.pem root@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
Just replace the above command with this:
ssh -i mykey.pem ubuntu@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
It works in my case (on an Ubuntu AMI the default user is ubuntu).
For those who are looking for a single, simple line:
sudo ssh -i ./mykey.pem ec2-user@ec2-x-xx-xxx-xxx.us-east-2.compute.amazonaws.com
Note that you can get the part after the @ from the Public IPv4 DNS section of your instance summary page.