I use Fabric to restart a fastcgi_mono service via the command:
sudo('/etc/init.d/fastcgi_mono restart', pty=False)
But when I execute it, it gives me this error:
[52.192.204.174] run: sudo /etc/init.d/fastcgi_mono restart
[52.192.204.174] out: sudo: sorry, you must have a tty to run sudo
[52.192.204.174] out:
Warning: run() received nonzero return code 1 while executing 'sudo /etc/init.d/fastcgi_mono restart'!
How do I solve this issue?
The way I solve this (I don't know if there is a better way, but it makes sense to me): I have two users set up in my fabfile.py, ubuntu (which has sudo privileges) and www-data (which has no real rights; it can only add/delete directories in its own "space", /server/*). I always establish connections as ubuntu, so I can use sudo() whenever I need to. Whenever I need to do something at the application level, in what I call def deploy(), I connect as the application user, so I do something like:
@settings(user='www-data')
def deploy():
    run('whoami')  # will say www-data
Or if I need to do some kind of sudo() inside my deploy(), I'll do:
def deploy():
    sudo('whoami')  # will say ubuntu/root
    with settings(user='www-data'):
        run('whoami')  # will say www-data
        # ... more code here
So, to recap:
Connect using a user that has sudo access
Switch users later if need be (e.g. to the www-data application user via settings()); see the sketch below
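Putting the recap together as one sketch (it reuses only commands from this thread; the env.user assignment is my own framing of "always establish connections as ubuntu"):
from fabric.api import env, run, settings, sudo

env.user = 'ubuntu'  # connect as the user that has sudo access

def deploy():
    sudo('/etc/init.d/fastcgi_mono restart', pty=False)  # privileged step, runs via ubuntu
    with settings(user='www-data'):  # switch to the app-level user when needed
        run('whoami')  # will say www-data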
Yes, I found the answer. For Amazon EC2 instances, you need to disable requiretty:
comment('/etc/sudoers', 'Defaults requiretty', use_sudo=True)
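For context, comment() is fabric.contrib.files.comment, which comments out lines matching a pattern in a remote file. A minimal sketch of a task that applies the fix (the task name and imports are my own):
from fabric.api import task
from fabric.contrib.files import comment

@task
def disable_requiretty():
    # comment out the "Defaults requiretty" line so sudo no longer demands a tty
    comment('/etc/sudoers', 'Defaults requiretty', use_sudo=True)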
As I said in the title, I am having trouble updating Nextcloud from version 23.0.0 to 23.0.5.
The system is running in a KVM virtual machine. To upgrade, these are the steps I take:
SSH into the server
cd /var/www/nextcloud
Enable maintenance mode: sudo -u www-data php occ maintenance:mode --on
Back up the machine
Change file ownership so the files can be written: chown -R www-data /var/www/nextcloud
Run the updater: sudo -u www-data php updater/updater.phar
Then I simply roll back the permissions and disable maintenance mode.
The system updates. However, when I log in and go to the administration overview, I get a warning saying:
Invalid UUIDs of LDAP users or groups have been found. Please review your "Override UUID detection" settings in the Expert part of the LDAP configuration and use "occ ldap:update-uuid" to update them.
When I run the command they suggest, occ ldap:update-uuid, the console outputs this:
# sudo -u www-data php occ ldap:update-uuid
8/8 [============================] 100%
No record was updated.
For 8 records, the UUID could not be saved to database. Double-check your configuration.
Do you know how to fix this?
Another possibility is getting the UUIDs and replacing them, or even removing them if they are not needed. But still, I don't know how to get at them.
I found the solution.
Some LDAP groups were deleted, and this change did not propagate to Nextcloud.
When running sudo -u www-data php occ ldap:update-uuid, you can add --verbose to see what is happening.
In my case, it returned eight groups.
The solution was to open MySQL, select the Nextcloud database, and delete the invalid groups from the oc_ldap_group_mapping table. To achieve this, just run:
DELETE FROM oc_ldap_group_mapping WHERE directory_uuid LIKE 'invalidated_%';
This solution may also apply to LDAP users with invalid UUIDs, but I can't confirm it.
Thanks for your solution!
It also works for oc_ldap_user_mapping with invalid UUIDs:
SELECT * FROM oc_ldap_user_mapping WHERE directory_uuid LIKE 'invalidated_%';
DELETE FROM oc_ldap_user_mapping WHERE directory_uuid LIKE 'invalidated_%';
I need to run a bash script as a sudo user on remote hosts using Ansible. My working machine is Windows 10 + Cygwin (sorry, it wasn't my choice).
I tested this with non-sudo scripts (ones that don't need root access), and it works.
Well, the first time it didn't work at all: Failed to connect to the host via ssh: my_user@server1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)
So I ran ssh-keygen -t rsa and then ssh-copy-id my_user@server1 and ssh-copy-id my_user@server2 as my_user: I created an SSH key and copied it to the remote hosts. After that I could run scripts as my_user on server1, server2, and so on.
Now I need to run sudo scripts, but I can't work out how.
On Cygwin there is no root user, and I don't know how to generate an SSH key for a nonexistent user.
How do I run an Ansible playbook as root? remote_user: root fails with: Failed to connect to the host via ssh: my_user@server1: Permission denied. Note that it says my_user, not root. Does it run as my_user or as root?
Maybe I'm doing this wrong altogether; is there a "best practice" way to run sudo scripts? Please help me solve this.
It seems that authentication as root is disabled on the remote server.
In /etc/ssh/sshd_config, find PermitRootLogin and set it to yes, but I don't recommend doing that: using the root user directly is bad practice.
Instead, check the permissions of your my_user. You can grant it sudo rights without a password.
To do that, edit /etc/sudoers as root (ideally with visudo) and find this line:
# Allow members of group sudo to execute any command
And after it add this:
my_user ALL=(ALL) NOPASSWD: ALL
After that you'll be able to execute any sudo command without a password on the remote machine.
I got it working; here is what I did.
Steps of the solution:
Set become: true in the playbook, about here:
- hosts: test_hosts
  become: true
  vars:
Next, run the playbook with the -K flag (it prompts for the privilege-escalation password): ansible-playbook ./your_playbook.yml -K
With that it works: it ran and even executed scripts under sudo.
But I can't figure out how to set which user is used as the "executing user".
I'm using Flask with Apache (mod_wsgi).
When I run ssh as an external command with subprocess.call("ssh ...", shell=True)
(my Python Flask code itself is not wrong):
ssh = "sshpass -p \""+password+"\" ssh -p 6001 "+username+"#"+servername+" \"mkdir ~/MY_SERVER\""
subprocess.call(ssh, shell=True)
I got this error on Apache error_log : Failed to get a pseudo terminal: Permission denied
How can I fix this?
I've had this problem under RHEL 7. It's due to SELinux blocking the apache user from accessing ptys. To solve it:
Disable SELinux or set it to permissive (check your security needs): edit /etc/selinux/config and reboot.
Allow apache to control its directory for storing SSH keys (its home directory, /usr/share/httpd on RHEL):
chown apache /usr/share/httpd
Then, as the apache user (via sudo -u apache), ssh to the desired host and accept the key.
I think apache's login shell is /sbin/nologin.
If you want to allow apache to run shell commands, modify /etc/passwd and change its login shell to another shell like /bin/bash.
However, this method is a security risk. Many Python SSH modules are available; use one of them instead.
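For example, a minimal sketch with paramiko (one such module; this assumes pip install paramiko and reuses the question's servername, username, and password variables):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # auto-accept unknown host keys; pin them in production
client.connect(servername, port=6001, username=username, password=password)
stdin, stdout, stderr = client.exec_command('mkdir ~/MY_SERVER')  # no pty is requested, so no pty error
print(stdout.channel.recv_exit_status())  # 0 on success
client.close()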
What you are doing seems frightfully insecure. If you cannot use a Python library for your SSH connections, then you should at least plug the hole that is shell=True. There is very little here which is done by the shell anyway; doing it in Python affords you more control and removes a large number of moving parts.
subprocess.call(['/usr/bin/sshpass', '-p', password,
                 '/usr/bin/ssh', '-T', '-p', '6001', '{0}@{1}'.format(username, servername),
                 'mkdir ~/MY_SERVER'])
If you cannot hard-code the paths to sshpass and ssh, you should at least make sure you have a limited, controlled PATH variable in your environment before doing any of this.
The fix for Failed to get a pseudo-terminal is usually to add a -T flag to the ssh command line. I did that above. If your real code actually requires a tty (which mkdir obviously does not), perhaps experiment with -t instead, and/or redirecting standard input and standard output.
I am trying to run a Python script in Apache 2.x with mod_python. I edited httpd.conf to use the publisher handler:
LoadModule python_module /usr/local/apache2/modules/mod_python.so
<Directory /usr/local/apache2/htdocs/mod_python>
    SetHandler mod_python
    PythonHandler mod_python.publisher
    PythonDebug On
</Directory>
I am trying to add a firewall rule using a Python script, which requires root privileges, and it fails asking for root privileges. Can somebody please help?
#!/usr/local/bin/python
#from mod_python import apache
import sys
import errno
import pf

def index(req):
    filter = pf.PacketFilter()
    try:
        # Enable packet filtering
        filter.enable()
        print "pf is enabled"
        return "pf is enabled"
    except IOError, (err, msg):
        if err == errno.EACCES:
            #sys.exit("Permission denied: are you root?")
            return ("Permission denied: are you root?")
        elif err == errno.ENOTTY:
            #sys.exit("ioctl not supported by the device: is the pf device correct?")
            return ("ioctl not supported by the device: is the pf device correct?")
This is the Python script I want to execute through Apache on OpenBSD; it uses mod_python.
Please post your Python script somewhere and give us the link.
How is your Python script trying to communicate with pf? Through pfctl? Let's say you are trying to add an IP to a table:
pfctl -t thetable -T add x.x.x.x
Find out which user runs Apache:
ps aux | grep apache
Then you must edit /etc/sudoers so that user can run the pfctl command without a password. Let's say you run Apache as www; place the following in sudoers:
www ALL=(ALL:ALL) NOPASSWD: /sbin/pfctl
Finally, in the Python script (let's say you call the external command with subprocess):
from subprocess import call
call(["sudo", "pfctl", "-t", "theTable", "-T", "add", "x.x.x.x"])
But please keep in mind that the whole scheme is really a bad idea and you shouldn't do it that way. Get rid of the Python script if you can and run the bundled Apache 1.3, which is privilege-separated and audited. Run the web server in a chroot. Never expose control of your firewall to user input, especially when it comes over the web. I am sure that if you elaborate on what you want to do, we can find a much more efficient and secure setup.
You cannot run Python scripts as the root user under mod_python, because Apache always drops privileges to an untrusted user. The only way around this would be to recompile Apache from source and define a magic preprocessor macro that enables the security hole allowing Apache worker processes to run as root.
In summary, don't do it, it is dangerous.
Also be aware that mod_python is no longer maintained or developed, so it is questionable whether you should use it in the first place.
I'm using Fabric to remotely start a micro AWS server, install git, clone a git repository, adjust the Apache config, and then restart the server.
If at any point from the fabfile I issue either sudo('service apache2 restart') or run('sudo service apache2 restart'), or a stop and then a start, the command apparently runs and I get the response indicating Apache has started, for example:
[ec2-184-73-1-113.compute-1.amazonaws.com] sudo: service apache2 start
[ec2-184-73-1-113.compute-1.amazonaws.com] out: * Starting web server apache2
[ec2-184-73-1-113.compute-1.amazonaws.com] out: ...done.
[ec2-184-73-1-113.compute-1.amazonaws.com] out:
However, if I try to connect, the connection is refused, and if I ssh into the server and run
sudo service apache2 status
it says that "Apache is NOT running".
While sshed in, if I run
sudo service apache2 start
the server starts and I can connect. Has anyone else experienced this? Does anyone have tips on where I could look (log files etc.) to work out what happened? There is nothing in apache2/error.log, syslog, or auth.log.
It's not that big a deal, I can work round it. I just don't like such silent failures.
Which version of Fabric are you running?
Have you tried changing the pty argument? (Try changing shell too, but it should not influence things.)
http://docs.fabfile.org/en/1.0.1/api/core/operations.html#fabric.operations.run
You can set the pty argument like this:
sudo('service apache2 restart', pty=False)
Try this:
sudo('service apache2 restart', pty=False)
This worked for me after running into the same problem. I'm not sure why this happens.
This is an instance of this issue, and there is an entry in the FAQ that has the pty answer. Unfortunately, CentOS 6 doesn't support pty-less sudo commands, and I didn't like the nohup solution since it killed output.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on job control, so background processes are put in their own process group and, as a result, are not terminated when the command ends.
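Adapted to the Apache service in this question (my adaptation, untested):
sudo('set -m; service apache2 start')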
When connecting to your remotes on behalf of a user granted enough privileges (such as root), you can manage system services as shown below:
from fabtools import service
service.restart('apache2')
https://fabtools.readthedocs.org/en/0.13.0/api/service.html
P.S. It requires installing fabtools:
pip install fabtools
A couple more ways to fix the problem:
Run the fab task with the --no-pty option:
fab --no-pty <task>
Inside the fabfile, set the global env setting always_use_pty to False before your task code executes:
env.always_use_pty = False
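For instance, near the top of the fabfile (a sketch; the host name and task are placeholders of my own):
from fabric.api import env, sudo

env.always_use_pty = False  # never request a pty for run()/sudo() in this fabfile
env.hosts = ['web1.example.com']  # placeholder host

def restart_apache():
    sudo('service apache2 restart')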
Using pty=False still didn't solve it for me. The solution that ended up working for me was a double nohup, like so:
run.sh
#! /usr/bin/env bash
nohup java -jar myapp.jar 2>&1 &
fabfile.py
...
sudo("nohup ./run.sh &> nohup.out", user=env.user, warn_only=True)
...