As I said in the title, I am having trouble updating Nextcloud from version 23.0.0 to 23.0.5.
The system is running on a KVM virtual machine. To upgrade, these are the steps I take:
ssh into the server
cd /var/www/nextcloud
enable maintenance mode: sudo -u www-data php occ maintenance:mode --on
back up the machine
change file ownership so the files can be written: chown -R www-data /var/www/nextcloud
update it: sudo -u www-data php updater/updater.phar
Then I simply roll back the permissions and disable maintenance mode (the full sequence is sketched below).
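For reference, the whole run looks roughly like this (the commands match the steps above; the ownership restored at the end is only a placeholder, use whatever your setup had before):
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --on
# ... back up the VM here ...
chown -R www-data /var/www/nextcloud
sudo -u www-data php updater/updater.phar
# roll ownership back to whatever it was before the chown above, e.g.
# chown -R root:www-data /var/www/nextcloud   (placeholder, adjust to your setup)
sudo -u www-data php occ maintenance:mode --off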
The system updates. However, when I log in and go to the administration overview, I get a warning saying:
Invalid UUIDs of LDAP users or groups have been found. Please review your "Override UUID detection" settings in the Expert part of the LDAP configuration and use "occ ldap:update-uuid" to update them.
When I run the command they suggest, occ ldap:update-uuid, the console outputs this:
# sudo -u www-data php occ ldap:update-uuid
8/8 [============================] 100%
No record was updated.
For 8 records, the UUID could not be saved to database. Double-check your configuration.
Do you know how to fix this?
Another possibility is getting the UUIDs and replacing them or even removing them if they are not needed. But still, I don't know how to get to them.
I found the solution.
Some LDAP groups were deleted, and this change did not propagate to Nextcloud.
When running sudo -u www-data php occ ldap:update-uuid, you can add --verbose to see what is happening.
In my case, it returned eight groups.
The solution was to open MySQL, select the Nextcloud database, and then delete the invalid groups from the oc_ldap_group_mapping table. To achieve this, just run:
delete from oc_ldap_group_mapping where directory_uuid like "invalidated_%"
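If you want to double-check which rows the delete will touch, the same filter works as a select first:
select * from oc_ldap_group_mapping where directory_uuid like "invalidated_%"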
This solution may also apply to LDAP users with invalid UUIDs, but I can't confirm it.
Thanks for your solution!
It works also for oc_ldap_user_mapping with invalid UUIDs.
Select * from oc_ldap_user_mapping where directory_uuid like "invalidated_%"
Delete from oc_ldap_user_mapping where directory_uuid like "invalidated_%"
fbi is a framebuffer image viewer that does not need an X server setup. It's perfect for what I want to achieve: a standalone, dedicated slide show.
I want to run the following command as a non-root user:
fbi -d /dev/fb0 -T 1 foo.jpg
When I run fbi as non-root, the result on tty1 is:
access /dev/tty1: Permission denied
I can perfectly run the command as root (sudo ...), but that's not desirable. My question is: how can I run fbi as a non-root user on a tty?
The setup of the rpi4 is "headless": no X server installed, the fbi command is issued from ssh or crontab, and the output tty1 is a screen connected to an HDMI port on the rpi4.
I tried many things:
checked all possible forums, many of which address this problem without giving a satisfactory solution. The man page for fbi suggests adding the user to the video and tty groups, which I did, but to no avail.
added the user to the groups tty and video
changed permissions on tty1 and fb0 to 666. Interestingly, the error message changes to:
ioctl VT_ACTIVATE: Operation not permitted.
Of course, after a reboot the permissions of /dev/tty1 and /dev/fb0 change back to normal, so changing these permissions is not a good idea at all, even if it worked, which it doesn't.
Thank you guys for your help!
I'm running bullseye on a rpi3.
Here is how I got it to work (from inside a Docker container, FWIW):
[x] add the user to the tty and video groups
[x] change permission on /dev/tty1 to allow group read
[ ] grant capability CAP_SYS_TTY_CONFIG to the fbi binary
The last part is what you missed (the cap).
One-liner to get it:
setcap 'cap_sys_tty_config+ep' $(which fbi)
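You can verify that the capability stuck with:
getcap $(which fbi)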
[UPDATED]
Also, to persist the permission on the tty on raspbian, look into
cat /usr/lib/udev/rules.d/50-udev-default.rules | grep "\"tty\[0"
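For example, a drop-in rule along these lines should survive reboots (the file name, group and mode here are assumptions; adapt them to what the stock rule above shows):
echo 'SUBSYSTEM=="tty", KERNEL=="tty1", GROUP="tty", MODE="0660"' | sudo tee /etc/udev/rules.d/99-tty1-fbi.rules
sudo udevadm control --reload
sudo udevadm trigger --name-match=tty1   # or simply reboot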
I'm using Ansible to provision my CentOS 7 production cluster. Unfortunately, executing the command below results in an Ansible timeout and a Linux Pluggable Authentication Modules (PAM) "conversation failed" error.
The same Ansible command works well when executed against a virtual lab made out of Vagrant boxes.
Ansible Command
$ ansible master_server -m yum -a 'name=vim state=installed' -b -K -u lukas -vvvv
123.123.123.123 | FAILED! => {
"msg": "Timeout (7s) waiting for privilege escalation prompt: \u001b[?1h\u001b=\r\r"
}
SSHd Log
# /var/log/secure
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): conversation failed
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): auth could not identify password for [lukas]
I've found the problem. It turned out to be a problem with PAM's auth module! Let me describe how I got to the solution.
Context:
I set up my machine for debugging - that is, I had four terminal windows open.
1st terminal (local machine): Here, I was executing ansible production_server -m yum -a 'name=vim state=installed' -b -K -u username
2nd terminal (production server): Here, I executed journalctl -f (system wide log).
3rd terminal (production server): Here, I executed tail -f /var/log/secure (log for sshd).
4th terminal (production server): Here, I was editing vi /etc/pam.d/sudo file.
Every time I executed the command from the 1st terminal, I got these errors:
# ansible error - on local machine
Timeout (7s) waiting for privilege escalation prompt error.
# sshd error - on remote machine
pam_unix(sudo:auth): conversation failed
pam_unix(sudo:auth): auth could not identify password for [username]
I showed my entire setup to my colleague, and he told me that the error had something to do with "PAM". Frankly, it was the first time I had heard about PAM, so I had to read this PAM tutorial.
I figured out that the error relates to the auth interface in the /etc/pam.d/sudo file. Digging around the internet, I stumbled upon the pam_permit.so module with the sufficient control flag, which fixed my problem!
Solution
Basically, what I added was the auth sufficient pam_permit.so line to the /etc/pam.d/sudo file. Look at the example below.
$ cat /etc/pam.d/sudo
#%PAM-1.0
# Fixing ssh "auth could not identify password for [username]"
auth       sufficient   pam_permit.so
# Below is original config
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    optional     pam_keyinit.so revoke
session    required     pam_limits.so
session    include      system-auth
Conclusion:
I spent 4 days arriving at this solution. I stumbled upon over a dozen solutions that did not work for me, ranging from "duplicated sudo password in ansible hosts/config file" and "LDAP-specific configuration" to getting advice from always-grumpy system admins!
Note:
Since I'm not an expert in PAM, I'm not aware whether this fix affects other aspects of the system, so be cautious about blindly copy-pasting this code! However, if you are an expert on PAM, please share alternative solutions or input with us. Thanks!
Assuming the lukas user is a local account, you should look at how the pam_unix.so module is declared in your system-auth pam file. But more information about the user account and pam configuration is necessary for a specific answer.
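For reference, on a stock CentOS 7 box the auth section of /etc/pam.d/system-auth typically looks something like this (exact options vary with your authconfig settings):
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so
If pam_unix.so never gets a chance to check the local password (because it was removed, reordered, or the account is not local), sudo's auth conversation fails in exactly the way shown in the log above.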
While adding auth sufficient pam_permit.so is enough to gain access, using it in anything but the most insecure test environment is not recommended. From the pam_permit man page:
pam_permit is a PAM module that always permit access. It does nothing
else.
So adding pam_permit.so as sufficient for authentication in this manner will completely bypass the security for all users.
Found myself in the same situation, tearing my hair out. In my case, hidden toward the end of the sudoers file, there was the line:
%sudo ALL=(ALL:ALL) ALL
This undoes authorizations that come before it. If you're not using the sudo group then this line can safely be deleted.
I had this error since upgrading sudo to version 1.9.4 with pacman. I hadn't noticed that pacman had provided a new sudoers file.
I just needed to merge /etc/sudoers.pacnew.
See here for more details: https://wiki.archlinux.org/index.php/Pacman/Pacnew_and_Pacsave
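A minimal way to review and merge it, assuming pacman-contrib is installed for pacdiff:
sudo diff -u /etc/sudoers /etc/sudoers.pacnew   # review what changed
sudo pacdiff                                    # interactively merge the *.pacnew files
sudo visudo -c                                  # validate the merged sudoers syntax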
I know that this doesn't answer the original question (which pertains to a Centos system), but this is the top Google result for the error message, so I thought I'd leave my solution here in case anyone stumbles across this problem coming from an Arch Linux based operating system.
I got the same error when I tried to restart apache2 with sudo service apache2 restart
When logging in as root I was able to see that the real error lay with the configuration of apache2. It turned out I had removed a site's SSL certificate files a few months ago but didn't disable the site in apache2. a2dissite did the trick.
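In case it helps anyone else, the general recipe was roughly (the site name is just an example):
sudo apache2ctl configtest    # prints the real reason the restart fails
sudo a2dissite broken-site    # disable the vhost whose certificate files are gone
sudo systemctl reload apache2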
I'm distributing load between two web servers, which means all of the Apache settings and vhosts are pretty much identical, and I wanted to make sure they stay that way by using lsyncd (or, if there's another solution that helps with the problem I'm having, let me know).
So obviously Apache runs as the apache user, and we can't enable root SSH logins, so I created an lsync user that can SSH between the two servers using RSA keys.
And now I'm running into some permissions errors, which is kind of what I expected to happen, really. What I'm trying now is adding the lsync user to the apache group, and the apache user to the lsync group... and that seems to work OK, as long as the files are chmodded to 7 (rwx) for both the user and the group...
I thought about setting a cron job to chown apache.apache every so often, and maybe even chmod +rwx for the group and user, but I'm sure that would cause some other issues.
I thought about having lsync run as the apache user, but it looks like the apache home directory needs to actually be owned by root.root, so that would cause issues with the apache user trying to SSH in and read from the .ssh directory.
I couldn't find much about this when I looked on Google... Most people just used the root user for lsync, which is out of the question.
So if anyone has a fix, that would be great! thanks
P.S. I know that I can allow the lsync user to execute specific commands via sudo, if I properly configure the sudoers configuration... is there a way to have it sudo chown apache.apache /var/www && sudo chmod -R u+rwx /var/www or something?
rsync has an option for forcing the permissions of the files it creates on the destination: --chmod=<blah>. lsyncd does not have direct support for this, but it can pass rsync flags through.
Try adding this to your lsyncd configuration:
_extra = {"--chmod=Dug+rwx,Fug+rw"}
That should ensure that directories, D, have read/write/execute permissions for owner and group, and files, F, have read/write permissions for owner and group. Any other permissions should be set as they are on the source server.
If you need the files to be owned by the apache user then you could set up a chown cron job, as you suggest, but you might find that a constantly running script that reads the output from inotifywatch will be more responsive (and mostly idle).
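A minimal sketch of such a watcher, using inotifywait from inotify-tools (the path and ownership are assumptions based on the question):
#!/bin/bash
# chown anything the lsync user drops into /var/www back to apache
inotifywait -m -r -e create -e moved_to --format '%w%f' /var/www |
while read -r path; do
    chown apache:apache "$path"
done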
You might consider having the apache user run an rsync daemon. It's little used since tunnelling rsync through ssh is more convenient and more secure, but it might help you side-step this problem.
You need to set up a configuration file, and then simply launch it with rsync --daemon using whatever init system your distro has.
You can then configure your lsynd with target = "rsync://server/path".
If the connection between the servers is local and the network is trusted then you're done, otherwise you should configure the rsync daemon to listen only on 127.0.0.1, and then use an ssh -L port mapping to route the traffic through an encrypted tunnel (the owner of the tunnel is not important).
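A rough sketch of the tunnelled variant (the host name, local port and the "www" module name are assumptions based on whatever you put in rsyncd.conf):
# forward a local port to the rsync daemon bound to 127.0.0.1:873 on the web server
ssh -N -f -L 8730:127.0.0.1:873 lsync@web2.example.com
# manual test of the same path lsyncd would use (target = "rsync://127.0.0.1:8730/www")
rsync -av /var/www/ rsync://127.0.0.1:8730/www/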
I am trying to follow this Vagrant tutorial. I get an error after my first two commands. I ran these two commands from the command line:
$ vagrant init hashicorp/precise64
$ vagrant up
After I ran the vagrant up command, I got this message:
The private key to connect to the machine via SSH must be owned
by the user running Vagrant. This is a strict requirement from
SSH itself. Please fix the following key to be owned by the user
running Vagrant:
/media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
And then if I run any command, I get the same error. Even if I run vagrant ssh, I get the same error message. Please help me fix the problem.
I am on Linux Mint and using VirtualBox as well.
Exactly as the error message tells you:
The private key to connect to the machine via SSH must be owned
by the user running Vagrant.
Therefore, check the permissions of the file using
stat /media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
check which user you are running as using
id
or
whoami
and then modify the owner of the file:
chown `whoami` /media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
Note that this might not be possible if your /media/bcc/ is on some non-Linux filesystem that does not support Linux permissions. In that case you should choose a more suitable location for your private key.
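If the filesystem does support ownership but SSH still complains, you may also need to tighten the mode on the key:
chmod 600 /media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key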
Jakuje has the correct answer - if the file system you are working on supports changing the owner.
If you are trying to mount the vagrant box off of NTFS, it is not possible to change the owner of the key file.
If you want to keep the file on NTFS and you are running a local instance, you can try the following, which worked for me:
vagrant halt
[remove the vagrant box]
[Add the following line to Vagrantfile]
config.ssh.insert_key=false
[** you may need to remove and clone your project again]
vagrant provision
This solution may not be suitable for a live instance - it uses the default insecure SSH key. If you require more security you might be able to find a more palatable solution here: https://www.vagrantup.com/docs/vagrantfile/ssh_settings.html
If you put vagrant data on NTFS you can use this trick to bypass the keyfile ownership/permissions check.
Copy your key file to $HOME/.ssh/ or where-ever on a suitable filesystem where you can set it to the correct ownership and permissions. Then simply create a symlink (!) to it inside the NTFS directory (where you have set $VAGRANT_HOME, for example) like this:
ln -sr $HOME/.ssh/your_key_file your_key_file
I use Fabric to run a fastcgi_mono service via the command:
sudo('/etc/init.d/fastcgi_mono restart', pty=False)
But when I execute it, it gives me this error:
[52.192.204.174] run: sudo /etc/init.d/fastcgi_mono restart
[52.192.204.174] out: sudo: sorry, you must have a tty to run sudo
[52.192.204.174] out:
Warning: run() received nonzero return code 1 while executing 'sudo /etc/init.d/fastcgi_mono restart'!
How do I solve this issue? Please help.
The way I solve this (I don't know if there is a better way, but it makes sense in my head): I have two users set up in my fabfile.py, ubuntu (which has sudo privileges) and www-data (which does not have any real rights and can only add/delete directories in its "space", /server/*). I always establish connections using ubuntu; that way I can use sudo() whenever I need to. Whenever I need to do something at the application level, in what I call def deploy(), I connect using the application user, so I do something like:
from fabric.api import run, sudo, settings, with_settings

@with_settings(user='www-data')  # decorator form of settings()
def deploy():
    run('whoami')  # will say www-data
Or, if I need to do some kind of sudo() inside my deploy(), I'll do:
def deploy():
    sudo('whoami')  # will say ubuntu/root
    with settings(user='www-data'):
        run('whoami')  # will say www-data
        # ... more code here
So, to recap:
Connect using a user that has sudo access
Drop to the less privileged application user later if need be
Yes, I found the answer. For Amazon instances, you need to disable requiretty:
comment('/etc/sudoers', 'Defaults requiretty', use_sudo=True)
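The manual equivalent, if you would rather do it once by hand on the instance (always edit sudoers via visudo), is to comment that line out or exempt just your deploy user; the user name below is only an example:
sudo visudo
# change:
#   Defaults    requiretty
# to:
#   # Defaults    requiretty
# or keep it globally but exempt one account:
#   Defaults:deploy_user !requiretty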