Fbi has no access to tty when run as non-root user - permissions

Fbi is a framebuffer image viewer that does not need an X server setup. It's perfect for what I want to achieve: a standalone, dedicated slideshow.
I want to run the following command as a non-root user:
fbi -d /dev/fb0 -T 1 foo.jpg
When I run fbi as non-root, the result on tty1 is:
access /dev/tty1: Permission denied
I can run the command perfectly well as root (sudo ...), but that's not desirable. My question is: how can I run fbi on a tty as a non-root user?
The setup of the rpi4 is "headless": no X server installed, the fbi command is issued from ssh or crontab, and the output on tty1 is a screen connected to an HDMI port on the rpi4.
I tried many things:
checked all the forums I could find, many of which address this problem without giving a satisfactory solution. The man page for fbi suggests adding the user to the video and tty groups, which I did, but to no avail.
added the user to the groups tty and video
changed the permissions on tty1 and fb0 to 666. Interestingly, the error message changes to ...
ioctl VT_ACTIVATE: Operation not permitted.
Of course, after a reboot the permissions of /dev/tty1 and /dev/fb0 change back to normal. So changing these permissions is not a good idea at all, even if it worked, which it doesn't.
Thank you guys for your help!

I'm running Bullseye on an rpi3.
Here is how I got it to work (from inside a docker container fwiw):
[x] add the user to the tty and video groups
[x] change permission on /dev/tty1 to allow group read
[x] grant capability CAP_SYS_TTY_CONFIG to the fbi binary
The last part is what you missed (the cap).
One-liner to get it:
setcap 'cap_sys_tty_config+ep' $(which fbi)
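You can verify that the capability was applied with getcap (part of the same libcap tools as setcap):
# should list cap_sys_tty_config for the fbi binary
getcap $(which fbi)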
[UPDATED]
Also, to persist the permission on the tty on raspbian, look into
cat /usr/lib/udev/rules.d/50-udev-default.rules | grep "\"tty\[0"
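Building on that, a minimal sketch of a custom udev rule to persist the group access across reboots; the file name is arbitrary, and the rule mirrors the stock tty entries in 50-udev-default.rules:
# /etc/udev/rules.d/99-tty1-group.rules (hypothetical file name)
# give the tty group access to tty1 at every boot
KERNEL=="tty1", GROUP="tty", MODE="0660"
Reload with udevadm control --reload-rules (or simply reboot) for the rule to take effect.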

Trouble updating Nextcloud from version 23.0.0 to 23.0.5

As I said in the title, I am having trouble updating Nextcloud from version 23.0.0 to 23.0.5.
The system is running on a KVM virtual machine. To upgrade, these are the steps I take:
ssh into the server
cd /var/www/nextcloud
enable maintenance mode: sudo -u www-data php occ maintenance:mode --on
Back up the machine
Change file ownership so the files can be written: chown -R www-data /var/www/nextcloud
Update it: sudo -u www-data php updater/updater.phar
Then, I simply roll back the permissions and disable the maintenance mode
The system updates. However, when I log in and go to the administration overview, I get a warning saying:
Invalid UUIDs of LDAP users or groups have been found. Please review your "Override UUID detection" settings in the Expert part of the LDAP configuration and use "occ ldap:update-uuid" to update them.
When I run the suggested command, occ ldap:update-uuid, the console outputs this:
# sudo -u www-data php occ ldap:update-uuid
8/8 [============================] 100%
No record was updated.
For 8 records, the UUID could not be saved to database. Double-check your configuration.
Do you know how to fix this?
Another possibility is getting the UUIDs and replacing them or even removing them if they are not needed. But still, I don't know how to get to them.
I found the solution.
Some LDAP groups were deleted, and this change did not propagate to Nextcloud.
When running sudo -u www-data php occ ldap:update-uuid, you can add --verbose to see what is happening.
In my case, it returned eight groups.
The solution was to open MySQL, select the Nextcloud database, and then delete the invalid groups in the table oc_ldap_group_mapping. To achieve this, just run:
delete from oc_ldap_group_mapping where directory_uuid like "invalidated_%"
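Before deleting anything, it may be wise to back up the database and inspect the rows first; a sketch, assuming the database is called nextcloud and uses the default oc_ table prefix:
# list the affected rows before removing them
mysql -u root -p nextcloud -e 'select * from oc_ldap_group_mapping where directory_uuid like "invalidated_%"'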
This solution may also apply to LDAP users with invalid UUIDs, but I can't confirm it.
Thanks for your solution!
It also works for oc_ldap_user_mapping with invalid UUIDs:
select * from oc_ldap_user_mapping where directory_uuid like "invalidated_%"
delete from oc_ldap_user_mapping where directory_uuid like "invalidated_%"

pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]

I'm using Ansible to provision my CentOS 7 production cluster. Unfortunately, execution of the command below results in an Ansible timeout and a Linux Pluggable Authentication Modules (PAM) error: conversation failed.
The same Ansible command works fine when executed against a virtual lab made out of Vagrant boxes.
Ansible Command
$ ansible master_server -m yum -a 'name=vim state=installed' -b -K -u lukas -vvvv
123.123.123.123 | FAILED! => {
"msg": "Timeout (7s) waiting for privilege escalation prompt: \u001b[?1h\u001b=\r\r"
}
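Incidentally, the 7-second limit comes from Ansible's timeout setting (if I recall correctly, the connection timeout also bounds the wait for the become prompt); raising it in ansible.cfg can help rule out a merely slow prompt. A debugging aid under that assumption, not a fix:
# ansible.cfg
[defaults]
timeout = 30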
SSHd Log
# /var/log/secure
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): conversation failed
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): auth could not identify password for [lukas]
I've found the problem. It turned out to be a problem with PAM's auth module! Let me describe how I got to the solution.
Context:
I set up my machine for debugging - that is, I had four terminal windows open.
1st terminal (local machine): Here, I was executing ansible production_server -m yum -a 'name=vim state=installed' -b -K -u username
2nd terminal (production server): Here, I executed journalctl -f (system wide log).
3rd terminal (production server): Here, I executed tail -f /var/log/secure (log for sshd).
4th terminal (production server): Here, I was editing vi /etc/pam.d/sudo file.
Every time I executed the command from the 1st terminal, I got these errors:
# ansible error - on local machine
Timeout (7s) waiting for privilege escalation prompt error.
# sshd error - on remote machine
pam_unix(sudo:auth): conversation failed
pam_unix(sudo:auth): auth could not identify password for [username]
I showed my entire setup to my colleague, and he told me that the error had something to do with "PAM". Frankly, it was the first time I had heard about PAM, so I had to read this PAM tutorial.
I figured out that the error relates to the auth interface of the /etc/pam.d/sudo configuration. Digging around the internet, I stumbled upon the pam_permit.so module with the sufficient control flag, and that fixed my problem!
Solution
Basically, what I added was the line auth sufficient pam_permit.so to the /etc/pam.d/sudo file. Look at the example below.
$ cat /etc/pam.d/sudo
#%PAM-1.0
# Fixing ssh "auth could not identify password for [username]"
auth sufficient pam_permit.so
# Below is original config
auth include system-auth
account include system-auth
password include system-auth
session optional pam_keyinit.so revoke
session required pam_limits.so
session include system-auth
Conclusion:
I spent 4 days arriving at this solution. I stumbled upon over a dozen solutions that did not work for me, ranging from "duplicated sudo password in ansible hosts/config file" and "LDAP-specific configuration" to getting advice from always-grumpy system admins!
Note:
Since I'm not an expert in PAM, I don't know whether this fix affects other aspects of the system, so be cautious about blindly copy-pasting this code! However, if you are an expert on PAM, please share alternative solutions or input with us. Thanks!
Assuming the lukas user is a local account, you should look at how the pam_unix.so module is declared in your system-auth PAM file. But more information about the user account and PAM configuration is necessary for a specific answer.
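To see those declarations quickly on CentOS 7 (stock file paths assumed):
# show how pam_unix.so is wired into the auth stacks
grep -n pam_unix.so /etc/pam.d/system-auth /etc/pam.d/password-auth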
While adding auth sufficient pam_permit.so is enough to gain access, using it in anything but the most insecure test environment is not recommended. From the pam_permit man page:
pam_permit is a PAM module that always permits access. It does nothing else.
So adding pam_permit.so as sufficient for authentication in this manner will completely bypass security for all users.
Found myself in the same situation, tearing my hair out. In my case, hidden toward the end of the sudoers file, there was the line:
%sudo ALL=(ALL:ALL) ALL
This undoes authorizations that come before it. If you're not using the sudo group, this line can safely be deleted.
I had this error since upgrading sudo to version 1.9.4 with pacman. I hadn't noticed that pacman had provided a new sudoers file.
I just needed to merge /etc/sudoers.pacnew.
See here for more details: https://wiki.archlinux.org/index.php/Pacman/Pacnew_and_Pacsave
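If you want to do the merge by hand rather than with pacdiff (from pacman-contrib), a minimal sketch:
# compare the live sudoers with the packaged version, then validate syntax
sudo vimdiff /etc/sudoers /etc/sudoers.pacnew
sudo visudo -c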
I know that this doesn't answer the original question (which pertains to a Centos system), but this is the top Google result for the error message, so I thought I'd leave my solution here in case anyone stumbles across this problem coming from an Arch Linux based operating system.
I got the same error when I tried to restart apache2 with sudo service apache2 restart.
When logging in as root I was able to see that the real error lay with the configuration of apache2. It turned out I had removed a site's SSL certificate files a few months ago but didn't disable the site in apache2. a2dissite did the trick.

rsync daemon and permissions

Problem
I am confused about rsync daemon and permissions. Unfortunately I cannot figure out why I get
rsync: opendir "/." (in share) failed: Permission denied (13) and
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1650) [generator=3.1.2].
Obviously, searching the web and looking into the man pages of rsync / rsyncd.conf, I was not able to solve this issue.
Setup
Here is my /etc/rsyncd.conf owned by root with 644 permissions:
log file = /var/log/rsyncd.log
[share]
comment = data
path = /path/to/data
uid = root
gid = root
read only = false
auth users = syncuser
secrets file = /etc/rsyncd.secrets
Note, /path/to/data is owned by root with 755 permissions (making a random user or syncuser the owner also doesn't work).
Besides, /etc/rsyncd.secrets has 600 permissions and is owned by root (I also tried 400, with the same issue):
syncuser:passwd
To start the service (on CentOS 7):
sudo systemctl start rsyncd
A first test as a random user on the host machine running the daemon (running with sudo or as syncuser makes no difference):
rsync user@host::
returns share data (the module name and its comment), showing that the configuration is fine?!
However
rsync user@host::share
leads to the errors mentioned above.
What I tried
Playing with chmod (777) and chown (root:root, random user:user, syncuser:syncuser) on /path/to/data did not change anything. Moreover, I varied uid and gid to nobody, but also without success.
Running the above rsync command on an actual client, from which I ultimately want to copy data to the host, of course also fails.
So what am I missing here? Any hints are highly appreciated.
Could you check SELinux?
If it is enforcing, the directory /path/to/data needs to be labeled correctly.
For example,
chcon -R -t public_content_t /path/to/data
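Note that chcon only survives until the next filesystem relabel; to make the label permanent, a sketch using semanage (from policycoreutils-python on CentOS 7):
# check the current SELinux mode
getenforce
# record the label persistently, then apply it to the existing files
semanage fcontext -a -t public_content_t "/path/to/data(/.*)?"
restorecon -Rv /path/to/data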

Running Sudo inside SSH with << heredoc

I'm sure you will find this question similar to many other posts on Stack Overflow or elsewhere on the internet. However, I could not find a solution to my exact problem. I have a list of tasks to be run on a remote server, and passing a script works, but it does not suit the requirement.
I'm running the following from my server to connect to the remote server:
ssh -t user@server << 'HERE'
sudo su - <diff_user>
do task as diff_user
HERE
ssh -tt user@server << 'HERE'
sudo su - <diff_user>
do task as diff_user
HERE
With the first option (-t), I'm still not able to do sudo; it says:
sudo: sorry, you must have a tty to run sudo
With the second option (-tt), I'm getting the remote input/output echoed back into my current server session, a total mess.
I tried passing the content as a script for SSH to run on the remote host, but got similar results.
Is there a way other than commenting out the line below in the /etc/sudoers file?
Defaults requiretty
I have not tried that, though I know Red Hat has approved removing/commenting it out in a future version, whenever that is. If I go that route, I will have to get it done on hundreds of VMs (moreover, I don't have permission to edit the file on the VMs to give it a try).
Bug 1020147
Hence, my issue remains the same as before. It would be great if I could get some input from the experts here :)
Additional info: Using Red Hat RHEL 6, 2.6.32-573.3.1
I do have access to the remote host, and once I'm in, my ID does not require a password to switch to diff_user.
Since you are asking this way, I guess you don't have passwordless sudo.
You can't communicate with the remote process (sudo) when you put the script on stdin.
You should rather use ssh and the su command directly:
ssh -t user@server "sudo su - <diff_user> -c 'do task as diff_user'"
but it might not work. An interactive session can be initiated using expect (there are a lot of questions about it around here).
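For completeness, a rough expect sketch of such an interactive session; every detail here is a placeholder (host, user, password, prompt pattern, task):
#!/usr/bin/expect -f
# spawn the ssh/su pipeline and answer the sudo password prompt
spawn ssh -t user@server "sudo su - diff_user -c 'do_task'"
expect "assword"
send "yourpassword\r"
expect eof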
I was trying to connect to another machine in an automated fashion and check some logs that are only accessible to root/sudo.
This was done by passing the password, server, user, etc. in a file (I know this is not safe and not good practice, but this is the way it will be done in my company).
I had several problems:
tcgetattr: Inappropriate ioctl for device;
tty-related problems that I don't remember exactly;
sudo: sorry, you must have a tty to run sudo, etc..
Here is the code that worked for me:
#!/bin/bash
function checkLog(){
    FILE=$1
    # read the whole parameter file into an array, one line per element
    readarray -t LINES < "$FILE"
    machine=${LINES[4]}
    user=${LINES[5]}
    password=${LINES[6]}
    fileName=${LINES[7]}
    numberOfLines=${LINES[8]}
    # build the remote command; the unquoted EOT lets the variables expand locally
    IFS='' read -r -d '' SSH_COMMAND <<EOT
sudo -S <<< '$password' tail $fileName -n $numberOfLines
EOT
    # -tt forces a tty so sudo can run; sshpass supplies the login password
    RESULTS=$(sshpass -p "$password" ssh -tt "$user@$machine" "${SSH_COMMAND}")
    echo "$RESULTS"
}
checkLog $1
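Note the hard-coded indices: readarray is zero-based, so the script expects the machine, user, password, log file name, and line count on lines 5 through 9 of the file passed as the first argument. A hypothetical invocation:
# creds.txt holds machine/user/password/file/count on lines 5-9
./checklog.sh creds.txt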

Cannot ssh into server except with google dev console ssh

I cannot ssh from my computer into the server hosted on Google Cloud.
I tried the normal ssh-keygen with user@domain.com and uploaded the public key, which worked last time, but this time it didn't. The issue started after I changed the password for the account. After that I could no longer ssh or sftp into the account, although my existing session stayed connected until I disconnected it.
I then tried gcloud ssh user@instance; it ran fine and told me the key just hadn't propagated yet.
I added AllowUsers user to the server's sshd config file and restarted ssh on the server, but still got the same result.
Here's the error:
Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Update:
I've been working with Google tech support and this issue is still unresolved. The ownership and permissions of a file called authorized_keys keep getting changed on boot to another user, whom I also cannot log in as.
So I change it to:
thisUser:www-data 755
but on boot it changes it to:
otherUser:otherUser 600
There are a couple of things to do in order to fix this. You can take advantage of the metadata feature in GCE and add a startup script that automatically changes the permissions.
From the Developers Console, go to your Instance > Metadata and add a key/value pair:
key : startup-script
value: chmod 755 /home/your_user/.ssh/authorized_keys OR chmod 755 ~/.ssh
After rebooting, you should check the Serial Output option further down that page and see if it ran on startup. It should show you something along these lines:
startup script found in metadata.
startupscript: Running startup script /var/run/google.startup.script
Further information can be found HERE
Hope that helps!
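For reference, the same metadata can be set from the gcloud CLI (a sketch; the instance name and zone are placeholders):
gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE \
  --metadata startup-script='chmod 755 /home/your_user/.ssh/authorized_keys'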
I solved this by deleting the existing ssh key under Custom metadata in the VM settings. I could then log in via ssh.