su does not change everything to the other user (cgroups), unlike ssh

If I run this command:
su -l otheruser -c 'strace /usr/lib/systemd/systemd --user 2> /tmp/su.err'
It fails:
Failed to create root cgroup hierarchy: Permission denied
Failed to allocate manager object: Permission denied
I see in the strace output that starting systemd as user failed here:
mkdir("/sys/fs/cgroup/systemd/user/root/754/systemd-3893", 0755) = -1
EACCES (Permission denied)
Where does /sys/fs/cgroup/systemd/user/root/ come from?
If I run the same command via ssh to localhost it works:
ssh otheruser@localhost 'strace /usr/lib/systemd/systemd --user 2> /tmp/ssh.err'
Here, the right directory gets used:
mkdir("/sys/fs/cgroup/systemd/user/modwork_gew_dfj/825/systemd-4272", 0755) = 0
Why does it work via ssh, but not via su?
Version: su (GNU coreutils) 8.17
Update
Here you can see that the cgroup does not get changed by my version of su:
host:~ # su -l otheruser
otheruser@host:~$ cat /proc/$PPID/cgroup
10:hugetlb:/
9:perf_event:/
8:blkio:/
7:net_cls:/
6:freezer:/
5:devices:/
4:memory:/
3:cpuacct,cpu:/
2:cpuset:/
1:name=systemd:/user/root/5913 <################ root
Via ssh:
host:~ # ssh otheruser@host
otheruser@host:~$ cat /proc/$PPID/cgroup
10:hugetlb:/
9:perf_event:/
8:blkio:/
7:net_cls:/
6:freezer:/
5:devices:/
4:memory:/
3:cpuacct,cpu:/
2:cpuset:/
1:name=systemd:/user/otheruser/5919 <################ otheruser
Update2
My version of su does not change the cgroup (see the link in the answer by user "ax"). Is there a way to change the cgroup before or after calling su?
Update3
This version does not have the issue: su from util-linux 2.25.

su inherits its cgroup from the originating session, not from the user passed to su. So when you call su -l otheruser -c systemd ... as root, systemd tries to use the root cgroup (/sys/fs/cgroup/systemd/user/root/...) as otheruser and fails.
With ssh otheruser@localhost ..., both user and cgroup are otheruser, and everything works as expected.
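A quick way to see this yourself, reusing the user name from the question (a minimal sketch), is to print the cgroup of the spawned shell both ways and compare:
su -l otheruser -c 'cat /proc/self/cgroup'        # name=systemd line stays under /user/root/...
ssh otheruser@localhost 'cat /proc/self/cgroup'   # name=systemd line is under /user/otheruser/...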

As guettli pointed out, su doesn't do this anymore.
On CentOS 7.2, as root, the following seems to work for per-UID cgroups. Assume uid=1000 is a high-CPU-share user and uid=1001 is a low-CPU-share user (I am guessing every new user gets a share of 1024 by default, which is also the case for the root user (uid=0)):
systemd-run --uid=1000 --slice=user-1000.slice do_uid_1000_work_commands
systemd-run --uid=1001 --slice=user-1001.slice do_uid_1001_work_commands
The above will create two ad hoc (transient) services with the corresponding user-slice config under /run/systemd/system/:
/run/systemd/system/*10345*
/run/systemd/system/run-10345.service
/run/systemd/system/run-10345.service.d:
50-Description.conf 50-ExecStart.conf 50-Slice.conf 50-User.conf
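To confirm a transient unit really landed in the intended slice (the run-10345 name differs on every invocation), something like this should work on CentOS 7:
systemctl show run-10345.service -p Slice    # expect: Slice=user-1000.slice
systemctl status user-1000.slice             # shows the slice's cgroup tree, including the transient service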
Here are the rest of my configurations:
--> /etc/systemd/system/user-1000.slice.d/50-CPUShares.conf
[Slice]
CPUShares=4096
--> /etc/systemd/system/user-1001.slice.d/50-CPUShares.conf
[Slice]
CPUShares=1024
--> /usr/lib/systemd/system/user-1001.slice
[Unit]
Description=User and Session Slice for uid = 1001 (low cpu share user)
Documentation=man:systemd.special(7)
Before=slices.target
[Slice]
CPUShares=1024
--> /usr/lib/systemd/system/user-1000.slice
[Unit]
Description=User and Session Slice for uid = 1000 (high cpu share user)
Documentation=man:systemd.special(7)
Before=slices.target
[Slice]
CPUShares=4096
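After dropping these files in place, a daemon reload is needed; one way to verify the values took effect (CPUShares is the cgroup-v1-era property used on CentOS 7):
systemctl daemon-reload
systemctl show user-1000.slice -p CPUShares   # expect: CPUShares=4096
systemctl show user-1001.slice -p CPUShares   # expect: CPUShares=1024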

Related

How do I resolve Invalid SSH Key Entry error when starting App with GCE

I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like this one in the error, but after about 12 characters it is different.
Any idea why this might be occurring?
Thanks in advance.
Once you have deployed your VM instance, no SSH key is configured by default, but you can also configure the SSH key when deploying the VM instance.
To elaborate on the answer of @JohnHanley, I tried to test this in my environment.
Created a VM instance and verified the SSH configuration. By default there is no SSH key configured; as I said earlier, you can configure the SSH key when deploying the VM.
Created an SSH key pair via the CLI; you can use this link for instruction details.
Navigate to your VM instance: Turn off > EDIT > Security > Add Item > SSH key 1 - copy+paste the generated SSH key > Save > Power ON the VM instance.
Then test whether the VM instance is accessible.
Documentation link: How to add SSH keys to project metadata.
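For what it's worth, the guest agent expects each line of the ssh-keys metadata to be prefixed with a username; an entry that starts directly with ssh-rsa produces exactly the "unrecognized format" error above. A sketch of a valid entry (username and key are placeholders):
nodeapp:ssh-rsa AAAAB3NzaC1yc2EAAA... nodeapp@example.com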

chown: invalid user: ‘nfsnobody’ in Fedora 32 after installing nfs

I installed NFS using this command in Fedora 32:
sudo dnf install nfs-utils
and then I create a dir to export storage:
[dolphin@MiWiFi-R4CM-srv infrastructure]$ cat /etc/exports
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,no_root_squash)
Now I can mount this dir as the root user like this:
sudo mount -t nfs -o v3 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt
Now I want to go a step further and make it available to any user from any IP (so the client can mount the NFS share without using sudo). First I opened up the permissions of this folder:
chmod 777 jenkins
Then I wanted to change the user and group of this jenkins folder to nfsnobody:
[dolphin@MiWiFi-R4CM-srv infrastructure]$ chown -R nfsnobody jenkins
chown: invalid user: ‘nfsnobody’
But I do not find any nfsnobody entry in /etc/passwd. What should I do to fix the invalid user: ‘nfsnobody’ problem? Should nfs-utils have added it automatically?
Right now nobody is used by default, probably since RedHat/CentOS version 8.
You can simply use
chown -R nobody jenkins
or change the mapping in /etc/idmapd.conf:
[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody
To put the changes into effect, restart the rpcidmapd service and remount the NFSv4 filesystem:
service rpcidmapd restart
mount -o remount /nfs/mnt/point
On Red Hat Enterprise Linux 6, if the above settings have been applied, UIDs/GIDs are matched on server and client, and users are still being mapped to nobody:nobody, then clearing the idmapd cache may be required:
# nfsidmap -c
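A small sanity check (not from the original answer) before running chown on Fedora 32:
getent passwd nfsnobody || getent passwd nobody   # Fedora 32 only ships nobody (uid 65534)
chown -R nobody:nobody jenkins                    # use the account that actually exists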

Docker - non-privileged user can write to / inside container

I've created a container based on the centos:6.8 image, using the following Dockerfile:
FROM centos:6.8
RUN adduser -m test
USER test
The image is then built using docker build:
docker build -t dockerdemo .
Then I start a container with:
docker run -ti dockerdemo bash
When I am inside the container, I appear to be able to write as the "test" user into the root directory of the container:
[test@9af9c4aeb990 /]$ ls -ld /
drwxr-xr-x 29 root root 4096 Oct 25 09:49 /
[test@9af9c4aeb990 /]$ id -a
uid=500(test) gid=500(test) groups=500(test)
[test@9af9c4aeb990 /]$ touch /test-file
[test@9af9c4aeb990 /]$ ls -l /test-file
-rw-rw-r-- 1 test test 0 Oct 25 09:49 /test-file
I am expecting to see Permission denied when I run the touch command.
If I alter the Dockerfile and remove the USER statement, and rebuild, then I can su to the "test" user inside the container and I get the behaviour I would expect:
[root@d16277f693d8 /]# su - test
[test@d16277f693d8 ~]$ id
uid=500(test) gid=500(test) groups=500(test)
[test@d16277f693d8 ~]$ ls -ld /
drwxr-xr-x 29 root root 4096 Oct 25 09:50 /
[test@d16277f693d8 ~]$ touch /test-file
touch: cannot touch `/test-file': Permission denied
Have I misunderstood how user permissions work inside containers?
Is there a way to produce my expected behaviour?
There was a vulnerability announced in Docker 1.12.2 that your scenario matches. Release 1.12.3 just came out yesterday to fix the issue, and CVE-2016-8867 was registered for it. It's an internal container privilege escalation, so the impact is limited, but it is still worth the upgrade.
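A quick way to check whether your engine is affected and whether the fix restores the expected behavior (a sketch using the image name from the question):
docker version --format '{{.Server.Version}}'   # affected: 1.12.2; fixed: 1.12.3 and later
docker run --rm dockerdemo touch /test-file     # on a fixed engine this fails with Permission denied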

Changing vagrant ssh user creates permission errors

I'm trying to alter a Vagrant box I created for my office. Currently, like most boxes, running vagrant ssh logs me in as the vagrant user, but team members get frustrated having to use su - xxadmin to switch to our primary admin user.
In my Vagrantfile, I added: config.ssh.username = "xxadmin", but then I started receiving the common Vagrant error when running vagrant up:
[default] Configuring and enabling network interfaces...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
sed -e '/^#VAGRANT-BEGIN/,/^#VAGRANT-END/ d' /etc/network/interfaces > /tmp/vagrant-network-interfaces
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
and when running vagrant halt:
[default] Attempting graceful shutdown of VM...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
shutdown -h now
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
What's going on here? Why would simply changing the ssh user create these errors? How do I find a way forward?
Specs:
OS X Mavericks (host)
Vagrant 1.3.5
Virtualbox 4.3.2
Debian 7 Wheezy (vm client)
In your box, you need to modify your sudoers file by running visudo and adding the following:
Defaults !requiretty
I kept running into this error until I made sure that my user's NOPASSWD sudoers entry was not being squashed.
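Putting both pieces together, a minimal sudoers drop-in for the xxadmin user might look like this (a sketch; edit it with visudo -f /etc/sudoers.d/xxadmin so a syntax error can't lock you out):
# /etc/sudoers.d/xxadmin
Defaults:xxadmin !requiretty
xxadmin ALL=(ALL) NOPASSWD: ALL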

Why doesn't setting the SUID bit in OpenBSD set effective and saved UIDs to executable file owner?

I am using a fresh install of OpenBSD 5.3 as a guest OS on Parallels for Mac:
$ uname -a
OpenBSD openbsd.localdomain 5.3 GENERIC#53 amd64
To my surprise, a binary file owned by root with its SUID bit set runs with UIDs as if the SUID was not set. That is, when UID 1000 runs such a program, the program starts in state:
<real_uid, effective_uid, saved_uid> = <1000, 1000, 1000>
and not in state:
<real_uid, effective_uid, saved_uid> = <1000, 0, 0>
as expected.
Why is this the case?
Here are the details regarding how I found the issue:
I have written an interactive C program (compiled as setuid_min.bin) for evaluating setuid behaviour on different Unix systems. The program lives in a subdirectory of UID 1000's home directory, and the sudo command is used to change its ownership and set the SUID bit; then the program is run, and I enter the uid command to report the real, effective, and saved UIDs of the process:
$ sudo chown root:staff setuid_min.bin
$ ls -l | grep 'setuid_min\.bin$'
-rwxr-xr-x 1 root staff [...] setuid_min.bin
$ sudo chmod a+s setuid_min.bin
$ ls -l | grep 'setuid_min\.bin$'
-rwsr-sr-x 1 root staff [...] setuid_min.bin
$ ./setuid_min.bin
uid
1000 1000 1000 some_pid
exit
$
Note that some_pid above is the pid of the setuid_min.bin process. The program reports the real UID, effective UID, and saved UID by reporting the output of the following shell command:
ps -ao ruid,uid,svuid,pid | grep '[ ]my_pid$'
where my_pid is the pid reported by getpid(). My only guess as to why this might be the case is that OpenBSD has some underlying permissions structure that uses the ownership/permissions of the directory where setuid_min.bin resides, or that it is not actually changing the ownership/SUID bit when an unprivileged user uses sudo to change file permissions.
Most likely your binary is on one of the default partitions that are mounted nosuid. The default fstab the install script creates will mount everything nosuid unless the partition is known to contain suid binaries.
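A quick check, assuming the binary lives under /home as in the question (the device name is an example):
$ mount | grep /home
/dev/wd0g on /home type ffs (local, nodev, nosuid)
$ # moving the binary to a partition mounted without nosuid and re-setting the bit restores normal setuid behaviour
$ sudo cp setuid_min.bin /usr/local/bin/ && sudo chmod a+s /usr/local/bin/setuid_min.bin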