CentOS 8 cgroup v2 installation without CWP

We are trying to set up cgroup v2 on our CentOS 8 virtual servers. After all the configuration we can limit memory, but we can't limit CPU: there is no cpu.max file under /sys/fs/cgroup/user.slice.
Does cgroup v2 have full support on CentOS 8? We have no idea about this. (We must use CentOS 8; RHEL 8 is not an option.)
Does anyone have an idea or solution for how cgroup v2 works on CentOS 8 without CWP?
Thank you all.

##https://www.redhat.com/en/authors/marc-richter
#https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/pdf/managing_monitoring_and_updating_the_kernel/Red_Hat_Enterprise_Linux-8-Managing_monitoring_and_updating_the_kernel-en-US.pdf
GRUB
grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="cgroup_no_v1=all"
reboot
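After the reboot it is worth checking that the kernel argument took effect and that a cgroup2 hierarchy is available. A quick sanity check (not part of the original steps; on CentOS 8 systemd may or may not mount the unified hierarchy at /sys/fs/cgroup, which is presumably why the script below mounts its own at /mnt/cgroupv2):
grep cgroup_no_v1 /proc/cmdline
# check whether a cgroup2 filesystem is mounted anywhere yet
mount | grep cgroup2
# if it is mounted at /sys/fs/cgroup, this lists the available v2 controllers
cat /sys/fs/cgroup/cgroup.controllers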
/usr/bin/configure_cgroups.sh
#!/bin/bash
# marker file showing the script has run at least once
touch /var/tmp/configure_cgroups.sh.ok
mkdir -p /mnt/cgroupv2
# mount a dedicated cgroup v2 hierarchy and enable the controllers we need
if [ ! -f /mnt/cgroupv2/cgroup.subtree_control ]
then
mount -t cgroup2 none /mnt/cgroupv2
echo "+cpu" > /mnt/cgroupv2/cgroup.subtree_control
echo "+cpuset" > /mnt/cgroupv2/cgroup.subtree_control
echo "+memory" > /mnt/cgroupv2/cgroup.subtree_control
fi
##cpuNo=0
numberOfCpus=$(nproc --all)
numberOfUsers=$(ls /home | wc -l)
# cpu.max quota: an equal share of total CPU time per user, per 100000us period
cpuShare=$((numberOfCpus*100000/numberOfUsers))
# MemTotal is in kB; convert the per-user share to bytes for memory.max
totalMemory=$(awk '/MemTotal/{print $2}' /proc/meminfo)
memoryShare=$((totalMemory/numberOfUsers*1024))
for username in $(ls /home)
do
if [ ! -d /mnt/cgroupv2/user.$username ]
then
mkdir -p /mnt/cgroupv2/user.$username
##echo "$cpuNo" > /mnt/cgroupv2/user.$username/cpuset.cpus
##echo "$cpuNo" > /mnt/cgroupv2/user.$username/cpuset.mems
echo "$cpuShare 100000" > /mnt/cgroupv2/user.$username/cpu.max
echo "$memoryShare" > /mnt/cgroupv2/user.$username/memory.max
##cpuNo=$((cpuNo+1))
fi
done
for username in $(ls /home)
do
# list the PIDs owned by this user (selecting by user is more reliable than grepping ps -ef)
for pid in $(ps -u "$username" -o pid=)
do
# only move the PID if it is not already listed in the user's cgroup
if ! grep -qx "$pid" /mnt/cgroupv2/user.$username/cgroup.procs
then
echo $pid > /mnt/cgroupv2/user.$username/cgroup.procs
fi
done
done
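Running the script once by hand makes it easy to verify that the per-user directories and limits were written as expected (a sanity check; "alice" stands for any user with a directory under /home):
bash /usr/bin/configure_cgroups.sh
cat /mnt/cgroupv2/user.alice/cpu.max      # e.g. "400000 100000" on a 4-CPU box with one user
cat /mnt/cgroupv2/user.alice/memory.max   # the per-user share of MemTotal, in bytes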
SERVICE : /usr/lib/systemd/system/configure-cgroupsv2.service
[Unit]
Description=Configure CGroups V2
[Service]
Type=oneshot
ExecStart=/usr/bin/configure_cgroups.sh
SERVICE : /usr/lib/systemd/system/configure-cgroupsv2.timer
[Unit]
Description=Configure CGroups V2 Timer
[Timer]
OnUnitActiveSec=10s
OnBootSec=10s
[Install]
WantedBy=timers.target
systemctl daemon-reload
systemctl enable configure-cgroupsv2.timer
systemctl start configure-cgroupsv2.timer
systemctl list-timers --all| grep configure
systemctl start configure-cgroupsv2.service
systemctl status configure-cgroupsv2.service
journalctl -xe
Testing:
CPU: run "while true; do echo > /dev/null; done" as a limited user, then watch top
RAM: run "cat /dev/zero | head -c 20000M | tail" as a limited user, then watch top

Also, you must do the following to be able to run your Docker images:
1. yum install crun -y
2. cp /usr/share/containers/containers.conf /etc/containers/
3. edit /etc/containers/containers.conf and set:
cgroups="disabled"
runtime="crun"

Related

How can I access a VPN inside a VMWare Fusion VM

I have a VPN connection in macOS Big Sur but I can't access it inside a Linux VM running under VMWare Fusion V12.1.2.
The issue has been fixed in V12.2.0 (see the VMWare Fusion 12.2.0 Release Notes).
The solution is to manually create the VPN tunnel and link it to the VM. As there are multiple commands involved and the IP address can change, I created the following script to execute the required commands.
#!/bin/bash
function ask_yes_or_no() {
read -p "$1 ([y]es or [N]o): "
case $(echo $REPLY | tr '[A-Z]' '[a-z]') in
y|yes) echo "yes" ;;
*) echo "no" ;;
esac
}
currNatRules=$(sudo pfctl -a com.apple.internet-sharing/shared_v4 -s nat 2>/dev/null)
if test -z "$currNatRules"
then
echo -e "\nThere are currently no NAT rules loaded\n"
exit 0
fi
utunCheck=$(echo $currNatRules | grep utun)
if test -n "$utunCheck"
then
echo -e "\nIt looks like the VPN tunnel utun2 has already been created"
echo -e "\n$currNatRules\n"
if [[ "no" == $(ask_yes_or_no "Do you want to continue?") ]]
then
echo -e "\nExiting\n"
exit 0
fi
fi
natCIDR=$(echo $currNatRules | grep en | grep nat | cut -d' ' -f 6)
if test -z "$natCIDR"
then
echo -e "\nCannot extract the NAT CIDR from:"
echo -e "\n$currNatRules\n"
exit 0
fi
interface=$(route get 10/8 | grep interface | cut -d' ' -f 4)
echo -e "\nNAT CIDR=$natCIDR Interface=$interface\n"
newRule="nat on ${interface} inet from ${natCIDR} to any -> (${interface}) extfilter ei"
echo -e "\nAdding new rule: $newRule\n"
configFile="fixnat_rules.conf"
[[ -f $configFile ]] && rm $configFile
echo "$currNatRules" > $configFile
echo "$newRule" >> $configFile
sudo pfctl -a com.apple.internet-sharing/shared_v4 -N -f ${configFile} 2>/dev/null
echo -e "\nConfig update applied\n"
sudo pfctl -a com.apple.internet-sharing/shared_v4 -s nat 2>/dev/null
echo -e "\n"
exit 0
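Saved as e.g. fixnat.sh (the name is arbitrary), the script is run on the macOS host after the VM and the VPN connection are up; the embedded pfctl calls use sudo, so it will prompt for your password:
chmod +x fixnat.sh
./fixnat.sh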

While loop and executing commands on a remote machine

I have the following commands which need to be included in Ansible.
How can I incorporate these commands in an Ansible module?
while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1 ; do
sleep 1
done
while sudo fuser /var/lib/apt/lists/lock >/dev/null 2>&1 ; do
sleep 1
done
if [ -f /var/log/unattended-upgrades/unattended-upgrades.log ]; then
while sudo fuser /var/log/unattended-upgrades/unattended-upgrades.log >/dev/null 2>&1 ; do
sleep 1
done
fi
You can definitely put a chunk of code in the shell module by using YAML's literal block scalar indicator: |.
Just mind that the code must be indented further than the shell task.
- shell: |
    while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1 ; do
      sleep 1
    done
    while sudo fuser /var/lib/apt/lists/lock >/dev/null 2>&1 ; do
      sleep 1
    done
    if [ -f /var/log/unattended-upgrades/unattended-upgrades.log ]; then
      while sudo fuser /var/log/unattended-upgrades/unattended-upgrades.log >/dev/null 2>&1 ; do
        sleep 1
      done
    fi
You could also refactor a little bit using a loop:
- shell: |
    if [ -f {{ item }} ]; then
      while sudo fuser {{ item }} >/dev/null 2>&1 ; do
        sleep 1
      done
    fi
  loop:
    - /var/lib/dpkg/lock
    - /var/lib/apt/lists/lock
    - /var/log/unattended-upgrades/unattended-upgrades.log
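For a quick one-off test before putting this into a playbook, the same wait loop can also be run ad hoc through the shell module (a sketch; the inventory file name is a placeholder and passwordless sudo is assumed on the targets):
ansible all -i inventory -m shell -a 'while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1; do sleep 1; done'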

shell script to stop/start multiple httpd instances

I want to write a script that restarts httpd instances only if they are in running status. For one instance it works fine, but with more than one instance it fails.
Below is the script which I am using:
ctl_var=`find /opt/apache/instances/ -name apachectl | grep -v "\/httpd\/"`
ctl_proc=`ps -ef | grep -i httpd | grep -i " 1 " wc -l`
if [ $ctl_proc <= 0 ];
then echo "httpd is not running";
else $ctl_var -k stop; echo "httpd stopped successfully" ;
sleep 5;
$ctl_var -k start;
sleep 5;
echo "httpd started" ps -ef | grep httpd | grep -i " 1 ";
fi
Please suggest...
You mentioned there are multiple instances, but I see the script is missing a for loop over them, so it only restarts the last instance picked up in $ctl_var.
The modified script should look something like the one below; tweak it if necessary:
ctl_var=`find /opt/apache/instances/ -name apachectl | grep -v "\/httpd\/"`
ctl_proc=`ps -ef | grep -i httpd | grep -i " 1 " | wc -l`
for i in `echo $ctl_var`
do
if [ $ctl_proc -le 0 ];
then echo "httpd is not running";
else $i -k stop; echo "httpd stopped successfully" ;
sleep 5;
$i -k start;
sleep 5;
echo "httpd started" ps -ef | grep httpd | grep -i " 1 ";
fi
done
Hope this helps.
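If the goal is to restart only the instances that are actually running, the check itself has to be done per instance rather than with one global process count. A rough sketch (it assumes each apachectl sits under its own instance directory, so that the instance path appears in that instance's httpd command line):
#!/bin/bash
for ctl in $(find /opt/apache/instances/ -name apachectl | grep -v "/httpd/")
do
# instance root, two levels above .../bin/apachectl (an assumption about the layout)
inst_dir=$(dirname "$(dirname "$ctl")")
# count httpd processes belonging to this instance; [h]ttpd keeps grep from matching itself
running=$(ps -ef | grep "[h]ttpd" | grep -c "$inst_dir")
if [ "$running" -le 0 ]
then echo "httpd for $inst_dir is not running, skipping";
else "$ctl" -k stop && echo "stopped $inst_dir";
sleep 5;
"$ctl" -k start && echo "started $inst_dir";
fi
done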

cannot kill process in FreeBSD

I have a script on FreeBSD 10.1-RELEASE; its purpose is to monitor another process and keep that process alive.
When I try to kill the script itself, it always fails.
I have tried killall [name | pid], pkill -9 [name], and service watchtas stop; none of them work.
Below is my script; please advise on a solution.
#!/bin/sh
. /etc/rc.subr
prog="Thin-Agent WatchDog"
TAS_BIN="/etc/supermicro/tas-freebsd.x86_64"
TAS_LOG="/etc/supermicro/tas_system_crush.log"
monitor=1
name="watchtas"
rcvar=${name}_enable
command=/etc/rc.d/{$name}
start_cmd="watchdog"
stop_cmd="stop_watching"
load_rc_config $name
recover_tas() {
$TAS_BIN -agent start-service
RETVAl=$?
return $RETVAL
}
stop_watching() {
monitor=0
}
watchdog() {
while [ $monitor == 1 ]
do
tas_count=`ps -x | grep tas-freebsd.x86_64 | grep -v grep | wc -l | sed 's/ *//g'`
if [ $tas_count -eq 0 ]; then
timestamp=`date`
echo "[$timestamp]TAS shutdown unexpectedly, restarting TAS now..." >> $TAS_LOG
echo $?
recover_tas
else
sleep 10
fi
done
}
run_rc_command "$1"
Your start-up script fails in a couple of respects. service watchtas start does not return to the command line because the daemon process does not detach. service watchtas stop does not work as required because the variable monitor is local to the executing script.
I would separate the start-up script and the watchdog code into separate files and use daemon(8) to monitor the watchdog.
The /usr/local/etc/rc.d start-up script would look like this:
#!/bin/sh
. /etc/rc.subr
name="watchtas"
rcvar=${name}_enable
pidfile="/var/run/${name}.pid"
command="/usr/sbin/daemon"
command_args="-c -f -P ${pidfile} -r /usr/local/sbin/${name}"
load_rc_config $name
run_rc_command "$1"
The /usr/local/sbin/watchtas watchdog code would look something like this:
#!/bin/sh
TAS_BIN="/etc/supermicro/tas-freebsd.x86_64"
TAS_LOG="/etc/supermicro/tas_system_crush.log"
recover_tas() {
$TAS_BIN -agent start-service
RETVAL=$?
return $RETVAL
}
while true
do
tas_count=`ps -x | grep tas-freebsd.x86_64 | grep -v grep | wc -l | sed 's/ *//g'`
if [ $tas_count -eq 0 ]; then
timestamp=`date`
echo "[$timestamp]TAS shutdown unexpectedly, restarting TAS now..." >> $TAS_LOG
echo $?
recover_tas
else
sleep 10
fi
done
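To tie the two files together, make them executable, enable the service in rc.conf and control it with service(8); a short sketch, assuming the rc.d script is saved as /usr/local/etc/rc.d/watchtas and the watchdog as /usr/local/sbin/watchtas:
chmod +x /usr/local/etc/rc.d/watchtas /usr/local/sbin/watchtas
sysrc watchtas_enable=YES     # persists watchtas_enable="YES" in /etc/rc.conf
service watchtas start
service watchtas status
service watchtas stop         # stops the daemon(8) supervisor, which takes the watchdog loop down with it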
It seems you have a daemon watching a daemon watching a daemon.

lxc create unprivileged containers

I've installed LXC to create containers and I followed the steps to create unprivileged containers, but I get these errors when I run:
[andrea#andrea lxc]$ lxc-create -t download -n prova0
lxc-create: conf.c: chown_mapped_root: 3406 No mapping for container root
lxc-create: lxccontainer.c: do_bdev_create: 943 Error chowning /home/andrea/.local/share/lxc/prova0/rootfs to container root
lxc-create: conf.c: suggest_default_idmap: 4444 Your system is not configured with subuids
lxc-create: lxccontainer.c: do_lxcapi_create: 1408 Error creating backing store type (none) for prova0
lxc-create: lxc_create.c: main: 274 Error creating container prova0
lxc-create: ... Your system is not configured with subuids
As per the above error message, it sounds like you're trying to create an unprivileged container without subuids configured. These steps are for Ubuntu 14.04, but I suspect they will work on Fedora as well.
$ mkdir -p ~/.config/lxc
$ echo "lxc.id_map = u 0 100000 65536" > ~/.config/lxc/default.conf
$ echo "lxc.id_map = g 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "lxc.network.type = veth" >> ~/.config/lxc/default.conf
$ echo "lxc.network.link = lxcbr0" >> ~/.config/lxc/default.conf
$ echo "$USER veth lxcbr0 2" | sudo tee -a /etc/lxc/lxc-usernet
Once these are configured, you should be able to create an ubuntu container, as follows:
$ lxc-create -t download -n u1 -- -d ubuntu -r trusty -a amd64
Taken from the Ubuntu Server LXC guide:
https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-unpriv
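Since the error is specifically about subuids, it is also worth checking that your user actually has subordinate ID ranges allocated; on Fedora they are not always created automatically for existing users. A hedged extra step, assuming a shadow-utils version whose usermod supports --add-subuids/--add-subgids (otherwise the same ranges can be added to /etc/subuid and /etc/subgid by hand):
grep "$USER" /etc/subuid /etc/subgid
# if nothing is printed, allocate ranges matching the lxc.id_map lines above
sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 "$USER"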