While loop and executing commands on a remote machine

I have the following commands, which need to be included in Ansible.
How can I incorporate these commands in an Ansible module?
while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1 ; do
    sleep 1
done
while sudo fuser /var/lib/apt/lists/lock >/dev/null 2>&1 ; do
    sleep 1
done
if [ -f /var/log/unattended-upgrades/unattended-upgrades.log ]; then
    while sudo fuser /var/log/unattended-upgrades/unattended-upgrades.log >/dev/null 2>&1 ; do
        sleep 1
    done
fi

You can definitely run a chunk of code in the shell module using YAML's literal block scalar indicator, |.
Just mind that the code must be indented further than the shell task itself.
- shell: |
    while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1 ; do
      sleep 1
    done
    while sudo fuser /var/lib/apt/lists/lock >/dev/null 2>&1 ; do
      sleep 1
    done
    if [ -f /var/log/unattended-upgrades/unattended-upgrades.log ]; then
      while sudo fuser /var/log/unattended-upgrades/unattended-upgrades.log >/dev/null 2>&1 ; do
        sleep 1
      done
    fi
You could also refactor a little bit using a loop:
- shell: |
    if [ -f {{ item }} ]; then
      while sudo fuser {{ item }} >/dev/null 2>&1 ; do
        sleep 1
      done
    fi
  loop:
    - /var/lib/dpkg/lock
    - /var/lib/apt/lists/lock
    - /var/log/unattended-upgrades/unattended-upgrades.log
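If you want a timeout so the play cannot hang forever, the waiting logic can be wrapped in a small helper. A sketch (the function name and timeout default are my own, not from the original):

```shell
#!/bin/sh
# Hypothetical helper: wait until no process holds the given file,
# giving up after a timeout (in seconds). Returns 0 when free, 1 on timeout.
wait_for_lock() {
    path=$1
    timeout=${2:-300}
    while fuser "$path" >/dev/null 2>&1; do
        timeout=$((timeout - 1))
        [ "$timeout" -le 0 ] && return 1
        sleep 1
    done
    return 0
}

# Usage sketch:
# wait_for_lock /var/lib/dpkg/lock 300 || echo "gave up waiting for dpkg lock"
```

The non-zero return on timeout lets the Ansible task fail loudly instead of blocking forever.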


How can I access a VPN inside a VMware Fusion VM?

I have a VPN connection in macOS Big Sur, but I can't access it inside a Linux VM running under VMware Fusion 12.1.2.
The issue has been fixed in 12.2.0: VMware Fusion 12.2.0 Release Notes
The solution is to manually create the VPN tunnel and link it to the VM. As there are multiple commands involved and the IP address can change, I created the following script to execute the required commands.
#!/bin/bash

function ask_yes_or_no() {
    read -p "$1 ([y]es or [N]o): "
    case $(echo "$REPLY" | tr '[:upper:]' '[:lower:]') in
        y|yes) echo "yes" ;;
        *)     echo "no" ;;
    esac
}

currNatRules=$(sudo pfctl -a com.apple.internet-sharing/shared_v4 -s nat 2>/dev/null)

if test -z "$currNatRules"
then
    echo -e "\nThere are currently no NAT rules loaded\n"
    exit 0
fi

utunCheck=$(echo "$currNatRules" | grep utun)

if test -n "$utunCheck"
then
    echo -e "\nIt looks like the VPN tunnel utun2 has already been created"
    echo -e "\n$currNatRules\n"
    if [[ "no" == $(ask_yes_or_no "Do you want to continue?") ]]
    then
        echo -e "\nExiting\n"
        exit 0
    fi
fi

natCIDR=$(echo "$currNatRules" | grep en | grep nat | cut -d' ' -f 6)

if test -z "$natCIDR"
then
    echo -e "\nCannot extract the NAT CIDR from:"
    echo -e "\n$currNatRules\n"
    exit 0
fi

interface=$(route get 10/8 | grep interface | cut -d' ' -f 4)

echo -e "\nNAT CIDR=$natCIDR Interface=$interface\n"

newRule="nat on ${interface} inet from ${natCIDR} to any -> (${interface}) extfilter ei"
echo -e "\nAdding new rule: $newRule\n"

configFile="fixnat_rules.conf"
[[ -f $configFile ]] && rm $configFile

echo "$currNatRules" > $configFile
echo "$newRule" >> $configFile

sudo pfctl -a com.apple.internet-sharing/shared_v4 -N -f ${configFile} 2>/dev/null
echo -e "\nConfig update applied\n"
sudo pfctl -a com.apple.internet-sharing/shared_v4 -s nat 2>/dev/null
echo -e "\n"

exit 0
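The ask_yes_or_no helper can be exercised on its own by feeding the answer on stdin, which is handy for checking the prompt logic without running the pfctl steps:

```shell
#!/bin/bash
# The prompt helper from the script above, standalone
ask_yes_or_no() {
    read -p "$1 ([y]es or [N]o): "
    case $(echo "$REPLY" | tr '[:upper:]' '[:lower:]') in
        y|yes) echo "yes" ;;
        *)     echo "no"  ;;
    esac
}

echo "y" | ask_yes_or_no "Continue?"     # prints "yes"
echo "nope" | ask_yes_or_no "Continue?"  # prints "no" (anything but y/yes)
```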

CentOS 8 Cgroup (or v2) installation without CWP

We are trying to install cgroup v2 on our CentOS 8 virtual servers, but after all the configuration we can limit memory, yet we can't limit CPU. We can't see cpu.max (/sys/fs/cgroup/user.slice).
Does cgroup v2 have full support on CentOS 8? We don't have any idea on this. (We must use CentOS 8; it can't be RHEL 8.)
Do you have any idea or solution about how cgroup v2 works on CentOS 8 without CWP?
Thank you all.
References:
https://www.redhat.com/en/authors/marc-richter
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/pdf/managing_monitoring_and_updating_the_kernel/Red_Hat_Enterprise_Linux-8-Managing_monitoring_and_updating_the_kernel-en-US.pdf
GRUB
grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="cgroup_no_v1=all"
reboot
SCRIPT : /usr/bin/configure_cgroups.sh
#!/bin/bash

touch /var/tmp/configure_cgroups.sh.ok

mkdir -p /mnt/cgroupv2
if [ ! -f /mnt/cgroupv2/cgroup.subtree_control ]
then
    mount -t cgroup2 none /mnt/cgroupv2
    echo "+cpu" > /mnt/cgroupv2/cgroup.subtree_control
    echo "+cpuset" > /mnt/cgroupv2/cgroup.subtree_control
    echo "+memory" > /mnt/cgroupv2/cgroup.subtree_control
fi

##cpuNo=0
numberOfCpus=$(nproc --all)
numberOfUsers=$(ls /home | wc -l)
cpuShare=$((numberOfCpus*100000/numberOfUsers))
totalMemory=$(grep MemTotal /proc/meminfo | awk '{print $2}')
memoryShare=$((totalMemory/numberOfUsers*1024))

for username in $(ls /home)
do
    if [ ! -d /mnt/cgroupv2/user.$username ]
    then
        mkdir -p /mnt/cgroupv2/user.$username
        ##echo "$cpuNo" > /mnt/cgroupv2/user.$username/cpuset.cpus
        ##echo "$cpuNo" > /mnt/cgroupv2/user.$username/cpuset.mems
        echo "$cpuShare 100000" > /mnt/cgroupv2/user.$username/cpu.max
        echo "$memoryShare" > /mnt/cgroupv2/user.$username/memory.max
        ##cpuNo=$((cpuNo+1))
    fi
done

for username in $(ls /home)
do
    for pid in $(ps -ef | grep -i "^$username" | awk '{print $2}')
    do
        if ! grep -qx "$pid" /mnt/cgroupv2/user.$username/cgroup.procs
        then
            echo $pid > /mnt/cgroupv2/user.$username/cgroup.procs
        fi
    done
done
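The per-user share arithmetic deserves a sanity check: cpu.max takes a "quota period" pair (so quota = cpus x 100000 / users), and memory.max is in bytes while MemTotal from /proc/meminfo is in kB. A standalone check with made-up numbers:

```shell
#!/bin/sh
# Illustration with hard-coded values (4 CPUs, 2 users, 8 GiB RAM in kB);
# the numbers are examples, not taken from the original servers
numberOfCpus=4
numberOfUsers=2
totalMemory=8388608   # kB, as MemTotal reports it

cpuShare=$((numberOfCpus * 100000 / numberOfUsers))
memoryShare=$((totalMemory / numberOfUsers * 1024))

echo "cpu.max quota: $cpuShare"      # 200000 (2 CPUs' worth per 100000us period)
echo "memory.max:    $memoryShare"   # 4294967296 bytes = 4 GiB per user
```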
SERVICE : /usr/lib/systemd/system/configure-cgroupsv2.service
[Unit]
Description=Configure CGroups V2
[Service]
Type=oneshot
ExecStart=/usr/bin/configure_cgroups.sh
SERVICE : /usr/lib/systemd/system/configure-cgroupsv2.timer
[Unit]
Description=Configure CGroups V2 Timer
[Timer]
OnUnitActiveSec=10s
OnBootSec=10s
[Install]
WantedBy=timers.target
systemctl daemon-reload
systemctl enable configure-cgroupsv2.timer
systemctl start configure-cgroupsv2.timer
systemctl list-timers --all| grep configure
systemctl start configure-cgroupsv2.service
systemctl status configure-cgroupsv2.service
journalctl -xe
Testing:
CPU: "while true; do echo > /dev/null; done" then top
RAM: "cat /dev/zero | head -c 20000M | tail" then top
Also, you must do the following to be able to run your Docker images:
1. yum install crun -y
2. cp /usr/share/containers/containers.conf /etc/containers/
3. edit /etc/containers/containers.conf:
cgroups="disabled"
runtime="crun"

Shell script to stop/start multiple httpd instances

I want to write a script that restarts httpd instances only if they are in running status. For one instance it works fine, but with more than one instance it fails.
Below is the script I am using:
ctl_var=`find /opt/apache/instances/ -name apachectl | grep -v "\/httpd\/"`
ctl_proc=`ps -ef | grep -i httpd | grep -i " 1 " | wc -l`
if [ $ctl_proc <= 0 ];
then echo "httpd is not running";
else $ctl_var -k stop; echo "httpd stopped successfully" ;
sleep 5;
$ctl_var -k start;
sleep 5;
echo "httpd started" ps -ef | grep httpd | grep -i " 1 ";
fi
Please suggest...
You mentioned there are multiple instances; I see the script is missing a for loop. As written, it only restarts the last instance picked up in $ctl_var.
The modified script should look something like the one below; tweak it if necessary:
ctl_var=`find /opt/apache/instances/ -name apachectl | grep -v "\/httpd\/"`
ctl_proc=`ps -ef | grep -i httpd | grep -i " 1 " | wc -l`
for i in `echo $ctl_var`
do
    if [ $ctl_proc -le 0 ];
    then
        echo "httpd is not running"
    else
        $i -k stop
        echo "httpd stopped successfully"
        sleep 5
        $i -k start
        sleep 5
        echo "httpd started"
        ps -ef | grep httpd | grep -i " 1 "
    fi
done
Hope this helps.
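A further tweak (my own suggestion, not from the thread): the running check above counts all httpd processes at once, so a stopped instance is still restarted whenever any other instance is up. Checking per instance, e.g. with pgrep against each instance's path, avoids that. The directory layout below is an assumption:

```shell
#!/bin/sh
# Hypothetical per-instance restart; assumes each instance lives under
# /opt/apache/instances/<name>/bin/apachectl
restart_instances() {
    for ctl in /opt/apache/instances/*/bin/apachectl; do
        inst_root=$(dirname "$(dirname "$ctl")")
        if pgrep -f "httpd.*${inst_root}" >/dev/null 2>&1; then
            "$ctl" -k stop
            sleep 5
            "$ctl" -k start
            echo "restarted httpd under ${inst_root}"
        else
            echo "httpd is not running under ${inst_root}"
        fi
    done
}

restart_instances
```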

How can I send password safely to tmux?

The following is my code in create_tmux.zsh
#!/bin/zsh
SESSIONNAME=$1
echo $SESSIONNAME
tmux has-session -t $SESSIONNAME &> /dev/null
if [ $? != 0 ]
then
    tmux new-session -d -s $SESSIONNAME -n emacs
    tmux new-window -t $SESSIONNAME:1 -n a
    tmux send-keys -t $SESSIONNAME:1 'ssh -Y a#bc.com;$2' C-m
fi
tmux attach -t $SESSIONNAME
It's simple if I run
create_tmux.zsh ab $%^^&av1#
But this way, the password not only shows in the terminal but is also recorded in the shell history.
How can I solve this?
Thank you
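One way to keep the secret out of both the terminal and the history (a suggestion of mine, not from the thread) is to prompt for it with read -s instead of passing it as a positional argument:

```shell
#!/bin/bash
# Hypothetical helper: read a secret without echoing it, so it never
# appears in argv (visible in ps) or in the shell history
get_password() {
    IFS= read -rs password
    printf '%s' "$password"
}

# Usage sketch inside the original script (names from the question):
# password=$(get_password)
# tmux send-keys -t "$SESSIONNAME:1" "ssh -Y a#bc.com" C-m
# tmux send-keys -t "$SESSIONNAME:1" "$password" C-m
```

Note that send-keys still exposes the password to anyone who can list processes on the machine while it runs; an ssh key or ssh-agent avoids transmitting it at all.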

Docker - Cannot start Redis Service

I'm installing Redis, setting up init.d, and placing redis.conf beside init.d.
Then I use CMD service init.d start to start Redis.
However, redis-server does not start, and there is no indication in the log file that the service failed to start.
Installing Redis and placing redis.conf in the /etc/init.d folder:
Commands:
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r redis && useradd -r -g redis redis
RUN apt-get update > /dev/null \
&& apt-get install -y curl > /dev/null 2>&1 \
&& rm -rf /var/lib/apt/lists/* > /dev/null 2>&1
# grab gosu for easy step-down from root
RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" > /dev/null 2>&1 \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" > /dev/null 2>&1 \
&& gpg --verify /usr/local/bin/gosu.asc > /dev/null 2>&1 \
&& rm /usr/local/bin/gosu.asc > /dev/null 2>&1 \
&& chmod +x /usr/local/bin/gosu > /dev/null 2>&1
ENV REDIS_VERSION 3.0.1
ENV REDIS_DOWNLOAD_URL http://download.redis.io/releases/redis-3.0.1.tar.gz
ENV REDIS_DOWNLOAD_SHA1 fe1d06599042bfe6a0e738542f302ce9533dde88
# for redis-sentinel see: http://redis.io/topics/sentinel
RUN buildDeps='gcc libc6-dev make'; \
set -x \
&& apt-get update > /dev/null && apt-get install -y $buildDeps --no-install-recommends > /dev/null 2>&1 \
&& rm -rf /var/lib/apt/lists/* > /dev/null 2>&1 \
&& mkdir -p /usr/src/redis > /dev/null 2>&1 \
&& curl -sSL "$REDIS_DOWNLOAD_URL" -o redis.tar.gz > /dev/null 2>&1 \
&& echo "$REDIS_DOWNLOAD_SHA1 *redis.tar.gz" | sha1sum -c - > /dev/null 2>&1 \
&& tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 > /dev/null 2>&1 \
&& rm redis.tar.gz > /dev/null 2>&1 \
&& make -C /usr/src/redis > /dev/null 2>&1 \
&& make -C /usr/src/redis install > /dev/null 2>&1 \
&& cp /usr/src/redis/utils/redis_init_script /etc/init.d/redis_6379 \
&& rm -r /usr/src/redis > /dev/null 2>&1 \
&& apt-get purge -y --auto-remove $buildDeps > /dev/null 2>&1
RUN mkdir /data && chown redis:redis /data
VOLUME ["/data"]
WORKDIR /data
CMD service init.d start
Command:
RUN touch /var/redis/6379/redis-6379-log.txt
RUN chmod 777 /var/redis/6379/redis-6379-log.txt
ENV REDISPORT 6379
ADD $app$/redis-config.txt /etc/redis/$REDISPORT.conf
CMD service /etc/init.d/redis_6379 start
If I use shellinabox to access the container and type in
/etc/init.d/redis_6379 start
the Redis server will start, but it won't start from the Dockerfile. Why is this?
It seems that you cannot use background processes; instead you need something called supervisord.
To Install:
RUN apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
ADD $app$/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD /usr/bin/supervisord
Configuration File:
[supervisord]
nodaemon=true
[program:shellinabox]
command=/bin/bash -c "cd /tmp && exec /opt/shellinabox/shellinaboxd --no-beep --service ${service}"
[program:redis-server]
command=/bin/bash -c "redis-server /etc/redis/${REDISPORT}.conf"
What happens is that after the command is executed, it starts both programs, shellinabox and redis-server.
Thanks everyone for the help!
In general, you can't use an init script inside a Docker container. These scripts are typically designed to start a service "in the background", which means that even if the service starts, the script ultimately exits.
If this is the first process in your Docker container, Docker will see it exit, which will cause it to clean up the container. You will need to arrange for redis to run in the foreground in your container, or you will need to arrange to run some sort of process supervisor in your container.
Consider looking at the official redis container to see one way of setting things up. You can see the Dockerfiles in the GitHub repository.
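The failure mode described above can be simulated outside Docker: an init-style script backgrounds its daemon and then exits, and it is that exit which Docker interprets as the container being finished. A toy illustration (no Docker or Redis required; the sleep stands in for a daemonizing redis-server):

```shell
#!/bin/sh
# Toy stand-in for "service redis start" running as PID 1: the daemon is
# backgrounded, the script exits, and a container whose entrypoint this
# was would be torn down at that point.
start_daemon() {
    sleep 2 >/dev/null 2>&1 &    # stand-in for redis-server daemonizing itself
    echo "daemon started with PID $!"
}

start_daemon
echo "init script exiting; a container would stop here"
```

Running redis-server in the foreground (daemonize no) as the container's CMD, or under supervisord as shown above, keeps PID 1 alive instead.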