systemctl short status output format for specific service - systemctl

Is it possible to get a short status line for a specific systemd service, like this:
$ systemctl -a | grep sshd.service
sshd.service loaded active running OpenSSH server daemon
$
but without grep, using only systemctl? Something like systemctl SHOW_STATUS_LIKE_A_OPTION sshd.service.
systemctl status is too long and multiline...

You can try systemctl is-active sshd.service, systemctl is-enabled sshd.service and systemctl is-failed sshd.service.
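For example (output will vary per system; sshd.service here is just the unit from the question):
$ systemctl is-active sshd.service
active
$ systemctl is-enabled sshd.service
enabled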

Based on Samuel's answer, I offer a simple shell function for .bashrc, including cheeky use of grep for colorization:
function status () {
    for name in "$@"; do
        echo ${name} $(systemctl is-active ${name}) $(systemctl is-enabled ${name})
    done | column -t | grep --color=always '\(disabled\|inactive\|$\)'
}
Invocation:
> status ssh ntp snapd
ssh active enabled
ntp active enabled
snapd inactive disabled
Note that is-active will print inactive for non-existent services, while is-enabled will print a warning to stderr.
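If you only need the result in a script, is-active also reports it via its exit status, so you can suppress the text output entirely. A minimal sketch, using sshd.service as an example:
if systemctl is-active --quiet sshd.service; then
    echo "sshd is running"
else
    echo "sshd is not running"
fi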


CentOS 8 Cgroup (or v2) installation without CWP

We are trying to set up cgroup v2 on our CentOS 8 virtual servers, but after all the configuration we can limit memory, yet we can't limit CPU. We can't see cpu.max under /sys/fs/cgroup/user.slice.
Does cgroup v2 have full support on CentOS 8? We have no idea. (We must use CentOS 8; RHEL 8 is not an option.)
Do you have any idea or solution for how cgroup v2 works on CentOS 8 without CWP?
Thank you all.
References:
https://www.redhat.com/en/authors/marc-richter
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/pdf/managing_monitoring_and_updating_the_kernel/Red_Hat_Enterprise_Linux-8-Managing_monitoring_and_updating_the_kernel-en-US.pdf
GRUB
grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="cgroup_no_v1=all"
reboot
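After the reboot it is worth checking that the unified hierarchy is really what got mounted before running the script below; this is only a sanity check, not part of the original instructions:
# prints "cgroup2fs" when cgroup v2 is in use
stat -fc %T /sys/fs/cgroup
# or list cgroup2 mounts explicitly
mount -t cgroup2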
SCRIPT : /usr/bin/configure_cgroups.sh
#!/bin/bash
touch /var/tmp/configure_cgroups.sh.ok

# Mount the cgroup v2 hierarchy and enable the controllers we need
mkdir -p /mnt/cgroupv2
if [ ! -f /mnt/cgroupv2/cgroup.subtree_control ]
then
    mount -t cgroup2 none /mnt/cgroupv2
    echo "+cpu" > /mnt/cgroupv2/cgroup.subtree_control
    echo "+cpuset" > /mnt/cgroupv2/cgroup.subtree_control
    echo "+memory" > /mnt/cgroupv2/cgroup.subtree_control
fi

##cpuNo=0
numberOfCpus=$(nproc --all)
numberOfUsers=$(ls /home | wc -l)
# cpu.max quota per user: an equal share of all CPUs against a 100000us period
cpuShare=$((numberOfCpus*100000/numberOfUsers))
# MemTotal is reported in kB; convert each user's share to bytes for memory.max
totalMemory=$(grep MemTotal /proc/meminfo | awk '{print $2}')
memoryShare=$((totalMemory/numberOfUsers*1024))

# Create one cgroup per user under /home and apply the limits
for username in $(ls /home)
do
    if [ ! -d /mnt/cgroupv2/user.$username ]
    then
        mkdir -p /mnt/cgroupv2/user.$username
        ##echo "$cpuNo" > /mnt/cgroupv2/user.$username/cpuset.cpus
        ##echo "$cpuNo" > /mnt/cgroupv2/user.$username/cpuset.mems
        echo "$cpuShare 100000" > /mnt/cgroupv2/user.$username/cpu.max
        echo "$memoryShare" > /mnt/cgroupv2/user.$username/memory.max
        ##cpuNo=$((cpuNo+1))
    fi
done

# Move any processes the users already own into their cgroup
for username in $(ls /home)
do
    for pid in $(ps -ef | grep -i "^$username" | awk '{print $2}')
    do
        if ! grep -qx "$pid" /mnt/cgroupv2/user.$username/cgroup.procs
        then
            echo $pid > /mnt/cgroupv2/user.$username/cgroup.procs
        fi
    done
done
SERVICE : /usr/lib/systemd/system/configure-cgroupsv2.service
[Unit]
Description=Configure CGroups V2
[Service]
Type=oneshot
ExecStart=/usr/bin/configure_cgroups.sh
TIMER : /usr/lib/systemd/system/configure-cgroupsv2.timer
[Unit]
Description=Configure CGroups V2 Timer
[Timer]
OnUnitActiveSec=10s
OnBootSec=10s
[Install]
WantedBy=timers.target
systemctl daemon-reload
systemctl enable configure-cgroupsv2.timer
systemctl start configure-cgroupsv2.timer
systemctl list-timers --all| grep configure
systemctl start configure-cgroupsv2.service
systemctl status configure-cgroupsv2.service
journalctl -xe
Testing:
CPU: run "while true; do echo > /dev/null; done", then watch top
RAM: run "cat /dev/zero | head -c 20000M | tail", then watch top
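You can also read the limit files back to confirm they were applied; "alice" below is just a placeholder for one of the usernames under /home:
cat /mnt/cgroupv2/user.alice/cpu.max
cat /mnt/cgroupv2/user.alice/memory.max
cat /mnt/cgroupv2/user.alice/cgroup.procs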
Also, you must do the following to be able to run your Docker images:
1. yum install crun -y
2. cp /usr/share/containers/containers.conf /etc/containers/
3. edit /etc/containers/containers.conf
cgroups="disabled"
runtime="crun"

Use knife solo with a non-root user with sudo access, without a password

When I use chef knife solo with a non-root user with sudo access, it always asks me for that user's password. However, I have turned off SSH password authentication on that server.
$ bundle exec knife solo cook supersecretuser@productionserver -VV
Starting 'Run'
Running Chef on productionserver...
Checking Chef version...
DEBUG: Initial command sudo chef-solo --version 2>/dev/null | awk '$1 == "Chef:" {print $2}'
DEBUG: Initial command sudo -V
DEBUG: Running processed command sudo -V
DEBUG: sudo -V stdout: Sudo version 1.8.9p5
DEBUG: sudo -V stdout: Sudoers policy plugin version 1.8.9p5
Sudoers file grammar version 43
DEBUG: sudo -V stdout: Sudoers I/O plugin version 1.8.9p5
DEBUG: Running processed command sudo -E -p 'knife sudo password: ' chef-solo --version 2>/dev/null | awk '$1 == "Chef:" {print $2}'
Enter the password for supersecretuser@productionserver:
I've added my ssh key to the server and am able to ssh into that server with ssh supersecretuser@productionserver without needing a password.
I have used chef knife solo on my staging server with the root user and it works fine.
I've tried explicitly passing my ssh key using the -i option, chef knife solo cook supersecretuser@productionserver -i ~/.ssh/id_rsa.pub, and knife solo didn't seem to use it. Any ideas on what to try next?
Note: I am showing cook because that's where I'm at. I did a prepare and it worked because I hadn't turned off password access yet - that happened with my first cook - so I was able to just enter the password.
You're mixing up ssh and sudo. Check your sudoers file for the supersecretuser entry; it should look something like:
#Vagrant entry to allow sudo from vagrant
vagrant ALL=(ALL) NOPASSWD: ALL
This allows the vagrant user to sudo any command without a password.
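If you want the same behaviour for your user, a sudoers entry along these lines should do it (a sketch; create it with visudo -f /etc/sudoers.d/supersecretuser so the syntax gets validated):
# /etc/sudoers.d/supersecretuser
supersecretuser ALL=(ALL) NOPASSWD: ALL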

Redis Daemon not creating a PID file

The Redis startup script is supposed to create a pid file at startup, but I've confirmed all the settings I can find, and no pid file is ever created.
I installed redis by:
$ yum install redis
$ chkconfig redis on
$ service redis start
In my config file (/etc/redis.conf) I checked to make sure these were enabled:
daemonize yes
pidfile /var/run/redis/redis.pid
And in the startup script (/etc/init.d/redis) there is:
exec="/usr/sbin/$name"
pidfile="/var/run/redis/redis.pid"
REDIS_CONFIG="/etc/redis.conf"
[ -e /etc/sysconfig/redis ] && . /etc/sysconfig/redis
lockfile=/var/lock/subsys/redis
start() {
[ -f $REDIS_CONFIG ] || exit 6
[ -x $exec ] || exit 5
echo -n $"Starting $name: "
daemon --user ${REDIS_USER-redis} "$exec $REDIS_CONFIG"
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $name: "
killproc -p $pidfile $name
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
These are the settings that came by default with the install. Any idea why no pid file is created? I need to use it for Monit.
(The system is RHEL 6.4 btw)
For those experiencing this on Debian Buster:
Edit the unit file
nano /etc/systemd/system/redis.service
and add this line under the [Service] section:
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
It should look like this:
[Service]
Type=forking
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
ExecStop=/bin/kill -s TERM $MAINPID
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
PIDFile=/run/redis/redis-server.pid
then:
sudo systemctl daemon-reload
sudo systemctl restart redis.service
Check redis.service status:
sudo systemctl status redis.service
The pid file should now appear.
On my Ubuntu 18.04, I was getting the same error.
Error reported by redis (on /var/log/redis/redis-server.log):
# Creating Server TCP listening socket ::1:6379: bind: Cannot assign requested address
This is because I've disabled IPv6 on this host, and the redis-server package (version 5:4.0.9-1) for Ubuntu ships with:
bind 127.0.0.1 ::1
Editing /etc/redis/redis.conf and removing the ::1 address solves the problem. Example:
bind 127.0.0.1
Edit: As pointed out in the comments (thanks to @nicholas-vasilaki and @tommyalvarez), by default Redis only allows connections from localhost. Commenting out the whole line, using:
# bind 127.0.0.1 ::1
works, but makes Redis listen on the network (not only on localhost).
More details can be found in the Redis configuration file.
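If you are unsure what Redis ended up listening on after changing bind, you can check the bound addresses; this assumes ss is installed (netstat -tlnp works similarly):
sudo ss -ltnp | grep 6379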
The problem was that the redis user did not have permission to create the pid file (or the directory it was in). Fix:
sudo mkdir /var/run/redis
sudo chown redis /var/run/redis
Then I killed and restarted redis and sure enough, there was redis.pid
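Note that /var/run is usually a tmpfs, so a directory created by hand may disappear at the next reboot. If that happens, one option is a tmpfiles.d entry that recreates it at boot (a sketch, assuming the redis user and group exist):
# /etc/tmpfiles.d/redis.conf
d /var/run/redis 0755 redis redis -
Apply it immediately with systemd-tmpfiles --create /etc/tmpfiles.d/redis.conf.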
On CentOS 7 I needed to add a line to the file:
$ vi /usr/lib/systemd/system/redis.service
The line:
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
And then restart the service:
$ sudo systemctl daemon-reload
$ sudo systemctl restart redis.service
Reference:
CentOS 7: Systemd & PID File
I had a similar problem on Debian Buster: systemd complains about the missing PID file, even though the file exists and Redis is running.
On my system the solution using "echo $MAINPID > /run/redis/redis.pid" works by accident, although/because the real PID file is set to /run/redis/redis-server.pid (spot the different filenames!), and the content of /run/redis/redis.pid (the one from the echo) was empty.
In a discussion on systemd-devel@lists.freedesktop.org someone writes:
... systemd will add the MAINPID environment variable any time it
knows what the main PID is. It learns this by reading the PID file ...
So by the time ExecStartPost runs, the main PID may or may not be
known.
Having an empty MAINPID environment variable can even be harmful: if you notice the different PID filenames in the suggested solution and correct it, you may end up in a situation where the PID file written by Redis gets overwritten by an empty file. This happened to me; the result was that systemctl start redis.service never finished.
I also noticed that another server with 100% the same OS and configuration, but different hardware, did not have this problem.
My conclusion is that it hits some sort of race condition: systemd seems to look for the PID file just a little too early. On my system, whatever command I used as ExecStartPost, it added enough delay to make the error disappear.
Therefore a solution is to use "sleep 1" (sleep 0.1 works too, but 1 second may be on the safe side):
ExecStartPost=/bin/sleep 1
/etc/systemd/system/redis.service now looks like:
[Service]
Type=forking
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
ExecStartPost=/bin/sleep 1
ExecStop=/bin/kill -s TERM $MAINPID
PIDFile=/run/redis/redis-server.pid
...
An alternative solution is to use "supervised systemd":
/etc/redis/redis.conf:
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised systemd
Override the redis-server.service file using:
systemctl edit redis-server.service
and enter the following:
[Service]
Type=notify
Reload the service and the error should be gone:
sudo systemctl restart redis.service
sudo systemctl status redis.service
Here from 2018.
Before I start: I am on Ubuntu 18.04. I wrote this in case anyone comes here searching for the same error.
In my case the error is the same, but the problem is quite different, and none of the solutions proposed here worked.
So I checked whether any logs existed and looked for something useful. Found them at:
cat /var/log/redis/redis-server.log
Searching the logs showed that the problem was another service listening on the same port.
2963:C 21 Sep 11:07:33.007 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2963:C 21 Sep 11:07:33.008 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=2963, just started
2963:C 21 Sep 11:07:33.008 # Configuration loaded
2974:M 21 Sep 11:07:33.009 # Creating Server TCP listening socket 127.0.0.1:6379: bind: Address already in use
I checked who was listening:
netstat -anp | grep 6379
Found it.
tcp6 0 0 :::6379 :::* LISTEN 3036/docker-proxy
It was a Redis Docker image installed by another tool:
root@yavuz:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a6a94d401700 redis:3.2 "docker-entrypoint.s…" 20 hours ago Up 3 hours 0.0.0.0:6379->6379/tcp incubatorsuperset_redis_1
So I stopped the Docker container:
root@yavuz:~# docker stop incubatorsuperset_redis_1
And redis-server started without a problem:
root@yavuz:~# systemctl start redis-server
root@yavuz:~# systemctl status redis-server
● redis-server.service - Advanced key-value store
Active: active (running) since Fri 2018-09-21 11:10:34 +03; 1min 49s ago
Process: 3671 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
For CentOS:
In my case the name of the Redis service is redis.service. To fix it, edit the unit with:
systemctl edit redis.service
Add this:
[Service]
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
PIDFile=/var/run/redis/redis.pid
In my case this created the file /etc/systemd/system/redis.service.d/override.conf.
Then restart the service:
systemctl daemon-reload
systemctl restart redis
And the pid file is:
cat /var/run/redis/redis.pid
=> 19755
sudo nano /etc/redis/redis.conf
Inside the file, find the supervised directive. This directive allows you to declare an init system to manage Redis as a service, providing you with more control over its operation. The supervised directive is set to no by default. Since you are running Ubuntu, which uses the systemd init system, change this to systemd.
By default, Redis does not run as a daemon, and that is why it does not create a pid file. If you look at /etc/redis/redis.conf, it says so explicitly under the General section.
#By default Redis does not run as a daemon. Use 'yes' if you need it...
daemonize no
So all you need to do is change it to daemonize yes.
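If you prefer to script the edit rather than do it by hand, something like this should work (it backs up the original file first; the config path is the one mentioned above, and the restart command depends on whether your system uses SysV init or systemd):
sudo sed -i.bak 's/^daemonize no/daemonize yes/' /etc/redis/redis.conf
sudo service redis restart    # or: sudo systemctl restart redis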
For people struggling to get it to work on Ubuntu 18.04: you need to edit /etc/redis/redis.conf and update the pidfile declaration to the following:
pidfile "/var/run/redis/redis-server.pid"
On Ubuntu 18, /var/run/redis had the wrong permissions:
drwxr-sr-x 2 redis redis 60 Apr 27 12:22 redis
Changed it to 755 (drwxr-xr-x) and the pid file now appears.

Error with rabbit-mq server

I am trying to setup OpenStack on Ubuntu 12.04 using devstack. Now, the error I am getting is:
Setting up rabbitmq-server (2.7.1-0ubuntu4) ...
Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
rabbitmq-server.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
dpkg: error processing rabbitmq-server (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
++ err_trap
++ local r=100
++ set +o xtrace
stack.sh failed
Any idea why I am getting this error?
I had this issue twice, when either the hostname or the IP address in the hosts file didn't match.
Therefore, check that you have the correct IP address and hostname in the /etc/hosts file.
Run sudo cat /etc/hostname to see your hostname
Output:
yoursite
Run sudo nano /etc/hosts
File contains:
127.0.0.1 yoursite
As you can see, the hostname from cat /etc/hostname is the same as the one in /etc/hosts.
Run sudo rabbitmq-server start to start the rabbitmq-server
Try deleting the folder /var/lib/rabbitmq and re-running ./stack.sh
If that doesn't work either, run the following after stack.sh fails:
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /var/log/rabbitmq
service rabbitmq-server restart
and check the status of rabbitmq using "rabbitmqctl status"
A similar thing happened to me. RabbitMQ depends on being able to resolve the hostname; run this:
echo "127.0.0.1 $(hostname -s)" | sudo tee -a /etc/hosts
This worked for me.
First, edit your hosts file:
sudo vim /etc/hosts
and set
127.0.0.1 <hostname>
then open the firewall if needed, enable the management plugin, and restart the server:
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
In a clean environment this will not happen. You have probably run devstack several times, one of the runs failed, and it was never cleaned up.
Run ps -ef | grep rabbitmq and kill all rabbitmq processes; then it should be fine to run ./stack.sh again.
It is highly recommended to run ./unstack.sh && ./clean.sh before ./stack.sh.
Just to be sure, take a look at your local network interfaces:
ip add
If the lo interface is missing or down, enable it:
ifconfig lo up
Then start the server again and see if it works now:
systemctl start rabbitmq-server
I had the same problem even though my /etc/hosts and DNS were OK. I suspect that the SysV init script was started too early, when the network was not ready yet. I rewrote the startup script as a systemd unit on CentOS 7.8 and it seems to work well now.
[Unit]
Description=RabbitMQ
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
RuntimeDirectory=rabbitmq
PrivateTmp=true
Restart=on-failure
RestartSec=10
WorkingDirectory=/opt/data/rabbitmq/
User=rabbitmq
Group=rabbitmq
ExecStart=/opt/app/rabbitmq/default/sbin/rabbitmq-server
ExecStop=/opt/app/rabbitmq/default/sbin/rabbitmqctl stop
ExecStop=/bin/sh -c "while ps -p $MAINPID >/dev/null 2>&1; do sleep 1; done"
StandardOutput=journal
StandardError=inherit
[Install]
WantedBy=multi-user.target

docker rabbitmq hostname issue

I am building an image using a Dockerfile, and I would like to add users to RabbitMQ right after installation. The problem is that during the build the hostname of the Docker container is different from the hostname used when I run the resulting image. RabbitMQ loses that user; because of the changed hostname it uses another database.
I cannot change the /etc/hosts and /etc/hostname files from inside a container, and it looks like RabbitMQ is not picking up my changes to the RABBITMQ_NODENAME and HOSTNAME variables.
The only thing I found to work is running this before starting the RabbitMQ broker:
echo "NODENAME=rabbit@localhost" >> /etc/rabbitmq/rabbitmq.conf.d/ewos.conf
But then I will have to run the Docker image with a changed hostname all the time:
docker run -h="localhost" image
Any ideas on what can be done? Maybe the solution is to add users to RabbitMQ not on build but on image run?
Here is an example of how to configure it properly from the Dockerfile:
ENV HOSTNAME localhost
RUN /etc/init.d/rabbitmq-server start ; rabbitmqctl add_vhost /test; /etc/init.d/rabbitmq-server stop
This way your configuration is remembered.
Yes, I would suggest adding the users when the container runs for the first time.
Instead of starting RabbitMQ directly, you can run a wrapper script that takes care of all the setup and then starts RabbitMQ. If the last step of the wrapper script is starting a process, remember that you can use exec so that the new process replaces the script itself.
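For illustration, a minimal sketch of such a wrapper; the marker file, user, password and vhost names are only placeholders, so adapt them to your image:
#!/bin/bash
# entrypoint.sh - one-time RabbitMQ setup, then hand control to the server
set -e
if [ ! -f /var/lib/rabbitmq/.setup_done ]; then
    /etc/init.d/rabbitmq-server start
    rabbitmqctl add_user myuser mypassword        # placeholder credentials
    rabbitmqctl set_user_tags myuser administrator
    rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"
    /etc/init.d/rabbitmq-server stop
    touch /var/lib/rabbitmq/.setup_done
fi
# exec replaces the shell, so rabbitmq-server receives signals directly
exec rabbitmq-server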
This is how I did it.
Dockerfile
FROM debian:jessie
MAINTAINER Francesco Casula <fra.casula@gmail.com>
VOLUME ["/var/www"]
WORKDIR /var/www
ENV HOSTNAME my-docker
ENV RABBITMQ_NODENAME rabbit@my-docker
COPY scripts /root/scripts
RUN /bin/bash /root/scripts/os-setup.bash && \
/bin/bash /root/scripts/install-rabbitmq.bash
CMD /etc/init.d/rabbitmq-server start && \
/bin/bash
os-setup.bash
#!/bin/bash
echo "127.0.0.1 localhost" > /etc/hosts
echo "127.0.1.1 my-docker" >> /etc/hosts
echo "my-docker" > /etc/hostname
install-rabbitmq.bash
#!/bin/bash
echo "NODENAME=rabbit#my-docker" > /etc/rabbitmq/rabbitmq-env.conf
echo 'deb http://www.rabbitmq.com/debian/ testing main' | tee /etc/apt/sources.list.d/rabbitmq.list
wget -O- https://www.rabbitmq.com/rabbitmq-release-signing-key.asc | apt-key add -
apt-get update
cd ~
wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.5/rabbitmq-server_3.6.5-1_all.deb
dpkg -i rabbitmq-server_3.6.5-1_all.deb
apt-get install -f -y
/etc/init.d/rabbitmq-server start
sleep 3
rabbitmq-plugins enable amqp_client mochiweb rabbitmq_management rabbitmq_management_agent \
rabbitmq_management_visualiser rabbitmq_web_dispatch webmachine
rabbitmqctl delete_user guest
rabbitmqctl add_user bunny password
rabbitmqctl set_user_tags bunny administrator
rabbitmqctl delete_vhost /
rabbitmqctl add_vhost symfony_prod
rabbitmqctl set_permissions -p symfony_prod bunny ".*" ".*" ".*"
rabbitmqctl add_vhost symfony_dev
rabbitmqctl set_permissions -p symfony_dev bunny ".*" ".*" ".*"
rabbitmqctl add_vhost symfony_test
rabbitmqctl set_permissions -p symfony_test bunny ".*" ".*" ".*"
/etc/init.d/rabbitmq-server restart
IS_RABBIT_INSTALLED=`rabbitmqctl status | grep RabbitMQ | grep "3\.6\.5" | wc -l`
if [ "$IS_RABBIT_INSTALLED" = "0" ]; then
exit 1
fi
IS_RABBIT_CONFIGURED=`rabbitmqctl list_users | grep bunny | grep "administrator" | wc -l`
if [ "$IS_RABBIT_CONFIGURED" = "0" ]; then
exit 1
fi
Don't forget to run the container by specifying the right host with the -h flag:
docker run -h my-docker -it --name=my-docker -v $(pwd)/htdocs:/var/www my-docker
The only thing that helped me was to change the default value of the MNESIA_BASE property in rabbitmq-env.conf to MNESIA_BASE=/data, and to add the command RUN mkdir /data to the Dockerfile before starting the server and adding the users.
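For reference, the relevant Dockerfile lines might look roughly like this; it is only a sketch of the approach described above, with "bunny"/"password" as placeholder credentials:
# move RabbitMQ's Mnesia directory to /data, as described above
RUN echo "MNESIA_BASE=/data" >> /etc/rabbitmq/rabbitmq-env.conf && mkdir /data
# add users during the build, against the relocated database
RUN /etc/init.d/rabbitmq-server start && \
    rabbitmqctl add_user bunny password && \
    rabbitmqctl set_user_tags bunny administrator && \
    /etc/init.d/rabbitmq-server stop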