Systemctl service not executed on boot

I created a short service to set an interface down:
[Unit]
Description=Down
[Service]
Type=simple
ExecStart=/usr/bin/bash -c 'ifconfig eth0 down'
I have run systemctl enable, and systemctl start works as expected to set the interface down.
However, after rebooting and logging in, it has not been executed. Is there anything I missed?
The above is on an RPi4 with the 64-bit version of Raspberry Pi OS.

Could you perhaps be missing the [Install] section?
[Install]
WantedBy=multi-user.target
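For reference, the complete unit with that section added would look like this (same Description, command, and shell as in the question):

[Unit]
Description=Down

[Service]
Type=simple
ExecStart=/usr/bin/bash -c 'ifconfig eth0 down'

[Install]
WantedBy=multi-user.target

After adding [Install], run systemctl enable again so the symlink under multi-user.target.wants is actually created.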


dbus systemctl error : GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name com.example.Interface was not provided by any .service files

I am new to systemd.
I want to register a binary with D-Bus via a systemd service and use it.
When the binary built by cmake is run directly, it works normally.
However, when I run it via systemctl after registering the service file, it does not work properly.
There are server, clientA, and clientB services to register with systemd.
Here are the service files:
[Unit]
Description=Server Service...
After=network.target
[Service]
ExecStart=/opt/test/bin/dbusExample_Server
ExecStop=/opt/test/bin/dbusExample_Server
Restart=on-failure
RestartSec=60
[Install]
WantedBy=multi-user.target
[Unit]
Description=Client A Service...
After=network.target
[Service]
BusName=com.example.Interface
ExecStart=/opt/test/bin/helloString
ExecStop=/opt/test/bin/helloString
Restart=on-failure
RestartSec=60
Environment="DISPLAY=:0"
[Install]
WantedBy=multi-user.target
[Unit]
Description=Client B Service...
After=network.target
[Service]
ExecStart=/opt/test/bin/B/dbusExample_Client_B
ExecStop=/opt/test/bin/B/dbusExample_Client_B
Restart=on-failure
RestartSec=60
Environment="DISPLAY=:0"
[Install]
WantedBy=multi-user.target
I run:
sudo systemctl daemon-reload
sudo systemctl start server.service
sudo systemctl start clientA.service
sudo systemctl start clientB.service
In my program, when the QML UI is run on client A and a message is entered, a message is sent from the server, and a message is also received from client B.
I enter a message and then check the services:
sudo systemctl status server.service
sudo systemctl status clientA.service
sudo systemctl status clientB.service
When I checked the status of clientA.service, I could see the following error.
GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name com.example.Interface was not provided by any .service files
The path to my .service files is as follows:
/lib/systemd/system
From searching I found that they might need to be in a different path, and I moved them to the following paths:
/.local/share/dbus-1/services
/etc/systemd/system
But I still have not been able to get it to work.
What points do I need to fix for it to work properly? Any help would be greatly appreciated!
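For reference, the ".service files" named in that D-Bus error are D-Bus activation files, which use a different format from systemd units. A minimal sketch of a session activation file, assuming the bus name and server binary from the question:

# ~/.local/share/dbus-1/services/com.example.Interface.service
[D-BUS Service]
Name=com.example.Interface
Exec=/opt/test/bin/dbusExample_Server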

Error using ssh-agent with systemd

I have been running a Go binary with no issues from my home/$user on a remote machine; however, when I add the binary to systemd I get: error creating SSH agent: "SSH agent requested but SSH_AUTH_SOCK not-specified". My unit file is as follows:
[Unit]
Description=service
[Service]
Type=simple
Restart=always
RestartSec=5s
ExecStart=/home/$user/go/src/dir/binary
[Install]
WantedBy=multi-user.target
When you run the binary directly, you are running in your personal shell environment, where ssh-agent is likely running and SSH_AUTH_SOCK has been set for you.
When you run it via systemd, you are running it as root, and ssh-agent and the related environment variables have not been set for your user.
Understanding ssh-agent and ssh-add has more details.
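One way to address this is to run the service as your own user and point it at an agent socket explicitly. A minimal sketch, where the user name, UID, and socket path are assumptions you would adapt to your setup:

[Service]
Type=simple
User=youruser
Environment=SSH_AUTH_SOCK=/run/user/1000/ssh-agent.socket
ExecStart=/home/youruser/go/src/dir/binary

This only helps if an agent is actually listening on that socket when the service starts.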

set umask for tomcat8 via tomcat.service

I am trying to set a custom umask for a Tomcat 8 instance. I tried to do it the proper way by using the UMask directive in the systemd tomcat unit, as seen here, without luck.
I'd like to set a 022 umask because the company devs need to access the tomcat/application logs and they are not in the same group as the tomcat user.
The odd thing is that the systemd documentation says:
Controls the file mode creation mask. Takes an access mode in octal notation. See umask(2) for details. Defaults to 0022.
But the logs (application/tomcat) are created with mode 640 (not the expected 644):
-rw-r----- 1 top top 21416 Feb 1 09:58 catalina.out
My service file:
# Systemd unit file for tomcat
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target
[...]
User=top
Group=top
UMask=0022
[Install]
WantedBy=multi-user.target
Any thoughts about this?
Thanks
Try adding UMASK as an environment variable in tomcat's service file:
[Service]
...
Environment='UMASK=0022'
...
The default catalina.sh checks for $UMASK in the environment:
# Set UMASK unless it has been overridden
if [ -z "$UMASK" ]; then
    UMASK="0027"
fi
umask $UMASK
(It seems that catalina.sh overrides the UMask set by systemd by calling umask itself, which is why the systemd setting alone does not take effect for Tomcat.)
I think you can achieve this with systemd by doing the following:
~]# mkdir -p /etc/systemd/system/tomcat.service.d
~]# echo -e "[Service]\nUMask=0022" >/etc/systemd/system/tomcat.service.d/custom-umask.conf
~]# systemctl daemon-reload
~]# systemctl restart tomcat
/etc/systemd/system/tomcat.service.d/custom-umask.conf should override the default values.
Source: https://access.redhat.com/solutions/2220161
P.S: A umask of 0022 would give a file 0644 permissions and a directory 0755
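A quick way to check this from a shell (file mode = 0666 & ~umask, directory mode = 0777 & ~umask; the paths below are throwaway examples):

$ (umask 0022; touch /tmp/umask-demo; mkdir /tmp/umask-demo-dir; stat -c '%a %n' /tmp/umask-demo /tmp/umask-demo-dir)
644 /tmp/umask-demo
755 /tmp/umask-demo-dir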
If you are using jsvc to start Tomcat as a daemon process, you need to set the -umask argument on the jsvc command line.
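A minimal sketch of such an invocation (the JVM and Tomcat paths here are placeholders, not taken from the question):

jsvc -user tomcat \
     -home /usr/lib/jvm/default-java \
     -umask 022 \
     -cp /opt/tomcat/bin/bootstrap.jar:/opt/tomcat/bin/tomcat-juli.jar \
     org.apache.catalina.startup.Bootstrap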

Redis Daemon not creating a PID file

The Redis startup script is supposed to create a pid file at startup, but I've confirmed all the settings I can find, and no pid file is ever created.
I installed redis by:
$ yum install redis
$ chkconfig redis on
$ service redis start
In my config file (/etc/redis.conf) I checked to make sure these were enabled:
daemonize yes
pidfile /var/run/redis/redis.pid
And in the startup script (/etc/init.d/redis) there is:
exec="/usr/sbin/$name"
pidfile="/var/run/redis/redis.pid"
REDIS_CONFIG="/etc/redis.conf"
[ -e /etc/sysconfig/redis ] && . /etc/sysconfig/redis
lockfile=/var/lock/subsys/redis
start() {
    [ -f $REDIS_CONFIG ] || exit 6
    [ -x $exec ] || exit 5
    echo -n $"Starting $name: "
    daemon --user ${REDIS_USER-redis} "$exec $REDIS_CONFIG"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $name: "
    killproc -p $pidfile $name
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}
These are the settings that came by default with the install. Any idea why no pid file is created? I need to use it for Monit.
(The system is RHEL 6.4 btw)
For those experiencing this on Debian buster:
Edit
nano /etc/systemd/system/redis.service
and add this line under the [Service] section:
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
It should look like this:
[Service]
Type=forking
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
ExecStop=/bin/kill -s TERM $MAINPID
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
PIDFile=/run/redis/redis-server.pid
then:
sudo systemctl daemon-reload
sudo systemctl restart redis.service
Check redis.service status:
sudo systemctl status redis.service
The pid file now should appear.
On my Ubuntu 18.04, I was getting the same error.
Error reported by Redis (in /var/log/redis/redis-server.log):
# Creating Server TCP listening socket ::1:6379: bind: Cannot assign requested address
This is because I've disabled IPv6 on this host, and the redis-server package (version 5:4.0.9-1) for Ubuntu ships with:
bind 127.0.0.1 ::1
Editing /etc/redis/redis.conf and removing the ::1 address solves the problem. Example:
bind 127.0.0.1
Edit: As pointed out in the comments (thanks to @nicholas-vasilaki and @tommyalvarez), by default Redis only allows connections from localhost. Commenting out the whole line, using:
# bind 127.0.0.1 ::1
works, but makes Redis listen on the network (not only on localhost).
More details can be found in the Redis configuration file.
The problem was that the user redis did not have permission to create the pid file (or the directory it was in). Fix:
sudo mkdir /var/run/redis
sudo chown redis /var/run/redis
Then I killed and restarted Redis and, sure enough, there was redis.pid.
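Since /var/run is typically a tmpfs that is recreated at boot, you can also let systemd create the directory for you. A minimal drop-in sketch, assuming the unit is called redis.service (the drop-in file name is arbitrary):

# /etc/systemd/system/redis.service.d/runtime-dir.conf
[Service]
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

systemd then creates /run/redis, owned by the unit's User=, each time the service starts.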
In CentOS 7 I needed to add to the file:
$ vi /usr/lib/systemd/system/redis.service
The next line:
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
And then restart the service:
$ sudo systemctl daemon-reload
$ sudo systemctl restart redis.service
Reference:
CentOs 7: Systemd & PID File
I had a similar problem on Debian Buster: systemd complains about the missing PID file, even though the file exists and Redis is running.
On my system the solution using "echo $MAINPID > /run/redis/redis.pid" works by accident, although/because the real PID file is set to /run/redis/redis-server.pid (note the different filenames!), and on my system the content of /run/redis/redis.pid (the one from the echo) was empty.
In a discussion on systemd-devel@lists.freedesktop.org someone writes:
... systemd will add the MAINPID environment variable any time it
knows what the main PID is. It learns this by reading the PID file ...
So by the time ExecStartPost runs, the main PID may or may not be
known.
Having an empty MAINPID environment variable can even be harmful: if you notice the different PID filenames in the suggested solution and correct them, you may end up in a situation where the PID file written by Redis gets overwritten by an empty file. This happened to me; the result was that systemctl start redis.service never finished.
I also noticed that another server with 100% the same OS and configuration, but different hardware, did not have this problem.
My conclusion is that it just hits some sort of race condition; systemd seems to look for the PID file just a little too early. On my system, whatever command I used as ExecStartPost added enough delay to make the error disappear.
Therefore a solution is to use "sleep 1" (sleep 0.1 works too, but 1 second may be on the safe side):
ExecStartPost=/bin/sleep 1
/etc/systemd/system/redis.service now looks like:
[Service]
Type=forking
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
ExecStartPost=/bin/sleep 1
ExecStop=/bin/kill -s TERM $MAINPID
PIDFile=/run/redis/redis-server.pid
...
An alternative solution is to use "supervised systemd":
/etc/redis/redis.conf:
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised systemd
Override the redis-server.service file using:
systemctl edit redis-server.service
and enter the following:
[Service]
Type=notify
Reload the service and the error should be gone:
sudo systemctl restart redis.service
sudo systemctl status redis.service
Here from 2018.
Before starting: I am on Ubuntu 18.04. I wrote this in case anyone comes here searching for the same error.
In my case the error was the same, but the problem was quite different; none of the solutions proposed here worked.
So I checked whether any logs existed and looked for anything useful. Found them at:
cat /var/log/redis/redis-server.log
Searching the logs, I found that the problem was another service listening on the same port.
2963:C 21 Sep 11:07:33.007 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2963:C 21 Sep 11:07:33.008 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=2963, just started
2963:C 21 Sep 11:07:33.008 # Configuration loaded
2974:M 21 Sep 11:07:33.009 # Creating Server TCP listening socket 127.0.0.1:6379: bind: Address already in use
I checked who was listening:
netstat -anp | grep 6379
Found it.
tcp6 0 0 :::6379 :::* LISTEN 3036/docker-proxy
It was a Redis Docker image that had been installed by another tool:
root@yavuz:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a6a94d401700 redis:3.2 "docker-entrypoint.s…" 20 hours ago Up 3 hours 0.0.0.0:6379->6379/tcp incubatorsuperset_redis_1
So I stopped the Docker container:
root@yavuz:~# docker stop incubatorsuperset_redis_1
And redis-server started without problems:
root@yavuz:~# systemctl start redis-server
root@yavuz:~# systemctl status redis-server
● redis-server.service - Advanced key-value store
Active: active (running) since Fri 2018-09-21 11:10:34 +03; 1min 49s ago
Process: 3671 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
For CentOS:
In my case the name of the Redis service is redis.service. To edit it, run:
systemctl edit redis.service
Add this:
[Service]
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
PIDFile=/var/run/redis/redis.pid
In my case it created the file /etc/systemd/system/redis.service.d/override.conf.
Then restart the service:
systemctl daemon-reload
systemctl restart redis
And the pid file is:
cat /var/run/redis/redis.pid
=> 19755
sudo nano /etc/redis/redis.conf
Inside the file, find the supervised directive. This directive allows you to declare an init system to manage Redis as a service, providing you with more control over its operation. The supervised directive is set to no by default. Since you are running Ubuntu, which uses the systemd init system, change this to systemd.
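After the change, the supervised directive in /etc/redis/redis.conf reads:

supervised systemd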
By default, Redis does not run as a daemon, and that is why it does not create a pid file. If you look at /etc/redis/redis.conf, it says so explicitly under General:
# By default Redis does not run as a daemon. Use 'yes' if you need it...
daemonize no
So all you need to do is change it to daemonize yes.
For people struggling to get it to work on Ubuntu 18.04: you need to edit /etc/redis/redis.conf and update the pidfile declaration to the following:
pidfile "/var/run/redis/redis-server.pid"
On Ubuntu 18, /var/run/redis had the wrong permissions:
drwxr-sr-x 2 redis redis 60 Apr 27 12:22 redis
Changed it to 755 (drwxr-xr-x) and the pid file now appears.
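For example (assuming the same path):

sudo chmod 755 /var/run/redis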

Error with rabbit-mq server

I am trying to setup OpenStack on Ubuntu 12.04 using devstack. Now, the error I am getting is:
Setting up rabbitmq-server (2.7.1-0ubuntu4) ...
Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
rabbitmq-server.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
dpkg: error processing rabbitmq-server (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
++ err_trap
++ local r=100
++ set +o xtrace
stack.sh failed
Any idea why am I getting this error?
I had this issue twice, when either the hostname or the IP address in the hosts file didn't match.
Therefore, check that you provide the correct IP address and hostname in the /etc/hosts file.
Run sudo cat /etc/hostname to see your hostname
Output:
yoursite
Run sudo nano /etc/hosts
File contains:
127.0.0.1 yoursite
As you can see from cat /etc/hostname, the hostname is the same as in /etc/hosts.
Run sudo rabbitmq-server start to start the rabbitmq-server
Try deleting the folder /var/lib/rabbitmq and re-running ./stack.sh
If that doesn't work either, run the following after stack.sh fails:
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /var/log/rabbitmq
service rabbitmq-server restart
and check the status of rabbitmq using "rabbitmqctl status"
A similar thing happened to me. RabbitMQ depends on being able to resolve a hostname; run this:
echo "127.0.0.1 $(hostname -s)" | sudo tee -a /etc/hosts
This way works for me.
First go to
sudo vim /etc/hosts
and set
127.0.0.1 <hostname>
then enable the management plugin and restart the server:
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
In a clean environment this will not happen. You must have run devstack several times, and one of the runs failed but was not cleaned up.
Run ps -ef | grep rabbitmq and kill all rabbitmq processes; then it should be fine to run ./stack.sh.
It is highly recommended to run ./unstack.sh && ./clean.sh before ./stack.sh.
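For example, a cleanup sequence along these lines (assuming you run it from the devstack directory; pkill -f is just one way to kill the processes found above):

./unstack.sh && ./clean.sh
sudo pkill -f rabbitmq
./stack.sh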
Just to be sure, take a look at your local network interfaces:
ip addr
If there is no lo interface, you should enable it:
ifconfig lo up
Then restart the server and see if it works now:
systemctl start rabbitmq-server
I had the same problem, although my /etc/hosts and DNS were OK. I suspect that the SysV init script was started too early, when the network was not ready yet. I rewrote the startup script as a systemd unit on CentOS 7.8 and it seems to work well now.
[Unit]
Description=RabbitMQ
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
RuntimeDirectory=rabbitmq
PrivateTmp=true
Restart=on-failure
RestartSec=10
WorkingDirectory=/opt/data/rabbitmq/
User=rabbitmq
Group=rabbitmq
ExecStart=/opt/app/rabbitmq/default/sbin/rabbitmq-server
ExecStop=/opt/app/rabbitmq/default/sbin/rabbitmqctl stop
ExecStop=/bin/sh -c "while ps -p $MAINPID >/dev/null 2>&1; do sleep 1; done"
StandardOutput=journal
StandardError=inherit
[Install]
WantedBy=multi-user.target