RADIUS server failed to start on CentOS 7 - Apache

At the beginning I successfully configured a RADIUS server with MariaDB and httpd. But then I changed the hostname of the server and rebooted. Now, even though mariadb and httpd are running, radiusd fails to start. Here is the output from journalctl -xe. Please help me.
Jan 10 12:34:08 cpe.twcny.res.rr.com systemd[1]: Unit radiusd.service entered failed state.
Jan 10 12:34:08 cpe.twcny.res.rr.com systemd[1]: radiusd.service failed.
Jan 10 12:34:08 cpe.twcny.res.rr.com polkitd[963]: Unregistered Authentication Agent for unix-process:2183:15540 (system bus name :1.43, object path /org/
Jan 10 12:40:01 cpe.twcny.res.rr.com systemd[1]: Created slice User Slice of root.
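A hedged first diagnostic step (assuming this is FreeRADIUS's radiusd on CentOS 7, which the post only implies by the service name) is to read the unit's own log and then run the daemon in debug mode in the foreground, which prints the configuration parse and module initialisation and usually shows exactly why startup fails:

journalctl -u radiusd --no-pager -n 50
radiusd -X

If the failure is related to the hostname change, an unresolvable hostname or a failed database connection would typically show up directly in that debug output.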

Related

Unable to ssh VM after hardware configuration change

I followed the recommendation to reduce the size of my VM (number of CPUs from 4 to 2 and memory from 16 GB to 8 GB). After updating the configuration and restarting the VM, I was not able to access the VM via ssh.
The VM has an external IP.
The troubleshooting diagnostic using gcloud does not show any error or issue in the log. Everything is fine regarding the firewall configuration.
I tried to create a new VM under my project (same project as the original VM). I cannot access it with ssh. If I create a new project and a new VM instance under this new project, then I can ssh into it. --> The problem seems to be related to the project itself.
I tried to access it via the serial port and I am getting these errors:
Mar 8 20:31:11 myvm systemd[1]: Started Google OSConfig Agent.
Mar 8 20:32:11 myvm OSConfigAgent[1173]: 2022-03-08T20:32:11.5643Z OSConfigAgent Critical main.go:100: Error parsing metadata, agent cannot start: network error when requesting metadata, make sure your instance has an active network and can reach the metadata server: Get http://169.254.169.254/computeMetadata/v1/?recursive=true&alt=json&wait_for_change=true&last_etag=0&timeout_sec=60: dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 8 20:32:11 myvm systemd[1]: google-osconfig-agent.service: Main process exited, code=exited, status=1/FAILURE
Mar 8 20:32:11 myvm systemd[1]: google-osconfig-agent.service: Failed with result 'exit-code'.
Mar 8 20:32:12 myvm systemd[1]: google-osconfig-agent.service: Service hold-off time over, scheduling restart.
Mar 8 20:32:12 myvm systemd[1]: google-osconfig-agent.service: Scheduled restart job, restart counter is at 4.
I am blocked... I am asking for your support. Any idea or suggestion?
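A hedged sketch of what to check from the serial console (standard Linux commands, not something from the original post): the "network is unreachable" error from the OSConfig agent means the guest has no route to the metadata server at 169.254.169.254, so the first thing to verify is whether the primary NIC got an address and a default route:

ip addr show        # does the NIC (often ens4 on GCE images) have an IPv4 address?
ip route show       # is there a default route?
sudo dhclient -v    # try to re-acquire a DHCP lease

If DHCP succeeds, the metadata server and ssh should become reachable again; if it does not, the problem more likely lies in the project's network configuration than in the guest.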

CloudStack KVM installation failed

I'm installing CloudStack on Ubuntu 20.04 by following this document.
I installed qemu-kvm and cloudstack-agent successfully, but I'm not able to start libvirtd.service. When I check the status I get the following errors:
● libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2021-03-16 18:00:09 IST; 1min 28s ago
TriggeredBy: ● libvirtd-admin.socket
● libvirtd.socket
● libvirtd-ro.socket
Docs: man:libvirtd(8)
https://libvirt.org
Process: 232313 ExecStart=/usr/sbin/libvirtd $libvirtd_opts (code=exited, status=6)
Main PID: 232313 (code=exited, status=6)
Mar 16 18:00:09 host systemd[1]: libvirtd.service: Scheduled restart job, restart counter is at 5.
Mar 16 18:00:09 host systemd[1]: Stopped Virtualization daemon.
Mar 16 18:00:09 host systemd[1]: libvirtd.service: Start request repeated too quickly.
Mar 16 18:00:09 host systemd[1]: libvirtd.service: Failed with result 'exit-code'.
Mar 16 18:00:09 host systemd[1]: Failed to start Virtualization daemon.
Looking at journalctl -xe, it also shows cloudstack-usage.service: Failed with result 'exit-code'.
Can anyone suggest what the issue might be?
Are you trying this on a virtualised VM, a bare-metal host, or a Raspberry Pi? The error means some other service that libvirtd may depend on hasn't started. See if you can run "systemctl daemon-reload", try to start libvirtd manually with "systemctl start libvirtd", and then try the rest. The cloudstack-usage service can be started once the MySQL server is running. If you have further questions I encourage you to join the CloudStack users mailing list and ask there - http://cloudstack.apache.org/mailing-lists.html
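As a compact sketch of that sequence (assuming mysql is the database unit name on Ubuntu 20.04; adjust if you run MariaDB):

sudo systemctl daemon-reload
sudo systemctl start libvirtd
sudo systemctl status libvirtd --no-pager
# cloudstack-usage needs the database, so start it only after MySQL is up
sudo systemctl start mysql
sudo systemctl start cloudstack-usage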
I got that same error message when following the official install guide, at the point of starting the MySQL server. The problem for me was that the [mysqld] section header was missing in the my.cnf file before the config snippet. The documentation is misleading in that case (the section header looks as if it were only relevant when editing the alternative MySQL config file mentioned later there).
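For illustration, the snippet only takes effect when it sits under a [mysqld] section header, roughly like this (the option values are the ones the CloudStack guide suggests; treat them as an example rather than the authoritative list):

[mysqld]
server-id = 1
innodb_rollback_on_timeout = 1
innodb_lock_wait_timeout = 600
max_connections = 350
log-bin = mysql-bin
binlog-format = 'ROW'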

Cannot start Redis server on CentOS 7 with systemctl

I'm having trouble starting Redis on CentOS 7 with systemctl. What should I do to troubleshoot?
I can start Redis with the normal commands, like:
# /etc/init.d/redis start
or
/usr/local/bin/redis-server /etc/redis/config.conf
And here is my redis.service file which I put into /lib/systemd/system:
[Unit]
Description=Redis persistent key-value database
After=network.target
[Service]
Type=forking
PIDFILE=/var/run/redis_6379.pid
ExecStart=/etc/init.d/redis start
ExecStop=/etc/init.d/redis stop
PrivateTmp=true
[Install]
WantedBy=multi-user.target
But when I use the command systemctl start redis to start the Redis server, nothing happens.
When I use systemctl status redis to read the systemd log, it shows me these messages:
● redis.service - Redis persistent key-value database
Loaded: loaded (/usr/lib/systemd/system/redis.service; disabled; vendor preset: disabled)
Active: active (exited) since Fri 2018-08-31 15:45:37 CST; 2 days ago
Aug 31 15:45:37 redisserver001 systemd[1]: Starting LSB: start and stop redis_6379...
Aug 31 15:45:37 redisserver001 systemd[1]: Started LSB: start and stop redis_6379.
Aug 31 15:45:37 redisserver001 redis[24755]: /var/run/redis_6379.pid exists, process is already running or crashed
Sep 03 10:31:21 redisserver001 systemd[1]: [/usr/lib/systemd/system/redis.service:6] Unknown lvalue 'PIDFILE' in section 'Service'
Sep 03 10:33:13 redisserver001 systemd[1]: [/usr/lib/systemd/system/redis.service:6] Unknown lvalue 'PIDFILE' in section 'Service'
Sep 03 10:45:32 redisserver001 systemd[1]: [/usr/lib/systemd/system/redis.service:7] Unknown lvalue 'PIDFILE' in section 'Service'
Sep 03 11:08:28 redisserver001 systemd[1]: [/usr/lib/systemd/system/redis.service:7] Unknown lvalue 'PIDFILE' in section 'Service'
The following items are the key configuration settings that I think could affect how Redis runs, but I don't know where I've made a mistake. Please help. Thanks a lot.
pidfile /var/run/redis_6379.pid
daemonize yes
supervised systemd
If a service file specifies a PID file, then it is the responsibility of the application to write the PID of its main process into that file before service initialization is complete. You need to make sure your application is doing that. Systemd reads this value and will prevent another forked process from being created if the user runs "systemctl start" while the PID file already exists. From the output you posted, it seems systemd believes the Redis process is already running (because of the presence of the PID file) and doesn't create a new one. You can also set the PID in the "ExecStartPost" clause of the service file, something like:
ExecStartPost=/bin/sh -c 'umask 022; pgrep YOURSERVICE > /var/run/YOURSERVICE.pid'
The option must be PIDFile (case sensitive). From the man page, man systemd.service:
PIDFile=
Takes a path referring to the PID file of the service. Usage of this option is recommended for
services where Type= is set to forking. The path specified typically points to a file below /run/. If
a relative path is specified it is hence prefixed with /run/. The service manager will read the PID
of the main process of the service from this file after start-up of the service. The service manager
will not write to the file configured here, although it will remove the file after the service has
shut down if it still exists. The PID file does not need to be owned by a privileged user, but if it
is owned by an unprivileged user additional safety restrictions are enforced: the file may not be a
symlink to a file owned by a different user (neither directly nor indirectly), and the PID file must
refer to a process already belonging to the service.
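Applied to the unit file from the question, only the directive name needs to change; a minimal corrected sketch, keeping everything else as posted:

[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
Type=forking
PIDFile=/var/run/redis_6379.pid
ExecStart=/etc/init.d/redis start
ExecStop=/etc/init.d/redis stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target

After editing, run systemctl daemon-reload so systemd picks up the change, and remove the stale /var/run/redis_6379.pid if the log still complains that the process "is already running or crashed".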

Nginx and Solr server on port 8983 but can't access Admin area

I'm running Nginx and I just installed Solr.
The service status reports everything is OK:
root@closer:~# sudo service solr status
● solr.service - LSB: Controls Apache Solr as a Service
Loaded: loaded (/etc/init.d/solr; bad; vendor preset: enabled)
Active: active (exited) since Sat 2018-07-14 18:21:14 UTC; 1s ago
Docs: man:systemd-sysv-generator(8)
Process: 2549 ExecStop=/etc/init.d/solr stop (code=exited, status=0/SUCCESS)
Process: 2699 ExecStart=/etc/init.d/solr start (code=exited, status=0/SUCCESS)
Jul 14 18:21:08 closer solr[2699]: If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
Jul 14 18:21:08 closer solr[2699]: *** [WARN] *** Your Max Processes Limit is currently 3896.
Jul 14 18:21:08 closer solr[2699]: It should be set to 65000 to avoid operational disruption.
Jul 14 18:21:08 closer solr[2699]: If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
Jul 14 18:21:08 closer solr[2699]: Warning: Available entropy is low. As a result, use of the UUIDField, SSL, or any other features that require
Jul 14 18:21:08 closer solr[2699]: RNG might not work properly. To check for the amount of available entropy, use 'cat /proc/sys/kernel/random/entr
Jul 14 18:21:14 closer solr[2699]: [194B blob data]
Jul 14 18:21:14 closer solr[2699]: Started Solr server on port 8983 (pid=2751). Happy searching!
Jul 14 18:21:14 closer solr[2699]: [14B blob data]
Jul 14 18:21:14 closer systemd[1]: Started LSB: Controls Apache Solr as a Service.
But if I try to go to xxx.xxx.xxx.xxx:8983/solr I can't access the page... why?
Do I have to open port 8983 with ufw?
Do I have to start Apache?
Something else?
It worked after
sudo ufw allow 8983
I can't believe all the online guides never mentioned it.
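For completeness, you can confirm the firewall rule and that Solr is actually listening on the port with standard tools (these commands are an illustration, not part of the original answer):

sudo ufw status
ss -tlnp | grep 8983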

How to solve race condition in etcd leader election?

While testing a CoreOS cluster with three nodes, after successfully adding and removing a few additional nodes, I encountered the following problem, presumably caused by a race condition during the etcd leader election.
Checking the new leader gives:
$ curl -L http://127.0.0.1:4001/v2/stats/leader
{"errorCode":300,"message":"Raft Internal Error","index":629006}
Journalctl for each machine in the cluster gives:
$ journalctl -r -u etcd
-- Logs begin at Wed 2014-11-12 15:09:01 UTC, end at Mon 2014-11-24 10:47:34 UTC. --
Nov 24 10:47:34 node-1 etcd[56576]: [etcd] Nov 24 10:47:34.307 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: term #5221 started.
Nov 24 10:47:34 node-1 etcd[56576]: [etcd] Nov 24 10:47:34.306 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: state changed from 'candidate' to 'follower'.
Nov 24 10:47:33 node-1 etcd[56576]: [etcd] Nov 24 10:47:33.098 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: state changed from 'follower' to 'candidate'.
Nov 24 10:47:32 node-1 etcd[56576]: [etcd] Nov 24 10:47:32.081 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: term #5219 started.
Nov 24 10:47:32 node-1 etcd[56576]: [etcd] Nov 24 10:47:32.081 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: state changed from 'candidate' to 'follower'.
Nov 24 10:47:31 node-1 etcd[56576]: [etcd] Nov 24 10:47:31.962 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: state changed from 'follower' to 'candidate'.
And listing the machines with fleet fails:
$ fleetctl list-machines
2014/11/24 10:56:19 INFO client.go:278: Failed getting response from http://127.0.0.1:4001/: dial tcp 127.0.0.1:4001: connection refused
2014/11/24 10:56:19 ERROR client.go:200: Unable to get result for {Get /_coreos.com/fleet/machines}, retrying in 100ms
2014/11/24 10:56:19 INFO client.go:278: Failed getting response from http://127.0.0.1:4001/: dial tcp 127.0.0.1:4001: connection refused
2014/11/24 10:56:19 ERROR client.go:200: Unable to get result for {Get /_coreos.com/fleet/machines}, retrying in 200ms
2014/11/24 10:56:19 INFO client.go:278: Failed getting response from http://127.0.0.1:4001/: dial tcp 127.0.0.1:4001: connection refused
Listing the machines in the cluster gives:
$ curl -L http://127.0.0.1:7001/v2/admin/machines
[{"name":"","state":"follower","clientURL":"http://100.72.62.35:4001","peerURL":"http://100.72.62.35:7001"},
{"name":"555cca74216644fea48990673b3d539c","state":"follower","clientURL":"http://100.72.62.59:4001","peerURL":"http://100.72.62.59:7001"},
{"name":"965d12d38a4a4b2c807bd232fb7b0db7","state":"follower","clientURL":"http://100.72.20.153:4001","peerURL":"http://100.72.20.153:7001"},
{"name":"a1b566dedb194c259f7eb2ffde5595b1","state":"follower","clientURL":"http://100.72.62.2:4001","peerURL":"http://100.72.62.2:7001"},
{"name":"a45efba827754b5f93c38b751a0ae273","state":"follower","clientURL":"http://100.72.62.31:4001","peerURL":"http://100.72.62.31:7001"},
{"name":"d041738235a9483cb814d37ca7fa4b6d","state":"follower","clientURL":"http://100.72.20.18:4001","peerURL":"http://100.72.20.18:7001"}]
but only three machines are currently running. I tried to add additional machines to reach the quorum, to no avail.
I'm running the following version:
$ etcdctl -v
etcdctl version 0.4.6
for which, as mentioned here https://coreos.com/docs/distributed-configuration/etcd-api/#cluster-config, the leader module for forcing a leader has been removed. The ugly part is that, since there is no quorum, I'm not able to remove the machines that are not currently running from the member list, for example with:
$ curl -L -XDELETE http://127.0.0.1:7001/v2/admin/machines/2abbf47a9e644bc69652a986d796d7a6
which has no effect. Is there any way to save the cluster?
In my understanding, you can save the cluster, but it isn't worth it.
The cluster is not accepting new machines because it needs a quorum to add new machines and there is not a quorum of existing machines. The same goes for removing machines and deleting keys.
If you can bring up enough machines listed as cluster members and have them successfully work as cluster members, you will have a quorum and save the cluster.
From what I can see, you have six machines listed as cluster members. You need to have at least four running for the existing cluster to operate.
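As a rough sketch of the arithmetic and of the recovery path the answer describes: quorum for 6 registered members is floor(6/2) + 1 = 4, so at least four of the listed members must be reachable before the cluster will accept changes. Once that is the case, the dead entries can be removed with the same admin call shown in the question (the member id below is just the example from the question):

curl -L -XDELETE http://127.0.0.1:7001/v2/admin/machines/2abbf47a9e644bc69652a986d796d7a6

After the dead members are removed, the cluster is back to the three live machines and quorum drops to 2.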