cPanel/WHM Unknown License File Error - cpanel

My issue is as the title suggests. However, I have tried the following suggestions from this page (https://documentation.cpanel.net/display/ALD/Installation+Guide+-+Troubleshoot+Your+Installation#InstallationGuide-TroubleshootYourInstallation-Licenseerrors) with no results.
1.) curl -L http://cpanel.net/showip.cgi (shows the server's IP address for use with the verify.cpanel.net script); this can also be verified here: http://verify.cpanel.net/index.cgi?ip=xxx.xxx.xxx.xx (I don't like showing my IP, but trust me, it was verified.)
2.) /usr/local/cpanel/cpkeyclt
Updating cPanel license...Done. Update Failed!
Error message:
A License check appears to already be running.
Building global cache for cpanel...Done
So the above didn't work.
I then tried these commands.
3.) /usr/local/cpanel/etc/init/stopcpsrvd and then /usr/local/cpanel/scripts/upcp --sync to attempt to resynchronize.
This appears to run successfully, but I still get the same error. Attached below is the error message I get when I attempt to log in to WHM.
4.) I then tried running rdate -s rdate.cpanel.net, as suggested in some other posts, to get the times to match up; then, when I run /usr/local/cpanel/cpkeyclt, it seems to time out and nothing ever happens.
Looking at the cPanel license log (/usr/local/cpanel/logs/license_log), I see this:
Tue Jul 26 16:23:30 2016: Trying server 208.74.125.22
Tue Jul 26 16:23:45 2016: Timed out while connecting to port 2089
Tue Jul 26 16:24:00 2016: Timed out while connecting to port 80
Tue Jul 26 16:24:15 2016: Timed out while connecting to port 110
Tue Jul 26 16:24:30 2016: Timed out while connecting to port 143
Tue Jul 26 16:24:45 2016: Timed out while connecting to port 25
Tue Jul 26 16:25:00 2016: Timed out while connecting to port 23
Tue Jul 26 16:25:15 2016: Timed out while connecting to port 993
Tue Jul 26 16:25:30 2016: Timed out while connecting to port 995
Tue Jul 26 16:30:14 2016: License Update Request
Tue Jul 26 16:30:14 2016: Using full manual DNS resolution
Tue Jul 26 16:30:14 2016: Trying server 208.74.121.85
Tue Jul 26 16:30:29 2016: Timed out while connecting to port 2089
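(For reference, these timeouts can be reproduced outside of cpkeyclt by probing the license server directly; the IP and port below are the ones from the log above.)
# A timeout here confirms the problem is at the network/firewall level rather
# than inside cPanel itself.
nc -zv -w 5 208.74.125.22 2089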
Any help is appreciated!
Notes
Results of running /usr/local/cpanel/etc/init/stopcpsrvd
/usr/local/cpanel/etc/init/stopcpsrvd
Waiting for “cpsrvd” to stop ……Gracefully Terminating processes: cpsrvd: with pids 20842 and owner root.......waited 1 second(s) for 1 process(es) to terminate....Done
…finished.
Startup Log
Starting PID 20839: /usr/local/cpanel/libexec/cpsrvd-dormant
Results of running /usr/local/cpanel/scripts/upcp --sync (couldn't show everything because of text character limitations)
[2016-07-26 15:39:39 -0400] Detected cron=0 (Terminal detected)
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
=> Log opened from cPanel Update (upcp) - Slave (21620) at Tue Jul 26 15:41:53 2016
[2016-07-26 15:41:53 -0400] Maintenance completed successfully
[2016-07-26 15:41:54 -0400] 95% complete
[2016-07-26 15:41:54 -0400] Running Standardized hooks
[2016-07-26 15:41:54 -0400] 100% complete
[2016-07-26 15:41:54 -0400]
[2016-07-26 15:41:54 -0400] cPanel update completed
[2016-07-26 15:41:54 -0400] A log of this update is available at /var/cpanel/updatelogs/update.1469561979.log
[2016-07-26 15:41:54 -0400] Removing upcp pidfile
[2016-07-26 15:41:54 -0400]
[2016-07-26 15:41:54 -0400] Completed all updates
=> Log closed Tue Jul 26 15:41:54 2016

It turns out the answer was iptables. The rdate command was also necessary to fix it, but iptables was blocking the connections.
To temporarily disable your firewall, do this:
iptables-save > /root/current.ipt
iptables -P INPUT ACCEPT; iptables -P OUTPUT ACCEPT
iptables -F INPUT; iptables -F OUTPUT
ping -c 3 google.com
iptables-restore < /root/current.ipt
rm -f /root/current.ipt
The first command saves a copy of your current firewall rules.
The next two lines set the INPUT and OUTPUT policies to ACCEPT and flush both chains, so all incoming and outgoing connections are allowed.
Finally, test by pinging an outside host, or the license-server IP address that was failing in your log file.
If that works, the license update command should work too.
Simply run:
/usr/local/cpanel/cpkeyclt
and you are good to go.
You can restore your rules with the last two commands if you want:
iptables-restore < /root/current.ipt
rm -f /root/current.ipt
Be warned that you will be blocked again, unless you fix the firewall.
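A narrower alternative (a sketch only, assuming the blocked traffic is the outbound license check shown in license_log) is to allow just the license traffic instead of flushing everything:
# Insert the rules at the top of the chains so they match before any DROP/REJECT
# rules; 2089 is the cPanel license service port seen in the log.
iptables -I OUTPUT -p tcp --dport 2089 -j ACCEPT
iptables -I INPUT -p tcp --sport 2089 -m conntrack --ctstate ESTABLISHED -j ACCEPT
/usr/local/cpanel/cpkeyclt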

Related

RADIUS server failed to start on CentOS 7

At the beginning I successfully configured the RADIUS server with MariaDB and httpd. But then I changed the hostname of the server and rebooted. Now, even though MariaDB and httpd are running, radiusd fails to start. Here is the output from journalctl -xe. Please help me.
Jan 10 12:34:08 cpe.twcny.res.rr.com systemd[1]: Unit radiusd.service entered failed state.
Jan 10 12:34:08 cpe.twcny.res.rr.com systemd[1]: radiusd.service failed.
Jan 10 12:34:08 cpe.twcny.res.rr.com polkitd[963]: Unregistered Authentication Agent for unix-process:2183:15540 (system bus name :1.43, object path /org/
Jan 10 12:40:01 cpe.twcny.res.rr.com systemd[1]: Created slice User Slice of root.

Why can't I upload files to Dropbox at shutdown?

Fixed as jayant suggested.
cat upload.sh
/home/Dropbox-Uploader/dropbox_uploader.sh upload -f /home/Dropbox-Uploader/.dropbox_uploader /home/material/* /
date >> /home/upload.log
All files in the material directory can be uploaded to my Dropbox by running bash upload.sh manually.
I want to write a service that runs automatically at shutdown to upload the files to Dropbox.
vim /etc/systemd/system/upload.service
[Unit]
Description=upload files into dropbox
Before=network.target shutdown.target reboot.target
[Service]
ExecStart=/bin/true
ExecStop=/bin/bash /home/upload.sh
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
Enable it with:
sudo systemctl enable upload.service
Then I rebooted and checked the journal:
journalctl -u upload
-- Logs begin at Thu 2018-01-18 22:38:54 EST, end at Tue 2018-04-10 06:55:43 EDT. --
Apr 10 06:48:27 localhost systemd[1]: Started upload files into dropbox.
Apr 10 06:48:27 localhost systemd[1]: Starting upload files into dropbox...
Apr 10 06:48:27 localhost bash[111]: which: no shasum in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
Apr 10 06:48:27 localhost bash[111]: > Uploading "/home/material/test.txt" to "/test.txt"...
Apr 10 06:48:27 localhost bash[111]: Error: Couldn't resolve host.
I ran ln -s /usr/bin/sha1sum /usr/bin/shasum according to a result I found on Google.
Then I rebooted a second time:
journalctl -u upload
Apr 10 06:55:04 localhost systemd[1]: Started upload files into dropbox.
Apr 10 06:55:04 localhost systemd[1]: Starting upload files into dropbox...
Apr 10 06:55:04 localhost bash[113]: shasum: invalid option -- 'a'
Apr 10 06:55:04 localhost bash[113]: Try 'shasum --help' for more information.
Apr 10 06:55:04 localhost bash[113]: shasum: invalid option -- 'a'
Apr 10 06:55:04 localhost bash[113]: Try 'shasum --help' for more information.
Apr 10 06:55:04 localhost bash[113]: > Uploading "/home/material/test.txt" to "/test.txt"...
Apr 10 06:55:04 localhost bash[113]: Error: Couldn't resolve host.
After doing as Raushan suggested, a new issue arose:
Uploading by 4 chunks *** FAILED
For this problem, some material says that files exceeding 150 MB should be uploaded in chunks, so I split the archive:
split -b 10m /home/upload.tar.gz /home/material/dropbox
ls /home/material
dropboxaa dropboxac dropboxae dropboxag ......
Each chunk is less than 10 MB.
journalctl -u upload
Apr 19 01:45:26 localhost systemd[1]: Started upload files into dropbox.
Apr 19 01:45:26 localhost systemd[1]: Starting upload files into dropbox...
Apr 19 01:45:27 localhost bash[401]: > Uploading "/home/material/dropboxaa" to "/dropboxaa"... FAILED
Apr 19 01:45:27 localhost bash[401]: An error occurred requesting /upload
Apr 19 01:45:28 localhost bash[401]: > Uploading "/home/material/dropboxab" to "/dropboxab"... FAILED
Apr 19 01:45:40 localhost bash[401]: Some error occured. Please check the log.
Apr 19 01:45:40 localhost systemd[1]: upload.service: main process exited, code=exited, status=1/FAILURE
Apr 19 01:45:40 localhost systemd[1]: Unit upload.service entered failed state.
Apr 19 01:45:40 localhost systemd[1]: upload.service failed.
Why does > Uploading "/home/material/dropboxaa" to "/dropboxaa"... FAILED happen?
It is not possible for the second instruction of your script to execute without the first one having executed. Try redirecting the error output of dropbox_uploader.sh to see what is failing.
Assuming you are using dropbox-uploader, try specifying the exact location of the configuration file. See the "Running as cron job" section in their README.md:
/home/Dropbox-Uploader/dropbox_uploader.sh -f /path/to/.dropbox_uploader upload /home/material/* /
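Putting the two suggestions above together (a sketch; the paths are the ones from the question, and reusing /home/upload.log for the uploader's output is an assumption of convenience), upload.sh could look like this:
#!/bin/bash
# Point the uploader at its config file explicitly and append both stdout and
# stderr to a log so the failure reason survives the shutdown.
/home/Dropbox-Uploader/dropbox_uploader.sh -f /home/Dropbox-Uploader/.dropbox_uploader upload /home/material/* / >> /home/upload.log 2>&1
date >> /home/upload.log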
For the "Couldn't resolve host" problem:
The unit configuration should have a dependency like After=network.target instead of Before=network.target, because the default shutdown order is the inverse of the startup order:
[Unit]
Description=upload files into dropbox
Before=shutdown.target reboot.target
After=network.target
[Service]
ExecStart=/bin/true
ExecStop=/bin/bash /home/upload.sh
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
Refer: https://serverfault.com/a/785355
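After editing the unit file, the new ordering only takes effect once systemd reloads it (standard systemd steps, nothing specific to this service):
sudo systemctl daemon-reload
sudo systemctl reenable upload.service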
For the shasum problem:
I am not sure about your OS distro; I am using Fedora 25.
In my case the shasum binary comes from the perl-Digest-SHA package, which can be installed with yum install perl-Digest-SHA on Red Hat based Linux distros.
Refer: https://superuser.com/a/1180163
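To confirm the installed shasum understands the -a option that the journal above complained about (a quick check, assuming a RHEL/CentOS-style system):
sudo yum install -y perl-Digest-SHA
shasum -a 256 /home/upload.sh    # should print a hash instead of "invalid option -- 'a'"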

rabbitmq-server doesn't start - unable to connect to epmd / Ubuntu 16.04

I followed this guide https://www.rabbitmq.com/install-debian.html and installed rabbitmq-server. However, it won't start, and fails with this error message:
Jul 31 20:29:49 76672.local rabbitmqctl[7519]: attempted to contact: [rabbit@76672]
Jul 31 20:29:49 76672.local rabbitmqctl[7519]: rabbit@76672:
Jul 31 20:29:49 76672.local rabbitmqctl[7519]: * unable to connect to epmd (port 4369) on 76672: badarg (unknown POSIX error)
Jul 31 20:29:49 76672.local rabbitmqctl[7519]: current node details:
Jul 31 20:29:49 76672.local rabbitmqctl[7519]: - node name: 'rabbitmq-cli-30@76672'
Jul 31 20:29:49 76672.local rabbitmqctl[7519]: - home dir: /var/lib/rabbitmq
Jul 31 20:29:49 76672.local rabbitmqctl[7519]: - cookie hash: VwJCJ/LkSvmUKaoPOglCcQ==
Jul 31 20:29:49 76672.local systemd[1]: Failed to start RabbitMQ broker.
Jul 31 20:29:49 76672.local systemd[1]: rabbitmq-server.service: Unit entered failed state.
Jul 31 20:29:49 76672.local systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
dpkg: error processing package rabbitmq-server (--configure):
subprocess installed post-installation script returned error exit status 1
Processing triggers for systemd (229-4ubuntu17) ...
Processing triggers for ureadahead (0.100.0-19) ...
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
altor_work@76672:
I tried to do this installation on a clean instance of Ubuntu and got the same error. I googled the error message and it seems I have some problem with network settings - I guess I should change some settings from their defaults.
Any idea what needs to be changed? Or which setting should I try first?
P.S. I'm a complete novice in Unix. For me, it's just a cloud environment where I run my Python scripts.
I solved my problem by setting HOSTNAME in the file rabbitmq-env.conf. I don't know what exactly caused the problem in the first place.
My settings:
sudo cat /etc/hostname
76672.localhost
sudo cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu16.04 ubuntu16
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.0.1 76672.local
/etc/rabbitmq/rabbitmq-env.conf
# Empty - if the file is empty rabbitmq doesn't start
HOSTNAME=76672.local # With this rabbitmq doesn't start either
HOSTNAME=localhost # With this all works
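(One way to see why the full hostname fails while localhost works - a diagnostic sketch, not part of the original post - is to check what the name actually resolves to:)
hostname                   # short host name
hostname -f                # fully qualified name, if any
getent hosts 76672.local   # should return 127.0.0.1 per /etc/hosts above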
If it only works with the localhost setting, please check the following:
fgrep BindToDevice /lib/systemd/system/epmd.socket
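If BindToDevice is set there, epmd is pinned to a single network device (typically the loopback), which can explain why only localhost works. One possible workaround (a sketch only; whether loosening this is acceptable depends on your environment) is a systemd drop-in that clears the setting:
# /etc/systemd/system/epmd.socket.d/override.conf  (standard drop-in location)
[Socket]
BindToDevice=
Then run sudo systemctl daemon-reload and sudo systemctl restart epmd.socket.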

Guacamole fails to connect to xRDP server

I have an xrdp server running and would like to connect to it using Guacamole. However, every time I try to make an RDP connection it fails with "You Have Been Disconnected." I know the fault is with Guacamole because I can log into xRDP with the Remmina RDP client using the same credentials.
Here are my logs:
/var/run/syslog:
Jul 26 10:02:36 ubuntu guacd[1291]: Creating new client for protocol "rdp"
Jul 26 10:02:36 ubuntu guacd[1291]: Connection ID is "$0c72bf59-0ff9-448d-a5a2-dc3229157122"
Jul 26 10:02:36 ubuntu guacd[5737]: Security mode: ANY
Jul 26 10:02:36 ubuntu guacd[5737]: Resize method: none
Jul 26 10:02:36 ubuntu guacd[5737]: User "#cce2ec3d-03c5-4387-be88-054a00927f56" joined connection "$0c72bf59-0ff9-448d-a5a2-dc3229157122" (1 users now present)
Jul 26 10:02:36 ubuntu guacd[5737]: Loading keymap "base"
Jul 26 10:02:36 ubuntu guacd[5737]: Loading keymap "en-us-qwerty"
Jul 26 10:02:36 ubuntu kernel: [ 4736.455320] guacd[5749]: segfault at 8000000000 ip 0000008000000000 sp 00007f3bc9f8bc98 error 14
Jul 26 10:02:36 ubuntu kernel: [ 4736.455323] traps: guacd[5750] general protection ip:7f3bcb074c69 sp:7f3bc978ac98 error:0
Jul 26 10:02:36 ubuntu kernel: [ 4736.455323]
Jul 26 10:02:36 ubuntu kernel: [ 4736.455325] in libguac.so.5.0.0[7f3bcb070000+d000]
Jul 26 10:02:36 ubuntu guacd[1291]: Connection "$0c72bf59-0ff9-448d-a5a2-dc3229157122" removed.
/var/log/tomcat8/Catalina.out :
10:02:33.079 [http-nio-8080-exec-2] WARN o.a.g.r.auth.AuthenticationService - Authentication attempt from 0:0:0:0:0:0:0:1 for user "-------" failed.
10:02:33.943 [http-nio-8080-exec-1] WARN o.a.g.r.auth.AuthenticationService - Authentication attempt from 0:0:0:0:0:0:0:1 for user "jonathan" failed.
10:02:36.100 [http-nio-8080-exec-6] INFO o.a.g.r.auth.AuthenticationService - User "guacadmin" successfully authenticated from 0:0:0:0:0:0:0:1.
10:02:36.241 [http-nio-8080-exec-10] INFO o.a.g.tunnel.TunnelRequestService - User "guacadmin" connected to connection "3".
10:02:38.179 [Thread-7] INFO o.a.g.tunnel.TunnelRequestService - User "guacadmin" disconnected from connection "3". Duration: 1937 milliseconds
Connection settings:
security mode: any
port: 3389
I am on Ubuntu Server 16.04. Any possible solutions would be much appreciated.
Try:
Removing the [path to libfreerdp*.so]/freerdp/guac*.so files that were copied, assuming this is the case.
Creating symbolic links within [path to libfreerdp*.so]/freerdp/ to /usr/local/lib/freerdp/guac*.so, so you do not need to worry about this going forward.
Source: RDP stopped working v0.9.9 - Apache Guacamole.
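As a sketch of those two steps (the FreeRDP plugin directory below is an assumption for a 64-bit Ubuntu 16.04 install; adjust it to wherever your libfreerdp*.so files actually live):
# Remove the copied plugins, then symlink Guacamole's own builds so future
# rebuilds in /usr/local/lib/freerdp are picked up automatically.
sudo rm /usr/lib/x86_64-linux-gnu/freerdp/guac*.so
sudo ln -s /usr/local/lib/freerdp/guac*.so /usr/lib/x86_64-linux-gnu/freerdp/
sudo systemctl restart guacd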

How to solve race condition in etcd leader election?

While testing a CoreOS cluster with three nodes, after successfully adding and removing a few additional nodes, I encountered the following problem, presumably due to a race condition during the etcd leader election process.
Checking the new leader gives:
$ curl -L http://127.0.0.1:4001/v2/stats/leader
{"errorCode":300,"message":"Raft Internal Error","index":629006}
Journalctl for each machine in the cluster gives:
$ journalctl -r -u etcd
-- Logs begin at Wed 2014-11-12 15:09:01 UTC, end at Mon 2014-11-24 10:47:34 UTC. --
Nov 24 10:47:34 node-1 etcd[56576]: [etcd] Nov 24 10:47:34.307 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: term #5221 started.
Nov 24 10:47:34 node-1 etcd[56576]: [etcd] Nov 24 10:47:34.306 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: state changed from 'candidate' to 'follower'.
Nov 24 10:47:33 node-1 etcd[56576]: [etcd] Nov 24 10:47:33.098 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: state changed from 'follower' to 'candidate'.
Nov 24 10:47:32 node-1 etcd[56576]: [etcd] Nov 24 10:47:32.081 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: term #5219 started.
Nov 24 10:47:32 node-1 etcd[56576]: [etcd] Nov 24 10:47:32.081 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: state changed from 'candidate' to 'follower'.
Nov 24 10:47:31 node-1 etcd[56576]: [etcd] Nov 24 10:47:31.962 INFO | 965d12d38a4a4b2c807bd232fb7b0db7: state changed from 'follower' to 'candidate'.
And listing the machines with fleet fails:
$ fleetctl list-machines
2014/11/24 10:56:19 INFO client.go:278: Failed getting response from http://127.0.0.1:4001/: dial tcp 127.0.0.1:4001: connection refused
2014/11/24 10:56:19 ERROR client.go:200: Unable to get result for {Get /_coreos.com/fleet/machines}, retrying in 100ms
2014/11/24 10:56:19 INFO client.go:278: Failed getting response from http://127.0.0.1:4001/: dial tcp 127.0.0.1:4001: connection refused
2014/11/24 10:56:19 ERROR client.go:200: Unable to get result for {Get /_coreos.com/fleet/machines}, retrying in 200ms
2014/11/24 10:56:19 INFO client.go:278: Failed getting response from http://127.0.0.1:4001/: dial tcp 127.0.0.1:4001: connection refused
Listing the machines in the cluster gives:
$ curl -L http://127.0.0.1:7001/v2/admin/machines
[{"name":"","state":"follower","clientURL":"http://100.72.62.35:4001","peerURL":"http://100.72.62.35:7001"},
{"name":"555cca74216644fea48990673b3d539c","state":"follower","clientURL":"http://100.72.62.59:4001","peerURL":"http://100.72.62.59:7001"},
{"name":"965d12d38a4a4b2c807bd232fb7b0db7","state":"follower","clientURL":"http://100.72.20.153:4001","peerURL":"http://100.72.20.153:7001"},
{"name":"a1b566dedb194c259f7eb2ffde5595b1","state":"follower","clientURL":"http://100.72.62.2:4001","peerURL":"http://100.72.62.2:7001"},
{"name":"a45efba827754b5f93c38b751a0ae273","state":"follower","clientURL":"http://100.72.62.31:4001","peerURL":"http://100.72.62.31:7001"},
{"name":"d041738235a9483cb814d37ca7fa4b6d","state":"follower","clientURL":"http://100.72.20.18:4001","peerURL":"http://100.72.20.18:7001"}]
but only three machines are currently running. I tried to add additional machines to reach the quorum, to no avail.
I'm running the following version:
$ etcdctl -v
etcdctl version 0.4.6
for which, as mentioned here https://coreos.com/docs/distributed-configuration/etcd-api/#cluster-config, the leader module that could force a leader has been removed. The ugly part is that, since there is no quorum, I'm not able to remove the machines that are not currently running from the member list using, for example:
$ curl -L -XDELETE http://127.0.0.1:7001/v2/admin/machines/2abbf47a9e644bc69652a986d796d7a6
which has no effect. Is there any way to save the cluster?
In my understanding, you can save the cluster, but it isn't worth it.
The cluster is not accepting new machines because it needs a quorum to add new machines and there is not a quorum of existing machines. The same goes for removing machines and deleting keys.
If you can bring up enough machines listed as cluster members and have them successfully work as cluster members, you will have a quorum and save the cluster.
From what I can see, you have six machines listed as cluster members. You need to have at least four running for the existing cluster to operate.
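For reference, quorum is a strict majority of the member list, not of the machines that happen to be up. A quick check with the six members shown by the admin/machines output above:
# quorum = floor(n / 2) + 1
echo $(( 6 / 2 + 1 ))    # prints 4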