DigitalOcean CyberPanel (on Ubuntu 18.04): ACME certificates forbidden - 283 Failed to obtain SSL for domain. [issueSSLForDomain]

I installed a brand new DigitalOcean droplet using a marketplace base (so on paper everything should be OK out of the box).
When trying to issue certificates, I am getting this error:
[11.13.2019_04-48-28] /root/.acme.sh/acme.sh --issue -d thehouseinkorazim.co.il -d www.thehouseinkorazim.co.il --cert-file /etc/letsencrypt/live/thehouseinkorazim.co.il/cert.pem --key-file /etc/letsencrypt/live/thehouseinkorazim.co.il/privkey.pem --fullchain-file /etc/letsencrypt/live/thehouseinkorazim.co.il/fullchain.pem -w /home/thehouseinkorazim.co.il/public_html --force
[11.13.2019_04-48-28] [Errno 2] No such file or directory [Failed to obtain SSL. [obtainSSLForADomain]]
[11.13.2019_04-48-28] 283 Failed to obtain SSL for domain. [issueSSLForDomain]
[11.13.2019_04-48-34] Trying to obtain SSL for: thehouseinkorazim.co.il and: www.thehouseinkorazim.co.il
I checked and UFW is not installed.
I do have a network firewall, but it is the same one (same rules) as on another droplet that can issue certificates, so I don't think it is the cause.
I searched all the answers online with no luck.
I even installed certbot to issue a certificate manually, but got the same error. (I tried that because I know you need to register initially to get certificates and I hadn't, so I thought that was the cause.)
Any ideas? Thanks!
Update: I created a clean droplet again; this is the error before anything I did manually:
Cannot issue SSL. Error message: ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.crt': No such file or directory ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.key': No such file or directory 0,283 Failed to obtain SSL for domain. [issueSSLForDomain]
I checked and there is no folder "cert" under "conf" in the path written above.

There's a known SSL issue in the recent version due to some environment/code changes. We are already aware of it and have submitted a new version with that issue fixed. Please give it a day or two and you should be able to launch the new version from the marketplace, which comes with CyberPanel v1.9.2.
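In the meantime, a possible stopgap (an assumption based on the ln errors above, not an official fix) is to recreate the missing directory so the symlink step can succeed, then retry issuance from the CyberPanel UI:

# The issuing step symlinks the admin cert/key into this directory,
# which does not exist on the affected image
mkdir -p /usr/local/lsws/admin/conf/cert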
Best

Related

Compute Engine - GCP Click to Deploy solution crashed

I am using the Google Cloud Deployment Manager - WordPress Click to Deploy solution.
I installed a certificate through the virtual machine's SSH on the Compute Engine page using Certbot. Immediately after I installed the certificate, the page started showing
"ssl_error_bad_cert_domain" and didn't open.
I went back to the SSH session and deleted the certificate using the certbot command sudo certbot delete. Since that didn't solve the error, I tried turning the VM off and on and also restarting it, which didn't resolve the issue either.
I could see in the logs explorer that there was an error coming from the VM saying: Invalid ssh key entry - expired key:[expired_key], so I requested a new one through the VM SSH using:
ssh-keygen -t rsa -f ~/.ssh/gcloud_instance1 -C username, printed the contents with cd ~/.ssh && cat gcloud_instance1.pub, and then added that to the VM's ssh-keys text area. That did stop the errors in the logs explorer, but didn't solve the issue, since the WordPress deployment still doesn't open.
Another thing to add: when the VM was turned off and on, the IP address changed; I am not sure whether that is also contributing to the crash.
This is how the page currently looks: webpage failing
This is the logs explorer: https://docs.google.com/spreadsheets/d/1cKXWkfaFbmUFakomwM_-TtoSelxWKBDxNBK7fnMy_PA/edit?usp=sharing
Any ideas what could be happening?
Thanks
The "ssl_error_bad_cert_domain" error can happen when the SSL certificate is not properly installed or is not issued for the correct domain.
My guess is that you had a typo in the domain name, or you need a static IP.
You can use the sudo certbot certificates command to verify the certificate after installation.
It is recommended to have a static IP for better stability, especially if you are using SSL certificates.
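A sketch of both checks (the address name, example IP, and region below are placeholders of mine, not from the post):

# List the certificates certbot manages and the domains they cover
sudo certbot certificates
# Promote the VM's current ephemeral external IP to a static address so it
# survives stop/start cycles; substitute your VM's IP and region
gcloud compute addresses create wordpress-ip \
    --addresses=203.0.113.10 \
    --region=us-central1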

gitlab-runner's git clone fails with "Problem with the SSL CA cert (path? access rights?)"

For several months now I've had issues with gitlab-runner, which randomly fails with the following log:
Running with gitlab-runner 13.7.0 (943fc252)
on <gitlab-runner-name> <gitlab-runner-id>
Preparing the "shell" executor
00:00
Using Shell executor...
Preparing environment
00:00
Running on <hostname>...
Getting source from Git repository
00:00
Fetching changes...
Reinitialized existing Git repository in /var/gitlab-runner/builds/<gitlab-runner-id>/0/<gitlab-group>/<gitlab-project>/.git/
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@<hostname>/<gitlab-group>/<gitlab-project>.git/': Problem with the SSL CA cert (path? access rights?)
ERROR: Job failed: exit status 1
This line is the crucial one:
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@<hostname>/<gitlab-group>/<gitlab-project>.git/': Problem with the SSL CA cert (path? access rights?)
I tried unregistering the runner and registering a new one. It also failed with the same error after a while (the first run usually worked well).
Furthermore, runners on other machines are working correctly and never fail with the error message above.
I believe the issue is caused by the missing CI_SERVER_TLS_CA_FILE file in:
/var/gitlab-runner/builds/<gitlab-runner-id>/0/<gitlab-group>/<gitlab-project>.tmp/CI_SERVER_TLS_CA_FILE
I tried doing a git pull in the faulty directory and got the same message. After I copied the missing file from another directory that had it, I got the following:
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'https://gitlab-ci-token:<gitlab-runner-token>@gitlab.lab.sk.alcatel-lucent.com/<gitlab-group>/<gitlab-project>.git/'
As far as I know, these tokens are generated for one-time use and are discarded after the job finishes. This leads me to believe the missing file is the issue.
Where is this file copied from? Why is it missing? What can I do to fix this issue?
I've been looking through the GitLab issues without luck.
It sounds like one or more of your runners doesn't trust the certificate on your GitLab host. You'll have to track down the root and intermediate certificates used to sign your TLS certificate and add them to your runners' hosts.
For my runners on CentOS, I follow this guide (the commands are the same on later CentOS versions): https://manuals.gfi.com/en/kerio/connect/content/server-configuration/ssl-certificates/adding-trusted-root-certificates-to-the-server-1605.html
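On CentOS that boils down to roughly the following (the certificate filename is a placeholder for your GitLab host's root CA):

# Copy the root (and any intermediate) CA certificates into the trust anchors
sudo cp gitlab-root-ca.crt /etc/pki/ca-trust/source/anchors/
# Rebuild the consolidated system trust store
sudo update-ca-trust extract
# Sanity check: git should now reach the host without CA errors
git ls-remote https://<hostname>/<gitlab-group>/<gitlab-project>.git

Alternatively, gitlab-runner can be pointed at a CA bundle explicitly through the tls-ca-file setting in its config.toml.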

Chef Server - How to deal with self signed certificate?

I am installing Chef Server version 12.8.0-1 on Debian 8.5.
By downloading the .deb package files directly from the chef.io website, I have successfully got the chef-server and chef-manage modules installed, configured, and running.
I have got stuck trying to install the push jobs server. I used the command below...
chef-server-ctl install opscode-push-jobs-server
when the command runs I get the following errors...
Chef Client failed. 0 resources updated in 06 seconds
[2016-07-12T12:02:23+01:00] FATAL: Stacktrace dumped to /var/opt/opscode/local-mode-cache/chef-stacktrace.out
[2016-07-12T12:02:23+01:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-07-12T12:02:24+01:00] FATAL: OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=error: certificate verify failed
I believe the cause of the problem is a self-signed certificate used on our corporate firewall to allow the security team to decode SSL traffic.
What I need to know is how to either get Chef to accept this certificate or get it to ignore self-signed certs.
I know I could manually download and install the module but this issue will affect other things like installing cookbooks from the Chef supermarket so I'd rather find a solution that lets me use the Chef tools as intended.
Can anyone advise please?
Tensibai gave you the path for fixing Chef Server; you'll probably need to do it for the client too, which is fortunately easier: just drop the extra root cert in /etc/chef/trusted_certs.
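A minimal sketch, assuming the firewall's root certificate has been exported to corp-firewall-root.crt and your Chef server lives at chef-server.example.com (both names are hypothetical):

# Make the corporate root CA visible to chef-client
sudo mkdir -p /etc/chef/trusted_certs
sudo cp corp-firewall-root.crt /etc/chef/trusted_certs/
# knife can fetch and store the certificates a host presents, then verify them
knife ssl fetch https://chef-server.example.com
knife ssl check https://chef-server.example.com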

SSL verification behind McAfee Proxy on LAMP VM

I've been trying off and on to get a LAMP development server operational behind my corporate firewall (McAfee Web Gateway). I have an Ubuntu/Trusty64 image on a VirtualBox VM provisioned through Vagrant. I cannot get some {most} repositories to load for a proper sudo apt-get update: I'm getting a 401 authentication required error on all 'security.ubuntu.com trusty-security/*' and 'archive.ubuntu.com trusty/*' sources, and all fail to fetch. Therefore almost every sudo apt-get install {whatever} fails, and I cannot add the necessary PPA repository to install the LAMP environment I want.
I can turn off SSL verification for some things and can get many things installed - but I need SSL working correctly within this environment.
Digging deeper, I find that if I curl -v https://url.com:443, I get:
curl(60): ssl certificate error: unable to get local issuer certificate.
I have the generic bundle ca-bundle.crt installed locally in /usr/local/share/ca-certificates/ and ran sudo update-ca-certificates, which seemed to update ca-certificates.crt in /etc/ssl/certs/.
I ran strace -o stracker.out curl -v https://url.com:443 and searched for the failing stat() as suggested here by No-Bugs_Hare, and found that curl was looking for 'c099e901.0' in /etc/ssl/certs/ and it isn't there. Googling that particular hex ID is no joy, and I am stuck at this step.
Next I tried strace -o traceOppenSSL.out openssl s_client -connect url.com:443 to see if I could get more detail, but I can't see what causes the
verify error:num=20:unable to get local issuer certificate
followed by two other errors (I'm sure all relating to the first one); it then displays the "Server Certificate" within a BEGIN/END block, followed by a bunch of other metadata. The entire session ends with
Verify return code: 21 (unable to verify the first certificate).
So, this is not my forte, and I'm doing what I can to get this VM operational. Like I said earlier, I've been trying many things and understand that most of the issue is that I'm behind a McAfee firewall within my corporate structure. I don't know how to troubleshoot beyond what I've explained above, but I'm willing to dig deeper.
I have a few questions. Why is curl looking for that particular hex ID, and where would I find or generate the beast? Are there other troubleshooting steps I should try? The VM is a server-class Ubuntu install, so I only have an SSH CLI terminal and no window-manager GUI to work with.
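For what it's worth, the 8-hex-digit filename is OpenSSL's hash of a CA certificate's subject name: when verifying a chain, OpenSSL looks up the issuer in /etc/ssl/certs under <hash>.0. A sketch of how one might check this, assuming the proxy's root certificate has been exported to mcafee-root.crt (a hypothetical filename):

# Print the subject-name hash OpenSSL uses to look up this CA certificate
openssl x509 -noout -subject_hash -in mcafee-root.crt
# If it prints c099e901, this is exactly the issuer curl is searching for;
# installing the cert regenerates the hash symlinks in /etc/ssl/certs
sudo cp mcafee-root.crt /usr/local/share/ca-certificates/mcafee-root.crt
sudo update-ca-certificates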

Does buildout/easy_install/setup_tools verify SSL certificates?

I'm trying to diagnose this error:
Getting distribution for 'zc.buildout<2dev'.
Got zc.buildout 1.7.1.
Generated script '/opt/mytardis/releases/a549cd05272afe8f16c2fe5efe8158490acbde82/bin/buildout'.
Download error on http://pypi.python.org/simple/buildout-versions/: [Errno 104] Connection reset by peer -- Some packages may not be found!
Couldn't find index page for 'buildout-versions' (maybe misspelled?)
Download error on http://pypi.python.org/simple/: [Errno 104] Connection reset by peer -- Some packages may not be found!
Getting distribution for 'buildout-versions'.
STDERR: /usr/lib64/python2.6/distutils/dist.py:266: UserWarning: Unknown distribution option: 'src_root'
warnings.warn(msg)
While:
Installing.
Loading extensions.
Getting distribution for 'buildout-versions'.
Error: Couldn't find a distribution for 'buildout-versions'.
It happens deep inside a Chef + buildout installation stack. One thing I have discovered is that if I attempt to access the buildout-versions package directly:
$ wget https://pypi.python.org/packages/source/b/buildout-versions/buildout-versions-1.7.tar.gz#md5=731ecc0c9029f45826fa9f31d44e311d
--2013-07-09 12:50:18-- https://pypi.python.org/packages/source/b/buildout-versions/buildout-versions-1.7.tar.gz
Resolving proxy.redacted.com... 123.45.67.8
Connecting to proxy.redacted.com|123.45.67.8|:8080... connected.
ERROR: certificate common name “*.a.ssl.fastly.net” doesn’t match requested host name “pypi.python.org”.
To connect to pypi.python.org insecurely, use ‘--no-check-certificate’.
I can access the file fine from my desktop. So I suspect the proxy (provided by a university, and this server has to use it to reach the web). It's set with https_proxy=....
Is this the likely cause of buildout failing? Any way around it?
Your version of wget is too old.
wget has only supported SNI (Server Name Indication) since version 1.14, and that TLS extension is needed to be presented with the correct certificate by pypi.python.org.
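A quick way to check, assuming stock wget and openssl binaries (with an OpenSSL old enough not to send SNI by default, the last command reproduces the *.a.ssl.fastly.net mismatch):

# Confirm whether the installed wget predates SNI support (added in 1.14)
wget --version | head -n1
# Compare the certificate served with and without SNI
openssl s_client -connect pypi.python.org:443 -servername pypi.python.org </dev/null | openssl x509 -noout -subject
openssl s_client -connect pypi.python.org:443 </dev/null | openssl x509 -noout -subject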
Yes, zc.buildout and easy_install both use urllib2 to retrieve HTTPS resources, which does not verify SSL certificates:
Warning: HTTPS requests do not do any verification of the server’s certificate.
Your wget tool does verify certificates, but your local certificate authority certificates seem to be incomplete; see SSL certificate rejected trying to access GitHub over HTTPS behind firewall for instructions on how to update them.
As for your original error, it appears your firewall proxy is causing the peer resets.
As per PEP 476, Python 2.7.9 remedies this situation. From that version onwards, urllib2 will verify SSL certificates by default.
Since Python 2.7.9 (released) / 3.4.3 (released soon), certificates are validated by default:
HTTPS certificate validation using the system's certificate store is now enabled by default. See PEP 476 for details.
https://www.python.org/downloads/release/python-279/
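A quick way to see which behavior a given interpreter has (the URL is just an example probe):

# On Python >= 2.7.9 this raises an SSL verification error when the proxy
# presents a mismatched certificate; older versions silently accept it
python -c "import urllib2; urllib2.urlopen('https://pypi.python.org/simple/')"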
You can try:
wget http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg#md5=fe1f997bc722265116870bc7919059ea --no-check-certificate