cURL SSL connect error during install of SiteLock on CentOS 6 server

I am trying to install SiteLock with Plesk 11 on my GoDaddy CentOS 6.4 VPS. When I access SiteLock 1.0 through the Applications tab and click the install button, I get this error:
Error: Installation of Sitelock failed. Non-zero exit status returned
by script. Output stream: 'CURL error. API request failed with
message:SSL connect error '.
Google returns no results for this exact error message, and none of the results I can find sound specifically related. I am not familiar with cURL. DigiCert indicates the SSL certificate is correctly installed, and the COMODO SSL Analyzer doesn't report any problems. How can I resolve this?
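A useful first check is whether curl on the VPS itself can complete a TLS handshake, since the error is raised by the server-side cURL call rather than by the site's own certificate (api.sitelock.com below is an assumed endpoint; the exact URL the APS package calls may differ):
# Run on the VPS over SSH
curl -v https://api.sitelock.com/    # assumed endpoint: does the TLS handshake complete?
curl --version                       # CentOS 6 ships an old curl/NSS stack
yum info ca-certificates             # is the system CA bundle current?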

Related

DigitalOcean CyberPanel (on Ubuntu 18.04): ACME certificates blocked/forbidden - 283 Failed to obtain SSL for domain. [issueSSLForDomain]

I set up a brand-new DigitalOcean droplet from a Marketplace image (so on paper everything should be OK out of the box).
When trying to issue certificates, I am getting this error:
[11.13.2019_04-48-28] /root/.acme.sh/acme.sh --issue -d thehouseinkorazim.co.il -d www.thehouseinkorazim.co.il --cert-file /etc/letsencrypt/live/thehouseinkorazim.co.il/cert.pem --key-file /etc/letsencrypt/live/thehouseinkorazim.co.il/privkey.pem --fullchain-file /etc/letsencrypt/live/thehouseinkorazim.co.il/fullchain.pem -w /home/thehouseinkorazim.co.il/public_html --force
[11.13.2019_04-48-28] [Errno 2] No such file or directory [Failed to obtain SSL. [obtainSSLForADomain]]
[11.13.2019_04-48-28] 283 Failed to obtain SSL for domain. [issueSSLForDomain]
[11.13.2019_04-48-34] Trying to obtain SSL for: thehouseinkorazim.co.il and: www.thehouseinkorazim.co.il
I checked and UFW is not installed.
I do have a network firewall, but it is the same one (same rules) attached to another droplet that can obtain certificates, so I don't think it is the cause.
I searched all the answers online and no luck.
I even installed certbot to issue a certificate manually, but got the same error (I did it because I know you need to register initially to get certificates, and I hadn't, so I thought that was the cause).
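For reference, a manual webroot attempt with certbot would look roughly like this (a sketch only; the domains and webroot path are taken from the acme.sh log above):
certbot certonly --webroot -w /home/thehouseinkorazim.co.il/public_html -d thehouseinkorazim.co.il -d www.thehouseinkorazim.co.il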
Any ideas? Thanks!
Update: I created a clean droplet again; this is the error without any manual changes on my part:
Cannot issue SSL. Error message: ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.crt': No such file or directory ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.key': No such file or directory 0,283 Failed to obtain SSL for domain. [issueSSLForDomain]
I checked and there is no folder "cert" under "conf" in the path written above.
There's a known SSL issue in the recent version due to some environment/code changes. We are already aware of it and have submitted a new version with that issue fixed. Please give it a day or two and you should be able to launch the new version from the Marketplace, which comes with CyberPanel v1.9.2.
Best
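In the meantime, a possible stopgap, assuming the directory is simply missing as the update above found, is to recreate it by hand and retry the issuance:
# hedged workaround: recreate the missing OpenLiteSpeed admin cert directory
mkdir -p /usr/local/lsws/admin/conf/cert
# then retry from the CyberPanel UI, or re-run the acme.sh command shown above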

SSL verification behind McAfee Proxy on LAMP VM

I've been trying off and on to get a LAMP development server operational behind my corporate firewall (McAfee Web Gateway). I have an Ubuntu/Trusty64 image on a VirtualBox VM provisioned through Vagrant. I cannot get some (really, most) repositories to load for a proper sudo apt-get update: I'm getting a 401 authentication required error on all 'security.ubuntu.com trusty-security/*' and 'archive.ubuntu.com trusty/*' sources, and all of them fail to fetch. As a result, almost every sudo apt-get install {whatever} fails and I cannot add the PPA repository I need to install the LAMP environment I want.
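For reference, the 401s from the Ubuntu mirrors usually point at the proxy requiring authentication; a minimal sketch of telling apt about an authenticating proxy (host, port, and credentials below are placeholders, and McAfee Web Gateway setups vary):
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/95proxy
Acquire::http::Proxy  "http://USER:PASS@proxy.example.com:8080/";
Acquire::https::Proxy "http://USER:PASS@proxy.example.com:8080/";
EOF
sudo apt-get update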
I can turn off SSL verification for some things and can get many things installed - but I need SSL working correctly within this environment.
Digging deeper, I find that if I run curl -v https://url.com:443, I get:
curl(60): ssl certificate error: unable to get local issuer certificate.
I have the generic bundle 'ca-bundle.crt' installed locally in /usr/local/share/ca-certificates/ and ran sudo update-ca-certificates, which seemed to update ca-certificates.crt in /etc/ssl/certs/.
I ran strace -o stracker.out curl -v https://url.com:443 and searched for the failing stat() as suggested here by No-Bugs_Hare, and found that curl was looking for 'c099e901.0' in /etc/ssl/certs/, where it doesn't exist. Googling that particular hex ID turned up nothing, and I am stuck at this step.
Next I tried strace -o traceOppenSSL.out openssl s_client -connect url.com:443 to see if I could get more detail, but I can't see what causes the
verify error:num=20:unable to get local issuer certificate
followed by two other errors (I'm sure they all relate to the first one); it then displays the "Server Certificate" within a BEGIN/END block, followed by a bunch of other metadata. The entire session ends with
Verify return code: 21 (unable to verify the first certificate).
So, this is not my forte, and I'm doing what I can to get this VM operational. Like I said earlier, I've been trying many things, and I understand that most of the issue comes from being behind a McAfee firewall within my corporate network. I don't know how to troubleshoot beyond what I've explained above, but I'm willing to dig deeper.
I have a few questions. Why is curl looking for that particular hex ID, and where would I find or generate the beast? Are there other troubleshooting steps I should try? The VM is a server-class Ubuntu install, so I only have an SSH CLI terminal and no window-manager GUI to work with.
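On the hex ID question: OpenSSL looks up issuer certificates in /etc/ssl/certs by subject-hash symlinks, so c099e901.0 is simply the hash of the issuing CA's subject, presumably the McAfee Web Gateway's re-signing root. A sketch of the usual fix on Ubuntu (the CA file name below is an assumption; the certificate itself has to be exported from a browser or obtained from IT):
openssl x509 -noout -subject_hash -in mcafee-root-ca.crt   # prints the hash, e.g. c099e901
sudo cp mcafee-root-ca.crt /usr/local/share/ca-certificates/mcafee-root-ca.crt
sudo update-ca-certificates                                # regenerates the hash symlinks in /etc/ssl/certs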

CouchDB SSL decode error

I can't get CouchDB working over SSL. The certificates are fine; indeed, I have tested with self-signed certificates as well as the test certificates for Couch at https://github.com/mochi/mochiweb/blob/master/examples/https/
It's an Ubuntu box running CouchDB 1.6.1, and the SSL certificates all check out at https://www.digicert.com/help/
The error does not appear when testing via curl, but it does when attempting a connection from a browser. The error line in the log is:
SSL: certify: tls_connection.erl:375:Fatal error: decode error
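Since curl succeeds but browsers do not, one thing worth ruling out is an incomplete certificate chain. A quick, hedged check of what the Erlang listener actually serves (6984 is CouchDB's default HTTPS port; adjust host and port to your setup):
openssl s_client -connect localhost:6984 -showcerts </dev/null
# If only the leaf certificate comes back, point cacert_file in the [ssl]
# section of local.ini at the intermediate bundle and restart CouchDB.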

Does buildout/easy_install/setuptools verify SSL certificates?

I'm trying to diagnose this error:
Getting distribution for 'zc.buildout<2dev'.
Got zc.buildout 1.7.1.
Generated script '/opt/mytardis/releases/a549cd05272afe8f16c2fe5efe8158490acbde82/bin/buildout'.
Download error on http://pypi.python.org/simple/buildout-versions/: [Errno 104] Connection reset by peer -- Some packages may not be found!
Couldn't find index page for 'buildout-versions' (maybe misspelled?)
Download error on http://pypi.python.org/simple/: [Errno 104] Connection reset by peer -- Some packages may not be found!
Getting distribution for 'buildout-versions'.
STDERR: /usr/lib64/python2.6/distutils/dist.py:266: UserWarning: Unknown distribution option: 'src_root'
warnings.warn(msg)
While:
Installing.
Loading extensions.
Getting distribution for 'buildout-versions'.
Error: Couldn't find a distribution for 'buildout-versions'.
It happens deep inside a Chef + buildout installation stack. One thing I have discovered is that if I attempt to access the buildout-versions package directly:
$ wget https://pypi.python.org/packages/source/b/buildout-versions/buildout-versions-1.7.tar.gz#md5=731ecc0c9029f45826fa9f31d44e311d
--2013-07-09 12:50:18-- https://pypi.python.org/packages/source/b/buildout-versions/buildout-versions-1.7.tar.gz
Resolving proxy.redacted.com... 123.45.67.8
Connecting to proxy.redacted.com|123.45.67.8|:8080... connected.
ERROR: certificate common name “*.a.ssl.fastly.net” doesn’t match requested host name “pypi.python.org”.
To connect to pypi.python.org insecurely, use ‘--no-check-certificate’.
I can access the file fine from my desktop, so I suspect the proxy (provided by a university; this server has to use it to reach the web). It's set with https_proxy=....
Is this the likely cause of buildout failing? Any way around it?
Your version of wget is too old.
wget only started to support SNI (Server Name Indication) in version 1.14, and that TLS extension is required in order to be presented with the correct certificate for pypi.python.org.
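You can see the SNI effect directly (hostname as in the question; run this from a machine with direct outbound access, since s_client does not honour https_proxy):
wget --version | head -n1    # SNI support arrived in wget 1.14
# With older OpenSSL builds, s_client only sends SNI when -servername is given,
# so the second command reproduces what a pre-1.14 wget sees (the *.a.ssl.fastly.net certificate).
openssl s_client -connect pypi.python.org:443 -servername pypi.python.org </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect pypi.python.org:443 </dev/null 2>/dev/null | openssl x509 -noout -subject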
Yes, zc.buildout and easy_install both use urllib2 to retrieve HTTPS resources, which does not verify SSL certificates:
Warning: HTTPS requests do not do any verification of the server’s certificate.
Your wget tool does verify certificates, but your local certificate authority (CA) certificates appear to be incomplete; see SSL certificate rejected trying to access GitHub over HTTPS behind firewall for instructions on how to update those.
As for your original error, it appears your firewall proxy is doing the peer resets.
As per PEP 476, Python 2.7.9 remedies this situation. From that version onwards, urllib2 will verify SSL certificates by default.
Since Python 2.7.9 (already released) / 3.4.3 (due to be released soon), certificates are validated by default:
HTTPS certificate validation using the system's certificate store is
now enabled by default. See PEP 476 for details.
https://www.python.org/downloads/release/python-279/
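A quick way to see the difference from a shell (self-signed.badssl.com is an external test host with a deliberately self-signed certificate, assumed reachable):
python -c "import urllib2; urllib2.urlopen('https://self-signed.badssl.com/')"
# On Python 2.7.9+ this raises URLError (CERTIFICATE_VERIFY_FAILED);
# on earlier 2.7.x it returns silently because no verification is performed.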
You can try this:
wget http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg#md5=fe1f997bc722265116870bc7919059ea --no-check-certificate

ircd on AWS SSL error

I have UnrealIRCd running on my AWS instance and it is compiled with SSL. I downloaded the server.key.pem to my machine. When I try to connect to the server I get: SSL Error: ssl not available
I can log into the AWS instance through a terminal with my server key.
[10:48] * Connecting to ec2-xx-xx-xx-114.compute-1.amazonaws.com (+6697)
[10:48] * SSL error: ssl not available
[10:48] * Connect cancelled
Also, I have added the correct port to my Security Group.
Any suggestions?
Are you missing the OpenSSL shared libraries on your EC2 instance? That seems unlikely, but without more information it seems the most likely explanation given the error.
On a Red Hat-based image try:
yum search ssl
and on Debian try:
apt-cache search ssl
That will tell you which SSL libraries are available.
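A couple of further hedged checks, assuming shell access to the instance (the hostname is the one from the connect log above):
rpm -qa | grep -i openssl    # Red Hat-based: what is actually installed
dpkg -l | grep -i openssl    # Debian-based equivalent
openssl s_client -connect ec2-xx-xx-xx-114.compute-1.amazonaws.com:6697
# If the ircd really is listening with TLS on 6697, s_client prints a certificate;
# if not, the client-side "ssl not available" message needs a different explanation.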