curl error despite --insecure and without proxy: ssl3_get_record:wrong version number

I'm using git-ftp to deploy some sites, and with one server I can't establish a connection over TLS.
curl -vv --insecure ftps://linux12.unixserver.org:21
* Rebuilt URL to: ftps://linux12.unixserver.org:21/
* Trying 212.63.145.118...
* TCP_NODELAY set
* Connected to linux12.unixserver.org (212.63.145.118) port 21 (#0)
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
I already found several other questions, but my issue doesn't match them:
- I'm not using a proxy.
- Even --insecure fails, so it can't be a cert trust issue.
- I tried --cacert as well; it doesn't work either.
- --tls-max 1.2 changes the offered version to 1.2, but the error stays the same.
Some sources state that this error also occurs when the server doesn't deliver a certificate at all.
openssl s_client -connect linux12.unixserver.org:21 -starttls ftp
Delivers a certificate, so that seems to be fine.
I can successfully connect with Nautilus, but it warns me that the certificate's issuer is unknown.
Thanks very much for any hint on what else to try.

Actually two issues were involved in this case.
1) ftps is the wrong protocol for servers that only support explicit TLS.
The right protocol would be ftpes. If curl is not compiled with support for it, you can use --ssl-reqd to enforce TLS, or just --ssl to request it.
In the context of git-ftp this works even if curl is compiled without ftpes support.
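With the host from the question, the explicit-TLS variant of the failing command would look something like this (a sketch; --insecure is kept only until the chain issue in point 2 is fixed):
curl -vv --insecure --ssl-reqd ftp://linux12.unixserver.org:21/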
2) The server didn't deliver a valid certificate chain, so the certificate could not be validated.
This is currently an FTP certificate bug in Plesk.
The solution is to retrieve the certificate chain manually and provide the chain by means of --cacert <file>. If the certificate is self-signed, extract the public key and use --pinnedpubkey <file> instead.
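A sketch of both steps for this server (file names are arbitrary; openssl s_client supports STARTTLS for FTP):
# retrieve the chain the server presents and keep only the PEM blocks
openssl s_client -connect linux12.unixserver.org:21 -starttls ftp -showcerts </dev/null 2>/dev/null | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > chain.pem
# verify against exactly that chain instead of the system bundle
curl -vv --ssl-reqd --cacert chain.pem ftp://linux12.unixserver.org:21/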
Thanks very much to Daniel Stenberg for the right hints.

Related

Curl Request TLS alert, unknown CA in Windows WSL

Running this command inside WSL 2 on Windows produces the output below.
Can anyone explain why there are mixed TLSv1.3 and TLSv1.2 IN and OUT lines, and whether this is a potential reason why it's unable to get the local issuer certificate?
The Windows host OS is the Enterprise edition.
I have installed ca-certificates and run update-ca-certificates.
curl -v https://google.com:443/
* Trying 172.217.169.78...
* TCP_NODELAY set
* Connected to google.com (172.217.169.78) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Are you using a network connection subject to monitoring or 'protection' such as antivirus, like one provided by a business, organization or school? If so you are probably getting a fake cert/chain from the interceptor.
Try openssl s_client -connect google.com:443 and look at the s: and i: lines under Certificate chain. (Many hosts today require SNI to respond correctly, and if your OpenSSL is below 1.1.1 you need to add -servername x to provide SNI, but google is not one of them; and anyway, since your curl is at least trying TLS 1.3, it cannot be using OpenSSL below 1.1.1.)
Or, if connecting from Chrome, Edge or IE (but maybe not Firefox) on the host Windows works normally, double-click the padlock and look at the cert chain to see if it leads to GlobalSign Root CA (as the real Google does) or something else (e.g. BlueCoat). If the latter, the interceptor's root cert is installed in your host Windows store, but not in the WSL system. You can export the cert from the host browser to a file and either use it manually with curl --cacert $file, or import it into the WSL system's truststore; how to do that depends on which system you are running in WSL, which you didn't say.
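For a Debian- or Ubuntu-based WSL distribution (which the update-ca-certificates above suggests), the import might look like this; the file names are hypothetical:
# browsers often export DER; convert to PEM first if needed
openssl x509 -inform der -in exported-root.cer -out interceptor-root.crt
# the file must have a .crt extension to be picked up
sudo cp interceptor-root.crt /usr/local/share/ca-certificates/
# rebuilds /etc/ssl/certs/ca-certificates.crt
sudo update-ca-certificates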
Added: the mixture of TLS 1.3 and 1.2 in the logging info is probably because 1.3 uses the same record header version as 1.2 as a transition hack, with an extension that indicates it is really 1.3 only in the two Hello messages, and the callback probably doesn't deal with this.
It turned out certificates were missing; once they were provided and installed, it worked fine.

curl: (60) SSL certificate problem: when uploading behind proxy

I need to do curl uploads from behind a company proxy, and I'm getting the following two types of problems depending on the site I try:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
curl: (60) SSL certificate problem: unable to get local issuer certificate
Here are the details:
Case 1:
. . .
< HTTP/1.1 200 Connection established
< Proxy-agent: CCProxy
<
* Proxy replied 200 to CONNECT request
* CONNECT phase completed!
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CONNECT phase completed!
* CONNECT phase completed!
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
Case 2:
$ curl -vX POST -d "userId=5&title=Hello World&body=Post body." https://jsonplaceholder.typicode.com/posts
Note: Unnecessary use of -X or --request, POST is already inferred.
* Uses proxy env variable https_proxy == 'http://10.xx.xx.xx:808/'
* Trying 10.xx.xx.xx:808...
* TCP_NODELAY set
* Connected to 10.xx.xx.xx port 808 (#0)
* allocate connect buffer!
* Establish HTTP proxy tunnel to jsonplaceholder.typicode.com:443
> CONNECT jsonplaceholder.typicode.com:443 HTTP/1.1
> Host: jsonplaceholder.typicode.com:443
> User-Agent: curl/7.68.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
< Proxy-agent: CCProxy
<
* Proxy replied 200 to CONNECT request
* CONNECT phase completed!
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CONNECT phase completed!
* CONNECT phase completed!
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
The problem is not the CCProxy above; our company uses the Zscaler transparent proxy, which intercepts SSL requests with its own certificate.
Is there any way to fix this?
$ curl --version
curl 7.68.0 (x86_64-pc-linux-gnu) libcurl/7.68.0 OpenSSL/1.1.1g zlib/1.2.11 brotli/1.0.7 libidn2/2.3.0 libpsl/0.21.0 (+libidn2/2.3.0) libssh2/1.8.0 nghttp2/1.40.0 librtmp/2.3
Release-Date: 2020-01-08
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux bullseye/sid
Release: testing
Codename: bullseye
Step 1 in both options will extract the Zscaler certificates.
OPTION 1 (direct curl)
1. Download the certificates (all certificates are included in a single file).
2. Execute the curl command, passing the certificates you want to use.
# 1
openssl s_client -showcerts \
-connect jsonplaceholder.typicode.com:443 </dev/null 2>/dev/null \
| sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > typicode.crt
# 2
curl --cacert typicode.crt -v \
-d "userId=5&title=Hello World&body=Post body." \
https://jsonplaceholder.typicode.com/posts
OPTION 2 (installer script)
If the curl command is executed by an installer you don't control, update your system certificates instead:
1. Extract the certificates from the server (use the FQDN or IP and PORT, e.g. jsonplaceholder.typicode.com:443).
2. Move the XXX.crt certificate to your certificates directory.
3. Update the certificates.
4. Execute the installation script.
# 1
openssl s_client -showcerts \
-connect jsonplaceholder.typicode.com:443 </dev/null 2>/dev/null \
| sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > typicode.crt
# 2
sudo mv typicode.crt /usr/local/share/ca-certificates/
# 3
sudo update-ca-certificates
# 4 execute your installer script
Bonus
In case you need/want to get the Zscaler certificates only, get the IP from: https://ip.zscaler.com
openssl s_client -showcerts -servername server -connect 165.225.216.33:443 </dev/null 2>/dev/null | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > zscaler.crt
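The resulting zscaler.crt can then be used the same way as the file from option 1:
curl --cacert zscaler.crt -v https://jsonplaceholder.typicode.com/posts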
UPDATED (11/19/21):
- Added option 1, for a direct curl with no need to install the certificates.
- Optimized the command for extracting the certificates (creating the file).
- Bonus: getting the Zscaler IP.
Tested on Ubuntu 20 and 18 behind a Zscaler proxy.
(Screenshots in the original answer show the curl output without the certificate and with it.)
References:
How to install certificates for command line
unable to connect to server: x509: certificate signed by unknown authority
The answer is to "add that proxy's certificate to the CA bundle", thanks to Daniel Stenberg's answer. Then I guess I'm supposed to fill in the rest, so here is my attempt at solving the remaining problems/questions:
Q: What is the easiest way to get that Zscaler certificate?
A: From here:
Go to Policy > SSL Inspection. In the Intermediate Root Certificate Authority for SSL Interception section, click Download Zscaler Root Certificate. Navigate to the ZscalerRootCerts.zip file and unzip it.
Q: How to add that certificate to the CA bundle?
A: See How to install company proxy certificate:
You can use curl --cacert <CA certificate> to supply your company CA cert.
Or you can add your company CA cert to /etc/pki/tls/certs/ and run make there to make it available system-wide.
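On the Debian system from the question the bundle is /etc/ssl/certs/ca-certificates.crt, so a per-user sketch (ZscalerRootCertificate.crt stands for whatever file the zip contains) is to append the proxy CA to a private copy of the bundle and point curl at it via its CURL_CA_BUNDLE environment variable:
cat /etc/ssl/certs/ca-certificates.crt ZscalerRootCertificate.crt > ~/ca-bundle-plus-zscaler.crt
export CURL_CA_BUNDLE=~/ca-bundle-plus-zscaler.crt
curl -v https://jsonplaceholder.typicode.com/posts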
This error (SSL certificate problem) means that the CA store curl uses to verify the server's peer did not contain the certificate, and therefore the server couldn't be verified.
If you want curl to work with a transparent proxy that terminates TLS, you must add that proxy's certificate to the CA bundle or completely ignore the certificate check (which I recommend against).
A transparent proxy for TLS will of course make the connection completely unreliable and break its security properties.

Git clone failed with Gitlab and HTTPS (error 503 inside)

I have a Gitlab installation on a Kimsufi server installed from sources.
I use Apache and HTTPS with self-signed certificate.
Almost everything is working fine.
This is the problem :
I can't clone repositories via HTTPS; only SSH works fine.
fatal: unable to access 'https://xxx/xxx/xxx.git/': The requested URL
returned error: 503
I think the problem comes from the Apache configuration (vhost).
Is there a log file somewhere or a specific command I can run to debug this from the client side or the server side?
Thanks for the help.
Edit:
The request result with curl:
xxx@xxx:~/temp$ curl -v https://xxx.xxx.fr/xxx/xxx.git
* Hostname was NOT found in DNS cache
* Trying xxx.xxx.xxx.xxx...
* Connected to xx.xx.xx (xx.xx.xx.xx) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS alert, Server hello (2):
* SSL certificate problem: self signed certificate
* Closing connection 0
curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option
I think I have a certificate issue... or is it a CA issue?
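For the client side, git can surface curl's verbose TLS log, which shows where the request fails (GIT_CURL_VERBOSE is a standard git environment variable):
GIT_CURL_VERBOSE=1 git clone https://xxx/xxx/xxx.git
On the server side, the vhost's Apache ErrorLog (on Debian-family systems typically /var/log/apache2/error.log) should record why the 503 is returned.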

curl and openssl see different issuers

I'm very confused by this, and no doubt it is my misunderstanding or some such, but I'm trying to get my machine to talk to an upstream proxy; I'm using redsocks to transparently redirect to the upstream.
Below we can see curl:
root@Amachine:/# curl -v -k https://bower.herokuapp.com
* Rebuilt URL to: https://bower.herokuapp.com/
* Hostname was NOT found in DNS cache
* Trying 54.235.187.231...
* Connected to bower.herokuapp.com (54.235.187.231) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-SHA
* Server certificate:
* subject: C=US; ST=California; L=San Francisco; O=Heroku, Inc.; CN=*.herokuapp.com
* start date: 2014-01-21 00:00:00 GMT
* expire date: 2017-05-19 12:00:00 GMT
* issuer: CORPORATE PROXY
The issuer appears to be the corporate proxy, breaking all SSL communications.
root@machine:/# openssl s_client -connect bower.herokuapp.com:443
CONNECTED(00000003)
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
0 s:/C=US/ST=California/L=San Francisco/O=Heroku, Inc./CN=*.herokuapp.com
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
What's baffling me is that they have different issuers. Granted, curl seems to hide most of what is going on. I can specify the root CA path and openssl works and gives me an OK, but curl somehow is using a different path.
I'm actually not sure how to debug what on earth is happening in curl. I thought I would get a similar issuer. I may be misunderstanding how s_client works, though; does anyone know what is happening?
You have an SSL interception proxy in your network and curl is using it while openssl does not, or the proxy does not intercept the connections. It is not clear from your description what the case is exactly, but it might be:
- that you are using different machines, and from one the connections get intercepted while on the other they do not;
- that the intercepting proxy will not intercept connections without Server Name Indication (SNI). curl does SNI while openssl, the way you use it, does not; use the -servername argument to retry with SNI, as shown below.
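For example, repeating the host name as the SNI value:
openssl s_client -servername bower.herokuapp.com -connect bower.herokuapp.com:443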
1) You used the -k option to curl, which makes it ignore CA verification, but at least it shows what the problem would be: an MITM SSL proxy.
Presumably you can't bypass it; in that case a better option might be to retrieve the "CORPORATE PROXY" CA itself and make it a trusted CA on your workstation. This is generally not a good idea, as it destroys any effort the CAs made to verify the certificate subject. On the other hand, corporate networks generally make this decision for you anyway.
2) openssl is complaining only because it does not check the CA chain by default. It also seems you're not on the same network and/or use a different set of proxies than with curl. You can learn this by checking the environment for http_proxy or similar:
# printenv|egrep -i '(http|proxy)'
Or, if all else fails, perhaps the curl you're using is hardwired to use a different SOCKS proxy; you can check with strace which IP addresses curl and openssl are connecting to. Look for the connect syscall with:
# strace -f -e connect curl https://www.google.com:443
As you mentioned, openssl needs the -CApath CERTIFICATEDIR option to verify the issuers with the CA certificates specially named in CERTIFICATEDIR. Apart from CERTIFICATEDIR, it actually checks the system certificate directory provided by the distribution as well, so as a shortcut, something as simple as this can usually work:
# openssl s_client -CApath 1 -connect bower.herokuapp.com:443
Here 1 will be checked as a directory for certificates, but if it does not exist, the system directory will be consulted. Other useful options can be found in the manual for s_client:
-servername SNI
Sends a hostname in the initial ClientHello packet so that the server (and the corporate proxy) can better decide which certificate to use for the host.
-CAfile FILE
If you know there's only a single acceptable CA for the connection.
-showcerts
If you want to record and analyse all the certificates in PEM format.
-status
It asks the server to provide the OCSP status of its own certificate via OCSP stapling and openssl will verify if it is valid.
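Putting these together, an explicit probe of the host from this question might look like:
openssl s_client -connect bower.herokuapp.com:443 -servername bower.herokuapp.com -showcerts -status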
In my case I had the environment variable https_proxy defining a proxy, which curl picked up and used while openssl did not. Thus the corporate proxy was serving different issuers for the certificate. After adding the -proxy parameter to the openssl command, both curl and openssl showed the same certificate chains.
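For example (the proxy address is illustrative; s_client's -proxy option requires OpenSSL 1.1.0 or newer):
openssl s_client -proxy 10.0.0.1:8080 -connect bower.herokuapp.com:443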

Nagios check_ssl_cert error: SSL_CERT CRITICAL: Error: verify depth is 6

I am setting up a Nagios/Icinga monitoring system to monitor my environment. I would like to monitor my SSL certs with check_ssl_cert, but it is not working on all sites.
My command:
/usr/lib/nagios/plugins/check_ssl_cert -c 7 -w 28 -H 141.85.37.43 -r /etc/ssl/certs/
returns: SSL_CERT CRITICAL: Error: verify depth is 6
(141.85.37.43 is just an example address, not my own, but it produces the same error.)
If I try
# openssl s_client -connect ftp.myDomain.de:443
CONNECTED(00000003)
140037719324328:error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error:s23_clnt.c:741:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 320 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---
or
# curl https://ftp.myDomain.de:443 -v
* About to connect() to ftp.myDomain.de port 443 (#0)
* Trying 212.xxx.xxx.xxx...
* connected
* Connected to ftp.myDomain.de (212.xxx.xxx.xxx) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS alert, Server hello (2):
* error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error
* Closing connection #0
curl: (35) error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error
I am using CrushFTP on an Ubuntu system called ftp.myDomain.de. I can use it via https://ftp.myDomain.de without any problem.
The cert is installed as a .pem file and was validated by Thawte.
Is there something wrong with my cert?
I think I'm on to something. It is something with my SSL certs: I need to connect with SSL version 3 to get a working result.
Icinga plugins # openssl s_client -connect ftp.myDomain.de:443 -ssl3
I modified check_ssl_cert and added a new parameter --ssl to define the version, just like check_http offers:
http://pastebin.com/f46YQFg3 (needed to post it there; too long for stackoverflow.com)
and I can check it with
Icinga plugins # /usr/lib/nagios/plugins/check_ssl_cert -c 7 -w 28 -H "ftp.myDomain.de" -r "/etc/ssl/certs/" --ssl 3
SSL_CERT OK - X.509 certificate for 'ftp.myDomain.de' from 'Thawte DV SSL CA' valid until Jun 5 23:59:59 2015 GMT (expires in 676 days)|days=676;28;7;;
So my problem is kind of solved, but I need to figure out what the difference is compared with my old certs (which needed no workaround) and whether I need to change something there.
I got in contact with the developer behind check_ssl_cert, and he optimized my solution and implemented it in an updated version:
https://trac.id.ethz.ch/projects/nagios_plugins/wiki/check_ssl_cert
I came across this same problem on a new Nagios box and tried the latest version of check_ssl_cert without success.
In the end the solution was to install expect.
I cannot say for certain, as I do not have all the necessary details, but it would seem that your certificate is fine; it is just that its certification chain is too long for check_ssl_cert to verify.
The error message says "verify depth is 6". This means that the certificate verification chain is more than 6 items long, not that it is necessarily failing.
Around lines 205 and 228 in check_ssl_cert, you see the code:
exec_with_timeout $TIMEOUT "echo 'Q' | $OPENSSL s_client ${CLIENT} ${CLIENTPASS} -connect $HOST:$PORT ${SERVERNAME} -verify 6 ${ROOT_CA} 2> ${ERROR} 1> ${CERT}"
Note the -verify 6 in there, limiting the maximum chain length to test. If you change this to -verify 16 (which might be overkill but should handle your chain), it will most likely work.
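With that change, the line would read:
exec_with_timeout $TIMEOUT "echo 'Q' | $OPENSSL s_client ${CLIENT} ${CLIENTPASS} -connect $HOST:$PORT ${SERVERNAME} -verify 16 ${ROOT_CA} 2> ${ERROR} 1> ${CERT}"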