I'm playing around with installing Ubuntu Server on VirtualBox and learning my way around Linux. At one point I had the VM working and was able to run curl, wget, and apt-get, and to install Docker through my company's proxy. I decided to rebuild it, and now I've hit a strange issue: wget works with https, but curl does not.
Curl is coming back with the following error for all https sites:
curl -v https://<url>
* Trying <IPAddress>...
* Connected to <proxyserver> port <port> (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca_certificate.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
Any idea what the issue is?
Found that the issue is specific to Ubuntu Server 17.10.1. After installing Ubuntu Server 16.04.4 LTS instead, curl works through the proxy again.
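For anyone hitting the same "unknown protocol" error behind a proxy, one hedged explanation (not confirmed in this thread): curl 7.52 and later honour an https:// scheme in the https_proxy variable and will try to speak TLS to the proxy itself, whereas the older curl in 16.04 and wget simply issue a plain-HTTP CONNECT. If the proxy only speaks plain HTTP, the reply to curl's Client hello is not TLS, which produces exactly this error. It's worth checking that the proxy variables use an http:// scheme (proxy.example.com:3128 is a placeholder here):

# note the http:// scheme in both variables, even for https traffic;
# most corporate proxies expect a plain-HTTP CONNECT, not TLS-to-proxy
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
curl -v https://example.com/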
I've had GitLab running with SSL on an online server for several years now. Due to a server problem, the provider restarted the machine. Since then, I cannot connect to my GitLab anymore. Does anybody have an idea how to solve the problem? Thanks.
root@git:~# curl -v "https://gitlabAdress.com"
* Rebuilt URL to: https://gitlabAdress.com/
* Trying xxx.xxx.xxx.xxx...
* TCP_NODELAY set
* Connected to gitlabAdress.com (xxx.xxx.xxx.xxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to gitlabAdress.com:443
* stopped the pause stream!
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to gitlabAdress.com:443
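SSL_ERROR_SYSCALL during the Client hello generally means the TCP connection was closed or reset before the server sent any TLS data back, e.g. by a firewall, or because whatever now listens on port 443 after the restart is not the TLS endpoint it used to be. A quick hedged check (hostname taken from the question) is to see whether anything answers with a certificate at all:

openssl s_client -connect gitlabAdress.com:443 -servername gitlabAdress.com </dev/null

If that also fails immediately, the problem is below the TLS layer (firewall, NAT, or the service not running), not certificates.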
I have been given the following files for setting up TLS for a website running on the domain example.com:
example.com.key (containing the private key)
example.com.cer (containing one certificate)
intermediate_example.com.crt (containing two certificates)
example.com.csr (containing one certificate request)
I'm using Traefik to host the site, and I've configured Traefik like so in the dynamic.yml config:
tls:
  certificates:
    - certFile: "certs/example.com.cer"
      keyFile: "certs/example.com.key"
      stores:
        - default
Doing so resulted in a website I could access via Chrome and Firefox, but whenever I try a request with curl (or any program using its libraries), I get the following error:
curl -v https://test.example.com/
* Trying xxx.xxx.xxx.xxx:443...
* Connected to test.example.com (xxx.xxx.xxx.xxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
Why is this working in browsers, but not via curl?
I have ensured that the ca-certificates package is installed on the host, and even when I download the most recent CA bundle and use curl --cacert cacert.pem …, it does not work.
What am I missing here?
The reason it does not work is that the intermediate certificate is missing from what Traefik sends to the client.
Browsers can work around this using the Authority Information Access (AIA) mechanism: they fetch the missing intermediate out-of-band (even macOS does this), thereby allowing you to access the site normally. Some background is given here.
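You can see what the server actually sends with openssl (hostname as in the question); with the broken config, the Certificate chain section shows only the leaf certificate, with no intermediate after it:

openssl s_client -connect test.example.com:443 -servername test.example.com -showcerts </dev/null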
This is obviously a configuration error on the server. To fix it, at least for Traefik, you can concatenate everything into one .pem file. You don't need to add the CSR file here:
cat example.com.key example.com.cer intermediate_example.com.crt > cert.pem
Then, specify the same file twice in Traefik's config:
tls:
  certificates:
    - certFile: "certs/cert.pem"
      keyFile: "certs/cert.pem"
      stores:
        - default
This is also mentioned in this discussion on the Traefik community board.
Running this command inside WSL 2 on Windows delivers the output below.
Can anyone explain why there are mixed TLSv1.3 and TLSv1.2 IN and OUT lines, and is this a potential reason why it's unable to get the local issuer certificate?
The Windows host OS is Enterprise.
I have installed ca-certificates and run update-ca-certificates.
curl -v https://google.com:443/
* Trying 172.217.169.78...
* TCP_NODELAY set
* Connected to google.com (172.217.169.78) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Are you using a network connection subject to monitoring or 'protection', such as antivirus, or one provided by a business, organization, or school? If so, you are probably getting a fake cert/chain from the interceptor.
Try openssl s_client -connect google.com:443 and look at the s: and i: lines under Certificate chain. (Many hosts today require SNI to respond correctly, and if your OpenSSL is below 1.1.1 you need to add -servername x to provide SNI; google is not one of them, though, and anyway, since your curl is at least trying TLS 1.3, its OpenSSL cannot be below 1.1.1.)
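For example, a one-liner along those lines that filters out just the subject and issuer lines (the exact formatting of those lines varies with the OpenSSL version):

openssl s_client -connect google.com:443 -servername google.com </dev/null 2>/dev/null | grep -E ' [si]:'

On an unintercepted connection the issuer lines should lead to the Google/GlobalSign roots described below; anything corporate-looking there points to an interceptor.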
Or, if connecting from Chrome, Edge or IE (but maybe not Firefox) on the host Windows works normally, double-click the padlock and look at the cert chain to see if it leads to GlobalSign Root CA (as the real Google does) or something else (e.g. BlueCoat); if the latter, the interceptor's root cert is installed in your host Windows store, but not in the WSL system. You can export the cert from the host browser into a file, and either use it manually with curl --cacert $file, or import it into the WSL system's truststore; how to do that depends on which system you are running in WSL, which you didn't say.
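For the common case of a Debian- or Ubuntu-based WSL distribution (an assumption; the question doesn't say), importing an exported interceptor root would look roughly like this:

# corp-root.crt is a hypothetical name for the root exported from the
# Windows cert store, converted to PEM; the .crt extension is required
sudo cp corp-root.crt /usr/local/share/ca-certificates/corp-root.crt
sudo update-ca-certificates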
Added: the mixture of TLS 1.3 and 1.2 in the logging info is probably because, as a transition hack, TLS 1.3 uses the same record-header version as 1.2 and indicates that it is really 1.3 only via an extension in the two Hello messages; the logging callback probably doesn't deal with this.
It turns out there were missing certificates; once they were provided and installed, it worked fine.
I'm using git-ftp to deploy some sites, and with one server I can't establish a connection through TLS.
curl -vv --insecure ftps://linux12.unixserver.org:21
* Rebuilt URL to: ftps://linux12.unixserver.org:21/
* Trying 212.63.145.118...
* TCP_NODELAY set
* Connected to linux12.unixserver.org (212.63.145.118) port 21 (#0)
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
I already found several other questions, but my issue doesn't match:
I'm not using a proxy.
Even --insecure fails, so it can't be a cert trust issue.
I tried --cacert as well; it doesn't work.
--tls-max 1.2 changes the negotiated version to 1.2, but it doesn't change anything.
Some sources state that this error also occurs when the server doesn't deliver a cert at all.
openssl s_client -connect linux12.unixserver.org:21 -starttls ftp
Delivers a certificate, so that seems to be alright.
I can successfully connect by means of Nautilus, but it warns me about the certificate, saying the issuer is unknown.
Thanks very much for any hint on what else to try.
Actually, two issues were involved in this case.
1) ftps is the wrong protocol for servers that only support explicit TLS.
The right protocol would be ftpes; since curl has no ftpes:// scheme, use plain ftp:// and add --ssl-reqd to require TLS, or just --ssl to attempt it, as shown below.
In the context of git-ftp this works even though curl lacks an ftpes scheme.
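A minimal invocation along those lines, using the same host as above:

curl -v --ssl-reqd ftp://linux12.unixserver.org:21/

--ssl-reqd makes curl connect in plain FTP and then upgrade the control connection with AUTH TLS, which is what an explicit-TLS-only server expects.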
2) The server didn't deliver a valid certificate chain, so the certificate could not be validated.
This is currently an FTP certificate bug in Plesk.
The solution is to retrieve the certificate chain manually and provide it by means of --cacert <file>. If the certificate is self-signed, extract the public key and use --pinnedpubkey <file> instead.
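A rough sketch of that retrieval (host from the question; the extra s_client text around the PEM blocks is normally tolerated by --cacert, but you can also trim the file down to just the BEGIN/END CERTIFICATE blocks):

openssl s_client -connect linux12.unixserver.org:21 -starttls ftp -showcerts </dev/null > chain.pem
curl -v --ssl-reqd --cacert chain.pem ftp://linux12.unixserver.org:21/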
Thanks very much to Daniel Stenberg for the right hints.
When I send a request to https://DOMAIN:443/path, it works correctly in every web browser I've tried. But with curl (and wget) I get an error. I already recompiled OpenSSL and curl (latest versions), with no change.
curl -vv https://DOMAIN:443/path
Output
* TCP_NODELAY set
* Connected to DOMAIN (IPADDRESS) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to DOMAIN:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to DOMAIN:443
Curl version
curl 7.61.0 (x86_64-pc-linux-gnu) libcurl/7.61.0 OpenSSL/1.1.1 zlib/1.2.11 libidn2/2.0.5 libpsl/0.20.2 (+libidn2/2.0.4) nghttp2/1.32.1 librtmp/2.3
Release-Date: 2018-07-11
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL
Thanks for your help.
The error I was referring to can happen when a firewall is blocking curl and wget connections. This theory is supported by the following facts:
I received the same error across different operating systems, configurations, and IP addresses.
Browser-based requests were successful.
Alternatives to curl and wget worked perfectly (I used aria2 and kurly).
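Since the failure happens during the Client hello, before any HTTP data is sent, a hedged way to probe such a block is to try a different TLS handshake against the same endpoint (DOMAIN is the placeholder from the question):

openssl s_client -connect DOMAIN:443 -servername DOMAIN </dev/null

If s_client, or clients built on other TLS stacks (as aria2 and kurly are), get through while curl does not, the middlebox is most likely matching on the handshake itself rather than on anything in the HTTP request. Note that s_client also uses OpenSSL, but its hello still differs from curl's in details such as ALPN.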
So, problem solved for me. I hope this answer can help anyone facing the same problem.