curl with `-k` and without `-k` - ssl

When I open a URL using curl without -k, my request succeeds and I can see the expected result.
$ curl -vvv https://MYHOSTNAME/wex/archive.info -A SUKU$RANDOM
* Trying 10.38.202.192...
* Connected to MYHOSTNAME (10.38.202.192) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate: *.MYCNAME
* Server certificate: ProdIssuedCA1
* Server certificate: InternalRootCA
> GET /wex/archive.info HTTP/1.1
> Host: MYHOSTNAME
> User-Agent: SUKU19816
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.2
< Date: Thu, 26 Jan 2017 01:08:40 GMT
< Content-Type: text/html;charset=ISO-8859-1
< Content-Length: 19
< Connection: keep-alive
< Set-Cookie: JSESSIONID=1XXXXXXXX3E58093E816FE62D81; Path=/wex/; HttpOnly
< X-WebProxy-Id: 220ffb81872a
<
status=Running
* Connection #0 to host MYHOSTNAME left intact
But when I open the same URL with -k, it fails. This makes no sense to me, since my understanding is that the purpose of -k is only to skip certificate verification.
$ curl -vvv https://MYHOSTNAME/wex/archive.info -A SUKU$RANDOM -k
* Trying 10.38.202.192...
* Connected to MYHOSTNAME (10.38.202.192) port 443 (#0)
* Server aborted the SSL handshake
* Closing connection 0
curl: (35) Server aborted the SSL handshake
Request flow:
SSL termination happens on the HAPROXY machine
HAPROXY forwards the request to nginx

For troubleshooting this kind of problem, the --resolve option can be useful:
curl -k -I --resolve www.example.com:443:192.0.2.1 https://www.example.com/
Provide a custom address for a specific host and port pair. Using
this, you can make the curl request(s) use a specified address and
prevent the otherwise normally resolved address from being used.
Consider it a sort of /etc/hosts alternative provided on the command
line. The port number should be the number used for the specific
protocol the host will be used for. This means you need several
entries if you want to provide addresses for the same host but
different ports.
This is especially relevant if the site you’re trying to fetch from uses SNI: in that case you can use the --resolve option to control the server name that gets used in the TLS Client Hello.
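For the setup in the question, a test along these lines (hostname and IP taken from the output above) pins the resolution so the TLS Client Hello carries the expected server name:
curl -v --resolve MYHOSTNAME:443:10.38.202.192 https://MYHOSTNAME/wex/archive.info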
One troubleshooting step to try: update curl, or compile it yourself from source, and retry. For one thing, some curl builds (e.g., on macOS) supposedly don’t send SNI for -k/--insecure.
If that’s the issue you’ve hit and you can’t replace curl, there’s a workaround that essentially involves creating your own CA, private keys, and CSRs, plus some tweaks to your HAProxy config.
After setting it up, instead of specifying -k/--insecure you use --cacert or --capath:
curl https://example.com/api/endpoint --cacert certs/servers/example.com/chain.pem
curl https://example.com/api/endpoint --capath certs/ca
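As a rough sketch of that workaround (file names and CN values below are illustrative, not from the original setup), you would create a private CA, sign a server certificate with it, concatenate certificate and key for HAProxy, and then point curl at your CA:
# Illustrative only: create a private CA and a server certificate signed by it
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -days 365 -subj "/CN=My Private CA" -out ca.pem
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=example.com" -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -days 365 -out server.pem
# HAProxy expects the certificate and key concatenated into one file
cat server.pem server.key > haproxy.pem
curl https://example.com/api/endpoint --cacert ca.pem
(Modern clients may also require a subjectAltName extension; if so, add one via an OpenSSL extension file when signing the CSR.)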
If the issue you’ve hit is due to SNI, you may also troubleshoot it with a site like https://sni.velox.ch/:
curl --insecure https://sni.velox.ch/
Otherwise, if it’s not SNI: I recall seeing somewhere that -k/--insecure may not work as expected with some proxy configurations. So if you are going through some kind of proxy on the client side and can somehow test directly without the proxy, that would be worth exploring.
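If curl is picking up a proxy from the environment, one quick check is to bypass it explicitly for the request (using the hostname from the question):
curl -v --noproxy '*' https://MYHOSTNAME/wex/archive.info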

Related

Unable to connect curl on HTTPS

I am trying to connect to server B from server A using curl (HTTPS). I have already tried with -k and it doesn't work.
I have looked into several posts and spotted a blog post at this link, but the issue still exists.
When I do a curl from server A, I get the following error:
* Rebuilt URL to: https://x.x.x.x:8443/
* Hostname was NOT found in DNS cache
* Trying x.x.x.x...
* Connected to x.x.x.x (x.x.x.x) port 8443 (#0)
* successfully set certificate verify locations:
* CAfile: /tmp/cert_test/certRepo
CApath: /etc/ssl/certs/
* SSLv3, TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
I went to server B (https://x.x.x.x:8443/) in a browser and downloaded the root, intermediate, and client certificates. As suggested in the blog post, I created a new folder, combined all the public certs into one bundle, and tried to execute the curl command:
curl -v --cacert /tmp/cert_test/certRepo https://x.x.x.x:8443
I am still getting GET_SERVER_HELLO:unknown protocol.
Any thoughts?
Curl version from the Client machine:
curl 7.37.0 (x86_64-suse-linux-gnu)
libcurl/7.37.0 OpenSSL/0.9.8j
zlib/1.2.7
libidn/1.10
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz
I am very sure the server is using TLSv1.2.
You did not originally post your curl/libssl version, but my best guess is that you're using an ancient build of an SSL/TLS library, and/or an ancient version of curl that does not support whatever version of SSL/TLS that server is using. Update your libssl and curl and try again, and post the output of curl --version. (Indeed, the OpenSSL/0.9.8j build shown above predates TLS 1.2 support, which would explain the failure against a server that only accepts TLSv1.2.)
PS: if you're on Linux, you can get rough curl+OpenSSL compile instructions here.
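As a rough sketch of such a build (the version number is illustrative, and older curl releases use --with-ssl instead of --with-openssl):
wget https://curl.se/download/curl-8.5.0.tar.gz
tar xzf curl-8.5.0.tar.gz && cd curl-8.5.0
./configure --with-openssl
make && sudo make install
curl --version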

curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number

When I try to connect to any server (e.g. google.com) using curl (or libcurl) I get the error message:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
Verbose output:
$ curl www.google.com --verbose
* Rebuilt URL to: www.google.com/
* Uses proxy env variable no_proxy == 'localhost,127.0.0.1,localaddress,.localdomain.com'
* Uses proxy env variable http_proxy == 'https://proxy.in.tum.de:8080'
* Trying 131.159.0.2...
* TCP_NODELAY set
* Connected to proxy.in.tum.de (131.159.0.2) port 8080 (#0)
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number'
For some reason curl seems to use TLSv1.3 even if I force TLSv1.2 with the --tlsv1.2 option (it will still print TLSv1.3 (OUT), ...).
I am using the newest versions of both curl and OpenSSL:
$ curl -V
curl 7.61.0-DEV (x86_64-pc-linux-gnu) libcurl/7.61.0-DEV OpenSSL/1.1.1 zlib/1.2.8
Release-Date: [unreleased]
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP UnixSockets HTTPS-proxy
I think this is a problem related to my installation of the programs.
Can somebody explain to me what this error message means?
* Uses proxy env variable http_proxy == 'https://proxy.in.tum.de:8080'
^^^^^
The https:// is wrong; it should be http://. The proxy itself should be accessed over HTTP, not HTTPS, even though the target URL is HTTPS. The proxy will nevertheless properly handle the HTTPS connection and keep the end-to-end encryption intact; see the HTTP CONNECT method for details of how this is done.
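So the fix here is just to correct the scheme in the environment variable (using the proxy host from the question):
export http_proxy=http://proxy.in.tum.de:8080
export https_proxy=http://proxy.in.tum.de:8080
curl -v https://www.google.com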
If anyone is getting this error using Nginx, try adding the following to your server config:
server {
listen 443 ssl;
...
}
The issue stems from Nginx serving an HTTP server to a client expecting HTTPS on whatever port you're listening on. When you specify ssl in the listen directive, you clear this up on the server side.
This is a telltale error that you are serving HTTP from the HTTPS port.
You can easily test with telnet
telnet FQDN 443
GET / HTTP/1.0
[hit return twice]
and if you see a regular HTTP document here (not some kind of error), you know that your configuration is incorrect and the responding server is not SSL-encrypting the response.
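An openssl s_client probe gives the same information and also shows the handshake details; if the port is serving plain HTTP, the handshake fails immediately:
openssl s_client -connect FQDN:443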
Simple answer
If you are behind a proxy server, set the proxy for curl; when curl cannot reach the server directly, it shows 'wrong version number'.
Set the proxy by opening ~/.curlrc in a text editor (e.g. subl ~/.curlrc), then add the following line to the file:
proxy = proxyserver:proxyport
For example: proxy = 10.8.0.1:8080
If you are not behind a proxy, make sure that the curlrc file does not contain any proxy settings.
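You can also pass the proxy for a single invocation with -x/--proxy instead of editing ~/.curlrc (address reused from the example above):
curl -x http://10.8.0.1:8080 -v https://example.com/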
Also check your /etc/hosts file; I wasted 2 hours on this. If you have a URL rerouted to 127.0.0.1 or any other loopback address, the SSL handshake will fail.
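A quick way to check for such an entry (hostname illustrative):
grep yourhost.example.com /etc/hosts
getent hosts yourhost.example.com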
In my case the cause of this error was that my web server was not configured to listen to IPv6 on SSL port 443. After enabling it the error disappeared.
Here's how you do it for Apache:
<VirtualHost ip.v4.address:443 [ip:v6:address]:443>
...
</VirtualHost>
And for nginx:
listen 443 ssl http2;
listen [::]:443 ssl http2;
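To confirm the fix, you can force curl to connect over IPv6 (hostname illustrative):
curl -6 -v https://example.com/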
Thanks to @bret-weinraub, I found that something was weird about the server's reply. After a bit of investigation, it turned out that I had a static IP for the target domain in my /etc/hosts file, and since they had changed their IP address I was not reaching the correct server.
More simply, in one line (note that curl reads the https_proxy/http_proxy environment variables, set here just for the one command):
https_proxy=192.168.2.1:8080 curl -v https://example.com
xxxxxxxxx-ASUS:~$ https_proxy=192.168.2.1:8080 curl -v https://google.com | head -c 15
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying 172.217.163.46:443...
* TCP_NODELAY set
* Connected to google.com (172.217.163.46) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
Another possible cause of this problem is that you have not enabled the virtual host's configuration file in Apache (or you don't have that virtual host at all) and the default virtual host in Apache is only configured for non-SSL connections, i.e. there is no default virtual host that can talk SSL. In this case, because Apache is listening on port 443, the request for the virtual host that doesn't exist arrives at the default virtual host, but that virtual host doesn't speak SSL.
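A minimal sketch of a catch-all SSL virtual host that avoids this (assuming mod_ssl is enabled; the certificate paths are illustrative):
<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile /path/to/server.crt
    SSLCertificateKeyFile /path/to/server.key
</VirtualHost>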
In the case of using the MySQL CLI to connect to an external MySQL DB (depending on the version of MySQL), you can pass --ssl-mode=disabled like:
$ mysql --ssl-mode=disabled -h yourhost.tld -p
Or simply in your client config, for example in /etc/my.cnf.d/client.cnf:
[client]
ssl-mode=DISABLED
This is for development; security considerations like these can sometimes be forfeited in a closed, private dev environment.

Can an insecure docker registry be given a CA signed certificate so that clients automatically trust it?

Currently, I have set up a registry in the following manner:
docker run -d \
-p 10.0.1.4:443:5000 \
--name registry \
-v `pwd`/certs/:/certs \
-v `pwd`/registry:/var/lib/registry \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certificate.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/private.key \
registry:latest
Using Docker version 17.06.2-ce, build cec0b72
I obtained my certificate.crt, private.key, and ca_bundle.crt from Let's Encrypt, and I have been able to establish HTTPS connections using these certs on an nginx server, without having to explicitly trust the certificates on the client machine/browser.
Is it possible to setup a user experience with a docker registry similar to that of a CA certified website being accessed via https, where the browser/machine trusts the root CA and those along the chain, including my certificates?
Note:
I can of course specify the certificate in the clients' Docker configuration as described in this tutorial: https://docs.docker.com/registry/insecure/#use-self-signed-certificates. However, this is not an adequate solution for my needs.
Output of curl -v https://docks.behar.cloud/v2/:
* Trying 10.0.1.4...
* TCP_NODELAY set
* Connected to docks.behar.cloud (10.0.1.4) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: docks.behar.cloud
* Server certificate: Let's Encrypt Authority X3
* Server certificate: DST Root CA X3
> GET /v2/ HTTP/1.1
> Host: docks.behar.cloud
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 2
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Sun, 10 Sep 2017 23:05:01 GMT
<
* Connection #0 to host docks.behar.cloud left intact
Short answer: Yes.
My issue was caused by my OS not having built-in trust of the root certificates by which my SSL certificate was signed. This is likely due to the age of my OS. See the answer from Matt for more information.
Docker will normally use the OS-provided CA bundle, so certificates signed by trusted roots should work without extra config.
Let's Encrypt certificates are cross-signed by an IdenTrust root certificate (DST Root CA X3), so most CA bundles should already trust them. The Let's Encrypt root cert (ISRG Root X1) is also distributed but will not be as widespread, since it is more recent.
Docker 1.13+ will use the host system's CA bundle to verify certificates. Prior to 1.13 this may not happen if you have installed a custom root cert. So if curl works without any TLS warning, then docker commands should also work.
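For example, on a Debian/Ubuntu host you could add a root certificate to the system bundle so that both curl and Docker 1.13+ trust it (file name illustrative):
sudo cp myrootca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
sudo systemctl restart docker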
To have DTR recognize the certificates, you need to edit the configuration file so that your certs are specified correctly. DTR accepts, and has special parameters for, Let's Encrypt certs; it also has specific requirements for them. You will need to write a configuration file and mount the appropriate directories, and then there should be no further issues with insecure-registry errors and unrecognized certs.
...
http:
  addr: localhost:5000
  prefix: /my/nested/registry/
  host: https://myregistryaddress.org:5000
  secret: asecretforlocaldevelopment
  relativeurls: false
  tls:
    certificate: /path/to/x509/public
    key: /path/to/x509/private
    clientcas:
      - /path/to/ca.pem
      - /path/to/another/ca.pem
    letsencrypt:
      cachefile: /path/to/cache-file
      email: emailused@letsencrypt.com
...

1and1 HTTPS redirect does not work but HTTP does

I have a web app running on Heroku and domain managed by 1und1 (German version of domain registrar 1and1). To make the app available via "example.com" I did the following:
Created www.example.com subdomain in 1und1.
Attached it to www.example.com.herokudns.com as described in Heroku's guides (CNAME www.example.com.herokudns.com).
Ordered SSL certs from 1und1 and used them to setup HTTPS on Heroku side.
Set up an HTTP redirect example.com -> https://www.example.com to make the top-level domain point to Heroku.
This all worked fine until I tried to reach the app at https://example.com: Chrome shows me a "This site can’t provide a secure connection" page with ERR_SSL_PROTOCOL_ERROR.
cURL output:
#1.
curl https://example.com
curl: (35) Server aborted the SSL handshake
#2.
curl -vs example.de
* Rebuilt URL to: example.de/
* Trying <example.de 1und1 IP address here>...
* TCP_NODELAY set
* Connected to example.de (<example.de 1und1 IP address here>) port 80 (#0)
> GET / HTTP/1.1
> Host: example.de
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 302 Found
< Content-Type: text/html; charset=iso-8859-1
< Content-Length: 203
< Connection: keep-alive
< Keep-Alive: timeout=15
< Date: Tue, 11 Jul 2017 14:19:30 GMT
< Server: Apache
< Location: http://www.example.de/
...
#3.
curl -vs https://example.de
* Rebuilt URL to: https://example.de/
* Trying <example.de 1und1 IP address here>...
* TCP_NODELAY set
* Connected to example.de (<example.de 1und1 IP address here>) port 443 (#0)
* Unknown SSL protocol error in connection to example.de:-9838
* Curl_http_done: called premature == 1
* Closing connection 0
So, the question is: how can I set up HTTPS redirect with 1und1 and Heroku?
Answering my own question.
After spending some time googling the issue, I found this article: https://ubermotif.com/1and1-nightmare-bad-registrar-can-ruin-day. They faced the same issue. I decided to call 1und1 support (they only offer calls, no chat or email tickets). They told me it is their issue: the GUI screwed up, and they will enter the DNS settings into their DB by hand.
The issue is not solved yet; I'm waiting for the DNS changes to be applied and propagated.
This type of error comes from the server or website side. You should try the following tips to fix it:
Disable the QUIC protocol.
Remove or modify hosts-file entries added by bad programs, or the entry for the website you are trying to reach.
Clear the SSL state: Start Menu > Control Panel > Network and Internet > Network and Sharing Center, click on Internet Options at the bottom left, and when the Internet Properties dialog box opens, go to the Content tab and select the 'Clear SSL state' option.
Check that the system time matches the current time.
Check the firewall to see whether your website's IP address has been blocked, and if it is blocked, unblock it.

HAProxy loses client certificate after close

I set up HAProxy 1.5.14 to use client certificates. This works fine for a single request, but HAProxy seems to lose the client certificate after an HTTP close.
From the haproxy.conf:
frontend localhost_https
bind *:8443 ssl crt /etc/private/server.pem.key_and_cert no-sslv3 ca-file /etc/certs/client_ca.pem verify required
option forceclose
default_backend my_http
I'm using forceclose to reliably trigger an HTTP close. Calling curl now shows that the first request authenticates correctly, but the second request does not.
$ (curl --cert /tmp/client.pem https://localhost:8443/ https://localhost:8443/ -vk > /dev/null) 2>&1|grep HTTP
> GET / HTTP/1.1
< HTTP/1.1 200 OK
> GET / HTTP/1.1
< HTTP/1.0 401 Unauthorized
Am I missing something? Why does HAProxy not ask the client for its certificate again on the second request?