I have created a Kubernetes TLS secret from a certificate and key. When I try to access my services I get an error saying "Server can't provide a secure connection". When accessing through curl it shows the following error.
I have tried everything I could find on the internet, and when I describe my ingress it shows that the secret has been added.
* About to connect() to ***.***.com port 443 (#0)
* Trying IP...
* Connected to ***.***.com (IP) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* NSS error -12263 (SSL_ERROR_RX_RECORD_TOO_LONG)
* SSL received a record that exceeded the maximum permissible length.
* Closing connection 0
curl: (35) SSL received a record that exceeded the maximum permissible length.
This looks to be an issue with the wildcard DNS name; the one below doesn't appear to be correct.
***.***.com
Regenerate the certificate for a DNS name like *.<application-domain>.com.
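As a rough sketch (assuming a self-signed certificate for testing and an ingress secret named my-tls-secret; both names are placeholders, and -addext needs OpenSSL 1.1.1 or newer):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=*.example.com" \
  -addext "subjectAltName=DNS:*.example.com,DNS:example.com"

# recreate the TLS secret referenced by the ingress
kubectl delete secret my-tls-secret
kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key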
I am trying to connect to server B from server A using curl (https). I have already tried with -k and it doesn't work.
I have looked into several posts and found a blog post at this link, but the issue still exists.
When I do a curl from server A, I am getting the following error:
* Rebuilt URL to: https://x.x.x.x:8443/
* Hostname was NOT found in DNS cache
* Trying x.x.x.x...
* Connected to x.x.x.x (x.x.x.x) port 8443 (#0)
* successfully set certificate verify locations:
* CAfile: /tmp/cert_test/certRepo
CApath: /etc/ssl/certs/
* SSLv3, TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
I went to server B (https://x.x.x.x:8443/) from the browser and downloaded the root, intermediate and client certificates. As suggested in the blog, I created a new folder, combined all the public certs into a single bundle file there, and tried to execute the curl command
curl -v --cacert /tmp/cert_test/certRepo https://x.x.x.x:8443
I am getting GET_SERVER_HELLO:unknown protocol
Any thoughts?
Curl version from the Client machine:
curl 7.37.0 (x86_64-suse-linux-gnu)
libcurl/7.37.0 OpenSSL/0.9.8j
zlib/1.2.7
libidn/1.10
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet
tftp
Features: GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz
I am very sure the server is using TLSv1.2.
You did not post your curl/libssl version, but my best guess is that you're using an ancient build of an SSL/TLS library, and/or an ancient version of curl which does not support whatever version of SSL/TLS that server is using. Update your libssl and curl and try again. Also post the output of curl --version.
PS, if you're on Linux, you can get rough curl+openssl compile instructions here.
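For example, a quick check from the client (the host and port are the placeholders from the question; the -tls1_2 option only exists in OpenSSL 1.0.1 or newer, which is the point of the check):

curl --version
# does the local OpenSSL even have a TLS 1.2 client?
openssl s_client -connect x.x.x.x:8443 -tls1_2 < /dev/null

If the local OpenSSL rejects -tls1_2 as an unknown option, it predates TLS 1.2 support and cannot talk to a TLS 1.2-only server.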
When I try to connect to any server (e.g. google.com) using curl (or libcurl) I get the error message:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
Verbose output:
$ curl www.google.com --verbose
* Rebuilt URL to: www.google.com/
* Uses proxy env variable no_proxy == 'localhost,127.0.0.1,localaddress,.localdomain.com'
* Uses proxy env variable http_proxy == 'https://proxy.in.tum.de:8080'
* Trying 131.159.0.2...
* TCP_NODELAY set
* Connected to proxy.in.tum.de (131.159.0.2) port 8080 (#0)
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
For some reason curl seems to use TLSv1.3 even if I force it to use TLSv1.2 with the option --tlsv1.2 (it will still print "TLSv1.3 (OUT), ...").
I am using the newest version of both curl and OpenSSL:
$ curl -V
curl 7.61.0-DEV (x86_64-pc-linux-gnu) libcurl/7.61.0-DEV OpenSSL/1.1.1 zlib/1.2.8
Release-Date: [unreleased]
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP UnixSockets HTTPS-proxy
I think this is a problem related to my installation of the programs.
Can somebody explain to me what this error message means?
* Uses proxy env variable http_proxy == 'https://proxy.in.tum.de:8080'
^^^^^
The https:// is wrong, it should be http://. The proxy itself should be accessed by HTTP and not HTTPS even though the target URL is HTTPS. The proxy will nevertheless properly handle HTTPS connections and keep the end-to-end encryption. See the HTTP CONNECT method for details on how this is done.
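A minimal sketch of the fix, reusing the proxy host from the question's output:

export http_proxy=http://proxy.in.tum.de:8080
export https_proxy=http://proxy.in.tum.de:8080
curl -v https://www.google.com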
If anyone is getting this error using Nginx, try adding the following to your server config:
server {
listen 443 ssl;
...
}
The issue stems from Nginx serving plain HTTP to a client expecting HTTPS on whatever port you're listening on. When you specify ssl in the listen directive, you clear this up on the server side.
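A slightly fuller sketch (the paths and server name are placeholders; listen 443 ssl also needs certificate directives before Nginx will actually serve TLS):

server {
    listen 443 ssl;
    server_name example.com;
    # placeholder paths -- point these at your real certificate and key
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ...
}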
This is a telltale error that you are serving HTTP from the HTTPS port.
You can easily test with telnet
telnet FQDN 443
GET / HTTP/1.0
[hit return twice]
and if you see a regular HTTP document here [not some kind of error], you know that your configuration is incorrect and the responding server is not SSL-encrypting the response.
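An equivalent check with openssl s_client (assuming OpenSSL is installed on the client): if the port really does speak TLS you will see a certificate chain, otherwise a handshake error much like curl's:

openssl s_client -connect FQDN:443 -servername FQDN < /dev/null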
Simple answer
If you are behind a proxy server, set the proxy for curl. curl is not able to connect to the server, so it reports "wrong version number".
Set the proxy by opening ~/.curlrc with subl or any other text editor, then add the following line to the file:
proxy = proxyserver:proxyport
For example: proxy = 10.8.0.1:8080
If you are not behind a proxy, make sure that the .curlrc file does not contain any proxy settings.
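For example (a quick check; the file may not exist at all, and --noproxy lets you bypass any configured proxy for a single request):

grep -i proxy ~/.curlrc
curl -v --noproxy '*' https://example.com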
Also check your /etc/hosts file. I wasted 2 hours on this. If you have a URL rerouted to 127.0.0.1 or any other loopback, this will fail the SSL handshake.
In my case the cause of this error was that my web server was not configured to listen to IPv6 on SSL port 443. After enabling it the error disappeared.
Here's how you do it for Apache:
<VirtualHost ipv4.address:443 [ipv6:address]:443>
...
</VirtualHost>
And for nginx:
listen 443 ssl http2;
listen [::]:443 ssl http2;
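To confirm the server is actually listening on both stacks, a rough check with ss (assuming an iproute2-based Linux; the exact output format varies by distribution):

ss -tln | grep ':443'
# expect one line for 0.0.0.0:443 (or your IPv4 address) and one for [::]:443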
Thanks to @bret-weinraub,
I found that something was weird about the server's reply. After a bit of investigation, it turned out that I had a static IP in the /etc/hosts file for the target domain, and as they had changed their IP address I was not getting to the correct server.
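A quick way to spot a stale override (example.com stands in for the real domain):

# what libc/curl will resolve, including /etc/hosts entries
getent hosts example.com
# what public DNS says, bypassing /etc/hosts
dig +short example.com @8.8.8.8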
More simply, in one line:
proxy=192.168.2.1:8080; curl -v example.com
e.g.
$ proxy=192.168.2.1:8080; curl -v example.com
xxxxxxxxx-ASUS:~$ proxy=192.168.2.1:8080; curl -v https://google.com | head -c 15
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying 172.217.163.46:443...
* TCP_NODELAY set
* Connected to google.com (172.217.163.46) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
Another possible cause of this problem is if you have not enabled the virtual host's configuration file in Apache (or if you don't have that virtual host at all) and the default virtual host in Apache is only configured for non-SSL connections -- i.e. there's no default virtual host which can talk SSL. In this case, because Apache is listening on port 443, the request for the virtual host that doesn't exist will arrive at the default virtual host -- but that virtual host doesn't speak SSL.
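A minimal sketch of a catch-all SSL virtual host for that case (the server name and paths are placeholders, not from the original setup):

<VirtualHost _default_:443>
    ServerName fallback.example.com
    SSLEngine on
    # placeholder paths -- use your real certificate and key
    SSLCertificateFile    /etc/ssl/certs/fallback.crt
    SSLCertificateKeyFile /etc/ssl/private/fallback.key
</VirtualHost>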
In the case of using the MySQL CLI to connect to an external MySQL DB, depending on the version of MySQL, you can pass --ssl-mode=disabled like:
$ mysql --ssl-mode=disabled -h yourhost.tld -p
Or simply in your client config, for example in /etc/my.cnf.d/client.cnf:
[client]
ssl-mode=DISABLED
This is for dev only; security considerations like this can be forfeited in certain situations in a closed, private dev environment.
I use Squid 3.5 with its sslbump feature for HTTPS traffic filtering. I generated my private key and cert files with openssl. However, the browser shows a warning when I open HTTPS websites that the certificate was issued by an unknown authority. I created SSL certificates with Comodo but I still got the same warning message.
Is there a way to remove this warning?
# Squid normally listens to port 3128
http_port 3128 ssl-bump cert=/var/tmp/example.com.cert key=/var/tmp/example.com.private
# Squid listen Port
cert=/var/tmp/example.com.cert
# SSL Bump Config
always_direct allow all
ssl_bump server-first all
url_rewrite_program /usr/bin/sh /var/tmp/middle_squid_wrapper.sh start -C /var/tmp/middle_squid_config.rb
# required to fix HTTPS sites (if SslBump is enabled)
acl fix_ssl_rewrite method GET
acl fix_ssl_rewrite method POST
url_rewrite_access allow fix_ssl_rewrite
url_rewrite_access deny all
You don't say what client OS you are using, but it sounds very much like you didn't import your squid certificate to the correct certificate store on the client.
When you install the certificate on a Windows client it should be imported into the 'Trusted Root Certificate Authorities' -> 'Certificates' folder.
The client should then trust the certificate.
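For example (the certificate file name is a placeholder), on a Windows client from an elevated prompt:

certutil -addstore -f Root squid-ca.crt

On a Debian/Ubuntu client, the rough equivalent is:

sudo cp squid-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates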
I keep getting an SSL certificate error from the Google Webmaster tool, like the one below.
Dear Webmaster, The host name of your site, https://myapp.com/, does not match any of the "Subject Names" in your SSL certificate, which were:
*.herokuapp.com
herokuapp.com
This will cause many web browsers to block users from accessing your site, or to display a security warning message when your site is accessed. To correct this problem, please get a new SSL certificate by a Certificate Authority (CA) with a "Subject Name" or "Subject Alternative DNS Names" that matches your host name. Thanks, The Google Web Crawling Team
I set up SSL for my Heroku app by following the instructions in the Heroku Dev Center.
https://devcenter.heroku.com/articles/ssl-certificate
https://devcenter.heroku.com/articles/ssl-endpoint
I am also using rack_rewrite for a 301 redirect from the naked domain to the www subdomain.
Everything seems to be fine from the browser; when I access the naked domain, it redirects to https://www.myapp.com without any SSL error.
The output from heroku is like below:
heroku certs --remote production
Endpoint Common Name(s) Expires Trusted
---------------------- ---------------------------------- -------------------- -------
XXXXXXXX.herokussl.com www.myapp.com, myapp.com 2013-08-05 00:20 PHT True
heroku certs:info --remote production
Fetching information on SSL endpoint XXXXXXX.herokussl.com... done
Certificate details:
subject: /serialNumber=XXXXXXXXXX www.rapidssl.com/resources/cps (c)12/OU=Domain Control Validated - RapidSSL(R)/CN=www.myapp.com
start date: (some date)
expire date: (some date)
common name(s): www.myapp.com, myapp.com
issuer: /serialNumber=XXXXXXXXXXX www.rapidssl.com/resources/cps (c)12/OU=Domain Control Validated - RapidSSL(R)/CN=www.myapp.com
SSL certificate is verified by a root authority.
domain settings
Type NAME TTL Points to
ALIAS myapp.com 3600 xxxxxx.herokussl.com
CNAME www.myapp.com 3600 xxxxxx.herokussl.com
Why do I keep getting the error from Google?
Naked domains are not supported. See the documentation section at Heroku SSL Endpoint.
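To confirm which names the certificate served on the naked domain actually covers (a diagnostic sketch; myapp.com is the question's placeholder), you can inspect it with openssl:

echo | openssl s_client -connect myapp.com:443 -servername myapp.com 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If that still lists *.herokuapp.com, the naked domain is not reaching the SSL endpoint at all.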