I am trying to download files from an https site and keep getting the following error:
OpenSSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
Unable to establish SSL connection.
From reading blogs online, I gather I have to provide the server cert and the client cert. I have found steps on how to download the server cert, but not the client cert. Does anyone have a complete set of steps to use wget with SSL? I also tried the --no-check-certificate option, but that did not work.
wget version: wget-1.13.4
openssl version: OpenSSL 1.0.1f 6 Jan 2014
I am trying to download all lecture resources from a course's webpage on coursera.org, so the URL looks something like this: https://class.coursera.org/matrix-002/lecture
Accessing this webpage in a browser requires form authentication; I am not sure if that is causing the failure.
It works from here with the same OpenSSL version, but a newer version of wget (1.15). Looking at the changelog, there is the following significant change relevant to your problem:
1.14: Add support for TLS Server Name Indication.
Note that this site (class.coursera.org) does not require SNI, but www.coursera.org does.
And if you call wget with -v --debug (as I explicitly recommended in my comment!), you will see:
$ wget https://class.coursera.org
...
HTTP request sent, awaiting response...
HTTP/1.1 302 Found
...
Location: https://www.coursera.org/ [following]
...
Connecting to www.coursera.org (www.coursera.org)|54.230.46.78|:443... connected.
OpenSSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
Unable to establish SSL connection.
So the error actually happens with www.coursera.org and the reason is missing support for SNI. You need to upgrade your version of wget.
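You can see the difference with openssl s_client (a sketch, assuming an OpenSSL like your 1.0.1 that only sends SNI when -servername is given; the first handshake should fail with the same alert, the second should succeed):
openssl s_client -connect www.coursera.org:443 </dev/null
openssl s_client -connect www.coursera.org:443 -servername www.coursera.org </dev/null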
You probably have an old version of wget. I suggest installing wget using Chocolatey, the package manager for Windows. This should give you a more recent version (if not the latest).
Run this command after having installed Chocolatey (as Administrator):
choco install wget
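Afterwards you can check that the installed version is new enough (SNI support arrived in wget 1.14):
wget --version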
I was on SLES 12, and for me it worked after upgrading to wget 1.14, using --secure-protocol=TLSv1_2 and --auth-no-challenge.
wget --no-check-certificate --secure-protocol=TLSv1_2 --user=satul --password=xxx --auth-no-challenge -v --debug https://jenkins-server/artifact/build.x86_64.tgz
One alternative is to replace the "https" with "http" in the URL you're trying to download from, simply circumventing the SSL connection. Not the most secure solution, but it worked in my case.
I was having this problem on Ubuntu 12.04.3 LTS (well beyond EOL, I know...) and got around it with:
sudo apt-get update && sudo apt-get install ca-certificates
Basically, your OpenSSL uses SSLv3 and the site you are accessing does not support that protocol.
Just update your wget:
sudo apt-get install wget
Or, if your wget already supports another secure protocol, just pass it as an argument, e.g.:
wget https://example.com --secure-protocol=TLSv1_2
The command below downloads a file from a website that requires TLSv1.2:
curl -v --tlsv1.2 https://example.com/filename.zip
It worked!
Otherwise it might be simpler to just use curl instead.
There is no particular need to specify any options; it can simply be:
curl https://example.com/filename.zip
With curl there is no need to add the -v option when facing the wget SSL error.
Related
I am trying to use the wget binary for Windows in order to download an entire website onto a USB drive. I tried to run the following wget command, but it failed, and I don't know why; I don't know how to read the error message below.
with http:
E:\gardening> wget --mirror --convert-links --html-extension --no-timestamping --no-clobber -erobots=off --page-requisites --user-agent=Mozilla http://www.eattheweeds.com/
--2022-07-12 17:48:33-- http://www.eattheweeds.com/
Resolving www.eattheweeds.com... 45.60.22.231
Connecting to www.eattheweeds.com|45.60.22.231|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.eattheweeds.com/ [following]
--2022-07-12 17:48:33-- https://www.eattheweeds.com/
Connecting to www.eattheweeds.com|45.60.22.231|:443... connected.
OpenSSL: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
Unable to establish SSL connection.
with https:
E:\gardening> wget --mirror --convert-links --html-extension --no-timestamping --no-clobber -erobots=off --page-requisites --user-agent=Mozilla https://www.eattheweeds.com/
--2022-07-12 17:45:38-- https://www.eattheweeds.com/
Resolving www.eattheweeds.com... 45.60.22.231
Connecting to www.eattheweeds.com|45.60.22.231|:443... connected.
OpenSSL: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
Unable to establish SSL connection.
I got my wget from this website.
https://sourceforge.net/projects/gnuwin32/files/wget/1.11.4-1/wget-1.11.4-1-setup.exe/download?use_mirror=cfhcable
Could my version of wget have something to do with this?
GNU Wget 1.11.4
Copyright (C) 2008 Free Software Foundation, Inc
Edit:
I downloaded wget version GNU Wget 1.21.3 built on mingw32 and it worked!
https://eternallybored.org/misc/wget/
https://builtvisible.com/download-your-website-with-wget/
The problem is that your version of wget does not support newer versions of TLS, and the web site requires TLS v1.1 or 1.2. As you found, you need a newer version of wget.
Since you are on Windows, next time it may be wise to use PowerShell, which has a built-in wget alias (Invoke-WebRequest).
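To confirm the failure is about the protocol version rather than the certificate, you can probe the server from any machine with OpenSSL installed (a sketch; -tls1_2 forces TLS 1.2 and should succeed, -tls1 forces TLS 1.0 and should be rejected by the server):
openssl s_client -connect www.eattheweeds.com:443 -tls1_2 </dev/null
openssl s_client -connect www.eattheweeds.com:443 -tls1 </dev/null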
I set up a secured NiFi cluster with TLS certificates provided by the organisation. On accessing the UI, I am getting the error "javax.net.ssl.SSLPeerUnverifiedException: Hostname abc.com not verified: certificate: sha256/abc/abcabc= DN: CN=abc.com, OU=Abc Operations, O=Abc Corporation Limited, C=SG subjectAltNames: [abc.com]". I have referred to the link https://nifi.apache.org/docs/nifi-docs/html/walkthroughs.html#securing-nifi-with-provided-certificates.
Is there anything I missed to enable peer-to-peer communication while using SSL?
I had the same problem and found a solution in the NiFi TLS-toolkit.
Note: on my cluster, auth worked correctly and the problem was only in Java's SSL verification.
In short: the problem is indeed in --subjectAlternativeNames.
Generating SSL keys with my own root CA did not work for me. A good instruction (but old): https://community.cloudera.com/t5/Community-Articles/How-to-create-user-generated-keys-for-securing-NiFi/ta-p/245551
CentOS Linux 8
NiFi 1.14.0
nifi-toolkit 1.15.2
My way with NiFi TLS-toolkit:
Download nifi-toolkit-*.tar.gz to a Linux machine (say the machine's IP is 0.0.0.1; we need it because this VM will act as the "certificateAuthorityHostname"). Link at this page:
sudo wget https://dlcdn.apache.org/nifi/1.15.2/nifi-toolkit-1.15.2-bin.tar.gz
Unarchive it
sudo tar -xvf nifi-toolkit-1.15.2-bin.tar.gz
Generate all the keys with one long command:
../security_output: this dir (or any other name) needs to be created before running the main command (it's useful to keep all the key files in one place)
sudo ./bin/tls-toolkit.sh standalone -h shows the help, to better understand the args
OU: matches the VM names in my cluster
!!! --subjectAlternativeNames: this is the main reason the error javax.net.ssl.SSLPeerUnverifiedException: Hostname <ip / dns> not verified is raised (you can verify the SANs with the keytool check below)
-O: this arg overwrites the keys already in the folder, be careful
generate command: sudo ./bin/tls-toolkit.sh standalone --hostnames '0.0.0.1,0.0.0.2,0.0.0.3' -c '0.0.0.1' -C 'CN=0.0.0.1,OU=nifi-prod-cluster-01' -C 'CN=0.0.0.2,OU=nifi-prod-cluster-02' -C 'CN=0.0.0.3,OU=nifi-prod-cluster-03' -O -o ../security_output --subjectAlternativeNames '0.0.0.1,0.0.0.2,0.0.0.3,nifi-prod-cluster-01,nifi-prod-cluster-02,nifi-prod-cluster-03'
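To verify the SANs actually ended up in the generated keystore, you can inspect it with keytool (a sketch; the path follows the security_output layout above, and the keystore password is in the generated nifi.properties):
keytool -list -v -keystore ../security_output/0.0.0.1/keystore.jks | grep -A1 SubjectAlternativeName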
After generating the keys, I archive the whole security_output dir:
sudo tar -zcvf security_output.tar.gz security_output
And copy this tar/dir to the other VMs of the cluster: to 0.0.0.2 and 0.0.0.3 in my example.
Then we need to move keystore.jks and truststore.jks to the nifi/conf/ directory, next to nifi.properties.
Edit nifi.properties. The key passwords are in security_output/0.0.0.X/nifi.properties. I replace only these params:
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=34dgsOBKdS+9DGHIm849ALK3JaNBdd738ddsgjfghb4J
nifi.security.keyPasswd=34dgsOBKdS+9DGHIm849ALK3Jaddsgjfghb4J
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=/n1xI9AjcwutNBdd738uOQeQL5O9ALK3i3KwylEYMW5
nifi.security.user.authorizer=single-user-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=single-user-provider
nifi.security.user.jws.key.rotation.period=PT1H
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
Restart nifi:
sudo service nifi restart && tail -f /opt/nifi/logs/nifi-app.log
UPD: maybe you want to set one password for the keys on all machines (it's easier to set up) or set the number of days the keys are valid: https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#standalone
Links:
Useful link for my guide (but old): https://pierrevillard.com/tag/tls-toolkit/
This helped me find the right idea: https://community.cloudera.com/t5/Community-Articles/Using-the-TLS-Toolkit-to-simplify-security/ta-p/247531
I am trying to get Redis 6 (with TLS enabled during compilation; the tests after compilation were successful) to work. I am using a Let's Encrypt certificate and the following configuration:
tls-port 63790
tls-cert-file /etc/letsencrypt/live/myserver.net/cert.pem
tls-key-file /etc/letsencrypt/live/myserver.net/privkey.pem
tls-ca-cert-dir /etc/letsencrypt/live/myserver.net/
tls-auth-clients no
tls-protocols "TLSv1.2 TLSv1.3"
and this client command from localhost
redis-cli --tls --cert /etc/letsencrypt/live/myserver.net/cert.pem --key /etc/letsencrypt/live/myserver.net/privkey.pem --cacert /etc/letsencrypt/live/myserver.net/fullchain.pem -h myserver.net -p 63790 -a password
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at myserver.net:63790: SSL_connect failed: certificate verify failed
This is the output from the Redis log:
Error accepting a client connection: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
When I use the openssl client with the same certificates, I am able to connect and get a ping reply from the Redis server.
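For reference, the openssl test that works looks roughly like this (a sketch; paths match the config above):
openssl s_client -connect myserver.net:63790 -cert /etc/letsencrypt/live/myserver.net/cert.pem -key /etc/letsencrypt/live/myserver.net/privkey.pem -CAfile /etc/letsencrypt/live/myserver.net/fullchain.pem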
It makes no difference whether I change
tls-ca-cert-dir /etc/letsencrypt/live/myserver.net/
to
tls-ca-cert-file
on the server side, or
--cacert /etc/letsencrypt/live/myserver.net/fullchain.pem to chain.pem on the client side.
I tried all values of
tls-protocols ""
and changed
tls-auth-clients no
to
tls-auth-clients optional
but I am still stuck with the same error.
OpenSSL version is 1.1.1
Redis version is 6.0.8
OS: Ubuntu 20.04
Can you help me to find out reason why is TLS not working, please?
Thank you
Wil
Ah, SOLVED!
I was supplying the wrong CA chain. I had to concatenate the root and intermediate certs downloaded from the LE website into a new file. It may come in handy for someone with the same problem.
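A minimal sketch of the fix (file names are hypothetical; use the root and intermediate PEMs you actually downloaded from the Let's Encrypt site), pointing the server's tls-ca-cert-file at the combined file:
cat isrgrootx1.pem lets-encrypt-r3.pem > /etc/letsencrypt/live/myserver.net/ca-chain.pem
and in the Redis config:
tls-ca-cert-file /etc/letsencrypt/live/myserver.net/ca-chain.pem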
I have a script that runs every day on an Ubuntu 14.04 server. The script is a simple wget command that downloads a file from a remote server and saves it to the local file system:
wget https://example.com/resources/scripts/myfile.php -O myfile.php
It has worked fine for months until this morning when suddenly when I run it I get:
--2020-05-30 11:57:16-- https://example.com/resources/scripts/myfile.php
Resolving example.com (example.com)... xx.xx.xx.xx
Connecting to example.com (example.com)|xx.xx.xx.xx|:443... connected.
ERROR: cannot verify example.com's certificate, issued by ‘/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Domain Validation Secure Server CA’:
Issued certificate has expired.
To connect to example.com insecurely, use `--no-check-certificate'.
The SSL certificate for the domain is valid and expires in Jan. 2022. Nothing has changed on that front, and yet somehow wget no longer sees that.
Here is another interesting fact. If I run this same exact command on an Ubuntu 18 box, it works like a charm without any complaints. This tells me something is wrong with my Ubuntu 14.04 machine.
Curl produces the same error:
curl https://example.com
curl: (60) SSL certificate problem: certificate has expired
This post suggests that the certificate bundle is out of date. I have downloaded the suggested PEM file and tried running wget with the --ca-certificate=cacert.pem option, but to no avail.
I have also tried running apt install ca-certificates and update-ca-certificates, but that did not work either.
Again, everything works great on an Ubuntu 18 box, but not Ubuntu 14 or 16. Also, why did it work fine until this morning when I know nobody has touched the box? Clearly something is out of date, but I can't seem to figure out how to fix it.
Does anybody have any suggestions?
I had the same error two days ago with a Comodo certificate and Ubuntu 16.04.
The problem was, as mrmuggles says, this: https://support.sectigo.com/Com_KnowledgeDetailPage?Id=kA03l00000117LT
I fixed it with these steps:
vi /etc/ca-certificates.conf
Remove (or comment out) the line specifying AddTrust_External_Root.crt
apt update && apt install ca-certificates
update-ca-certificates -f -v
https://askubuntu.com/questions/440580/how-does-one-remove-a-certificate-authoritys-certificate-from-a-system
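If you prefer a one-liner to editing the file in vi, something like this should do the same (a sketch; in ca-certificates.conf a leading ! deselects a certificate, and -i.bak keeps a backup):
sudo sed -i.bak 's|^mozilla/AddTrust_External_Root.crt|!mozilla/AddTrust_External_Root.crt|' /etc/ca-certificates.conf
sudo update-ca-certificates -f -v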
Like the original poster, the method of editing ca-certificates.conf did not work for me on Ubuntu 14.04.
What did work:
Run sudo dpkg-reconfigure ca-certificates
Deselect the problem CA: AddTrust_External_Root
Press OK
My understanding is that this removes the expired AddTrust_External_Root CA, so the newer USERTrust_RSA_Certification_Authority CA is used instead.
For wget, add --no-check-certificate.
Example: wget https://example.com/resources/scripts/myfile.php --no-check-certificate -O myfile.php
I'm setting up a domain registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the docker using their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I've started the container and pointed it to the certificates I obtained using Let's Encrypt (https://letsencrypt.org/).
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}' and a green lock (successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry container:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates, and gotten errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems not to support SNI: https://github.com/docker/docker/issues/9969
Update: Docker should now support SNI.
This means that, during the TLS handshake, the Docker client does not specify the domain name, so your server presents its default certificate.
The solution could be to change the default certificate of your server to the one valid for the Docker domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support." then it will work.
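You can also compare the certificate the registry presents with and without SNI using openssl (a sketch; -noservername needs OpenSSL 1.1.1+, on older versions just omit -servername):
openssl s_client -connect docker.mydomain.com:5000 -noservername </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect docker.mydomain.com:5000 -servername docker.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject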