I recently upgraded my web server to Ubuntu 16.04, and since the upgrade, browsers refuse to connect when the URL doesn't include https://
I checked ufw to verify that 'Apache Full' was allowed, and it was. I'm not sure what to check from here. Any help is greatly appreciated! :)
Unless someone else online here has solved the same problem with the same version of Ubuntu, you will probably have to debug this. I cannot debug it for you because I am not at your keyboard. However, I can get you started.
From a machine other than the web server, try the command
openssl s_client -connect HOSTNAME:80
Replace HOSTNAME with the web server's hostname. If it complains "Connection refused," then your new web server is no longer serving HTTP. On the other hand, if OpenSSL connects, then your new web server is at least trying to serve HTTP. (Note that OpenSSL, called as above, won't do anything useful when it connects. It should just drop the connection after a few seconds, but the point is that it connects.)
If, for purpose of comparison, you wish to see what a good HTTP connection looks like, then try
openssl s_client -connect stackoverflow.com:80
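If that first test ends in "Connection refused", two quick checks on the server itself can narrow things down. A sketch, assuming Ubuntu's stock apache2 packaging:

sudo apache2ctl -S            # list the virtual hosts Apache actually configured
sudo ss -tlnp | grep apache   # confirm apache2 is listening on port 80 (and 443)

A missing Listen 80 directive or a plain-HTTP virtual host that was disabled during the upgrade would produce exactly the symptom you describe.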
I created a self-signed certificate on a local PC, and now I can't access localhost; both Chrome and Firefox show the following error, with no option to accept the risk and continue.
I tried removing the generated cert and key files, but the issue is still there.
Is there a way to roll back that change, or some other way to get past the error?
OS: openSUSE Tumbleweed
HSTS is blocking you, so clear the HSTS configuration for that domain (localhost) in the browser you're using. Here is a blog post describing how to do that: https://www.thesslstore.com/blog/clear-hsts-settings-chrome-firefox/
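If you want to confirm that the local server is (still) sending the HSTS header this answer assumes, you can check from the command line; a sketch, where -k skips verification of the self-signed certificate:

curl -skI https://localhost | grep -i strict-transport-security

In Chrome you can also delete the cached policy directly: open chrome://net-internals/#hsts and delete the domain security policies for localhost.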
I'm a little confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. The machine runs an Apache2 server, but for now I don't intend to use it as a web server to serve pages to clients; I want to use it as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt; indeed, the welcome page https://datavm.bo.cnr.it works properly over an encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step, though), edited influxdb.conf with https-enabled = true, and set the paths for https-certificate and https-private-key (fullchain.pem for both, is that right?). Then systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is very appreciated! Thank you
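For reference, the influxdb.conf settings described above would look roughly like this; a sketch assuming the Let's Encrypt file names, and note that pointing https-private-key at privkey.pem (rather than using fullchain.pem for both) is the usual arrangement:

[http]
  https-enabled = true
  https-certificate = "/etc/ssl/fullchain.pem"
  # assumption: the private key lives in its own file, as Let's Encrypt issues it
  https-private-key = "/etc/ssl/privkey.pem"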
I figured out at least part of the problem. It was a problem related to permissions on the *.pem files. It looks weird, though: if I type the following, as the documentation says, it does not connect.
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second command with 644, everything works perfectly. But this way I'm giving anyone permission to read the private key! I can't figure out this point.
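A common way out of this dilemma is to keep the key unreadable to the world but readable to the service's group. A sketch, assuming the daemon runs under the influxdb user and group, which is the default for packaged installs:

sudo chgrp influxdb /etc/ssl/<private-key-file>   # assumption: the service group is "influxdb"
sudo chmod 640 /etc/ssl/<private-key-file>        # owner read/write, group read, others nothing

That way influxd can still read the key at startup without the file being world-readable.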
UPDATE
If I put symlinks inside /etc/ssl/ that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. The SSL connection only works if I put a copy of the files there.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
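One alternative to symlinks that still keeps renewals automatic is to copy the files from a certbot deploy hook, which certbot runs after every successful renewal. A sketch only: the hook directory is certbot's standard one, but the script name, the hostname path, and the service name are assumptions taken from the question.

#!/bin/sh
# saved as /etc/letsencrypt/renewal-hooks/deploy/influxdb-certs.sh (hypothetical name)
# copy the freshly renewed files to where influxdb.conf expects them
cp /etc/letsencrypt/live/hostname/fullchain.pem /etc/ssl/fullchain.pem
cp /etc/letsencrypt/live/hostname/privkey.pem /etc/ssl/privkey.pem
chgrp influxdb /etc/ssl/privkey.pem
chmod 640 /etc/ssl/privkey.pem
systemctl restart influxdb

Make the script executable with chmod +x; because /etc/ssl/ then always holds real files rather than links, the connection-refused behaviour above should not occur.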
Background:
I have an app running on port 8080 on a remote server, and an HTTPS ingress proxy on port 443 on the same server, which forwards everything to the app on 8080 after handling the SSL.
What I want to do:
I want to communicate with the app over SSL remotely, while not having direct access to this domain (it is on a local network; I can access the server remotely via a different domain).
What I did:
I tunneled port 443 from my remote server: ssh -L 3001:0.0.0.0:443 user@example.com. I then added 127.0.0.1 example.com to my /etc/hosts to make sure the domain resolves properly on my system.
Now I can enter https://example.com:3001/some/thing/ in Firefox and get a proper response from the server, with everything running over SSL without any problems. I am also able to use curl without checking the certificate: curl --insecure https://example.com:3001/some/thing works fine.
At the same time, a secure curl call fails: curl https://example.com:3001/some/thing, with the error:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Just to make sure both are using the same certificates, I used this tool: https://curl.haxx.se/docs/mk-ca-bundle.html to create a ca-bundle.crt from the most recent Firefox certificates and passed it to curl with --cacert ca-bundle.crt. No luck - the same error. (I also tried following another curl tutorial on using the local Firefox installation's certs; also no luck.)
Question
What is going on? Why does curl's output differ from Firefox's even though I seem to be using the same certificates? How can I debug this?
Side note
The real reason I am concerned is that with normal (local) access to the server I observed the same behaviour: I could connect to the server over HTTPS through Chrome, but my React Native app could not. I suspect the app uses libcurl under the hood or something similar, and I believe debugging this problem could help me understand what's wrong with the app.
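One debugging step that can explain exactly this split between browsers and curl: check whether the proxy sends the full certificate chain. Browsers often tolerate a missing intermediate certificate (they cache intermediates or fetch them on the fly), while curl only verifies against what the server actually sends, so "unable to get local issuer certificate" frequently means the leaf certificate arrives without its chain. A sketch, assuming the tunnel from the question is still up:

openssl s_client -connect example.com:3001 -servername example.com -showcerts </dev/null

If only one certificate is printed, the ingress proxy is serving the leaf without its intermediates; fixing the proxy's certificate bundle would then explain both the curl failure and the React Native one.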
I have a site which is served over HTTPS, but which iTunes can't find. My suspicion is that it's related to the iTunes backend server being Java 6, and Java 6 not supporting SNI. SSL Labs seems to hint that my site does require SNI (see this report, and search for SNI), but I can't think why. Have I misunderstood multi-domain certificates? I've got multiple sites running on the same server, but my understanding was that as long as all the URLs were listed as Subject Alternative Names on the certificate, that all would be well.
Does anyone know a good way to check if a URL requires SNI support on the client to access it? I don't have a Windows XP/Java 6 install around to play with sadly.
The reports from SSL Labs regarding SNI are usually correct. Your understanding that SNI is not needed if your certificate contains all possible hosts is correct too. But "not needed in theory" does not mean that your particular server setup does not require SNI anyway.
I don't have a Windows XP/Java 6 install around to play with sadly.
Given that you only specify what you don't have, I will assume that you have everything else that might be useful. A simple way to check is openssl:
# without SNI
$ openssl s_client -connect host:port
# use SNI
$ openssl s_client -connect host:port -servername host
Compare the output of both openssl s_client calls. If they differ in the certificate served, or if the call without SNI fails to establish an SSL connection, then you need SNI to get the correct certificate or to establish an SSL connection at all.
An easy way to check if a site relies on SNI is this:
openssl s_client -servername alice.sni.velox.ch -tlsextdebug -msg \
-connect alice.sni.velox.ch:443 2>/dev/null | grep "server name"
And if in that output you see the following, it means the site is using SNI.
TLS server extension "server name" (id=0), len=0
The above is a summary of an answer at Server Fault.
Nginx in general, and your site in particular, accepts but doesn't require SNI. To test this you cannot easily use Oracle Java out of the box, because its cacerts does not include DST Root CA X3, the root cert used (initially) by Let's Encrypt, who issued your site's cert; this is true for all versions of Oracle Java up to the current one (8u74). Windows (hence IE and Chrome on Windows) and Firefox do have this root cert; I can't say for other OSes or browsers.
To fix this so you can easily test, either:
use Oracle Java 6 but modify JRE/lib/security/cacerts to add the DSTX3 cert, obtained either from your OS or browser, or by following the link at https://letsencrypt.org/certificates/ to https://www.identrust.com/certificates/trustid/root-download-x3.html. Note that that page nonstandardly gives you only the base64 body of the cert, so you must manually add the PEM header and trailer lines before Java keytool will import it (see the keytool sketch after this list).
use Oracle Java 6 as-is but configure your application (with system properties) to use a custom truststore which you create containing the DSTX3 cert as above.
use a version of Java 6 that does include this root cert in cacerts. In particular, I use CentOS 6, and its OpenJDK packages (for 6, 7, and 8) use a systemwide CA 'bundle' that includes DSTX3, which is what made it easy for me to do this test. I expect, but can't confirm, that other Red Hat variants do the same. For other distros and platforms I can't say; if not, see above.
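For the first two options, a sketch of the keytool side; the alias, the dstrootx3.pem file name, and YourApp are hypothetical, and changeit is the stock cacerts password:

# option 1: import the root cert into the JRE's own cacerts
# (path assumes a JDK layout; a standalone JRE keeps it under lib/security/cacerts)
keytool -importcert -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
    -storepass changeit -alias dstrootx3 -file dstrootx3.pem

# option 2: build a private truststore and point the JVM at it via system properties
keytool -importcert -keystore mytruststore.jks -storepass changeit \
    -alias dstrootx3 -file dstrootx3.pem
java -Djavax.net.ssl.trustStore=mytruststore.jks YourApp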
Monitor the connection attempt with Wireshark or similar to see that the ClientHello does not contain SNI, but the connection succeeds and is successfully used for an HTTP request.
If you actually want to communicate with the server instead of testing it for SNI, simply omit the final 'monitor' step.
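If you'd rather watch from the command line than in the Wireshark GUI, a tshark sketch of that monitoring step; the ssl.* filter names match Wireshark releases of this era (newer ones renamed them tls.*), so treat the exact fields as an assumption:

# print the SNI host name from each ClientHello; empty output means no SNI was sent
tshark -i any -f "tcp port 443" -Y "ssl.handshake.type == 1" \
    -T fields -e ssl.handshake.extensions_server_name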
I just followed the steps for using SSL on localhost: https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-apache-for-ubuntu-14-04
But when I access https://localhost, I get this message:
Web page inaccessible
ERR_CONNECTION_REFUSED
I'm using Apache2 with Ubuntu Trusty on Vagrant.
Let me know if you need more information.
Thanks!
There are three possible causes for this message:
Your self-signed certificate is invalid for some reason.
Please see your Apache error log.
Your Apache SSL/TLS protocols do not match those of your browser.
Try something like the following from a command prompt: openssl s_client -connect localhost:443 to test the SSL/TLS connection. Please update your question with the output.
Maybe there is a firewall between your browser and your Apache server?
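Since ERR_CONNECTION_REFUSED usually means nothing is listening where the browser connects, it is also worth confirming inside the VM that Apache is actually bound to port 443. A sketch, assuming Ubuntu's apache2 and the default-ssl site name used by that tutorial:

sudo ss -tlnp | grep ':443'    # is anything listening on 443 inside the VM?
sudo a2enmod ssl               # enable the SSL module if it is not already
sudo a2ensite default-ssl      # enable the HTTPS virtual host
sudo service apache2 restart

Also remember that with Vagrant the guest's port 443 must be forwarded or otherwise reachable from the host; if it is not, the browser on the host will see exactly this error.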
The problem could be due to Apache performing a reverse DNS lookup on the URL - the result of which does not match any of your PC's aliases.
Try https://127.0.0.1 or https://<hostname>.