A few months ago I created the following website for my company: www.mydomain.co.uk. I created it using Lightsail / Bitnami, and as part of this process I created an SSL certificate using the 'Let's Encrypt' tool. I have also got Cloudflare set up, which is providing SSL/TLS encryption.
The website was live for a number of months, and yesterday it stopped working - I suspect this is because the Bitnami SSL certificate had expired. So I changed the Cloudflare SSL/TLS setting from 'Full (Strict)' to 'Full' and it works fine now.
So do I need to renew the SSL certificate? Or is it obsolete as I am using Cloudflare?
Thanks :-)
The SSL provided by Cloudflare only protects the connection between the client and Cloudflare, but not between Cloudflare and your server. To protect this connection you need SSL on your own server, i.e. using SSL here is not obsoleted by using Cloudflare.
The chance that some attacker sits between the client and Cloudflare, though, is much higher than an attacker sitting between Cloudflare and your server. In the first case the attacker is usually near the client, i.e. typically in the same network, as with an open WiFi hotspot or a rogue hotspot. Not much effort or expertise is needed for this kind of attack. By contrast, infiltrating connections between Cloudflare and your host is much harder, but governments or other attackers with more resources can do it.
The 'Full' setting does not check the certificate and thus only protects against passive sniffing of the connection. It does not protect against active man-in-the-middle attacks, where the attacker intercepts the traffic, decrypts it so it can be read and modified in the clear, and then re-encrypts it. To protect against such an attack, a properly valid certificate has to be used, and Cloudflare needs to be configured to validate it, i.e. 'Full (Strict)'. While active attacks require more access to the network than passive attacks, they are not out of scope for advanced attackers. So better to use a valid certificate and switch 'Full (Strict)' back on.
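To avoid the origin certificate silently expiring again, you can monitor it directly. Below is a minimal Python sketch (the host name is a placeholder for your origin's direct address, not the Cloudflare-proxied one) that connects to the origin and reports how many days the presented certificate has left:

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(not_after: str) -> int:
    """Days until expiry, given the 'notAfter' string from
    ssl.getpeercert(), e.g. 'Jun  1 12:00:00 2025 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def check_origin(host: str, port: int = 443) -> int:
    # Connect straight to the origin server (not via Cloudflare)
    # and validate the certificate it presents.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return cert_days_remaining(tls.getpeercert()["notAfter"])

# Example (hypothetical origin address):
# print(check_origin("origin.mydomain.co.uk"))
```

Let's Encrypt certificates last 90 days, so running a check like this from cron (alongside whatever automatic renewal tooling your Bitnami image provides) keeps 'Full (Strict)' working.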
Related
I've been investigating the possibility of migrating to using Let's Encrypt to maintain the SSL certificates we have in place for the various resources we use for our operations. We have the following resources using SSL certificates:
Main website (www.example.com / example.com) - Hosted and maintained by a 3rd party who also maintains the SSL certificate
Client portal website (client.example.com) - IIS site hosted and maintained by us on a server located in a remote data center
FTP server (ftp.example.com) - WS_FTP Server hosted and maintained by us on a server located in a remote data center
Hardware firewall (firewall.example.com) - Local security appliance for our internal network
Remote Desktop Gateway (rd.example.com) - RDP server hosted and maintained by us on a server located locally
As indicated above, the SSL certificate for the main website (www) is maintained by the 3rd-party host, so I don't generally mess with that one. However, as you can tell, the DNS records for each of these endpoints point to a variety of different IP addresses. This is where my inexperience with the overall process of issuing and deploying SSL certificates has me a bit confused.
First of all, since I don't manage or maintain the main website, I'm currently manually generating the CSRs for each of the endpoints from the server/service that provides the endpoint - one from the IIS server, a different one from the RDP server, another from the WS_FTP server, and one from the hardware firewall. The manual process, while not excessively time-consuming, still requires me to go through several steps, with different server systems requiring different processes.
I've considered using one of Let's Encrypt's free wildcard SSL certificates to cover all four of these endpoints (*.example.com), but I don't want to "interfere" with what our main website host is doing on that end. I realize the actual certificate itself is presented by the server to which the client is connecting, so it shouldn't matter (right?), but I'd probably still be more comfortable with individual SSL certificates for each of the subdomain endpoints.
So, I've been working on building an application using the Certes ACME client library in an attempt to automatically handle the entire SSL process from CSR to deployment. However, I've run into a few snags:
The firewall is secured against connections on port 80, so I wouldn't be able to serve up the HTTP-01 validation file for that subdomain (firewall.example.com) on the device itself. The same is true for the FTP server's subdomain (ftp.example.com).
My DNS is hosted with a provider that does not currently offer an API (they say they're working on one), so I can't automate the process of the DNS-01 validation by writing the TXT record to the zone file.
I found the TLS-ALPN-01 validation method, but I'm not sure whether or not this is appropriate for the use case I'm trying to implement. According to the description of this method from Let's Encrypt (emphasis mine):
This challenge is not suitable for most people. It is best suited to authors of TLS-terminating reverse proxies that want to perform host-based validation like HTTP-01, but want to do it entirely at the TLS layer in order to separate concerns. Right now that mainly means large hosting providers, but mainstream web servers like Apache and Nginx could someday implement this (and Caddy already does).
Pros:
It works if port 80 is unavailable to you.
It can be performed purely at the TLS layer.
Cons:
It’s not supported by Apache, Nginx, or Certbot, and probably won’t be soon.
Like HTTP-01, if you have multiple servers they need to all answer with the same content.
This method cannot be used to validate wildcard domains.
So, based on my research so far and my environment, my three biggest questions are these:
Would the TLS-ALPN-01 validation method be an effective - or even available - option for generating the individual SSL certificates for each subdomain? Since the firewall and FTP server cannot currently serve up the appropriate files on port 80, I don't see any way to use the HTTP-01 validation for these subdomains. Not being able to use an API to automate a DNS-01 validation would make that method generally more trouble than it's worth. While I could probably do the HTTP-01 validation for the client portal - and maybe the RDP server (I haven't gotten that far in my research yet) - I'd still be left with handling the other two subdomains manually.
Would I be better off trying to do a wildcard certificate for the subdomains? Other than "simplifying" the process by reducing the number of SSL certificates that need to be issued, is there any inherent benefit to going this route versus using individual certificates for each subdomain? Since the main site is hosted/managed by a 3rd-party and (again) I can't currently use an API to automate a DNS-01 validation, I suppose I would need to use an HTTP-01 validation. Based on my understanding, that means that I would need to get access/permission to create the response file, along with the appropriate directories on that server.
Just to be certain, is there any chance of causing some sort of "conflict" if I were to generate/deploy a wildcard certificate to the subdomains while the main website still used its own SSL certificate for the www? Again, I wouldn't think that to be the case, but I want to do my best to avoid introducing more complexity and/or problems into the situation.
I've responded to your related question on https://community.certifytheweb.com/t/tls-alpn-01-validation/1444/2, but the answer is to use DNS validation, and my suggestion is to use Certify DNS (https://docs.certifytheweb.com/docs/dns/providers/certifydns), which is a managed cloud implementation of acme-dns (CNAME delegation of DNS challenge responses).
Certify DNS is compatible with most existing acme-dns clients, so it can be used with those as well as with Certify The Web (https://certifytheweb.com).
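For reference, acme-dns-style CNAME delegation works by pointing each challenge name at a zone you *can* update via an API, so your own provider's lack of an API stops mattering. The records below are purely illustrative (the UUID and the delegated zone name are assumptions):

```
; In your real zone (created once, by hand, at your DNS provider):
_acme-challenge.client.example.com.  IN CNAME  d420c923-bbd7-4056-ab64-c3ca54c9b3cf.auth.acme-dns.example.

; The ACME client then writes each TXT challenge response into the
; delegated zone via the acme-dns / Certify DNS API, and Let's Encrypt
; follows the CNAME to find it.
```

Because DNS-01 is the only challenge type that can validate wildcards, this also keeps the `*.example.com` option open.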
We have a website with SSL configured. Two days ago the SSL certificate expired, so I purchased a new one instead of renewing. I have configured the new one. Now some users are still getting an 'SSL certificate expired' error even though the new one is configured.
I want to force the browser to recheck the new SSL certificate using some server-side configuration, since we cannot go and update each user's browser manually. It has to be done with server-side configuration. We are using Nginx.
This is really critical to us.
Please help in this regard.
Thanks!
The certificate is validated by the client only when the server sends one. The server sends one with each full TLS handshake. The browser does not somehow cache an old certificate and ignore the one sent by the server when validating.
It is more likely that you've not fully rolled out the new certificate on the server side. For example if you have multiple servers make sure that all have the new certificate. If your server provides access for IPv4 and IPv6 make sure that in both cases the proper certificate is served. If you provide service on multiple ports make sure that they all use the new certificate.
It's also possible your affected users are behind a proxy that caches certificates. For example if they're behind a Smoothwall proxy that generates its own certificates after inspecting HTTPS traffic and caches them.
Either way, if you've updated the certificates on your server and restarted the necessary services, it's probably nothing you have control over and will most likely resolve itself in time.
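One quick way to verify the rollout is to fetch the certificate that each address actually serves and compare fingerprints. A small Python sketch (the host names and IP addresses are placeholders):

```python
import hashlib
import socket
import ssl

def fingerprint(der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der).hexdigest()

def served_cert_fingerprint(ip: str, hostname: str, port: int = 443) -> str:
    # Connect to one specific address while still sending the real
    # hostname via SNI, so each server/IP/port can be checked in turn.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want the raw cert bytes
    with socket.create_connection((ip, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return fingerprint(tls.getpeercert(binary_form=True))

# Compare e.g. the IPv4 and IPv6 answers for the same name:
# print(served_cert_fingerprint("203.0.113.10", "www.example.com"))
# print(served_cert_fingerprint("2001:db8::10", "www.example.com"))
```

If any address or port returns a different fingerprint than the others, that is the endpoint still serving the old certificate.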
I have a small REST server running on a local network, with a bunch of client applications connected (via IP address).
I want to secure this traffic so that access tokens cannot be sniffed.
According to this answer I can create a self-signed certificate using an IP (although not common). If I go down this route (assuming no physical access to the server box itself), is this secure?
Yes. It is as secure as a CA-signed certificate, as long as users always install the correct certificate. (Make sure the certificate is distributed securely.)
If distributing it securely seems too much trouble, then you may see answers here and consider using certificate issued by letsencrypt.org if you can.
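If you do go with a pinned self-signed certificate, the client side can be as simple as trusting exactly that one certificate instead of the system CA store. A Python sketch (the file name, IP, and port are assumptions for illustration):

```python
import socket
import ssl

def make_pinned_context(ca_path: str) -> ssl.SSLContext:
    # Trust only our own self-signed certificate; anything else,
    # including certificates signed by public CAs, is rejected.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=ca_path)
    return ctx

# Usage (hypothetical: cert exported as server.pem, server at 192.168.1.10):
# ctx = make_pinned_context("server.pem")
# with socket.create_connection(("192.168.1.10", 8443)) as s:
#     with ctx.wrap_socket(s, server_hostname="192.168.1.10") as tls:
#         tls.sendall(b"GET /token HTTP/1.1\r\nHost: 192.168.1.10\r\n\r\n")
```

Note that for verification against an IP address to succeed, the certificate must carry that IP in its subjectAlternativeName; Python's `ssl` module matches IP-address SANs automatically.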
I have a question that I've wondered about for years, and now I've decided to try to understand it. I know that when a user hits a website with SSL, all headers are encrypted, even the Host header.
So, in order to enable SSL on a server, you need a dedicated IP for every certificate you have, because Apache, for example, won't know which VHOST should handle the request if the Host header is encrypted.
My question is: how does Cloudflare know which domain the user is accessing through its CDN if it cannot see the Host header before decryption happens?
Server Name Indication (SNI) allows TLS clients to specify the host they are attempting to connect to, giving the server a chance to serve the right certificate. It is supported in most browsers.
CloudFlare's page on their free SSL offering indicates they use SNI.
Now, CloudFlare has multiple offerings. Their paid plans don't actually rely on SNI (that's why they support all browsers). Only the free plans do.
For the paid plans, CloudFlare presumably uses dedicated IPs, though even in that case they can still pool multiple domains under a single certificate (using Subject Alternative Names).
We host many sites with domains on a single IP, e.g. www.domain1.com, www.domain2.com. We want to secure /admin using SSL. Historically each SSL cert needs a unique IP address. These are small sites, and acquiring/assigning an individual IP to each site is unrealistic both in terms of maintainability and cost. Because we are not using subdomains, the wildcard SSL cert approach won't work.
Googling around I found that Apache can do this by using TLS, see answer here:
https://serverfault.com/questions/109766/ssl-site-not-using-the-correct-ip-in-apache-and-ubuntu
My question is whether this is possible with IIS 7.5 too? If so, does anyone know how to set this up?
Thanks in advance
Dave
SSL and TLS basically are the same. TLS is the successor to SSL where TLS 1.0 is basically the same as SSL 3.1.
What makes the difference, though, is the support for SNI. This allows the browser to tell the server which hostname the request is for, without the server needing to decrypt the request first.
Normally a webserver looks at the Host header to decide which virtual site the request is for. But when SSL/TLS is used, the entire request (including all headers) is encrypted. In order to read the headers, the server would have to decrypt the request, but it can't do that without the proper certificate. To know which certificate to use, it would need to know which site the request is for, but it can't know that because that information is inside the encrypted request. A classic chicken-and-egg problem. This is where SNI steps in.
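You can see SNI at work from the client side in Python, where `server_hostname` is exactly the value sent (in plain text) in the TLS ClientHello. The sketch below (host names are placeholders) retrieves the common name of whichever certificate the server picks for a given SNI value:

```python
import socket
import ssl

def common_name(peercert: dict) -> str:
    """Extract the subject CN from the dict returned by ssl.getpeercert()."""
    for rdn in peercert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""

def cert_cn_for(hostname: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        # server_hostname is sent as the SNI extension in the ClientHello,
        # letting the server pick the right certificate before any decryption.
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return common_name(tls.getpeercert())

# Two names sharing one IP can thus receive different certificates:
# print(cert_cn_for("www.domain1.com"))
# print(cert_cn_for("www.domain2.com"))
```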
SNI requires a reasonably new OS / browser / server and is not yet supported by IIS. It will be supported in Windows Server 2012 and IIS 8.0 (due for release this year).