How Cloudflare SSL works

I have a question that I've had for years and now I've decided to try to understand it. I know that when a user hits a website over SSL, all headers are encrypted, even the Host header.
So, in order to enable SSL on a server, you traditionally need a dedicated IP for every certificate you have, because Apache, for example, won't know which VHOST it should direct the user to if the Host header is encrypted.
My question is: how does Cloudflare know which domain the user is requesting from its CDN if it cannot read the Host header before decryption happens?

Server Name Indication (SNI) allows TLS clients to specify the host they are attempting to connect to, giving the server a chance to serve the right certificate. It is supported in most browsers.
CloudFlare's page on their free SSL offering indicates they use SNI.
Now, CloudFlare has multiple offerings. Their paid plans don't actually rely on SNI (that's why they support all browsers). Only the free plans do.
For the paid plans, CloudFlare presumably uses dedicated IPs, though even in that case they can still pool multiple domains under a single certificate (using Subject Alternative Names).
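To illustrate the mechanism (this is not CloudFlare's actual code), a server can read the SNI name from the unencrypted ClientHello and switch to the matching certificate before any HTTP data is exchanged. A minimal sketch using Python's standard ssl module; the domain names and certificate file names are made-up examples:

    import socket
    import ssl

    # Made-up example sites and their certificate/key files.
    SITES = {
        "www.domain1.com": ("domain1.pem", "domain1.key"),
        "www.domain2.com": ("domain2.pem", "domain2.key"),
    }

    # One SSLContext per site, each loaded with that site's certificate.
    site_contexts = {}
    for name, (cert, key) in SITES.items():
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(cert, key)
        site_contexts[name] = ctx

    def choose_certificate(conn, server_name, default_context):
        # server_name is the SNI value from the ClientHello - it is visible
        # before anything is decrypted, which is what makes the choice possible.
        if server_name in site_contexts:
            conn.context = site_contexts[server_name]  # swap in the right certificate
        # Returning None lets the handshake continue.

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default_ctx.load_cert_chain("default.pem", "default.key")
    default_ctx.sni_callback = choose_certificate

    with socket.create_server(("", 443)) as listener:
        conn, _ = listener.accept()
        tls = default_ctx.wrap_socket(conn, server_side=True)  # SNI decides the cert here

Clients that don't send SNI simply get the default certificate, which is why very old browsers are the problem case that SNI-based hosting has to work around.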

Related

Is there a way to encrypt the Host header with HTTPS?

The ISP in my country blocks my website for political reasons.
At first they blocked my website's IP address, so every time they blocked the IP I changed it to a new one to avoid the block.
But now they block my website by hostname, so all IPs are blocked.
Is there a way to encrypt the Host header with HTTPS?
Or does anyone have an idea how to avoid the block?
The HTTP Host header is encrypted in HTTPS, but the hostname sent in the TLS handshake (Server Name Indication) isn't, so you have to bypass SNI-based HTTPS filtering.
For the details, see the document Efficiently Bypassing SNI-based HTTPS Filtering.
Is there a way to encrypt the Host header with HTTPS?
The Host header is encrypted when HTTPS is used, which means they cannot block based on it. They might use DNS-based blocking, or SNI-based blocking using deep packet inspection.
Neither can usually be bypassed simply by making changes to your website (apart from changing the domain name); client-side measures such as using a different DNS server, a VPN or similar would be required to bypass the block. While ESNI (encrypted SNI) might be an option in theory, some states such as China simply block traffic they cannot analyze. There is excellent research on bypassing censorship in various countries, and there are a few tricks that can be employed server side, but the details depend very much on how exactly the blocking works in your specific case, which is unknown.
https://blog.cloudflare.com/encrypted-sni/
Perhaps TLS 1.3 with encrypted SNI could help.
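To make the distinction concrete: the Host header travels inside the encrypted channel, but the name a client sends for SNI is part of the plaintext ClientHello, which is exactly what SNI-based filtering reads. A small sketch with Python's ssl module (the blocked hostname is made up):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    raw = socket.create_connection(("blocked.example", 443))

    # server_hostname ends up in the ClientHello's SNI extension, i.e. it crosses
    # the network unencrypted and a filtering middlebox can read and match it.
    tls = ctx.wrap_socket(raw, server_hostname="blocked.example")

    # The Host header below, by contrast, is only sent after the handshake,
    # inside the encrypted TLS channel.
    tls.sendall(b"GET / HTTP/1.1\r\nHost: blocked.example\r\nConnection: close\r\n\r\n")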

SSL connection using a hostname that is not in the SAN list of the host's certificate

I am quite new to SSL, but I am afraid I can already guess the final answer to the following problem/question:
We are building hardware (let's call them servers) whose IP addresses WILL change over their lifetime. Each server must be reachable in a secure manner. We are planning to use a TLS 1.3 secured connection to perform some actions on the servers (update firmware, change configuration and so on). As a consequence, we need to provide each server with a certificate so that it can state its identity. PKI issues are out of the scope of this question (we suppose), and we can take for granted that the clients and the servers will share a common trusted CA so that the SSL handshake succeeds. The servers will serve HTTP connections on their configured (changeable) IP addresses only. There is no DNS involved in the loop.
We are wondering how to set up the servers' certificates appropriately.
As the IP will change, it cannot be used as the common name in the server's certificate.
Therefore, we are considering using something more persistent, such as a serial number or a MAC address.
The problem is that, as there is no DNS in the loop, the client cannot issue an HTTP request to www.serialNumberOfServer.com and must connect to http://x.y.z.t, which will change frequently - at least frequently enough that we don't want to issue a new server certificate each time.
If we understand it correctly, the SSL handshake requires that the hostname in the URL we are connecting to matches either the commonName of the server's certificate or one of its Subject Alternative Names (SAN). Right? Here, that would be x.y.z.t.
So we think we are stuck in a situation where the server cannot use its IP to prove its identity, while the client wants to use that IP exclusively to connect to the server.
Is there any workaround?
Are we missing something?
Any help would be very (VERY) appreciated. Do not hesitate to ask in case you need a more detailed explanation!
For what it's worth, the development environment will be Qt, using the QNetworkAccessManager/QSsl* classes.
If the client isn't using DNS at all, then you do have a problem. The right solution is to use DNS or static hostname lists (/etc/hosts on Unix, or the hosts file on Windows). That will let you set names appropriately.
If you can only use IP addresses, another option is to put all of the IP addresses the server might use into its certificate. This is only doable if there is a reasonably small number of addresses it might get assigned.
Or you could keep a cache of certificates on the server, one per address, and have the web server select the right certificate as part of its startup process. This requires a slightly more complex startup.
Edit: Finally, some SSL stacks (e.g. OpenSSL) let you decide, for each particular verification error, whether it should be treated as an error or ignored. This would let you override the errors on the client side. However, this is hard to implement properly and very prone to security issues: if you don't pin the remote certificate properly, you are subjecting yourself to man-in-the-middle and other attacks by blindly accepting any old certificate. I don't remember whether Qt's SSL library gives you this level of flexibility (I don't believe so, but I didn't pull up the documentation).
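As an illustration of that last point, here is a minimal sketch (in Python rather than Qt) of overriding verification safely by pinning the server certificate's fingerprint instead of blindly accepting any certificate; the IP address and fingerprint are placeholders:

    import hashlib
    import socket
    import ssl

    # Placeholder: the SHA-256 fingerprint of the server's DER-encoded certificate,
    # recorded when the device was provisioned.
    PINNED_SHA256 = "0123456789abcdef..."

    def connect_pinned(ip, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False       # we connect by IP, so there is no name to match
        ctx.verify_mode = ssl.CERT_NONE  # trust comes from the pin check below instead
        tls = ctx.wrap_socket(socket.create_connection((ip, port)))
        der = tls.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            tls.close()
            raise ssl.SSLError("certificate fingerprint does not match the pinned value")
        return tls

The whole point of the pin is that it replaces the checks being switched off; without it, this code would indeed accept any certificate.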
Came back to the subject 9 months later!
It turns out there is an easy solution (at least with the Qt framework).
Qt's QNetworkRequest::setPeerVerifyName does the job for us. It allows connecting to a host by its IP while verifying a given CN during the SSL handshake.
See Qt's documentation extract below:
void QNetworkRequest::setPeerVerifyName(const QString &peerName)
Sets peerName as host name for the certificate validation, instead of the one used for the TCP connection.
This function was introduced in Qt 5.13.
See also peerVerifyName.
Just tested it successfully right now.
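Outside Qt, the same idea can be sketched with Python's standard ssl module: connect to the (changing) IP address, but verify the certificate against the persistent name. The CA file, IP address and serial-number name here are assumptions for the example, and the device certificate is assumed to carry that name as a SAN:

    import socket
    import ssl

    # Assumption: device-ca.pem is the CA that signed the device certificates.
    ctx = ssl.create_default_context(cafile="device-ca.pem")

    raw = socket.create_connection(("192.0.2.10", 443))     # connect by current IP
    tls = ctx.wrap_socket(raw, server_hostname="SN-0042")   # but verify the fixed name
    print(tls.getpeercert()["subject"])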

Is certificate authorization via HTTPS possible?

I am using the Let's Encrypt IIS client from https://github.com/Lone-Coder/letsencrypt-win-simple to generate a certificate for a server. Since the certificate is only valid for three months, I want it to auto-renew.
But the server for which I need that auto-renewing certificate is only bound to https://mysubdomain.mydomain.com:443 and smtp://mysubdomain.mydomain.com:25.
Both http://mysubdomain.mydomain.com:80 and ftp://mysubdomain.mydomain.com:21 point to a different server.
As you may have guessed, the error that is now thrown during the process is "The ACME server was probably unable to reach http://mysubdomain.mydomain.com:80/.well-known/acme-challenge/abcdefgh...xyz".
It is completely clear to me why, but I can't fix it, because http://mysubdomain.mydomain.com has to point to the other server. If the ACME server tried https://mysubdomain.mydomain.com:443/.well-known/acme-challenge/abcdefgh...xyz, but ignored any certificate issue, it would successfully find the challenge.
Is there anything I can do, any feature I have overlooked, that would help me to get automated renewal working?
There are multiple options:
http-01
Redirect http://example.com/.well-known/acme-challenge/* to https://example.com/.well-known/acme-challenge/*; Boulder will happily follow such a redirect and ignore the provided certificate. That's the simplest way if you have access to the other server and can configure that redirect. It's a permanent redirect that you don't have to adjust; it'll be just fine every three months.
The option to use HTTPS directly was removed due to security issues with some popular server software that serves the first virtual host defined when another virtual host doesn't define an HTTP host, which could lead to wrongful issuance in multi-user environments, a.k.a. shared hosting.
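In practice the redirect is one rewrite rule in the existing web server's configuration, but to illustrate the idea, here is a minimal sketch of such a redirect using only Python's standard library; the target hostname is the one from the question:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    TARGET = "https://mysubdomain.mydomain.com"  # host from the question

    class AcmeRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/.well-known/acme-challenge/"):
                self.send_response(301)  # permanent redirect, fine to leave in place
                self.send_header("Location", TARGET + self.path)
                self.end_headers()
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("", 80), AcmeRedirect).serve_forever()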
tls-sni-01
If you want to use just port 443, you can use another challenge type called tls-sni-01. But I think there's no client for Windows available yet that supports that challenge type.
dns-01
If you have control over the DNS via a simple API, you could also use the DNS challenge; it's completely independent of which ports you can use.
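For reference, what a dns-01 client ultimately has to publish in the _acme-challenge TXT record is derived from the challenge token and the ACME account key thumbprint (RFC 8555); a minimal sketch of that computation:

    import base64
    import hashlib

    def dns01_txt_value(token, account_key_thumbprint):
        # Value to publish at _acme-challenge.<domain> for a dns-01 challenge:
        # base64url( SHA-256( token + "." + account key thumbprint ) ), unpadded.
        key_authorization = f"{token}.{account_key_thumbprint}"
        digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
        return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")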

Single IP, Multiple SSL Certs, NOT using wildcard, TLS on IIS 7 possible?

We host many sites with their own domains on a single IP, e.g. www.domain1.com, www.domain2.com. We want to secure /admin using SSL. Historically, each SSL cert has needed a unique IP address. These are small sites, and acquiring/assigning an individual IP for each site is unrealistic both in terms of maintainability and cost. Because we are not using subdomains, the wildcard SSL cert approach won't work.
Googling around I found that Apache can do this by using TLS, see answer here:
https://serverfault.com/questions/109766/ssl-site-not-using-the-correct-ip-in-apache-and-ubuntu
My question is whether this is possible with IIS 7.5 too? If so, does anyone know how to set this up?
Thanks in advance
Dave
SSL and TLS are basically the same thing: TLS is the successor to SSL, and TLS 1.0 is essentially the same as SSL 3.1.
What makes the difference, though, is support for SNI. This allows the browser to tell the server which hostname the request is for without the server needing to decrypt the request.
Normally a web server looks at the hostname header to decide which virtual site the request is for. But when SSL/TLS is used, the entire request (including all headers) is encrypted. In order to read the headers, the server would have to decrypt the request, but it can't do that without the proper certificate. To know which certificate to use, it would need to know which site the request is for, but it can't know that because that information is in the encrypted request. A classic chicken-and-egg problem. This is where SNI steps in.
SNI requires a reasonably new OS / browser / server and is not yet supported by IIS. It will be supported in Windows Server 2012 and IIS 8.0 (due for release this year).

HTTPS Certificate for internal use

I'm setting up a web server for a system that needs to be used only through HTTPS, on an internal network (no access from the outside world).
Right now I have it set up with a self-signed certificate, and it works fine, except for a nasty warning that all browsers show, as the CA used to sign it is naturally not trusted.
Access is provided by a local DNS domain name resolved on local DNS server (example: https://myapp.local/), that maps that address to 192.168.x.y
Is there some provider that can issue me a proper certificate for use on an internal domain name (myapp.local)? Or is my only option to use a FQDN on a real domain, and later map it to a local IP address?
Note: I would like an option where it's not necessary to mark the server's public key as trusted in each browser, as I have no control over the workstations.
You have two practical options:
Stand up your own CA. You can do it with OpenSSL, and there's a lot of information out there on Google (a minimal sketch follows after this answer).
Keep using your self-signed cert, but add its public key to the trusted certificates in the browsers. If you're in an Active Directory domain, this can be done automatically with Group Policy.
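As an illustration of the first option, here is a minimal sketch of creating such an internal CA with the Python cryptography library instead of the openssl CLI (the file names and CA name are made up); you would then issue your server certificates from this CA and distribute internal-ca.pem to the workstations, e.g. via Group Policy:

    # pip install cryptography
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal CA")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    with open("internal-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("internal-ca.key", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))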
I did the following, which worked nicely for me:
I got a wildcard SSL cert for *.mydomain.com (Namecheap, for example, provides this cheaply).
I created a CNAME DNS record pointing "mybox.mydomain.com" at "mybox.local".
I hope that helps - unfortunately you'll have the expense of a wildcard cert for your domain name, but you may already have that.
You'd have to ask the typical certificate vendors for that. For ease of use I'd go with the FQDN approach, though; you might use a subdomain of the one you've already registered: https://mybox.example.com
You might also want to look at wildcard certificates, which provide a blanket cert for (e.g.) https://*.example.com/ - even usable for virtual hosting, should you need more than just this one cert.
Certifying sub- or sub-sub-domains of a FQDN should be standard business - maybe not for the point-and-click big players that pride themselves on providing certificates in just 2 minutes.
In short: To make the cert trusted by a workstation you'd have to either
change settings on the workstations (which you don't want) or
use an already trusted party to sign your key (which you're looking for a way around).
That's all your choices. Choose your poison.
I would have added this as a comment, but it was a bit long.
This is not really an answer to your question, but in practice I've found that using a .local domain is not recommended - even if it's in your "local" testing environment with your own DNS server.
I know that Active Directory uses the .local name by default when you install DNS, but even people at Microsoft say to avoid it.
If you have control over the DNS Server you can use a .com, .net, or .org domain - even if it's internal and private only. This way, you could actually buy the domain name that you are using internally and then buy a certificate for that domain name and apply it to your local domain.
I had a similar requirement: getting our company's browsers to trust our internal websites.
I didn't want our public DNS to publish records for our internal sites, so the only way I found to make this work was to use an internal CA.
Here's the write-up for this:
https://medium.com/@mike.reider/getting-firefox-chrome-to-trust-your-internal-websites-internal-certificate-authority-a53ba2d4c2af
I think the answer is no.
Out of the box, browsers won't trust a certificate unless it has ultimately been verified by someone pre-programmed into the browser, e.g. Verisign or register.com.
You can only get a verified certificate for a globally unique domain.
So I'd suggest that instead of myapp.local you use myapp.local.yourcompany.com, for which you should be able to get a certificate, provided you own yourcompany.com. It'll cost you, though - several hundred per year.
Also be warned that wildcard certificates might only go down one level - so you could use one for a.yourcompany.com and local.yourcompany.com, but maybe not for b.a.yourcompany.com or myapp.local.yourcompany.com, unless you pay more.
(Does anyone know - does it depend on the type of wildcard certificate? Are sub-sub-domains trusted by the major browsers?)
Development purposes only
This docker image solves the problem (thanks to local-ip.co): https://github.com/medic/nginx-local-ip.
It launches a reverse proxy on port 443 with a public cert that works for any *.my.local-ip.co domain. E.g. if your local IP is 192.168.10.10, then 192-168-10-10.my.local-ip.co already points to it (it's a public domain)! Assuming the app is running on your computer on port 8080, you only need to execute the following to proxy your app and expose it at the URL https://192-168-10-10.my.local-ip.co:
$ APP_URL=http://192.168.10.10:8080 docker-compose up
The domain is resolved by whatever public DNS you have configured on the devices from which you want to access the app, but the traffic stays local between your app and the client (through the proxy), so you can even use it to connect from devices on the same LAN without any of the traffic going out to the internet.
The reason this is mostly useful for development is that anybody can launch an application with this same certificate, so it is not really secure, but it is helpful when you need to expose your app over HTTPS while developing or testing (e.g. HTML5 apps on Android loaded in a WebView).