Nginx SSL and HSTS questions

I have Nginx set up with SSL, but without HSTS.
However, a few of the backend services are not HTTPS.
Is there any potential risk in enabling HSTS?
I am worried that once the HSTS header exists it will break the internal routes by forcing a redirect to HTTPS.
For example
current:
public user -> https://www.123.com --- internal service ---> http://internalA.123.com -> http://internalA.123.com
Will it become below?
public user -> https://www.123.com --- internal service ---> https://internalA.123.com -> https://internalA.123.com
If so, then the service will definitely break with HSTS.

HSTS is solely about the connection between the client (browser) and the web server (nginx). It does not matter what the server then does with the received data, i.e. whether the data are processed locally or whether the server is just a reverse proxy to some other server. Similarly, HTTPS between client and server only protects the communication between these two; it neither protects any further connections made by the reverse proxy nor secures local processing of data on the server side.

HSTS is an instruction from the server (nginx) to the client (browser) that says "hey, next time just assume I'm on HTTPS." So in the above scenario it will not be used for the backend connection, as Steffen says.
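To make this concrete, the header is just a response header set on the public HTTPS server block; a minimal nginx sketch, using the question's example hostnames:

```nginx
server {
    listen 443 ssl;
    server_name www.123.com;

    # HSTS applies only to the browser <-> nginx leg. Sent only over HTTPS;
    # browsers ignore HSTS received over plain HTTP. No includeSubDomains
    # here, so internal HTTP-only hosts are unaffected.
    add_header Strict-Transport-Security "max-age=31536000" always;

    location / {
        # The backend connection is untouched by HSTS: it can stay plain HTTP.
        proxy_pass http://internalA.123.com;
    }
}
```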
However there are definitely a few dangers to be aware of:
First of all, if you set this at the top-level domain (e.g. 123.com) and use includeSubDomains, then every host under 123.com is suddenly potentially given HSTS protection. This will likely still not affect your backend connections. However, if you happen to visit https://123.com in your browser (maybe you just typed that into the URL bar) and pick up that header, and then try to visit http://intranet.123.com, http://dev.123.com or even http://dev.123.com:8080, then all of them will redirect to HTTPS; if they are not available over HTTPS, they will fail and you simply can't visit those sites anymore. This can be difficult to track down, as perhaps not many people visit the bare domain, so it "works fine for me".
Of course, if you're only using HTTPS on all your sites on that domain (including all your internal sites) then this is not an issue, but if not...
As extra protection you can also submit your site to the preload list, which will then be hardcoded into browsers in their next release; some other clients also use this hardcoded list. This is extra protection, but it brings extra risks, as one of the requirements is that the top-level domain is included with includeSubDomains. I realise you haven't asked about this, but since you're asking about the risks I think it's well worth mentioning here. With preloading, HSTS suddenly brings all the above risks into play even without visiting https://123.com to pick up the header. And, as it's months (or even years) between browser releases, this is basically irreversible. You could quite happily be running HSTS on your www domain, think it's all working fine, and decide to upgrade to the preload list as well because you've not seen any issues; then suddenly, with the next browser release, all your HTTP-only internal sites stop working and you need to upgrade them all to HTTPS immediately or they can't be accessed.
In summary, take care with HSTS if you still have some sites on HTTP-only at present. Consider only returning it on the sites that need it (e.g. https://123.com without includeSubDomains and https://www.123.com with includeSubDomains) and be extra careful with preloading. Personally I think preload is overkill for most sites, but if you really want to do it, then best practice is to first load a resource from https://123.com on your home page with a small expiry and increase it slowly. That way everyone picks this up before you submit it to the preload list and there are no surprises.
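The gradual rollout described above can be sketched as a sequence of header values deployed over time (the concrete max-age numbers are illustrative, not a recommendation):

```nginx
# Stage 1: short expiry, easy to back out if something breaks
add_header Strict-Transport-Security "max-age=300" always;

# Stage 2: after a quiet period, raise the expiry and, if every
# subdomain is already HTTPS, add includeSubDomains
# add_header Strict-Transport-Security "max-age=86400; includeSubDomains" always;

# Stage 3: only once you are sure, commit to a year; the preload
# token is required before submitting to the preload list
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```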
But HSTS is good and should be used on all public-facing websites IMHO, and I don't want this answer to put you off - just understand how it works, and therefore the risks that come with it.

Related

How can I allow http and https versions of my site, but make it default to https?

I just added https support to a site I own. Normally I would add a couple of lines to my .htaccess file to redirect http to https, but in this case I want to leave both accessible. (Why? It has user data stored in IndexedDB, and the http/https versions of the same site cannot access the same database, so I want to let users keep using the old http version with their old data in it.)
That works fine, except one thing. When a user types "example.com" into his browser, it takes him to "http://example.com". To get the https version, you need to explicitly type "https://example.com". Is there any way to swap that behavior, so https is the default unless you explicitly ask for http?
The default protocol is a choice of the browser, not your server. Just wait a bit, one of the next versions of Chrome / Chromium will change the default and I expect others to follow suit.
(that makes this question "not programming related" though)

HSTS: Using includeSubdomains when I have no subdomains

The question is almost in the title itself. If I had an app and use includeSubdomains for the HSTS header but have no subdomains at all, is this considered good or bad?
It is good.
If you plan on submitting your site to Google's HSTS Preload list, you will need to have the includeSubdomains directive even if you have no subdomains.
If you ever plan on having a subdomain, it'll mean that you will need to set it up to support HTTPS from Day 1. I'm considering this as a good thing as it is a plus for security.
It's good.
Let's say you have https://example.com and that's all you use. HSTS ensures you can only use HTTPS on this domain. This prevents downgrade attacks.
Without includeSubDomains, an attacker could set up a fake subdomain like http://www.example.com, http://secure.example.com or http://any-other-legitimate-sounding-subdomain.example.com, serve it over HTTP, and somehow get people to go there instead of https://example.com. Of course this requires the ability to manipulate the victim's DNS, but that's possible through certain techniques.
As it's a subdomain of your main domain it will look legitimate (though it won't have HTTPS) and can also potentially leak or override cookies for the main domain.
Just because YOU don't use a subdomain doesn't mean your users know that.
For an app this is perhaps less critical, as the URL will be set in the app and is more difficult to change, and apps typically don't use cookies, but it's still considered best practice to use includeSubDomains.

Prevent http page from redirecting to https page

I have a website (userbob.com) that normally serves all pages as https. However, I am trying to have one subdirectory (userbob.com/tools/) always serve content as http. Currently, it seems like Chrome's HSTS feature (which I don't fully understand) is forcing my site's pages to load over https. I can go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set, and the next query will work as I want without redirecting to an https version. However, if I try to load the page a second time, it ends up redirecting again. The only way I can get it to work is if I go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set after each request. How do I let browsers know that I want all my pages from userbob.com/tools/ to load as http? My site uses an Apache/Tomcat web server.
(Just FYI, the reason I want the pages in the tools directory to serve pages over http instead of https is because some of them are meant to iframe http pages. If I try to iframe an http page from an https page I end up getting mixed-content errors.)
HTTP Strict Transport Security (or HSTS) is a setting your site can send to browsers which says "I only want to use HTTPS on my site - if someone tries to go to an HTTP link, automatically upgrade them to HTTPS before you send the request". It basically won't allow you to send any HTTP traffic, either accidentally or intentionally.
This is a security feature. HTTP traffic can be intercepted, read, altered and redirected to other domains. HTTPS-only websites should redirect HTTP traffic to HTTPS, but there are various security issues/attacks if any requests are still initially sent over HTTP so HSTS prevents this.
The way HSTS works is that your website sends an HTTP header, Strict-Transport-Security, with a value of, for example, max-age=31536000; includeSubDomains on your HTTPS responses. The browser caches this and activates HSTS for 31536000 seconds (1 year), in this example. You can see this HTTP header in your browser's web developer tools or by using a site like https://securityheaders.io . By using the chrome://net-internals/#hsts page you are able to clear that cache and allow HTTP traffic again. However, as soon as you visit the site over HTTPS it will send the header again and the browser will revert back to HTTPS-only.
So to permanently remove this setting you need to stop sending that Strict-Transport-Security header. Find it in your Apache/Tomcat config and turn it off. Or, better yet, change it to max-age=0; includeSubDomains for a while first: this tells the browser to expire the cached entry after 0 seconds, and so turns HSTS off without anyone having to visit chrome://net-internals/#hsts (as long as they visit the site over HTTPS to pick up this header). Then remove the header completely later.
Once you turn off HSTS you can revert back to having some pages on HTTPS and some on HTTP with standard redirects.
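In Apache (mod_headers), the two-step wind-down described above can be sketched like this; the directive is standard, but treat the staging as a sketch (for Tomcat, apply the equivalent wherever the header is currently configured):

```apache
# Step 1: for a transition period, tell returning browsers to forget
# their cached HSTS entry. This must be served over HTTPS so that
# clients that already have the entry actually see it.
Header always set Strict-Transport-Security "max-age=0; includeSubDomains"

# Step 2 (later): delete the directive above entirely, then reintroduce
# plain-HTTP pages with ordinary redirects where needed.
```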
However, it would be remiss of me not to warn you against going back to HTTP. HTTPS is the new standard, and there is a general push to encourage all sites to move to HTTPS and penalise those that do not. Read this post for more information:
https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/
While you are correct that you cannot frame HTTP content on a HTTPS page, you should consider if there is another way to address this problem. A single HTTP page on your site can cause security problems like leaking cookies (if they are not set up correctly). Plus frames are horrible and shouldn't be used anymore :-)
You can use rewrite rules to redirect HTTPS requests to HTTP inside the subdirectory. Create an .htaccess file inside the tools directory and add the following content:
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Make sure that Apache's mod_rewrite is enabled. Note that as long as a browser still has HSTS cached for the domain, it will upgrade the request back to HTTPS before sending it, so this redirect will loop until the HSTS entry expires or is cleared.
Basically, any 301 response to an HTTPS request that redirects to an HTTP target should never be honoured at all by any browser; servers doing that are clearly violating basic security, or are severely compromised.
However a 301 reply to an HTTPS request can still redirect to another HTTPS target (including on another domain, provided that other CORS requirements are met).
If you navigate an HTTPS link (or a JavaScript event handler does), and the browser starts loading that HTTPS target and it replies with a 301 redirect to HTTP, the browser's behaviour should be as if it were a 500 server error or a connection failure (DNS name not resolved, server not responding, timeout).
Such server-side redirects are clearly invalid, and website admins should never do that! If they want to close a service and inform HTTPS users that the service is hosted elsewhere and no longer secure, they MUST return a valid HTTPS response page with NO redirect at all, and this should really be a 4xx error page (most probably 404 PAGE NOT FOUND). They should not redirect to another HTTPS service (e.g. a third-party hosted search engine or parking page) which does not respect CORS requirements, or which sends false media types (it is acceptable not to honour the requested language and display that page in another language).
Browsers that implement HSTS are perfectly correct and going in the right direction. But I really think that the CORS specifications are a mess, tweaked just to still allow advertising networks to host and control themselves the ads they broadcast to other websites.
I strongly think that serious websites that still want to display ads (or any tracker for audience measurement) for valid reasons can host these ads/trackers themselves, on their own domain and over the same protocol: servers can still fetch the ad content they want to broadcast by downloading/refreshing the ads themselves and maintaining their own local cache. They can track their audience themselves by collecting the data they need and filtering it on their own server if they want this data to be analysed by a third party: websites will have to seriously implement the privacy requirements themselves.
I now hate the all-too-many websites that, when visited, have you tracked by dozens of third parties, including very intrusive ones like Facebook and most advertising networks, plus many very weak third-party services with very poor quality/security that send very bad content they never control (including fake ads, fake news, promotion of illegal activities and businesses, invalid age ratings...).
Let's return to the origin of the web: one site, one domain, one third party. This does not mean that sites cannot link to other third-party sites, but this must be done only with an explicit user action (tapping or clicking), and visitors MUST be able to know where this will go, or which content will be displayed.
This is even possible for embedded videos (e.g. YouTube) in news articles: the news website can itself host a cache of static images for the frame and an icon for the "play" button; when users click that icon, it activates the third-party video, and in that case the third party will interact directly with that user and can collect other data. But the unactivated content will be tracked only by the origin website, under its own published policy.
In my local development environment I use an Apache server. What worked for me was:
Open your config file in sites-available/yoursite.conf, then add the following line inside your VirtualHost:
Header always set Strict-Transport-Security "max-age=0"
Restart your server.

Temporary redirect during server maintenance (https to http)

I'm coming to you because I'm stuck on the following problem:
I have a website, hosted on a server on which I will be doing messy maintenance stuff (understand I'm not sure what I'm doing so I might crash everything).
I'd like to temporarily redirect all the traffic to a simple page stating the website is under maintenance and will be back soon.
So this page must be hosted on another server, since mine will be down.
To make matters more complicated, I have an SSL certificate on my whole website, so most of my users have the HTTPS address memorized in their browser (and that's also what Google has indexed).
I've tried hosting the simple page on free hosting, and also on Microsoft Azure (because I already have an account for another web project). However, I've encountered the same problem in both cases: users coming to the website see big red flags from their browsers, saying that the connection isn't private (ERR_CERT_COMMON_NAME_INVALID).
What would be the proper way to proceed and redirect my users in a smooth way?
Thanks in advance!
Rouli

removing cookies on another domain using mod-rewrite and apache

I have built a cookie consent module that is used on many sites, all using the same server architecture, on the same cluster. Visitors of these sites can administer their cookie settings (e.g. no advertising cookies, but allow analytics cookies) on a central domain that keeps track of the user preferences (and the sites that are visited).
When they change their settings, all sites the visitor has been to that use my module (kept in a cookie) are contacted by loading them with a parameter in hidden iframes. I tried the same with images.
On these sites a rewrite rule is in place that detects that parameter, retracts the cookie (sets its date in the past) and redirects to a page on the module site (or an image on the module site).
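A sketch of what such a rule can look like in Apache, using mod_rewrite plus mod_headers (the cookie name consent and the parameter retract=1 are made up for illustration; the question doesn't name them):

```apache
RewriteEngine On

# Flag requests that carry the retraction parameter in the query string
RewriteCond %{QUERY_STRING} (^|&)retract=1(&|$)
RewriteRule ^ - [E=RETRACT:1]

# Retract the cookie by setting its expiry date in the past; the
# redirect back to the module site can then follow as described
Header always set Set-Cookie "consent=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/" env=RETRACT
```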
This scheme works in all browsers except IE, as it needs a P3P header (the reason it is not working for images is probably similar).
I also tried loading a non-existent image on the source domain (that is, the domain that is using the module) through an image tag, obviously resulting in a 404. This works on all browsers, except Safari, which doesn't set cookies on 404's (at least, that is my conclusion).
My question is, how would it be possible to retract the cookie consent cookie on the connected domains, given that all I can change are the rewrite rules?
I hope that I have explained the problem well enough for you guys to give an answer, and that a solution is possible...
I am still not able to resolve this question, but when looked at the other way around there is a solution. Using JSONP (for an example, see: Basic example of using .ajax() with JSONP?), the client domain can load information from the master server and compare that to local information.
Based on that, the client site can retract the cookie (or even replace it) and force a reload which will trigger the rewrite rules...
A drawback of this solution is that it will hit the server on every pageview, and in my case that's a real problem. Checking only every x minutes or so (by setting a temporary cookie) would provide a solution.
Another, even simpler solution would be to expire all the cookies on the client site every x hours. This will force a revisit of the main domain as well.