How HSTS expiration works - ssl

In an ASP.NET Core 2.2 application we have enabled HSTS using app.UseHsts(), which adds an HSTS header with a max-age of 30 days to the response.
In Fiddler the response header shows:
Strict-Transport-Security: max-age=2592000
Then in Chrome, if I go to chrome://net-internals/#hsts and query our domain name, I get:
Found:
static_sts_domain:
static_upgrade_mode: UNKNOWN
static_sts_include_subdomains:
static_sts_observed:
static_pkp_domain:
static_pkp_include_subdomains:
static_pkp_observed:
static_spki_hashes:
dynamic_sts_domain: subdomain.example.com //our domain name here
dynamic_upgrade_mode: FORCE_HTTPS
dynamic_sts_include_subdomains: false
dynamic_sts_observed: 1572023505.777819
dynamic_sts_expiry: 1574615505.777818
Questions
What is the unit of dynamic_sts_observed and dynamic_sts_expiry? It does not look like seconds. How is the value calculated?
If the user keeps visiting the site every day, does the value keep updating? In other words, is it a sliding expiration?
What happens after expiry?
What happens if a user has already visited the site and their browser has already cached HSTS for 30 days, but a couple of days later we change the value from 30 days to 90 days? When would the user's browser get the updated value: after expiry, or on the next visit?
The URL users browse is already a subdomain, like https://subdomain.example.com. The SSL certificate we use is a wildcard certificate, *.example.com. So in the HSTS configuration, do I need includeSubdomain?
like Strict-Transport-Security: max-age=2592000, includeSubDomain

It's seconds in Unix epoch time (i.e. seconds since 1/1/1970). As Joachim's comment states, you can view and convert the values here: https://www.unixtimestamp.com/index.php
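As a worked example with the values above (dates converted from the raw timestamps):
1572023505 ≈ 25 Oct 2019 17:11:45 UTC (when the header was observed)
1574615505 ≈ 24 Nov 2019 17:11:45 UTC (when it expires)
1574615505 - 1572023505 = 2592000 seconds = 30 days, i.e. exactly the max-age you sent.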
Yes, it is a sliding expiration; the expiry is recalculated on each visit.
It is as if the browser never saw the HSTS header (i.e. HTTPS will not be enforced).
On the next visit, because, as per 2, it is a sliding expiration which is recalculated on each visit.
If the policy is at the top-level domain then you need to use includeSubDomains (note the trailing S, which you missed in your question) for it to affect the subdomains. You can also publish a separate policy (the same one or a different one) on each subdomain. Your top-level policy is only loaded if visitors go to that site (e.g. https://example.com), so if they only go to a subdomain (e.g. https://www.example.com) then the browser will not cache the top-level policy.
Best practice is to use includeSubDomains on all policies and to load an asset (e.g. a single pixel or maybe the company logo) from the top-level domain to force that top-level policy to be picked up as well, so all other subdomains are protected too. This only works if you don't have any HTTP-only sites (e.g. http://intranet.example.com or http://blog.example.com), in which case the best you can do is have a top-level policy without includeSubDomains and then a different policy, with includeSubDomains, on each subdomain that does fully support HTTPS.
The certificate has no bearing on HSTS (other than the fact that you obviously need one for each domain protected!).
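For completeness, since the question is about ASP.NET Core 2.2: a minimal sketch of how the max-age and includeSubDomains could be adjusted in Startup.ConfigureServices (the 90-day value is purely illustrative, and whether you want includeSubDomains depends on the considerations above):
public void ConfigureServices(IServiceCollection services)
{
    services.AddHsts(options =>
    {
        options.MaxAge = TimeSpan.FromDays(90);   // default is 30 days (max-age=2592000)
        options.IncludeSubDomains = true;          // appends "; includeSubDomains" to the header
    });
    // ...other registrations; app.UseHsts() stays in Configure as before
}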

Subdomain preserves cookies from main domain

I've set up a cookie banner for domain.com. It's a plugin for the CMS. It works with user consent, blocking or giving permission to load a GTM script.
And I have an e-shop based on PrestaShop with a cookie banner plugin, which works with the same logic.
The e-shop is placed on subdomain.domain.com.
The problem: once a user grants consent on domain.com, GTM cookies are loaded; the user then clicks a button leading to subdomain.domain.com (the e-shop) and all the previous cookies are loaded there too, which is not good behaviour for GDPR compliance.
So, is there an easy way to clear the previous cookies?
PrestaShop runs on an Apache server.
From MDN:
Domain attribute
The Domain attribute specifies which hosts can receive a cookie. If unspecified, the attribute defaults to the same host that set the cookie, excluding subdomains. If Domain is specified, then subdomains are always included. Therefore, specifying Domain is less restrictive than omitting it. However, it can be helpful when subdomains need to share information about a user.
When setting your cookies, omit the Domain attribute.
This will prevent future cookies from being applied to the subdomains (and the old ones will be overwritten on the next visit that sets them).
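For illustration (the cookie name and value are made up), the difference between the two forms is:
Set-Cookie: consent=analytics-only; Path=/
    (host-only: sent back to domain.com, but not to subdomain.domain.com)
Set-Cookie: consent=analytics-only; Domain=domain.com; Path=/
    (sent to domain.com and to every subdomain, including subdomain.domain.com)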

nginx SSL and hsts questions

I have Nginx with SSL already set up, but without HSTS.
However, in the backend a few services are not HTTPS.
Is there any potential risk in enabling HSTS?
I am worried that once the HSTS header exists it will break the internal routes by forcing a redirect to HTTPS.
For example
current:
public user -> https://www.123.com --- internal service ---> http://internalA.123.com -> http://internalA.123.com
Will it become the following?
public user -> https://www.123.com --- internal service ---> https://internalA.123.com -> https://internalA.123.com
If yes, then the service will definitely break with HSTS.
HSTS is solely about the connection between the client (browser) and the web server (nginx). It does not matter what the server then does with the received data, i.e. whether the data are processed locally or the server is just a reverse proxy to some other server. Similarly, HTTPS between client and server only protects the communication between these two; it neither protects any further connections made by the reverse proxy nor secures local processing of data on the server side.
HSTS is an instruction from the server (nginx) to the client (browser) that says "hey, next time just assume I'm on HTTPS." So in the above scenario it will not be used for the backend connection, as Steffen says.
However there are definitely a few dangers to be aware of:
First of all, if you set this at the top-level domain (e.g. 123.com) and use includeSubDomains, then every domain under 123.com is suddenly potentially given HSTS protection. This will likely still not affect your backend connections. However, if you happen to visit https://123.com in your browser (maybe you just typed that into the URL bar) and pick up that header, and then try to visit http://intranet.123.com or http://dev.123.com or even http://dev.123.com:8080, then all of them will redirect to HTTPS, and if they are not available over HTTPS they will fail and you just can't visit those sites anymore. This can be difficult to track down, as perhaps not many people visit the bare domain, so it "works fine for me".
Of course if you're only using HTTPS on all your sites on that domain (including all your internal sites) then this is not an issue but if not...
As an extra protection you can also submit your site to a preload list, which will then be hardcoded into browsers in their next release; some other clients also use this hardcoded list. This is an extra protection, though it brings extra risks with it, as one of the requirements is that the top-level domain is included with includeSubDomains. I realise you haven't asked about this, but since you're asking about the risks I think it's well worth mentioning here. So, with preloading, HSTS suddenly brings all the above risks into play even without visiting https://123.com to pick up the header. And, as it's months (or even years) between browser releases, this is basically irreversible. You could quite happily be running HSTS on your www domain, think it's all working fine and decide to upgrade to the preload list as well because you've not seen any issues, and suddenly with the next browser release all your HTTP-only internal sites stop working and you need to upgrade them all to HTTPS immediately or they can't be accessed.
In summary, take care with HSTS if you still have some sites on HTTP only at present. Consider only returning it on sites that need it (e.g. https://123.com without includeSubDomains and https://www.123.com with includeSubDomains) and be extra careful with preloading it. Personally I think preload is overkill for most sites, but if you really want to do it, then best practice is to first load a resource from https://123.com on your home page with a small expiry and increase it slowly. That way everyone picks this up before you submit it to the preload list and there are no surprises.
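As a rough nginx sketch of that suggestion (domain names taken from the example above, the max-age value is illustrative):
# server block for 123.com: HSTS without includeSubDomains
add_header Strict-Transport-Security "max-age=31536000" always;
# server block for www.123.com: HSTS with includeSubDomains
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;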
But HSTS is good and should be used on all public-facing websites IMHO, and I don't want this answer to put you off - just understand how it works, and therefore the risks that come with it.

Cloudfront Masks User Agent and Remote Address

We are running an SPA that communicates with an API. Both are exposed to the public via Cloudfront.
We now have the issue that the requests we see in the backend are masked by Cloudfront. Meaning:
The Remote Address we see is the address of the AWS Cloud
The User Agent Header field is set to "Amazon Cloudfront" and not the browser of the user
So Cloudfront somehow intercepts the request in a way we didn't anticipate.
I already went through these steps: https://aws.amazon.com/premiumsupport/knowledge-center/configure-cloudfront-to-forward-headers/ but ended up cutting the connection between the API and the frontend.
We don't care about caching implications (we don't have a lot of traffic), we just need the right fields to show up in the backend.
By default, most request headers are removed, because CloudFront's default behaviors are generally designed around optimal caching. CloudFront's default header handling behavior is documented.
If you need to see specific headers at the origin, whitelist those headers for forwarding in the distribution's cache behavior. The documentation refers to this as “Selecting the Headers on Which You Want CloudFront to Base Caching” -- and that is what it does -- but that description masks what's actually happening. CloudFront removes the rest of the headers because it has no way of knowing for certain whether a specific header with a certain value might change the response that the origin generates. If it didn't remove these headers by default, there would be confusion in the other direction when users saw the "wrong" responses served from the cache.
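For this question specifically, forwarding the User-Agent header (and reading the client address from X-Forwarded-For, which CloudFront populates with the viewer's IP) is usually what's wanted. An illustrative origin request, with made-up path and values, might then look like:
GET /api/orders HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...
X-Forwarded-For: 203.0.113.42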
In your case, you almost certainly don't want to include the Host header in what you are whitelisting for forwarding.
When testing, especially, be sure you also set the Error Caching Minimum TTL to 0, because the default value is 300 seconds... so you can't see whether the problem is fixed for up to 5 minutes after you fix it. This default is also by design, a protective measure to avoid overloading your origin with requests that are likely to continue to fail.
When examining responses from CloudFront, keep an eye on the Age response header, which is present any time the response is served from cache. It tells you how long it's been (in seconds) since CloudFront obtained the response it is currently returning to you.
If you want to disable CloudFront caching, you can set Maximum, Minimum, and Default TTL all to 0 (this only affects 2xx and 3xx HTTP responses -- errors are cached for a different time window, as noted above), or your origin can consistently return Cache-Control: s-maxage=0, which will prevent CloudFront from caching the response.
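If you want to sanity-check what the cache is doing, a quick look at the response headers shows it (the distribution domain below is hypothetical); X-Cache reads "Hit from cloudfront" and an Age header appears when the cached copy was used:
$ curl -sI https://d1234abcdef.cloudfront.net/some/path
HTTP/2 200
cache-control: s-maxage=0
x-cache: Miss from cloudfront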

Prevent http page from redirecting to https page

I have a website (userbob.com) that normally serves all pages as https. However, I am trying to have one subdirectory (userbob.com/tools/) always serve content as http. Currently, it seems like Chrome's HSTS feature (which I don't fully understand) is forcing my site's pages to load over https. I can go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set, and the next query will work as I want without redirecting to an https version. However, if I try to load the page a second time, it ends up redirecting again. The only way I can get it to work is if I go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set after each request. How do I let browsers know that I want all my pages from userbob.com/tools/ to load as http? My site uses an apache/tomcat web server.
(Just FYI, the reason I want the pages in the tools directory to serve pages over http instead of https is because some of them are meant to iframe http pages. If I try to iframe an http page from an https page I end up getting mixed-content errors.)
HTTP Strict Transport Security (or HSTS) is a setting your site can send to browsers which says "I only want to use HTTPS on my site - if someone tries to go to a HTTP link, automatically upgrade them to HTTPS before you send the request". It basically won't allow you to send any HTTP traffic, either accidentally or intentionally.
This is a security feature. HTTP traffic can be intercepted, read, altered and redirected to other domains. HTTPS-only websites should redirect HTTP traffic to HTTPS, but there are various security issues/attacks if any requests are still initially sent over HTTP so HSTS prevents this.
The way HSTS works is that your website sends an HTTP header Strict-Transport-Security with a value of, for example, max-age=31536000; includeSubDomains on your HTTPS responses. The browser caches this and activates HSTS for 31536000 seconds (1 year), in this example. You can see this HTTP header in your browser's web developer tools or by using a site like https://securityheaders.io. By using the chrome://net-internals/#hsts page you are able to clear that cache and allow HTTP traffic again. However, as soon as you visit the site over HTTPS it will send the header again and the browser will revert back to HTTPS-only.
So to permanently remove this setting you need to stop sending that Strict-Transport-Security header. Find where it is set in your Apache/Tomcat server and turn it off. Or, better yet, change it to max-age=0; includeSubDomains for a while first (which tells the browser to clear the cache after 0 seconds and so turns it off without you having to visit chrome://net-internals/#hsts, as long as you visit the site over HTTPS to pick up this header), and then remove the header completely later.
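If it turns out the header is being set by Apache's mod_headers (rather than in Tomcat or application code), the "turn it off for a while" step could look something like this sketch:
Header always set Strict-Transport-Security "max-age=0; includeSubDomains"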
Once you turn off HSTS you can revert back to having some pages on HTTPS and some on HTTP with standard redirects.
However, it would be remiss of me not to warn you against going back to HTTP. HTTPS is the new standard and there is a general push to encourage all sites to move to HTTPS and penalise those that do not. Read this post for more information:
https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/
While you are correct that you cannot frame HTTP content on a HTTPS page, you should consider if there is another way to address this problem. A single HTTP page on your site can cause security problems like leaking cookies (if they are not set up correctly). Plus frames are horrible and shouldn't be used anymore :-)
You can use rewrite rules to redirect HTTPS requests to HTTP inside the subdirectory. Create an .htaccess file inside the tools directory and add the following content:
RewriteEngine On
# If the request arrived over HTTPS, send a 301 redirect back to plain HTTP
RewriteCond %{HTTPS} on
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Make sure that Apache's mod_rewrite module is enabled.
Basically, any HTTP 301 response to an HTTPS request that redirects to an HTTP target should never be honored at all by any browser; servers doing that are clearly violating basic security, or are severely compromised.
However, a 301 reply to an HTTPS request can still redirect to another HTTPS target (including on another domain, provided that other CORS requirements are met).
If you navigate an HTTPS link (or a JavaScript event handler does) and the browser starts loading that HTTPS target, which replies with a 301 redirect to HTTP, the behaviour of the browser should be as if it were a 500 server error, or a connection failure (DNS name not resolved, server not responding, timeout).
Such server-side redirects are clearly invalid, and website admins should never do that! If they want to close a service and inform HTTPS users that the service is hosted elsewhere and no longer secure, they MUST return a valid HTTPS response page with NO redirect at all, and this should really be a 4xx error page (most probably 404 PAGE NOT FOUND); they should not redirect to another HTTPS service (e.g. a third-party hosted search engine or parking page) which does not respect CORS requirements or sends false media types (it is acceptable not to honor the requested language and to display that page in another language).
Browsers that implement HSTS are perfectly correct and going in the right direction. But I really think that the CORS specifications are a mess, tweaked just to still allow advertising networks to host and control themselves the ads they broadcast to other websites.
I strongly think that serious websites that still want to display ads (or any tracker for audience measurement) for valid reasons can host these ads/trackers themselves, on their own domain and over the same protocol: servers can still fetch the ad content they want to broadcast by downloading/refreshing the ads themselves and maintaining their own local cache. They can track their audience themselves by collecting the data they need and want, and filtering it on their own server if they want this data to be analysed by a third party: websites will have to seriously implement the privacy requirements themselves.
I now hate those far too many websites that, when visited, are tracked by dozens of third parties, including very intrusive ones like Facebook and most advertising networks, plus many very weak third-party services that have very poor quality/security and serve very bad content they never control (including fake ads, fake news, promotion of illegal activities and businesses, invalid age ratings...).
Let's return to the origin of the web: one site, one domain, no third party. This does not mean that sites cannot link to other third-party sites, but those links must be made only with an explicit user action (tapping or clicking), and visitors MUST be able to know where this will take them, or which content will be displayed.
This is even possible when embedding videos (e.g. YouTube) in news articles: the news website can itself host a cache of static images for the frame and an icon for the "play" button; when users click that icon, it activates the third-party video, and in that case the third party will interact directly with that user and can collect other data. But unactivated content will be tracked only by the origin website, under its own published policy.
In my local development environment I use an Apache server. What worked for me was:
Open your config file in sites-available/yoursite.conf, then add the following line inside your VirtualHost:
Header always set Strict-Transport-Security "max-age=0"
Restart your server.

removing cookies on another domain using mod-rewrite and apache

I have built a cookie consent module that is used on many sites, all using the same server architecture, on the same cluster. For the visitors of these sites it is possible to administer their cookie settings (e.g. no advertising cookies, but allow analytics cookies) on a central domain that keeps track of the user preferences (and the sites that are visited).
When they change their settings, all sites that the visitor has been to that use my module (kept in a cookie) are contacted by loading them with a parameter in hidden iframes. I tried the same with images.
On these sites a rewrite rule is in place that detects that parameter, retracts the cookie (sets the expiry date in the past) and redirects to a page on the module site (or an image on the module site).
This scheme is working in all browsers except IE, as it needs a P3P policy (the reason it is not working for images is probably similar).
I also tried loading a non-existent image on the source domain (that is, the domain that is using the module) through an image tag, obviously resulting in a 404. This works in all browsers except Safari, which doesn't set cookies on 404s (at least, that is my conclusion).
My question is, how would it be possible to retract the cookie consent cookie on the connected domains, given that all I can change are the rewrite rules?
I hope that I have explained the problem well enough for you guys to give an answer, and that a solution is possible...
I am still not able to resolve this question, but when looking at it the other way around there is a solution. Using JSONP (for an example, see: Basic example of using .ajax() with JSONP?), the client domain can load information from the master server and compare that to the local information.
Based on that, the client site can retract the cookie (or even replace it) and force a reload which will trigger the rewrite rules...
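A minimal jQuery sketch of that idea, assuming the master domain exposes a JSONP endpoint (the URL, response shape and cookie name below are hypothetical):
$.ajax({
  url: 'https://master.example.com/consent-status', // hypothetical endpoint on the central domain
  dataType: 'jsonp',
  success: function (master) {
    // Compare the central consent state with the local consent cookie (name is made up)
    var local = (document.cookie.match(/(?:^|; )consent=([^;]*)/) || [])[1];
    if (local && local !== master.consent) {
      // Retract the local cookie and reload so the rewrite rules fire again
      document.cookie = 'consent=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
      location.reload();
    }
  }
});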
A drawback of this solution is that it will hit the server for every pageview, and in my case, that's a real problem. Only testing that every x minutes or so (by setting a temporary cookie) would provide a solution.
Another, even simpler solution would be to expire all the cookies on the client site every x hours. This will force a revisit of the main domain as well.