The question is almost in the title itself. If I have an app and use includeSubDomains for the HSTS header, but have no subdomains at all, is this considered good or bad?
It is good.
If you plan on submitting your site to Google's HSTS preload list, you will need to have the includeSubDomains directive even if you have no subdomains.
If you ever add a subdomain, it will need to support HTTPS from day one. I consider this a good thing, as it is a plus for security.
It's good.
Let's say you have https://example.com and that's all you use. HSTS ensures you can only use HTTPS on this domain. This prevents downgrade attacks.
Without includeSubDomains, an attacker could set up a fake subdomain like http://www.example.com, http://secure.example.com or http://any-other-legitimate-sounding-subdomain.example.com, serve it over HTTP, and somehow get people to go there instead of https://example.com. Of course this requires access to manipulate the victim's DNS, but that's possible through certain techniques.
As it's a subdomain of your main domain, it will look legitimate (though it won't have HTTPS), and it can also potentially leak or override cookies for the main domain.
Just because YOU don't use a subdomain doesn't mean your users know that.
For an app this is perhaps less critical, as the URL is set in the app and more difficult to change, and apps typically don't use cookies, but it's still considered best practice to use includeSubDomains.
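For reference, here is what that header would look like in Apache (a minimal sketch, assuming mod_headers is enabled; the one-year max-age is just a common choice):

# Send HSTS for this host and any future subdomains on every response
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"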
Related
I have an Nginx set up with SSL already, but without HSTS.
But in the backend, a few services are not HTTPS.
Is there any potential risk in enabling HSTS?
I am worried that when the HSTS header exists it will break internal routing by forcing a redirect to HTTPS.
For example
current:
public user -> https://www.123.com --- internal service ---> http://internalA.123.com -> http://internalA.123.com
Will it become the below?
public user -> https://www.123.com --- internal service ---> https://internalA.123.com -> https://internalA.123.com
If yes, then the service will definitely break with HSTS.
HSTS is solely about the connection between client (browser) and web server (nginx). It does not matter what the server then does with the received data, i.e. whether the data are processed locally or the server is just a reverse proxy to some other server. Similarly, HTTPS between client and server only protects the communication between these two; it neither protects any further connections from the reverse proxy nor secures local processing of data on the server side.
HSTS is an instruction from the server (nginx) to the client (browser) that says "hey, next time just assume I'm on HTTPS." So in the above scenario it will not be used for the backend connection, as Steffen says.
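For reference, this is roughly how that instruction is emitted from nginx (a minimal sketch; the always parameter makes nginx attach the header to error responses too):

# Send the HSTS header on every HTTPS response
add_header Strict-Transport-Security "max-age=31536000" always;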
However there are definitely a few dangers to be aware of:
First of all, if you set this at the top-level domain (e.g. 123.com) and use includeSubDomains, then every domain under 123.com is suddenly potentially given HSTS protection. This will likely still not affect your backend connections. However, if you happen to visit https://123.com in your browser (maybe you just typed that in the URL bar) and pick up that header, and then try to visit http://intranet.123.com or http://dev.123.com or even http://dev.123.com:8080, then all of them will redirect to HTTPS, and if they are not available over HTTPS they will fail and you just can't visit those sites anymore. This can be difficult to track down, as perhaps not many people visit the bare domain, so it "works fine for me".
Of course if you're only using HTTPS on all your sites on that domain (including all your internal sites) then this is not an issue but if not...
As an extra protection you can also submit your site to a preload list, which will then be hardcoded into browsers in their next release; some other clients also use this hardcoded list. This is an extra protection, though it brings extra risks with it, as one of the requirements is that the top-level domain is included with includeSubDomains. I realise you haven't asked about this, but since you're asking about the risks I think it's well worth mentioning here. With preloading, HSTS suddenly brings all the above risks into play even without visiting https://123.com to pick up the header. And, as it's months (or even years) between browser releases, this is basically irreversible. You could quite happily be running HSTS on your www domain, think it's all working fine, and decide to sign up to the preload list as well because you've not seen any issues; then suddenly, with the next browser release, all your HTTP-only internal sites stop working and you need to upgrade them all to HTTPS immediately or they can't be accessed.
In summary, take care with HSTS if you still have some sites on HTTP only at present. Consider only returning it on the sites that need it (e.g. https://123.com without includeSubDomains and https://www.123.com with includeSubDomains) and be extra careful with preloading. Personally I think preload is overkill for most sites, but if you really want to do it, then best practice is to first load a resource from https://123.com on your home page with a small expiry and increase it slowly. That way everyone picks this up before you submit it to the preload list and there are no surprises.
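As a sketch of that "only where needed" approach in nginx (hypothetical server names; ssl_certificate directives omitted for brevity):

server {
    listen 443 ssl;
    server_name 123.com;
    # Bare domain: HSTS for this host only, so HTTP-only internal subdomains stay reachable
    add_header Strict-Transport-Security "max-age=31536000" always;
}

server {
    listen 443 ssl;
    server_name www.123.com;
    # www host: covering its subdomains too is safe since none of them are served over HTTP
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}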
But HSTS is good and should be used on all public-facing websites IMHO, and I don't want this answer putting you off; just understand how it works, and therefore the risks that come with it.
I have a website (userbob.com) that normally serves all pages as HTTPS. However, I am trying to have one subdirectory (userbob.com/tools/) always serve content as HTTP. Currently, it seems like Chrome's HSTS feature (which I don't fully understand) is forcing my site's pages to load over HTTPS. I can go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set, and the next query will work as I want, without redirecting to an HTTPS version. However, if I try to load the page a second time, it ends up redirecting again. The only way I can get it to work is if I go to chrome://net-internals/#hsts and delete my domain from Chrome's HSTS set after each request. How do I let browsers know that I want all my pages from userbob.com/tools/ to load as HTTP? My site uses an Apache/Tomcat web server.
(Just FYI, the reason I want the pages in the tools directory to serve pages over http instead of https is because some of them are meant to iframe http pages. If I try to iframe an http page from an https page I end up getting mixed-content errors.)
HTTP Strict Transport Security (or HSTS) is a setting your site can send to browsers which says "I only want to use HTTPS on my site - if someone tries to go to a HTTP link, automatically upgrade them to HTTPS before you send the request". It basically won't allow you to send any HTTP traffic, either accidentally or intentionally.
This is a security feature. HTTP traffic can be intercepted, read, altered and redirected to other domains. HTTPS-only websites should redirect HTTP traffic to HTTPS, but there are various security issues/attacks if any requests are still initially sent over HTTP so HSTS prevents this.
The way HSTS works is that your website sends an HTTP header Strict-Transport-Security with a value of, for example, max-age=31536000; includeSubDomains on your HTTPS responses. The browser caches this and activates HSTS for 31536000 seconds (1 year), in this example. You can see this HTTP header in your browser's web developer tools or by using a site like https://securityheaders.io. By using the chrome://net-internals/#hsts page you are able to clear that cache and allow HTTP traffic again. However, as soon as you visit the site over HTTPS it will send the header again and the browser will revert back to HTTPS-only.
So to permanently remove this setting you need to stop sending that Strict-Transport-Security header. Find this in your Apache/Tomcat server and turn it off. Or, better yet, change it to max-age=0; includeSubDomains for a while first (which tells the browser to clear the cache after 0 seconds and so turns it off without having to visit chrome://net-internals/#hsts, as long as you visit the site over HTTPS to pick up this header), and then remove the header completely later.
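In Apache, for example, that temporary "switch-off" step might look like this (a sketch assuming mod_headers; delete the line entirely once visiting browsers have picked it up):

# Tell browsers to expire any cached HSTS policy immediately
Header always set Strict-Transport-Security "max-age=0; includeSubDomains"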
Once you turn off HSTS you can revert back to having some pages on HTTPS and some on HTTP with standard redirects.
However, it would be remiss of me not to warn you against going back to HTTP. HTTPS is the new standard and there is a general push to encourage all sites to move to HTTPS and penalise those that do not. Read this post from Troy Hunt for more information:
https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/
While you are correct that you cannot frame HTTP content on a HTTPS page, you should consider if there is another way to address this problem. A single HTTP page on your site can cause security problems like leaking cookies (if they are not set up correctly). Plus frames are horrible and shouldn't be used anymore :-)
You can use rewrite rules to redirect HTTPS requests to HTTP inside a subdirectory. Create an .htaccess file inside the tools directory and add the following content:
RewriteEngine On
# Only rewrite requests that arrived over HTTPS
RewriteCond %{HTTPS} on
# 301 to the same host and path on plain HTTP
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Make sure that Apache's mod_rewrite module is enabled. Note that this alone won't help while a browser still has your domain's HSTS policy cached: the browser will upgrade the request straight back to HTTPS. You need to stop sending (or expire) the Strict-Transport-Security header first, as described above.
Basically, any 301 response to an HTTPS request that redirects to an HTTP target should never be honored at all by any browser; servers doing that are clearly violating basic security, or are severely compromised.
However a 301 reply to an HTTPS request can still redirect to another HTTPS target (including on another domain, provided that other CORS requirements are met).
If you navigate an HTTPS link (or a JavaScript event handler) and the browser starts loading that HTTPS target, which replies with a 301 redirect to HTTP, the behavior of the browser should be as if it were a 500 server error or a connection failure (DNS name not resolved, server not responding, timeout).
Such server-side redirects are clearly invalid, and website admins should never do that! If they want to close a service and inform HTTPS users that the service is hosted elsewhere and is no longer secure, they MUST return a valid HTTPS response page with NO redirect at all, and this should really be a 4xx error page (most probably 404 PAGE NOT FOUND). They should not redirect to another HTTPS service (e.g. a third-party hosted search engine or parking page) which does not respect CORS requirements, or one that sends false media types (it is acceptable, however, to not honor the requested language and display the page in another language).
Browsers that implement HSTS are perfectly correct and going in the right direction. But I really think that the CORS specifications are a mess, tweaked just to still allow advertising networks to host and control themselves the ads they broadcast to other websites.
I strongly think that serious websites that still want to display ads (or any tracker for audience measurement) for valid reasons can host these ads/trackers themselves, on their own domain and over the same protocol: servers can still fetch the ad content they want to broadcast by downloading/refreshing it themselves and maintaining their own local cache. They can track their audience themselves by collecting the data they need and filtering it on their own server if they want it analysed by a third party: websites will have to seriously implement the privacy requirements themselves.
I now hate those too many websites that, when visited, are tracked by dozens of third parties, including very intrusive ones like Facebook and most advertising networks, plus many weak third-party services that have very poor quality/security and send very bad content they never control (including fake ads, fake news, promotion of illegal activities and illegal businesses, invalid age ratings...).
Let's return to the origin of the web: one site, one domain, one third party. This does not mean sites cannot link to other third-party sites, but this must be done only with an explicit user action (tapping or clicking), and visitors MUST be able to know where this will take them, or which content will be displayed.
This is even possible for embedded videos (e.g. YouTube) in news articles: the news website can itself host a cached static image for the frame and an icon for the "play" button; when users click that icon, it activates the third-party video, and at that point the third party interacts directly with the user and can collect other data. But the unactivated content will be tracked only by the origin website, under its own published policy.
In my local development environment I use an Apache server. What worked for me was:
Open your config file in sites-available/yoursite.conf, then add the following line inside your VirtualHost:
Header always set Strict-Transport-Security "max-age=0"
Restart your server.
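For context, a sketch of where that directive sits (hypothetical names and paths; assumes mod_headers is enabled and SSL directives are configured elsewhere):

<VirtualHost *:443>
    ServerName yoursite.local
    # Expire any cached HSTS policy immediately
    Header always set Strict-Transport-Security "max-age=0"
</VirtualHost>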
A lot of people talk about 301 redirecting incoming requests to one canonical URL for SEO or other purposes.
This can be useful. Suppose, for example, a search engine ranked URLs and unfortunately treated a URL with www. and one without it differently. For example, for Facebook there is http://facebook.com, http://www.facebook.com (and even more, like https://).
I guess my question would be: if there is a difference, would it be better overall to redirect to the URL with the www. subdomain, or without? Reasoning would be really appreciated. Thank you.
Often, shorter is better (e.g. use domain.com instead of www.domain.com), since your URLs will be shorter and thus your HTTP requests and responses will also be smaller.
However, one thing to keep in mind is that if your site uses cookies, and you set a cookie on domain.com, that will get sent to all subdomains. If you want to keep a "cookieless" subdomain for performance reasons (e.g. requests for images at images.domain.com don't carry the cookies) then you should consider using the "www." prefix in the canonical URLs if you need cookies to be sent in the requests for pages but not for sub-resources.
You'll also want to use the "www." form if your domain is of a funky (ccTLD) format like XX.YY, because cookies work properly on www.XX.YY but you'll have problems with older browsers if you try to set cookies on XX.YY.
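If you do settle on the www form, the canonical redirect could look like this in .htaccess (a sketch assuming Apache with mod_rewrite; example.com is a placeholder):

RewriteEngine On
# Send bare-domain requests to the canonical www host, keeping the path
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule (.*) http://www.example.com/$1 [R=301,L]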
Alright, you might think that this is one of the most asked questions on the internet, and you're tired of reading the exact same answers. So let's focus on one of the most common answers, and forget about the others.
One of the common answer is:
"The https-site and the http-site are two completely different sites;
it’s a little bit like having a www version of the site and a non-www
version. Make sure you have 301 redirects from the http URLs to the
https ones." (source:
http://www.seomoz.org/ugc/seo-for-https-with-s-like-secure)
So here's my question:
Why are people saying that https and http are two different websites? How different is https://www.mydomain.com from http://www.mydomain.com?
The URI is the same and the content is the same. Only the protocol changes.
Why would the protocol have any impact on SEO? Whether or not the content is encrypted from point A to point B, why would that matter SEO-wise?
Thanks for your help!
-H
HTTP and HTTPS could technically be two different sites. You could configure your server to serve completely different content. They have two different URLs (the difference being the s).
That being said, almost all webmasters with both http and https serve nearly identical content whether the site is secure or not. Google recognizes this and allows you to run both at the same time without having to fear duplicate content penalties.
If you are moving from one to the other, you should treat it similarly to other URL changes:
Put 301 redirects in place so that each page gets properly redirected to the same content at its new URL (see the sketch after this list)
Register both versions in Google Webmaster Tools
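A minimal .htaccess sketch of that redirect step (assumes Apache with mod_rewrite):

RewriteEngine On
# 301 every plain-HTTP request to the same host and path over HTTPS
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]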
I have not personally done this switch, but it should be doable without problems. I have made other types of sitewide URL changes without problems in the last couple of years.
The other alternative would be to run both HTTP and HTTPS at the same time and switch users over more gradually, for example as they log in.
Update to the above answer: as of August 2014, Google has confirmed that sites secured by SSL will start getting a ranking boost. Check the official statement here: http://googlewebmastercentral.blogspot.in/2014/08/https-as-ranking-signal.html
Don't think about it in terms of protocol. Think about it in terms of potentiality from a search engines point of view.
http://example.com and http://www.example.com can be completely different sites.
http://example.com/ and http://www.example.com/home can be completely different pages.
https://www.example.com and http://www.example.com can, again, be completely different sites.
In addition to this, HTTPS pages can have a very hard time ranking in Google and other search engines.
If your entire site is HTTPS and presents an SSL certificate for an HTTP request, Google views it as secure and assumes it's HTTPS for a reason. It's sometimes not very clever in this regard. If you have secure product or category pages, for instance, they simply will not rank compared to competitors. I have seen this time and again.
In recent months, it is becoming very clear Google will gently force webmasters to move to HTTPS.
Why are people saying that https and http are two different websites? How different is https://www.mydomain.com from http://www.mydomain.com?
Answer: Use the site: operator to find duplicate content. Go to a browser and type:
site:http://example-domain.com
and
site:https://example-domain.com
If you see both versions indexed in Google or other search engines, they are duplicates. You must redirect the HTTP version to the HTTPS version to avoid diluting your website's authority and a possible penalty from Google's Panda algorithm.
Why would the protocol have any impact on SEO?
Answer:
For ecommerce websites, Google will not rank them well without being secure. They do not want users to get their bank info etc. stolen.
Google will be giving ranking boosts to sites that move to HTTPS in the future. Although it is not a large ranking signal now, it could become larger.
The guys at Google Chrome have submitted a proposal to dish out warnings to users for ALL websites not using HTTPS. Yes, I know it sounds crazy, but check this out.
Info taken from this guide on how to move to HTTPS without killing your rank.
Recently, Firefox has started showing an error if SSL is inactive. You should enable SSL and 301-redirect the HTTP URLs to HTTPS.
One of YSlow's measurables is to use cookie-free domains to serve static files.
"When the browser requests a static
image and sends cookies with the
request, the server ignores the
cookies. These cookies are unnecessary
network traffic. To workaround this
problem, make sure that static
components are requested with
cookie-free requests by creating a
subdomain and hosting them there." --
Yahoo YSlow
I interpret this to mean that I could experience performance gains if I move www.example.com/images to static.example.com/images.
Although this is easy to do, I would lose the handy ability within my content management system (Joomla/WordPress) to easily reference and link to these images.
Is it possible to use .htaccess to redirect all requests for a particular folder on www.example.com to a folder on static.example.com instead? Would this method also fool the CMS into thinking the images were located in the default locations on its own domain?
Is it possible to use .htaccess to redirect all requests for a particular folder on www.example.com to a folder on static.example.com instead?
Possible, but counterproductive: the client would have to make an HTTP request, get the redirect response, then make another HTTP request. This costs a lot more than the single line of cookie data saved!
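For completeness, such a (not recommended) redirect would look something like this in the root .htaccess (hypothetical folder names; assumes mod_rewrite):

RewriteEngine On
# Redirect every image request to the static subdomain; costs an extra round trip per asset
RewriteRule ^images/(.*)$ http://static.example.com/images/$1 [R=301,L]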
Would this method also fool the CMS into thinking the images were located in the default locations on its own domain?
No.
Although this is easy to do, I would lose the handy ability within my content management system (Joomla/WordPress) to easily reference and link to these images.
What you could try to do is create a plugin in Joomla that dynamically creates these references.
For example, you could have a plugin such that when you enter {dynamic_path path} in an article, it prepends 'static.example.com/images' to the path provided. So, every time you need to change the server path, you just change it in the plugin. For the links that are already in the database, you can try to use phpMyAdmin to change them to this structure.
It still loses the WYSIWYG ability in TinyMCE, but it is an alternative.
In theory you could create a virtual domain that points directly to the images folder, such as images.example.com. Then in your CMS (hopefully at the theme layer) you could replace any paths that point to the images folder with an absolute path to the subdomain.
The redirects would cause far more network traffic, and far more latency, than simply leaving things as they are.
It would redirect the request but the client would still be sending its cookies to the server, so really you accomplished nothing. You would have to directly access the files from a domain that isn't storing cookies for it to work.
What you really want to do is use staticexample.com/images instead of static.example.com/images, so that you don't pick up any cookies that you may have set on the example.com domain. If all you do is serve images from that domain with a simple Apache server or something, then you can configure that server not to return even a session cookie.
The redirects are a very bad idea. Cookies cause some performance hits, but the round trips to the server that a redirect would cause are a much more serious performance issue.
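To illustrate the cookie-free host suggested above, a sketch of a bare static vhost (hypothetical names and paths; the point is that no application code runs, so no Set-Cookie header is ever emitted):

<VirtualHost *:80>
    ServerName staticexample.com
    DocumentRoot /var/www/static
    # Static files only: no PHP handler and no sessions, hence no cookies
    <Directory "/var/www/static">
        Require all granted
    </Directory>
</VirtualHost>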
I did the following and it worked for me (note that FilesMatch takes a regular expression, so a negative lookahead is used to match everything except images; assumes PHP running via mod_php):

# Set the PHP session cookie domain only for requests that are not images
<FilesMatch "^(?!.*\.(gif|jpe?g|png)$).*$">
    php_value session.cookie_domain example.com
</FilesMatch>

What this means is that the session cookie domain is only set for non-image requests, so the images are served cookie-free by the server.