Are Cloudflare's warnings about HSTS overblown?

The warnings Cloudflare presents me with about enabling HSTS are lengthy and dire, describing a few situations in which my users will not be able to visit my site for up to six months (i.e. forever). example here
It seems to me the only way to trigger these problems is to disable HTTPS/SSL, and in 2020, why would I (or anyone) want to do that?
In a world where SSL ought to be enabled everywhere, are these warnings overblown? Assuming I'm not turning off SSL - can I just enable it and be happy?

I’m of the opinion that HSTS is good, and should be used, but I hate these online tutorials that say just turn it on without warning about the consequences of getting it wrong. I’ve a blog post discussing “Dangerous Web Security Features” like this and HPKP and even CSP.
Yes you are right that we are increasingly moving to an HTTPS world, and this should be low risk if you’ve already switched.
However it is possible to miss scenarios and cause issues.
For example, suppose you don't have SSL enabled everywhere and are only considering your main website: you enable HSTS with includeSubDomains on your apex domain (example.com), or preload it (which requires includeSubDomains), as well as on the www version (www.example.com), but you also reuse that domain elsewhere without SSL (e.g. intranet.example.com, dev.example.com or oldapp.example.com). Those sites will stop working.
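To make that concrete, the whole feature boils down to one response header. Here is a minimal Node.js sketch of it (my own illustration, not Cloudflare's configuration; as I understand it, Cloudflare adds an equivalent header at its edge when you turn HSTS on in the dashboard):

const http = require("http");

http.createServer((req, res) => {
  // max-age=15552000 seconds is about 6 months; includeSubDomains is the part
  // that breaks plain-HTTP subdomains like intranet.example.com for that whole
  // period once a browser has seen the header.
  // (Browsers only honor HSTS when the header arrives over HTTPS; a plain HTTP
  // server is used here only to keep the sketch short.)
  res.setHeader("Strict-Transport-Security", "max-age=15552000; includeSubDomains");
  res.end("ok");
}).listen(3000);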
Google used to maintain a list of sites that preloaded HSTS and regretted it, because they didn't think it all through, or because some web developer thought they were doing good by securing the website but broke other things.
So I don’t think it’s overblown to give warnings.
For a brand new site I would use HSTS from the get-go unless there is a very good reason not to.


How to remove Cloudflare’s javascripts slowing my site?

I have a WordPress site at http://biblicomentarios.com, and I use Cloudflare. No matter what I do, I can't remove two JavaScript files that come from Cloudflare. I use GTmetrix, and I see them in the waterfall tab blocking my site. They are email-decode.min.js and rocket-loader.min.js. Of course, I've already disabled email obfuscation in the Scrape Shield tab, and I have Rocket Loader disabled. I purged ALL my caches (Cloudflare cache, Autoptimize cache, SuperCache, even the cPanel cache). But the JS files are pretty persistent, and they insist on appearing in the GTmetrix waterfall as blocking scripts, slowing my site. Also, I can't add Expires headers to them, so I have more than one reason to want them off my site. Is there any way to remove them, given that they are already disabled in the Cloudflare panel?
Please note:
- Rocket Loader is disabled; Scrape Shield's email obfuscation is disabled.
- I do not have a "cdn-cgi" directory on my site or server. Typically, this path is injected by Cloudflare, so both scripts come from Cloudflare.
- I have no “apps” installed through CloudFlare.
- The blocking scripts paths are https://ajax.cloudflare.com/cdn-cgi/scripts/2448a7bd/cloudflare-static/rocket-loader.min.js and http://biblicomentarios.com/cdn-cgi/scripts/5c5dd728/cloudflare-static/email-decode.min.js.
Depending on what Cloudflare plan you're on, you can set up "Page Rules" for your site or subsections of your site.
I'd suggest adding 2 rules:
- Disable Security: this should prevent email-decode.min.js from loading.
- Disable Performance: this should prevent rocket-loader.min.js from loading.
I think you can have one setting per rule, and 3 page rules if you're using the Free plan.
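If you'd rather script it than click through the dashboard, the same rules can, as far as I know, also be created through Cloudflare's v4 Page Rules API. A rough sketch assuming Node 18+ (for the built-in fetch); the zone ID, API token, and URL pattern are placeholders you'd replace with your own:

const ZONE_ID = "YOUR_ZONE_ID";     // placeholder: shown on the zone's Overview page
const API_TOKEN = "YOUR_API_TOKEN"; // placeholder: a token allowed to edit Page Rules

async function disablePerformanceRule() {
  // POST /zones/{id}/pagerules creates a Page Rule; "disable_performance"
  // and "disable_security" are the action ids behind the dashboard toggles.
  const res = await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/pagerules`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      targets: [
        { target: "url", constraint: { operator: "matches", value: "*biblicomentarios.com/*" } },
      ],
      actions: [{ id: "disable_performance" }],
      status: "active",
    }),
  });
  console.log(await res.json());
}

disablePerformanceRule();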
Go to Scrape Shield, then disable Email Address Obfuscation. This will disable email-decode.min.js.
Go to Speed -> Optimization, then disable Rocket Loader™. This will disable rocket-loader.min.js.
Remember to clear the cache.
Just be careful: Disable Performance will turn off:
- Auto Minify
- Rocket Loader
- Mirage
- Polish

Recent HTTPS (SSL) addition, getting "site cannot provide a secure connection" error page

Recently our website went from http to https. I, and others, are randomly getting "The Site Can't Provide a Secure Connection" page. Upon refresh, the page loads just fine. Why are we getting this initial page randomly?
FYI... We have http to https redirects in place.
Impossible to say without more details, but some things I can suggest are:
- You have multiple servers, and some are configured correctly and some incorrectly.
- You are not including the full certificate chain. Sometimes your browser has the missing intermediate cached and sometimes not (see this answer for more info: https://serverfault.com/questions/826100/ca-certificate-trouble-with-squid-on-centos7/826321#826321). A quick way to check what your server actually sends is sketched after this list.
- A bug in the browser/software. I had this issue in Chrome when using Apache HTTP/2; I never did figure it out, but a Chrome update fixed it.
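For the missing-intermediate case specifically, besides SSL Labs, you can dump the chain your server actually presents. A rough Node.js sketch using the built-in tls module (the hostname is a placeholder):

const tls = require("tls");

const host = "example.com"; // placeholder: your own hostname

const socket = tls.connect(
  { host, port: 443, servername: host, rejectUnauthorized: false },
  () => {
    // getPeerCertificate(true) returns the leaf certificate with
    // issuerCertificate links up the chain the connection ended up using
    let cert = socket.getPeerCertificate(true);
    while (cert && Object.keys(cert).length > 0) {
      console.log(`${cert.subject && cert.subject.CN} issued by ${cert.issuer && cert.issuer.CN}`);
      // the self-signed root points back at itself, so stop there
      if (!cert.issuerCertificate || cert.issuerCertificate === cert) break;
      cert = cert.issuerCertificate;
    }
    socket.end();
  }
);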
Run https://www.ssllabs.com/ssltest/ on your site to confirm there is no problem with your HTTPS setup. If that doesn't turn anything up, or you don't understand the results, then update your question with more details (which server and browser you are using and which versions, whether you have any proxy between your browser and the site, and ideally the website name) if you want people to help you.
Also be aware this is a programming site; some people don't like these questions here and will suggest other Stack Exchange sites, but I honestly don't know where this question is best placed: Server Fault maybe, but that is for professional sysadmins only; Unix & Linux seems a little generic (I'm not even sure you are using a Linux web server!); Webmasters is more for content and SEO questions; Information Security is more for theoretical SSL/TLS questions...

Site functionality diminished over VPN/company network

We are currently experiencing diminished functionality with one of our customers on our main production site. All subpages and resources seem to be affected as well.
The customer reports a completely broken experience for themselves with the site not working correctly at all, mostly due to assets not loading correctly.
We already started investigating and have found that - so far - nothing seems to be wrong with the site itself.
Quick rundown:
- The production site has a Cloudflare layer, and almost all of its assets are delivered either via cdnjs or Amazon CloudFront (behind Cloudflare); all assets are reachable via HTTP as well.
- The site uses SSL and enforces it (the dynamic cert from Cloudflare).
- We captured a HAR for a request to one of our pages; the request times are extremely long. If you would like to look at it, here is an online HAR viewer; be sure to uncheck validation of the file.
- The customer uses Internet Explorer 8 and Chrome (39). While the site is not optimized for IE8, it should run fine in Chrome; in fact, it runs fine in most browsers above IE9 for all of us.
Notes
We already ruled out:
- Virtual delivery problems (there could be physical limitations we are not aware of)
- General faultiness of our setup (we tried three different open VPNs to verify this)
- Being on the customer's blacklist by accident (although we cannot be entirely sure of this)
- SSL Server Name Indication (SNI) problems
- (Potentially) a general problem with the customer's network; the customer does not report any problems with "the rest of the internet".
The customer will not give access to their VPN/disclose security details so we cannot really test for the situation ourselves. We suspect that the customer uses an internal proxy that might cause the problems described, but we are not sure.
Questions
My questions here are:
- Is there any known problem caused by internal networking in conjunction with our setup that can cause this behaviour?
- Are there potential problems on our end that we could have overlooked, or things that we do differently from other sites?
It seems the connection is being made (or routed) through a low-bandwidth, high-latency link (or a very congested one). Most of the DNS lookups and connects seem to be taking ~10s.
In the HAR you can see that it affects fonts.googleapis.com and cdnjs.cloudflare.com, and https://www.google-analytics.com/analytics.js has no data captured. To me, the claim that the customer does not report any problems with "the rest of the internet" seems dubious, seeing that in this HAR the browser hasn't been able to load the analytics JS and access to the usual CDNs is very slow.
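If you want to pull those numbers out of the HAR yourself, here is a rough Node.js sketch (the capture.har filename and the 5-second threshold are just placeholders):

const fs = require("fs");

const har = JSON.parse(fs.readFileSync("capture.har", "utf8"));
const THRESHOLD_MS = 5000;

for (const entry of har.log.entries) {
  // timings.dns / timings.connect are -1 when that phase did not apply
  const { dns = -1, connect = -1 } = entry.timings || {};
  if (dns > THRESHOLD_MS || connect > THRESHOLD_MS) {
    console.log(entry.request.url);
    console.log(`  dns: ${dns} ms, connect: ${connect} ms, total: ${entry.time} ms`);
  }
}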
My guesses (pick one or more):
- they are testing on a machine different from the one that has no problems with "the rest of the internet"
- this machine is very, very slow
- it has some kind of content filtering, antivirus, or other web filtering (perhaps with an SSL certificate installed in order to forge and inspect HTTPS traffic)
- the access goes through a congested route, or a low-bandwidth, high-latency link
Two hot spots:
- CDN edge locations can sometimes be inconsistent; I spent a lot of time understanding this issue. How? In a live session with the client, opening each loaded resource one by one, I realized there were differences between CDN access points (mine in eastern Europe, theirs in central Europe). The CDN host was one of the biggest US players in the world; anyhow, we fixed it by invalidating (deleting) all files from the CDN so that new, correct ones were loaded.
- You need a CDN that supports serving files over HTTPS, then use that CDN for the SSL requests.

Why are the files from the Google Libraries API loaded via HTTPS?

Well, the title says it all: why are the files loaded via HTTPS? I am just adding some new libraries to the website and noticed that the links are all https://.
Now, from what I understand, you use HTTPS when there is sensitive information involved, and I don't think that is the case with these libraries. I doubt anybody is interested in intercepting the content of these files.
Is there any explanation for this?
People asked for it so they could use the libraries on things like e-commerce sites, which eventually require an SSL connection. They provide links to the https version by default to make it easier for everyone overall (automatically avoids mixed-content warnings), and for most people the slight performance cost won't matter. But if you know you won't have any need for it, just strip it down to a regular http connection:
https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js
http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js
They did actually publish the http URLs at one point, but I'd imagine that the resulting mixed-content warnings etc that came about as a result of people adding SSL connections and not thinking it through just created a bunch of support questions, so it was simpler to default to showing https and let people hack it if they really wanted.

Broken SSL, what to do

I have a site and I implemented SSL there, but when I browse it, the security seal doesn't appear. I asked GoDaddy, and they replied:
Thank you for contacting online support. I cannot replicate the issue you have described. The error you described is caused by the way your site has been designed. If you receive this error, you have a combination of secure and non-secure objects on the page. For example, if your secure website was https://www.domain.tld and you added an object (an image, script, flash file, etc.) to that page that was located at http://www.domain.tld/image.jpg, you would break the seal.
You will need to change your design to link to objects using https (i.e. https://www.domain.tld/image.jpg) or modify your site design to use relative paths (/image.jpg). This error can only be corrected by modifying your site design. Please contact your web designer or the manufacturer of your web design software if you require additional assistance modifying your site design.
But the problem is I did all of that; all my images and JavaScripts are under HTTPS, yet the seal still doesn't appear, and it says some content is insecure. What is the problem?
Your problem is in line 8 of jqueryslidemenu.js:
var arrowimages={down:['downarrowclass', 'http://lendersutopia.com/images/down.gif', 23], right:['rightarrowclass', 'images/right.gif']}
You should change it to
var arrowimages={down:['downarrowclass', 'images/down.gif', 23], right:['rightarrowclass', 'images/right.gif']}
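More generally, if the seal still complains after a fix like this, one quick way to hunt for leftover http:// resources is to list them from the browser console using the standard Resource Timing API. A rough sketch, not specific to this site:

// run in the browser console on the https:// page: lists resources that
// were still fetched over plain http://
performance.getEntriesByType("resource")
  .map((entry) => entry.name)
  .filter((url) => url.startsWith("http://"))
  .forEach((url) => console.log("insecure resource:", url));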