Will Cloudflare cache things by default?

The help section of Caching -> Configuration -> Caching Level says:
Note: By default, Cloudflare does not cache HTML content. You can create a Page Rule to cache static HTML content.
I have no page rules set yet. But when I go to Analytics, I see there are 600+ cached requests. Does anybody know what happened here?

As the note says, "By default, Cloudflare does not cache HTML content." That means your .html files are not cached, but other assets such as images, JavaScript, and fonts are.
Here is a list of file types that are cached by default:
https://developers.cloudflare.com/cache/about/default-cache-behavior#default-cached-file-extensions
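One way to see which of those 600+ requests were cached (assuming the zone is proxied through Cloudflare; the URL below is a placeholder) is to check the cf-cache-status response header of individual assets:

curl -sI https://www.example.com/images/logo.png | grep -i cf-cache-status

A value of HIT means the asset was served from Cloudflare's cache; HTML typically comes back as DYNAMIC because it isn't cached by default.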

Related

Cloudflare and caching sections users aren't allowed into

This is just a question out of curiosity.
How is it that Cloudflare is able to cache my admin panel? How do they do it?
By default, Cloudflare only caches files that should be static all the time (images, CSS, JS, etc.). You can use Page Rules to configure caching of HTML, but by default Cloudflare should not cache user-specific markup.
More info from Cloudflare

Rewriting static resources url in .htaccess for CDN

I have an existing live application based on a PHP/MySQL/Apache stack. A quick performance evaluation revealed that a CDN solution would help us gain a lot of speed. Planning to use CloudFront for the CDN.
The issue is that the existing code wasn't written with a CDN in mind.
At the moment, our HTML output contains static resources under link tags, referenced with paths like "./images/test.png", etc.
Is there any way to identify these links just before sending the output and rewrite them to load from the CDN URL?
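One possible approach (a sketch, assuming your host has mod_substitute enabled; the CloudFront domain below is a placeholder) is to rewrite the outgoing HTML from .htaccess:

<IfModule mod_substitute.c>
    # Rewrite "./images/" references in outgoing HTML to point at the CDN
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|./images/|https://d111111abcdef8.cloudfront.net/images/|in"
</IfModule>

Since the stack is PHP, the same replacement could also be done in an output-buffer callback (ob_start) right before the page is sent.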

React Router + AWS Backend, how to SEO

I am using React and React Router in my single page web application. Since I'm doing client side rendering, I'd like to serve all of my static files (HTML, CSS, JS) with a CDN. I'm using Amazon S3 to host the files and Amazon CloudFront as the CDN.
When the user requests /css/styles.css, the file exists so S3 serves it.
When the user requests /foo/bar, this is a dynamic URL so S3 adds a hashbang: /#!/foo/bar. This will serve index.html. On my client side I remove the hashbang so my URLs are pretty.
This all works great for 100% of my users.
All static files are served through a CDN
A dynamic URL will be routed to /#!/{...} which serves index.html (my single page application); see the routing-rule sketch after this list
My client side removes the hashbang so the URLs are pretty again
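For context, the 302 redirect into /#!/{...} described above is typically produced by an S3 static-website RoutingRules configuration, presumably something along these lines (the host name is a placeholder):

<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>example.com</HostName>
      <ReplaceKeyPrefixWith>#!/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>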
The problem
The problem is that Google won't crawl my website. Here's why:
Google requests /
They see a bunch of links, e.g. to /foo/bar
Google requests /foo/bar
They get redirected to /#!/foo/bar (302 Found)
They remove the hashbang and request /
<rant>
Why is the hashbang being removed? My app works great for 100% of my users, so why do I need to redesign it in such a way just to get Google to crawl it properly? It's 2016, just follow the hashbang...
</rant>
Am I doing something wrong? Is there a better way to get S3 to serve index.html when it doesn't recognize the path?
Setting up a node server to handle these paths isn't the correct solution because that defeats the entire purpose of having a CDN.
In this thread, Michael Jackson, a top contributor to React Router, says, "Thankfully hashbang is no longer in widespread use." How would you change my setup to not use the hashbang?
You can also check out this trick: set up a CloudFront distribution and then alter the 404 behaviour in the "Error Pages" section of your distribution. That way you can use domain.com/foo/bar links again :)
I know this is a few months old, but for anyone who comes across the same problem: you can simply specify "index.html" as the error document in S3. The error document property can be found under the bucket's Properties => Static Website Hosting => Enable website hosting.
Keep in mind that taking this approach means you will be responsible for handling HTTP errors like 404 in your own application, along with the other HTTP errors.
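If you prefer to script that setting rather than clicking through the console, the same configuration can be applied with the AWS CLI (the bucket name is a placeholder):

aws s3 website s3://my-spa-bucket/ --index-document index.html --error-document index.html

Every unknown path then returns the index.html body, so the client-side router has to render its own "not found" view.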
The hashbang is not recommended when you want to make an SEO-friendly website; even if it is indexed in Google, the page will display only a little, thin content.
The best way to build your website is by using the latest trend and techniques, which is "progressive web enhancement"; search for it on Google and you will find many articles about it.
Mainly, you should have a separate link for each page, and when the user clicks on any page they will be taken to that page using any effect you want, even if it is a single-page website.
In this case, Google will have a unique link for each page and the user will still get the fancy effect and the great UX.
EX:
<a href="/contact-us">Contact Us</a>

Is a request for the .js files still made if a browser has JavaScript disabled?

When a browser has JavaScript disabled, is the request for the .js files still made to the server (i.e. do the files still end up downloaded to the client, just not parsed)?
The reason I'm asking is to see whether it's worth lazy-loading JavaScript files so as to prevent the browser from requesting them if JavaScript is disabled; i.e. having only a small file that does the lazy-loading and then loads the larger files, so that the large files are not requested when JavaScript is disabled in the browser.
No, it won't. With JavaScript disabled, the browser does not request the files referenced by script tags.
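For reference, the lazy-loading setup described in the question usually amounts to a tiny loader script along these lines (the file names are placeholders), though as the answer says it isn't needed just to avoid requests when JavaScript is disabled:

// loader.js: the only script referenced directly from the page.
// It injects the larger bundles at runtime.
['/js/vendor.js', '/js/app.js'].forEach(function (src) {
    var script = document.createElement('script');
    script.src = src;
    script.async = true;
    document.head.appendChild(script);
});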

Why do I have both HTTPS and HTTP links on site, need them all secure!

I am getting the security alert "You are about to be directed to a connection that is not secure. The information you are sending to the current site might be transmitted to a non-secure site. Do you wish to continue?" when I try to log in as a customer on my client's osCommerce website. I noticed the link in the status bar goes from an https prefix to a non-secure http prefix. The site has an SSL certificate, so how do I ensure the entire store portion of the site stays on the secure site?
It is likely that some parts of the page, most often images or scripts, are loaded non-secure. You'll need to go through them in the browser's "view page source" view one by one and eliminate the reason (most often, a configuration setting pointing to http://).
Some external tools that you may be embedding on your site, like Google Analytics, can be included through https://; some can't. In that case, you may have to remove those tools from your secure site.
If you can't switch all the settings, try using relative paths
<img src="/images/shop/xyz.gif">
but the first thing is to identify the non-secure elements using the source code view of your browser.
An immediate redirection from an https:// page to an http:// one would not result in a warning like the one you describe. Can you specify what's up with that?
Use Fiddler and browse your site; in the listing it should become evident what is being loaded over HTTP and what over HTTPS.
Ensure that the following are included over https:
css files
js files
embedded media (images, videos)
If you're confident none of your own stuff is included over http, check things like tracking pixels and other third-party gadgets.
Edit: Now that you've linked your page, I see that your <base> tag is the problem:
<base href="http://balancedecosolutions.com/products//catalog/">
Change to:
<base href="https://balancedecosolutions.com/products//catalog/">
If the suggestion from Pekka doesn't suit your needs, you can try using relative links based on the scheme (http or https):
e.g.,
<a href="//www.example.com/page.html">I am a 100% valid link!</a>
The only problem with this technique is that it doesn't work with CSS files in all browsers, though it does work within JavaScript and inline CSS. (I could be wrong here; anyone want to check?)
e.g., the following:
<link rel="stylesheet" href="/css/mycss.css" />
<!-- mycss.css contents: -->
body {
    background-image: url(//static.example.com/background.png);
}
...might fail.
A simple find/replace on your source code could be an easy fix.
It sounds to me like the HTML form you are submitting is hardcoded to post to a non-secure page.
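For example (the form path below is hypothetical), an action hardcoded to plain http would trigger exactly that warning on submit, while a relative or https:// action would not:

<!-- hardcoded to http: triggers the "not secure" warning -->
<form action="http://balancedecosolutions.com/products//catalog/login.php" method="post">

<!-- relative action keeps the customer on https -->
<form action="/products//catalog/login.php" method="post">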