I've set Cloudflare up and it's working great; the only problem is that this keeps coming up in PageSpeed Insights:
Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network.
I've set the cache at Cloudflare to 4 days and PageSpeed is picking up on this. Is there a bit of code I'm missing here?
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.vouchertoday.uk
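For reference, the kind of header PageSpeed is checking for on each static resource would look something like this (345,600 seconds corresponds to the 4 days mentioned above; whether you need public depends on your setup, so treat this as a sketch rather than the exact fix):

Cache-Control: public, max-age=345600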
Related
I'm using Google Lighthouse to calculate a performance score. One of the criteria is caching static assets such as images and scripts.
I don't have control over all of these, but for the ones I do control the cache has been set to 30 days. Lighthouse does report these as having a 30-day cache, yet it still flags them as an issue.
What do I need to do to rectify this?
Please see screenshot below:
Lighthouse will warn you to serve static assets with an efficient cache policy if your score for that audit is not greater than or equal to 90. It will also list all of your static assets in the details summary (regardless of whether they pass or not).
Since you do not have control over some of your static assets, your score appears to be lower than 90, and therefore, you are still seeing your static assets that pass the audit in the details summary.
You can verify this by saving your results as a JSON file, opening it in any text editor, and searching for the section containing "uses-long-cache-ttl".
The score underneath will likely be less than 90.
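If you have jq installed, a quick way to pull that section out of a saved report (assuming it was saved as report.json; the filename is just an example) would be:

# Print the uses-long-cache-ttl audit (score and details) from the saved report.
jq '.audits["uses-long-cache-ttl"]' report.json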
You can learn more about this particular audit by visiting this link:
https://developers.google.com/web/tools/lighthouse/audits/cache-policy
I also had a 30-day cache policy, and what fixed this for me was adding the public and no-cache values to the Cache-Control header.
I only figured this out while testing Firebase Hosting against my old host, which was IIS. The IIS-hosted site was passing even though it had a shorter max-age value. Checking the Network tab in Chrome's developer tools, I saw that my IIS web.config set public and no-cache values in the Cache-Control header, but my firebase.json didn't have those values. Once I added them, I'm passing again!
Why this passes is a mystery to me, but see if you can add them and test again.
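For reference, a minimal sketch of the kind of firebase.json headers block described above; the glob pattern and the max-age value are assumptions, not taken from the original post:

{
  "hosting": {
    "headers": [
      {
        "source": "**/*.@(js|css|png|jpg)",
        "headers": [
          { "key": "Cache-Control", "value": "public, no-cache, max-age=2592000" }
        ]
      }
    ]
  }
}

no-cache alongside max-age looks contradictory, but it matches what the answer describes: the browser may store the asset, yet must revalidate it before reuse.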
In my case, to fix the "Serve static assets with an efficient cache policy" error in Lighthouse, I had to increase the max-age value to 97 days:
Cache-Control: max-age=8380800
My version of Lighthouse is 5.7.0
I have a folder for customer avatar uploads, and I set up an Apache server pointing to this folder to serve the customer avatar images.
My system is currently showing a very strange symptom:
I update an avatar image in the folder.
I access the image in the browser, but it still displays the old one even though I refresh (Ctrl+F5) many times.
After a while (roughly 1 minute), I refresh the same URL and the latest image is displayed.
Is this symptom related to my Apache configuration? Could anyone help me figure out which setting is causing it? Thank you!
First of all, I am guessing that you have not enabled any of the Apache caching modules. If that is the case, this behavior is due to the browser's caching on the client side. You can verify this by opening the URL in a private browsing session after updating the image. You can also check Apache's access.log to see whether a request for the image appears at all; if it does not, the image is being served directly from the browser's cache.
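If it does turn out to be client-side caching and you want browsers to always revalidate avatars, here is a minimal sketch, assuming mod_headers is enabled and the avatars live under /var/www/avatars (both the module and the path are assumptions, not from the original question):

<Directory "/var/www/avatars">
    # Tell browsers to store the image but revalidate it on every request,
    # so an updated avatar shows up immediately after upload.
    Header set Cache-Control "no-cache, must-revalidate"
</Directory>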
I have installed the Varnish cache with my Apache web server and configured them correctly. It works OK and I can now access my web pages through the Varnish cache.
The default behavior of varnish is to store copies of the pages served by the web server. The next time the same page is requested, Varnish will serve the copy instead of requesting the page from the Apache server.
And now my question: after setting up Varnish, is it possible to cache my entire website up front, rather than waiting for each page to be requested before it is stored in the cache? After setup the cache is initially empty, and a page only becomes available in the cache once it has been accessed. Can this be done without having to access each page manually?
What you are looking for is a way of warming up the cache. You could use varnishreplay or a web crawler such as Wget or HTTrack to go through your site. Alternatively, if you have a sitemap of your pages, you could use that as a starting point and warm up the cache by looping over it and issuing requests against each page using e.g. curl or wget (see the sketch at the end of this answer).
Using varnishreplay requires you to first run varnishlog and gather a log of traffic before you can use it later for playing back the traffic and warming up the cache.
Wget, HTTrack, etc. can be pointed at your home page and they will crawl their way through your site. Depending on the size and nature of your site this might not be practical, though (for example if you use Ajax extensively).
Unless your pages take a very long time to load from the backend server (i.e. Apache), I wouldn't worry too much about warming up the cache. If the TTL for the cached content is high enough most of the visitors will only ever receive cached content anyway.
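A minimal warm-up sketch along the lines of the sitemap approach above; the sitemap URL is a placeholder and GNU grep (for -P) is assumed:

# Fetch the sitemap, pull out the <loc> URLs, and request each page once
# so Varnish stores a copy before real visitors arrive.
curl -s https://example.com/sitemap.xml \
  | grep -oP '(?<=<loc>)[^<]+' \
  | while read -r url; do
      curl -s -o /dev/null "$url"
    done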
There is a much better way to do this which uses req.hash_always_miss and works with Varnish 3 and 4 (it uses a sitemap too). It warms up your cache and refreshes old pages without having to purge the cache. A full diagram, an outline of how to configure it, and three scripts for various use cases are available at http://www.htpcguides.com/smart-warm-up-your-wordpress-varnish-cache-scripts/ and are easily adapted for non-WordPress sites.
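The core idea, as a rough VCL sketch (the X-Cache-Warmup header name is an assumption used only to mark warm-up requests; the linked article shows the complete setup):

sub vcl_recv {
    # Requests carrying the (hypothetical) warm-up header bypass the stored copy:
    # Varnish fetches a fresh object from the backend and caches it,
    # refreshing the content without a purge.
    if (req.http.X-Cache-Warmup == "yes") {
        set req.hash_always_miss = true;
    }
}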
I am using CloudFront with mod_pagespeed running on the server.
When updating a CSS file or flushing the cache I see problematic behavior: the first refresh in the browser returns the original CSS (this is fine), but when I refresh a second time I get the correct rewritten CSS file name, yet the content of the file served by CloudFront is still the original, not the rewritten content.
Why would this happen?
Any idea how to fix this?
Update:
For some reason it just stopped happening... I don't know why.
SimonW, since your original post a feature has been added to PageSpeed (in March 2013, in version 1.2.24.1) that deals with this issue directly. The directive is enabled via the following:
Apache:
ModPagespeedRewriteDeadlinePerFlushMs deadline_value_in_milliseconds
Nginx:
pagespeed RewriteDeadlinePerFlushMs deadline_value_in_milliseconds;
The docs describe the directive as follows (emphasis mine):
When PageSpeed attempts to rewrite an uncached (or expired) resource it will wait for up to 10ms per flush window (by default) for it to finish and return the optimized resource if it's available. If it has not completed within that time the original (unoptimized) resource is returned and the optimizer is moved to the background for future requests. The following directive can be applied to change the deadline. Increasing this value will increase page latency, but might reduce load time (for instance on a bandwidth-constrained link where it's worth waiting for image compression to complete). Note that a value less than or equal to zero will cause PageSpeed to wait indefinitely.
So, if you specify a value of 0 for deadline_value_in_milliseconds you should always get the fully optimized page. I would caution that the latency can be high in some cases. In my case I really wanted this behavior, even with the latency concern, because the content was to be cached on my CDN's edge servers and I therefore wanted the most optimized version possible to be served to the CDN for caching.
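In Apache, per the description above, that would look like this (a value of 0 tells PageSpeed to wait indefinitely for the rewrite):

ModPagespeedRewriteDeadlinePerFlushMs 0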
This could happen if you have multiple backend servers and CloudFront is hitting a different one from the one the HTML request went through. In that case the resource was rewritten on the server that served the HTML, but not on the other server. There is a short timeout, and if the other server doesn't finish the rewrite in that time, it will just serve the original content with Cache-Control: private,max-age=300. It's possible CloudFront caches that for a little while (even though it obviously shouldn't), but then eventually re-requests the resource from your backend and gets the correctly rewritten version.
I recently deployed a site http://boardlite.com . One of the tester websites http://www.gidnetwork.com/tools/gzip-test.php suggests that gzip is not enabled for my site. YSlow gives an A grade for Gzip but does not mention that gzip is on.
How do I make sure the site properly implements gzip? I am also going to enable far-future expiry dates for static media, and I would like to know if there are any best practices for setting the expiry date.
Static media on the site is served by an nginx server while the site itself runs on top of Apache, in case this information is relevant.
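For what it's worth, a minimal nginx sketch of the two things being asked about (gzip plus an expiry header); the /static/ location, the MIME types, and the one-week value are assumptions, not taken from the question:

# In the http block: enable gzip for text-based assets.
gzip on;
gzip_types text/css application/javascript image/svg+xml;

# In the server block: cache static media for a week and allow shared caches to store it.
location /static/ {
    expires 7d;
    add_header Cache-Control "public";
}

A one-week expiry also matches the advice in the answer below about not going too far into the future.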
I'd advise against going too far into the future or you'll make site upgrades a nightmare. I believe a week should be enough, since after that you'll still only be serving 304 Not Modified responses rather than the whole image.
It looks like Gzip is enabled on your server. You can tell by checking the HTTP response headers for 'Content-Encoding: gzip'.
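For example, you could check from the command line (using the site from the question):

# Request the page with gzip allowed, dump the response headers, and show only Content-Encoding.
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://boardlite.com/ | grep -i content-encoding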
I can't think of any "best practices" for future expiry dates - other than to make sure they're not in the past ;)
There are many other ways you can optimize your web site. Spriting your CSS background images and using a content delivery network for your static content are a few.
Andrew