Why is Lighthouse failing my cached files? - apache

I'm using Google Lighthouse to calculate a performance score. One of the criteria is caching static assets such as images and scripts.
I don't have control over all of these, but for the ones I do control, the cache has been set to 30 days. Lighthouse even reports these as having a 30-day cache, yet it still flags them as an issue.
What do I need to do to rectify this?

Lighthouse will warn you to serve static assets with an efficient cache policy if your score for that audit is below 90. It will also list all of your static assets in the details summary, regardless of whether they pass.
Since you do not have control over some of your static assets, your score appears to be lower than 90, and therefore you still see the assets that do pass the audit listed in the details summary.
You can verify this by saving your results as a JSON file, opening it in any text editor, and searching for the section containing "uses-long-cache-ttl".
The score underneath will likely be below the passing threshold (depending on your Lighthouse version, audit scores appear on a 0-100 or 0-1 scale).
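For reference, a trimmed, illustrative fragment of what that section can look like in the report JSON (the values here are made up, and the exact fields vary by Lighthouse version):
"uses-long-cache-ttl": {
  "id": "uses-long-cache-ttl",
  "title": "Serve static assets with an efficient cache policy",
  "score": 0.45,
  "displayValue": "12 resources found"
}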
You can learn more about this particular audit by visiting this link:
https://developers.google.com/web/tools/lighthouse/audits/cache-policy

I also had a 30-day cache policy, and what fixed this for me was adding the public and no-cache values to the Cache-Control header.
I only figured this out while comparing Firebase Hosting against my old host, which was IIS. The IIS-hosted site was passing even though it had a shorter max-age value. I checked the Network tab in Chrome's developer tools and saw that my IIS web.config set public and no-cache on the Cache-Control header, but my firebase.json didn't include those values. Once I added them, I'm passing again!
Why this passes is a mystery to me, but see if you can add them and test again.
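For anyone on Firebase Hosting, a minimal firebase.json sketch of the kind of change I mean might look like this (the glob pattern and the 30-day max-age are illustrative, not my exact config):
{
  "hosting": {
    "headers": [
      {
        "source": "**/*.@(js|css|jpg|png)",
        "headers": [
          { "key": "Cache-Control", "value": "public, no-cache, max-age=2592000" }
        ]
      }
    ]
  }
}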

In my case, to fix the "Serve static assets with an efficient cache policy" error in Lighthouse, I had to increase the max-age value to 97 days (8,380,800 seconds):
Cache-Control: max-age=8380800
My version of Lighthouse is 5.7.0
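If you're serving from Apache, a hedged sketch of how that header could be set with mod_headers (the file-matching pattern is an assumption):
# Illustrative only: apply a 97-day max-age to common static asset types
<IfModule mod_headers.c>
  <FilesMatch "\.(css|js|png|jpg|gif|svg)$">
    Header set Cache-Control "max-age=8380800"
  </FilesMatch>
</IfModule>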

Related

Azure CDN caching with query string

I am curious about an issue I am facing at the moment with Azure CDN, and I don't have an answer for it. I have a CDN profile and endpoint configured to cache content stored in a storage container. For the caching behavior, I am using the default (ignore query strings). I modified one file in the container and was able to retrieve the modified file from the container, but not from the CDN edge, since the edge was returning the previously cached version of the file. So I purged the file in the CDN, and after the purge I was able to get the modified version of the file. But if I request the file from the CDN edge with any query-string parameter, I get the original version of the file instead of the modified version.
Example requesting the file via edge:
w/o qs: https://#storage_account#/#file_path#/hh.min.css -> It gives me the modified version
w qs: https://#storage_account#/#file_path#/hh.min.css?v=0.5 -> It gives me the original version
w qs (2): https://#storage_account#/#file_path#/hh.min.css?a=b -> It gives me the original version
Any idea why this is happening?
Thanks.
Most likely, requests with a query string are being served the cached asset, as described in the documentation:
Ignore query strings: Default mode. In this mode, the CDN point-of-presence (POP) node passes the query strings from the requestor to the origin server on the first request and caches the asset. All subsequent requests for the asset that are served from the POP ignore the query strings until the cached asset expires.
So my guess is that the cached asset has not expired yet. To avoid this issue, you should consider bypassing caching for query strings:
Bypass caching for query strings: In this mode, requests with query strings are not cached at the CDN POP node. The POP node retrieves the asset directly from the origin server and passes it to the requestor with each request.
If the above option results in latency, I'd recommend adjusting the caching rules.
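For example, a hedged Azure CLI sketch of switching the endpoint's caching behavior (the resource names are placeholders):
# Illustrative only: switch query-string caching to BypassCaching
az cdn endpoint update \
  --resource-group myResourceGroup \
  --profile-name myCdnProfile \
  --name myEndpoint \
  --query-string-caching-behavior BypassCaching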

Cloudflare Page Speed HTTP Headers

I've set Cloudflare up and it's working great. The only problem I have is that this keeps coming up in PageSpeed Insights:
Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network.
I've set the cache at Cloudflare to 4 days, and PageSpeed is still picking up on this. Is there a bit of code I'm missing here?
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.vouchertoday.uk
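For reference, if the origin behind Cloudflare is Apache, a minimal mod_expires sketch along these lines would set the kind of headers PageSpeed is asking about (the MIME types and the 4-day window are illustrative assumptions):
# Illustrative only: set expiry headers on static resources at the origin
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/css "access plus 4 days"
  ExpiresByType application/javascript "access plus 4 days"
  ExpiresByType image/png "access plus 4 days"
</IfModule>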

Does my Apache cache my images?

I have a folder for customer avatar uploads, and I set up an Apache server pointing to this folder to serve the avatar images.
My system is showing a very strange symptom at the moment, as follows:
I update an avatar image in the folder.
I access the image in the browser, but it displays the old one, even though I refresh (Ctrl+F5) many times.
After a short while (nearly 1 min), I refresh the same URL and the latest image is displayed.
Is this symptom related to my Apache configuration? Could anyone help me figure out which setting is responsible? Thank you!
First of all, I am guessing that you have not enabled any of the Apache caching modules. If that is the case, this behavior is due to the browser's caching on the client side. You can verify this by opening the URL in a private browsing session after updating the image. You can also check Apache's access.log and see whether a request entry for the image appears; if it doesn't, the image is being served directly from the browser's cache.
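If you'd rather have Apache tell browsers to always revalidate the avatar URLs, a minimal mod_headers sketch would be (the directory path is an assumption):
# Illustrative only: force clients to revalidate avatars on every request
<Directory "/var/www/avatars">
  <IfModule mod_headers.c>
    Header set Cache-Control "no-cache, must-revalidate"
  </IfModule>
</Directory>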

CloudFront and mod_pagespeed - wrong content received

I am using CloudFront with mod_pagespeed running on the server.
When updating a CSS file or flushing the cache, I see problematic behavior: the first refresh in the browser returns the original CSS (this is fine). When I refresh a second time, I get the correctly rewritten CSS file name, but the content CloudFront serves for it is still the original, not the rewritten content.
Why would this happen?
Any idea how to fix this?
Update:
For some reason it just stopped happening... I don't know why.
SimonW, since your original post, a feature has been added to PageSpeed (in March 2013, in version 1.2.24.1) that deals with this issue directly. The directive is enabled as follows:
Apache:
ModPagespeedRewriteDeadlinePerFlushMs deadline_value_in_milliseconds
Nginx:
pagespeed RewriteDeadlinePerFlushMs deadline_value_in_milliseconds;
The docs describe the directive as follows:
When PageSpeed attempts to rewrite an uncached (or expired) resource, it will wait up to 10 ms per flush window (by default) for the rewrite to finish, and return the optimized resource if it's available. If the rewrite has not completed within that time, the original (unoptimized) resource is returned and the optimizer is moved to the background for future requests. The following directive can be applied to change the deadline. Increasing this value will increase page latency, but might reduce load time (for instance, on a bandwidth-constrained link where it's worth waiting for image compression to complete). Note that a value less than or equal to zero will cause PageSpeed to wait indefinitely.
So, if you specify a value of 0 for deadline_value_in_milliseconds, you should always get the fully optimized page. I would caution that the latency can be high in some cases. In my case, I really wanted this behavior, even with the latency concern, because the content was to be cached on my CDN's edge servers, and I therefore wanted the most optimized version possible to be served to the CDN for caching.
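For example, in Apache syntax, per the docs quoted above:
# Wait indefinitely for rewrites so the CDN always receives the optimized version
ModPagespeedRewriteDeadlinePerFlushMs 0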
This could happen if you have multiple backend servers and CloudFront is hitting a different one than the HTML request went through. In that case the resource was rewritten on the HTML server, but not on the other server. There is a short timeout and if the other server doesn't finish the rewrite in that time, it will just serve the original content with Cache-Control: private,max-age=300. It's possible CloudFront caches that for a little while (even though obviously it shouldn't), but then eventually re-requests the resource from your backend and gets the correctly rewritten version this time.

HTTP caching headers settings weblogic

Does anyone know how to modify weblogic settings to set the HTTP cache header to a far future date?
For example, in my current setup WebLogic sets the HTTP cache headers to expire in 5 hours (in the response to an HTTP/1.1 304 Not Modified).
This is the cache header value on a .gif file ... Date: Tue, 16 Mar 2010 20:39:13 GMT.
I have re-checked and it's always 5 hours. There must be some form of setting that I can tweak to change it.
Thanks for your time!
You can use this property:
<wls:container-descriptor>
<wls:resource-reload-check-secs>-1</wls:resource-reload-check-secs>
</wls:container-descriptor>
This element enables metadata caching for resources found in the resource path of the web application scope. The parameter controls how often WebLogic Server checks whether a resource has been modified and, if so, reloads it.
The value -1 means metadata is cached but never checked against the disk for changes. In a production environment, this value is recommended for better performance.
Static content is served by a weblogic.servlet.FileServlet, which all web applications have by default, but I couldn't find any way to configure HTTP headers on it. So either replace this servlet with your own servlet or use a Filter, as sketched below.
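As a rough, illustrative sketch of the Filter approach (the max-age is an assumption; map the filter in web.xml to your static paths, for example *.gif):
// Illustrative sketch: a servlet Filter that stamps a far-future Cache-Control header
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class CacheHeaderFilter implements Filter {
    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Set the header before delegating to the default FileServlet
        ((HttpServletResponse) res).setHeader("Cache-Control", "public, max-age=31536000");
        chain.doFilter(req, res);
    }

    public void destroy() { }
}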
But the above comment is right: using a web server to serve static content is the "right" way to go. A web server does a better job at this, and the application server has other things to do than serve static files.