Cloudflare doesn't respect my origin cache control header - cloudflare

My nginx server returns
cache-control: public, max-age=14400, s-maxage=2592000
for every request.
But for HTML files, Cloudflare still returns
cf-cache-status: DYNAMIC
and for other cacheable file extensions it returns:
cf-cache-status: EXPIRED
I've already purged the entire cache and refreshed my browser several times.
Is something wrong with my settings?

Check the Cloudflare cache level settings. Also note that cf-cache-status: DYNAMIC means Cloudflare is not even trying to cache the response: by default Cloudflare only caches a fixed list of static file extensions, and HTML is not on that list, regardless of what Cache-Control your origin sends. To have HTML cached you need a page rule with "Cache Everything".

Related

Remove a header based on query param with varnish

I want to remove the Cache-Control header from URLs with a specific query parameter, e.g. when the query parameter ajax=1 is present:
www.domain.com?p=3&scroll=1&ajax=1&scroll=1
These responses are getting cached by Chrome for longer than I would like, and I would like to stop that in this specific case. I have tried .htaccess, which works for static files but has no effect on the URLs mentioned above:
RewriteEngine on
RewriteCond %{QUERY_STRING} (^|&)ajax=1(&|$)
Header unset "Cache-Control"
I could use a cache buster in the next website release, but that's difficult in production, and I'm worried it would unnecessarily cache lots of files in user browsers, so I would rather achieve this server-side.
My server has Cloudflare, then NGINX terminating SSL to Varnish, then Apache with a Magento 2 instance running on it. So I'm thinking I could possibly achieve this with NGINX or Varnish configs, or even Cloudflare. However, I couldn't find a way to achieve this with page rules in Cloudflare, and could not find examples for Varnish or Nginx.
I'm assuming you don't want to cache when ajax=1 is part of your URL params?
You can do this in Varnish using the following VCL snippet:
sub vcl_backend_response {
    # Mark responses to ajax=1 requests as uncacheable.
    if (bereq.url ~ "\?([^&]*&)*ajax=1(&[^&]*)*$") {
        set beresp.http.Cache-Control = "private, no-cache, no-store";
        set beresp.uncacheable = true;
    }
}
This snippet will make sure Varnish doesn't cache responses where the URL contains an ajax=1 URL parameter. It will also make sure any caching proxy that sits in front will not cache, because of the Cache-Control: private, no-cache, no-store.
Is this what you're looking for?
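As an aside, the .htaccess attempt in the question cannot work as written: a RewriteCond only applies to the next mod_rewrite directive, and Header belongs to mod_headers. On Apache 2.4+ the same intent could be expressed with an <If> block instead; a hypothetical, untested sketch:
<If "%{QUERY_STRING} =~ /(^|&)ajax=1(&|$)/">
    # Replace whatever the application set with an explicit no-cache policy.
    Header unset Cache-Control
    Header always set Cache-Control "private, no-cache, no-store"
</If>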

ERR_TOO_MANY_REDIRECTS when disable SSL in all pages on Prestashop

I have disabled SSL on all pages in Prestashop and now there is this error (I do not type https, yet it redirects to https):
I tried deleting htaccess and regenerating it, but it didn't work.
This is the Prestashop configuration:
What is the solution?
EDIT: my SSL configuration in the Prestashop configuration tab:
I'm not familiar with Prestashop; however, the issue is that, at the time of writing, the site enforces HTTPS. In fact, the non-HTTPS version redirects to the HTTPS one, and there may be other redirects (not active now, as I can access the redirect target) that caused the loop.
➜ ~ curl -I http://runvaspain.com
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Sun, 24 Jan 2016 10:44:09 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Cache-Control: no-cache
Location: https://runvaspain.com/
X-Powered-By: PleskLin
Vary: Accept-Encoding
Strict-Transport-Security: max-age=15768000;includeSubDomains
Moreover, it looks like the site is setting the HSTS header
Strict-Transport-Security: max-age=15768000;includeSubDomains
This header is ignored when served via HTTP; however, I suppose it was also served via HTTPS, so your browser has probably saved the configuration and is enforcing HTTPS locally (given that's what HSTS is about).
You'll have to manually remove the strict transport configuration for the domain in your browser. However, please note that any user who previously accessed your site will have the same setting, so they will be forced to use HTTPS for the main site and all subdomains for 6 months (as this is the policy you previously set).
Also note that, since you previously sent that header, HTTPS will be enforced for the entire site (and also subdomains); it's not possible to disable it on a single page (at least for the users who visited before). The best thing to do is to turn HTTPS back on for the entire site.
To solve the first issue (the redirect to HTTPS) you should contact Prestashop support. However, please note it will be almost irrelevant if the HSTS header was previously sent.
The HTTP version of the website ( http://runvaspain.com ) sends a 301 redirect to the HTTPS version.
The HTTPS version of the website ( https://runvaspain.com ) uses the HSTS header with a max-age of 6 months.
That means anyone who has visited the website is forced to visit the HTTPS version.
It's a security feature added by HSTS.
You have two solutions:
Reactivate the HTTPS version (this is my advice, as HTTPS adds significant security for your visitors).
Redirect HTTPS to HTTP AND modify the HSTS header (probably in your Apache configuration) to send a max-age of 0, as sketched below. You MUST keep sending it for at least 6 months; you can't just remove it! Sending a max-age of 0 tells visitors who already visited the website to forget the HSTS setting. If you simply remove the header instead, those visitors will not be able to visit the page for the next six months!
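A hypothetical Apache sketch of that second option (assumes mod_headers is enabled and an HTTPS virtualhost is still reachable, since browsers only honour HSTS over HTTPS):
<VirtualHost *:443>
    # Served over HTTPS so returning browsers see it and drop the stored policy.
    Header always set Strict-Transport-Security "max-age=0"
</VirtualHost>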
In Prestashop, after configuring SSL, the admin dashboard works but the public pages don't, because you also have to enable SSL for the front pages from the admin dashboard, like below:
Enable SSL - Yes
Enable SSL on all pages - Yes
Change your shop parameters like this (refer to the image).
I think Chrome just cached the URL. Happens to me all the time during development.
Try:
Click Menu (the three dots, or whatever the icon is now, in the upper right corner of Chrome)
Hover over More tools, then click Developer tools
Click the Network tab
Tick the Disable cache checkbox

How to get mod_python site to allow clients to cache selected image content?

I have a small dynamic site implemented in mod_python. I inherited this, and while I have successfully made relatively minor changes to its content and logic, with HTTP caching I'm out of my depth. The site works fine already, so this isn't "the usual question" about how to disable caching for a dynamic site.
My problem is that there is one large banner image on each page (the same image, from the same URL, on each page) which accounts for ~90% of site bandwidth but which, so far as I can tell, isn't being cached: as I browse the site, every time I click to a new page (or back to a previously visited one) it gets downloaded all over again.
If I wget the banner's image URL (to see the headers) I see:
$ wget -S http://example.com/site/gif?name=banner.gif
--2012-04-04 23:02:38-- http://example.com/site/gif?name=banner.gif
Resolving example.com... 127.0.0.1
Connecting to example.com|127.0.0.1|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Wed, 04 Apr 2012 22:02:38 GMT
Server: Apache/2.2.14 (Ubuntu)
Content-Location: gif.py
Vary: negotiate
TCN: choice
Set-Cookie: <blah blah blah>
Connection: close
Content-Type: image/gif
Length: unspecified [image/gif]
Saving to: `gif?name=banner.gif'
and the code which is serving it up isn't much more than
req.content_type = 'image/gif'
req.sendfile(fullname)
where fullname is a file-path munged from the request's name parameter.
My question is: is there some quick fix along the lines of setting an Expires: or Vary: header on the image response which will make clients less keen to repeatedly download it?
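In other words, I'm imagining something along these lines (untested; the one-week lifetime is just a guess):
import time
from email.utils import formatdate

req.content_type = 'image/gif'
# Let clients and proxies cache the banner for a week (604800 s).
req.headers_out['Cache-Control'] = 'public, max-age=604800'
req.headers_out['Expires'] = formatdate(time.time() + 604800, usegmt=True)
req.sendfile(fullname)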
The site is hosted on Ubuntu 10.04 and doesn't have any non-default apache mods enabled other than rewrite.
I note that most (not all) of the site pages' headers themselves do contain
Pragma: no-cache
Cache-Control: no-cache
Expires: -1
Vary: Accept-Encoding
(and the original site author has clearly thought about this, as no-cache is applied selectively to non-static content pages). I don't know enough about caching to know whether this somehow poisons the included .gif IMG into being reloaded every time as well.
I don't know if my answer will help you or not, but I'll post it anyway.
Instead of serving image files from within the Python application, you can create another virtualhost within Apache (on the same server) just to serve static and image files. In your Python application, you can embed the image like this:
<img src="http://img.yoursite.com/banner.gif" alt="banner" />
With a separate virtualhost, you can add various headers to various content types using mod_headers, or add another caching layer for your static files.
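For example, a hypothetical virtualhost along these lines (img.yoursite.com and the paths are placeholders; assumes mod_headers is enabled):
<VirtualHost *:80>
    ServerName img.yoursite.com
    DocumentRoot /var/www/static
    # Let browsers cache images for a week instead of refetching every page view.
    <FilesMatch "\.(gif|png|jpe?g)$">
        Header set Cache-Control "public, max-age=604800"
    </FilesMatch>
</VirtualHost>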
Hope this helps.

How to Set Varnish Cache-Control Headers

I am hoping someone can advise on the proper method for getting Varnish to send cache-control headers. Currently, my configuration is sending "Cache-Control: no-cache" to clients.
Thanks in advance to anyone who might be able to help...
Your back-end is sending "Cache-Control: no-cache" to Varnish, which implies two things:
Varnish will not store the response in the cache (so the next lookup will fail).
Your clients (browsers and intermediate proxies) will not cache responses (and will request them over and over).
The solution is simple: remove the Cache-Control header after fetching the response from the back-end (and before storing it in the cache).
In your VCL file do:
sub vcl_fetch {
    # Drop the backend's header and hand clients a cacheable one instead.
    remove beresp.http.Cache-Control;
    set beresp.http.Cache-Control = "public";
}
You can choose to only do this for certain URLs (wrap it in if (req.url ~ "...") logic, as sketched below) and do way more advanced stuff.
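A hypothetical sketch of that conditional variant (the extension pattern is just an example):
sub vcl_fetch {
    # Only rewrite Cache-Control for static assets.
    if (req.url ~ "\.(css|js|png|gif|jpg)$") {
        remove beresp.http.Cache-Control;
        set beresp.http.Cache-Control = "public";
    }
}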
Varnish ignores Cache-Control: no-cache as per the documentation. Here is evidence confirming that:
http://drupal.org/node/1418908
To get that result, you should detect the Cache-Control "no-cache" header from your backend, and then invalidate the cache, set the backend response to not cacheable, or issue max-age: 0 in the other header (I forget the name right now).
[ivy] has good advice, but it gets a little complicated when you try to obey a server's intent for end-user (browser) caching. I found this resource helpful in understanding a way to configure Varnish to hold onto a cache longer than the browser is instructed to:
https://www.varnish-cache.org/trac/wiki/VCLExampleLongerCaching
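The gist of the linked example, as a hypothetical sketch (the 24-hour TTL is arbitrary): save the backend's Cache-Control, extend the TTL for Varnish's own copy, then restore the original header before delivering it to the client.
sub vcl_fetch {
    # Remember what the backend wanted browsers to see.
    set beresp.http.X-Orig-Cache-Control = beresp.http.Cache-Control;
    # Keep the object in Varnish far longer than the browser would.
    set beresp.ttl = 24h;
}

sub vcl_deliver {
    if (resp.http.X-Orig-Cache-Control) {
        # Hand the browser the original, shorter-lived policy.
        set resp.http.Cache-Control = resp.http.X-Orig-Cache-Control;
        unset resp.http.X-Orig-Cache-Control;
    }
}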

Prevent Apache from chunking gzipped content

When using mod_deflate in Apache 2, Apache will chunk gzipped content, setting the Transfer-Encoding: chunked header. While this results in a faster download, I cannot display a progress bar.
If I handle the compression myself in PHP, I can gzip the content completely first and set the Content-Length header, so that I can display a progress bar to the user.
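Something like this, simplified ($html being the fully rendered page):
$payload = gzencode($html, 9);                 // compress the whole body up front
header('Content-Encoding: gzip');
header('Content-Length: ' . strlen($payload)); // known length, so no chunking
echo $payload;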
Is there any setting that would change Apache's default behavior and have Apache set a Content-Length header instead of chunking the response, so that I don't have to handle the compression myself?
You could maybe play with the SendBufferSize directive to get a value big enough to contain your response in one chunk.
Chunked content is part of the HTTP/1.1 protocol, so you could force an HTTP/1.0 response (and therefore no chunking: "A server MUST NOT send transfer-codings to an HTTP/1.0 client.") by setting force-response-1.0 in your Apache configuration. But PHP breaks this setting; it's a long-known bug of PHP, and there's a workaround.
We could try to modify the request on the client side with a header preventing chunked content, but the W3C says: "All HTTP/1.1 applications MUST be able to receive and decode the "chunked" transfer-coding", so I don't think there's any header like Accept and such which can prevent the server from chunking content. You could, however, try to make your request HTTP/1.0; it's not really a header of the request, it's the first line, so it should be possible with jQuery, certainly.
One last thing: HTTP/1.0 lacks one big feature, the Host header is not mandatory, so verify that your HTTP/1.0 requests still send the Host header if you work with name-based virtualhosts.
edit: by using the technique cited in the workaround, you can see that you can tweak the Apache environment from the PHP code. This can be used to force the 1.0 mode only for your special gzipped content, so you avoid running your complete application in HTTP/1.0 (or use the request mode to set HTTP/1.0 only for your gzip requests).
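For instance, a hypothetical sketch that downgrades only the download URLs (the /downloads/ path is a placeholder; assumes mod_setenvif is enabled):
# Treat matching requests as HTTP/1.0 and answer them as HTTP/1.0,
# which rules out Transfer-Encoding: chunked for those responses.
SetEnvIf Request_URI "^/downloads/" downgrade-1.0 force-response-1.0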