I am using an Apache server with mod_expires enabled. I have set up headers so that static files expire a week later. The file is requested via the jQuery.get() method, with cache set to true.
However, when I refresh the page in the browser (Firefox), it always requests the file again. The caching and header field values seem to have no effect.
Below is a screenshot from Firefox developer tools.
How did you come up with the If-None-Match value? I don't see an ETag header with that value. The server will always send you a new one.
Another thing to note is browser reload behaviour. By requesting a refresh you may be asking for an end-to-end reload, which bypasses all caches, as far as I know.
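For illustration, a normal revalidation round trip looks roughly like this (all values here are made up):

GET /static/app.js HTTP/1.1
If-None-Match: "5f3e-1a2b3c4d"
If-Modified-Since: Mon, 01 Jan 2024 00:00:00 GMT

HTTP/1.1 304 Not Modified
ETag: "5f3e-1a2b3c4d"

If the If-None-Match value never matches an ETag the server actually issued, the server has no choice but to answer 200 with the full body every time.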
Related
The dig, wget, nslookup, and curl commands work perfectly for a specific URL that I pointed to another server less than 24 hours ago.
The problem is that it just refuses to resolve in the browsers (Chrome, Safari, and Firefox). The strangest part is that it is successfully resolved by Postman (testing the OPTIONS and the GET methods separately), but it still doesn't return a proper response on the browser side of things.
DNS checks come back positive, so this is when I started suspecting that the problem actually lies in the headers of the HTTP requests being sent: different responses are returned for requests that don't include the default browser headers (issued through the various command-line tools and Postman) and for the ones that do (issued by the browsers automatically, or manually through the dev tools).
After fully flushing the local system's DNS cache, including the browsers', and even trying another device on another network, I still get no response in the browser.
I kept going and attempted to verify this with a VPN (locally, which didn't work) and with an online web proxy tool (which did work).
Finally, I extracted the router's default DNS server address and used nslookup to look up the URL again, this time explicitly specifying the desired DNS server (the one just mentioned). After getting a successful response with the correct values, I am now fairly sure the HTTP request itself is causing the problem.
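For reference, that kind of targeted lookup is a one-liner (server address and domain here are placeholders):

nslookup example.dev 192.168.1.1
dig @192.168.1.1 example.dev

Both query the named server directly instead of whatever resolver the system would normally pick.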
The URL is hosted with Amazon S3's static hosting option, which I have used many times before with this exact same configuration and without a problem. Looking at recently added changes/features suggested that I may need to explicitly set a CORS policy for the newly created bucket, on top of the usual public access policy.
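For context, a CORS policy on an S3 bucket is a JSON rule set; this is a minimal sketch with a wildcard origin, not the exact policy that was applied:

[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": []
    }
]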
After applying that as well, it still doesn't seem to work.
As a quick change of direction that may make parts of this clearer (and because I started to think that the browser might not be getting the correct Content-Type header, which should be text/html, and therefore might not handle the response as expected), I went ahead and applied a 301 redirection on the S3 bucket instead of the static file hosting. Again, it all works perfectly through the command-line tools, but not through the browsers.
Anyway, the browser just doesn't seem to complete any of the requests being sent to the URL.
It might be that the OPTIONS pre-flight request is failing, so the browser never continues on to the GET request, or that the URL is not being found along the DNS route the browser is taking; it is currently unclear to me which of those it is.
Any ideas? (Besides the fact that it can simply take longer for some DNS servers along the chosen route to update/refresh their cache, which doesn't appear to affect my local machine's DNS route in this specific case. That, said with caution, was verified by validating the different parts of the DNS configuration and prioritization across my system (Mac OS X), including the fact that the response does come back with the correct address.)
Found my answer here:
https://serverfault.com/questions/942030/aws-s3-static-hosting-how-to-debug-connection-timeout
As linked there, more details can be found here:
Non-Authoritative-Reason header field [HTTP]
Solution & explanation: because of the domain extension I purchased (.dev), Chrome was silently forcing HTTPS: the whole .dev TLD is on Chrome's HTTP Strict Transport Security (HSTS) preload list, so all .dev domains must be served over HTTPS only. That is why the issue kept showing up even when I explicitly typed http:// into the address bar.
This can be resolved by putting a CloudFront distribution with HTTPS support in front of the S3 static hosting, as usual (but note that HSTS listings can cause this in other situations too; the .dev domain extension is just one of them).
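In Chrome's dev tools this HSTS upgrade shows up as an internal redirect that never leaves the browser, roughly like the following (illustrative, not captured from the actual session):

GET http://example.dev/
HTTP/1.1 307 Internal Redirect
Location: https://example.dev/
Non-Authoritative-Reason: HSTS

which is also where the Non-Authoritative-Reason header mentioned above comes from.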
Useful Resources (for debugging purposes)
In addition to what is stated here:
https://gist.github.com/stollcri/7c09bafc97223481920e
You can issue a lookup query (and also add or delete entries from your local set of HSTS listings) through Chrome's internal settings page at chrome://net-internals/#hsts
You can also check the current listings here: https://hstspreload.org/
Edit: I found my problem. I was using the network inspector wrong (in both Chrome and FF): I was clicking "refresh" and watching the network inspector, but that re-downloads everything. What you need to do is go to the URL, then open the network inspector, then go to the URL again (don't "refresh", just re-access the URL a second time). The network inspector will then show you which resources were pulled from cache. :)
Original question below:
I am trying to set the image cache settings in Apache. I have the following in .htaccess for 1 week image caching:
# Generate ETags from each file's modification time and size
FileETag MTime Size
# Enable mod_expires
ExpiresActive On
<FilesMatch "\.(gif|jpg|jpeg|png)$">
    # A604800 = expire 604800 seconds (one week) after access
    ExpiresDefault A604800
</FilesMatch>
This looks correct when I check the network tab of the Firefox developer console, but I don't understand why the request header says "no-cache".
Note: I removed the lines that do not matter for this question.
I am also serving some images dynamically with PHP. I have caching for those images set for 2 days, but again, the response header says "no-cache". Is this anything to worry about? The images do not appear to be cached when I refresh Firefox; they look like they are being re-downloaded.
Any help understanding these headers would be appreciated. If there is an easy way to determine if images are being pulled from cache or not, I'm not seeing it.
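For reference, a dynamic image script along the lines described has to emit its own caching headers. A minimal sketch, assuming a PNG and the 2-day lifetime mentioned (the path and header values are assumptions, not the actual code):

<?php
// image.php: serve an image with a 2-day client-side cache lifetime
$file   = '/path/to/image.png';  // placeholder path
$maxAge = 172800;                // 2 days in seconds

header('Content-Type: image/png');
header('Cache-Control: max-age=' . $maxAge);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxAge) . ' GMT');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime($file)) . ' GMT');

readfile($file);

One common gotcha: if such a script starts a session, PHP's default session cache limiter sends no-cache headers unless you override it (e.g. with session_cache_limiter('public') before session_start()), which could explain a "no-cache" response header.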
The Pragma and Cache-Control request headers mean the same thing here; one is from HTTP/1.0 and the other from HTTP/1.1. They are used to tell the server, or a caching proxy, that the client wants a fresh version of the resource. They are NOT for telling the server that the browser won't cache, or that the browser won't honor the cache control the server responds with.
Ultimately, the server can tell a user agent "here's the resource, cache it for 1 week", but it's still up to the user agent (e.g. the browser) to honor that. It could always request an uncached version of the resource every time instead of skipping the request and loading the locally cached copy.
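As a concrete illustration, the two reload flavours send different request headers. A plain reload typically sends a conditional request, roughly:

Cache-Control: max-age=0
If-None-Match: "<cached etag>"

while a forced reload (CTRL+F5 / CTRL+SHIFT+R) typically sends:

Cache-Control: no-cache
Pragma: no-cache

and ignores the local copy entirely, which may be where the "no-cache" you are seeing comes from.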
Given this page, I can't seem to find a way to really disable the cache on all sides (server and client).
What I tried: disabling the network HTTP cache in Firefox from about:config (I even cleared the cache manually), and adding a timestamp to the query string of the CSS URL: css/style.css?<?php echo time(); ?>.
As you can see, style.css is empty, but no changes are made to the page (unless I remove the link tag, i.e. the CSS request). I think Apache cached the file and is sending the cached version. How can I tell Apache, via .htaccess, not to send a cached version of the file and to always serve it from the actual source path?
P.S.: I'm working with a remote server.
Apache, in general, will not cache any content, unless you use mod_proxy_cache or similar.
Your caching is probably happening somewhere else. A few things to try:
see if you are using a proxy server, this can cache content sometimes
doing CTRL+SHIFT+R or CTRL+F5 usually forces the browser to refetch the content even if they already have the file in local cache
use Chrome's Network inspector or Firebug and check exactly which version of the file is being served and if the browser is sending the "If-Modified-Since" header and/or the server is sending the "Expires" header
You can also try setting the Expires directive in the Apache config, to force proxies/browsers to not keep stale copies of the file (https://httpd.apache.org/docs/2.2/mod/mod_expires.html).
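For example, a minimal sketch of an .htaccess block that tells clients and proxies not to reuse cached copies (assuming mod_expires and mod_headers are loaded; the file pattern is an assumption):

<FilesMatch "\.(css|js)$">
    ExpiresActive On
    ExpiresDefault "access plus 0 seconds"
    Header set Cache-Control "no-store, no-cache, must-revalidate"
</FilesMatch>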
If nothing else works, try renaming the file and see if it works. If it doesn't, your problem is somewhere else.
Is there any php.ini setting which could cause $_SERVER["CONTENT_LENGTH"] to never be set on one server, but work fine on another, where the application code is the same and the php.ini upload settings are the same? In an uploader library I'm using, the content-length check always fails because of this issue. This is on PHP 5.3, CentOS, and Apache. Thanks for any help.
EDIT: I should add that the request headers do include Content-Length: 33586, but $_SERVER["CONTENT_LENGTH"] still isn't set.
Content-Length appears on both requests and responses. On the response side it is set by the server application, though you should not do that yourself from within PHP, as PHP does it automatically.
If you're dealing with input from something like an upload, then you will only get a request Content-Length if the HTTP request is not chunked. With a chunked request, the data length is not known to the recipient until all the chunks have been sent.
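A defensive way to read it, as a sketch (the fallback is an assumption and only works for non-multipart bodies):

<?php
// Content-Length is absent for chunked uploads (and when there is no body at all)
$length = isset($_SERVER['CONTENT_LENGTH'])
    ? (int) $_SERVER['CONTENT_LENGTH']
    : null;

if ($length === null) {
    // Fall back to measuring the raw body ourselves; note that php://input
    // is not available for multipart/form-data requests on older PHP
    $body   = file_get_contents('php://input');
    $length = strlen($body);
}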
Using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-type is only set correctly the first time I load it - subsequent refreshes are missing Content-type altogether and it's creating some problems for me.
I can get around this by appending a random query string value to the end of each filename, eg. http://www.site.com/script.js?12345
However, I don't want to have to do that, since caching is good and all I want is for the Content-type to be present. I've tried using a RewriteRule to force the type but still didn't solve the problem. Any ideas?
Thanks, Brian
The answer depends on information you've not provided here, specifically: where are you seeing these headers?
Unless it's from sniffing the network traffic between the browser and the server, you can't be sure whether you are looking at a real request to the server or a request that has been satisfied from the cache. Indeed, changing the URL as you describe is a very simple way to force a load from the server rather than from the cache.
I don't think it's as broken as you seem to think. Fire up Wireshark and see for yourself, or just disable caching for these content types.
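If you do go the disable-caching route, a minimal sketch, assuming mod_headers is available:

<FilesMatch "\.(css|js)$">
    Header set Cache-Control "no-store"
</FilesMatch>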
C.