Internet Explorer 9 ignoring no cache headers - http-headers

I'm tearing my hair out over Internet Explorer 9's caching.
I set a series of cookies from a perl script depending on a query string value. These cookies hold information about various things on the page like banners and colours.
The problem I'm having is that in IE9 it will always, ALWAYS, use the cache instead of using the new values. The sequence of events runs like this:
Visit www.example.com/?color=blue
Perl script sets cookies, I am redirected back to www.example.com
Colours are blue, everything is as expected.
Visit www.example.com/?color=red
Cookies set, redirected, colours set to red, all is normal
Re-visit www.example.com/?color=blue
Perl script runs, cookies are re-set (I have confirmed this), but IE9 retrieves all resources from the cache, so on redirect all my colours stay red.
So, every time I visit a new URL it gets the resources fresh, but each time I visit a previously visited URL it retrieves them from the cache.
The following meta tags are in the <head> of example.com, which I thought would prevent the cache from being used:
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
<META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
<META HTTP-EQUIV="EXPIRES" CONTENT="0">
For what it's worth - I've also tried <META HTTP-EQUIV="EXPIRES" CONTENT="-1">
IE9 seems to ignore ALL these directives. The only time I've had success so far in that browser is by opening the developer tools and manually setting it to "Always refresh from server".
Why is IE ignoring my headers, and how can I force it to check the server each time?

Those are not headers. They are <meta> elements, which are an extremely poor substitute for HTTP headers. I suggest you read Mark Nottingham's caching tutorial; it goes into detail about this and about which caching directives are appropriate to use.
Also, ignore anybody telling you to set the caching to private. That enables caching in the browser - it says "this is okay to cache as long as you don't forward it on to another client".
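As an illustration, the directives from the question's meta tags, sent as real response headers before the redirect, would look something like this. This is a PHP sketch using header() (which appears later in this thread); the question's Perl CGI script can print the same header lines before the blank line that ends its headers:
<?php
// Hedged sketch: emit the caching directives as real HTTP headers
// before any output, then redirect back to the page.
header('Cache-Control: no-cache, no-store, must-revalidate');
header('Pragma: no-cache');
header('Expires: Thu, 01 Jan 1970 00:00:00 GMT');
header('Location: http://www.example.com/', true, 302);
exit;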

Try sending the following as HTTP Headers (not meta tags):
Cache-Control: private, must-revalidate, max-age=0
Expires: Thu, 01 Jan 1970 00:00:00 GMT

I don't know if this will be useful to anybody, but I had a similar problem on my movies website (crosstastemovies.com). Whenever I clicked on the button "get more movies" (which retrieves a new random batch of movies to rate) IE9 would return the exact same page and ignore the server's response... :P
I had to append a random variable to the URL in order to keep IE9 from doing this. So instead of calling "index.php?location=rate_movies" I changed it to "index.php?location=rate_movies&rand=RANDOMSTRING".
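For illustration, the link can be generated server-side with a unique value appended (a PHP sketch, since the site in question uses index.php; uniqid() is just one way to produce the random string):
<?php
// Hypothetical sketch: append a unique value so IE9 sees a new URL each
// time and cannot serve the response from its cache.
$url = 'index.php?location=rate_movies&rand=' . uniqid();
?>
<a href="<?php echo htmlspecialchars($url); ?>">get more movies</a>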
Everything is ok now.
Cheers

I'll just mention that I had a problem that looked very much like this, but when I tried IE9 on a different computer there was no issue. Going to Internet Options -> General -> Delete and deleting everything restored correct behaviour; deleting the cache alone was not sufficient.

The only http-equiv values that HTML5 specifies are content-type, default-style and refresh. See the spec.
Anything else that seems to work is only by the grace of the browser and you can't depend on it.
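For reference, the three pragmas that are actually defined look like this (illustrative values; "main" is a hypothetical style sheet set name and 300 is a refresh interval in seconds):
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta http-equiv="default-style" content="main">
<meta http-equiv="refresh" content="300">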

johnstok is correct. Typing in that code will allow content to update from the server and not just refresh the page.
<meta http-equiv="Content-Type" content="text/html; charset=utf-8; Cache-Control: no-cache" />
Put this line of code into your <head> section if you need to have it in your ASP code, and it should work.

Related

Add Expires headers for Cloudflare (Google, Disqus)

I am using Cloudflare for DNS management.
My site is hosted on GitHub Pages and built with Jekyll.
When I analyze my site with GTmetrix, I get the "Add Expires headers" error for the following resources.
How can I fix this?
https://www.googletagmanager.com/gtm.js?id=GTM-KX5WC3P
https://cse.google.com/cse.js?cx=partner
https://ahmet123.disqus.com/
https://www.google-analytics.com/analytics.js
https://www.google.com/cse/static/style/look/v2/default.css
https://cse.google.com/adsense/search/async-ads.js
https://www.google.com/uds/css/v2/clear.png
Check out the first and second answers here: Github pages, HTTP headers
You can't change the HTTP headers, but you can do something like:
<meta http-equiv="Expires" content="600" />
inside the <head> section of the layout you use for your pages.
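For example, in a Jekyll layout it might look like this (the file name below is hypothetical; GitHub Pages simply serves whatever the layout generates):
<!-- _layouts/default.html (hypothetical layout file) -->
<head>
  <meta charset="utf-8">
  <meta http-equiv="Expires" content="600">
  <title>{{ page.title }}</title>
</head>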
Hello, how can I fix this problem? After running a GTmetrix performance scan I get the following:
Add Expires headers
There are 9 static components without a far-future expiration date.
https://www.googletagmanager.com/gtag/js?id=UA-195633608-1
https://fonts.googleapis.com/css?family=Roboto+Slab%3A100%2C100italic%2C200%2C200italic%2C300%2C300italic%2C400%2C400italic%2C500%2C500italic%2C600%2C600italic%2C700%2C700italic%2C800%2C800italic%2C900%2C900italic%7CRoboto%3A100%2C100italic%2C200%2C200italic%2C300%2C300italic%2C400%2C400italic%2C500%2C500italic%2C600%2C600italic%2C700%2C700italic%2C800%2C800italic%2C900%2C900italic&display=auto&ver=5.9.2
https://www.googletagmanager.com/gtag/js?id=G-69EDJ9C5J7
https://static.cloudflareinsights.com/beacon.min.js
https://harhalakis.net/cdn-cgi/scripts/5c5dd728/cloudflare-static/email-decode.min.js
https://www.google-analytics.com/analytics.js
https://www.googletagmanager.com/gtag/js?id=G-69EDJ9C5J7&l=dataLayer&cx=c
https://www.googletagmanager.com/gtm.js?id=GTM-W3NQ5KW
https://www.google-analytics.com/plugins/ua/linkid.js
Thank you.
Konstantinos Harhalakis

How can I force a hard refresh if page has been visited before

Is it possible to check if the client has a cached version of a website, and if so, force his browser to apply a hard refresh once?
You can't force a browser to do anything, because you don't know how rigidly a remote client is observing the rules of HTTP.
However you can set HTTP headers which the browser is supposed to obey.
One such header is Cache-Control. There are a number of values that may meet your needs, including no-cache and max-age. There is also the Expires header, which specifies a wall-clock expiration time.
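For instance, sent from a server-side script (a sketch in PHP; the same headers can be set from any server or application stack):
<?php
// Hedged sketch: ask conforming browsers to revalidate with the server
// on every visit instead of reusing a cached copy.
header('Cache-Control: no-cache, max-age=0, must-revalidate');
header('Expires: Thu, 01 Jan 1970 00:00:00 GMT');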
It is not readily apparent whether the client has a cached version. To tell the client not to use the cache, you can use these meta tags.
<HEAD>
<TITLE>---</TITLE>
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
<META HTTP-EQUIV="Expires" CONTENT="-1">
</HEAD>

PJAX: Problems with back button

Some of our links are wrapped by PJAX. When a user clicks on a PJAX link, the server returns only the required part of the HTML.
If I do the following:
Click PJAX link
Click simple link
Press back button
the browser will display content that was returned by the PJAX request. The HTML will be broken because it's only part of the HTML to be displayed (check this question).
We have tried to fix this by not caching PJAX responses (Cache-Control header). This fixed our problem but raised another one:
When the user presses the back button, WebKit (Chrome 20.0) loads the full content from the server, then fires a popstate event that causes an unnecessary PJAX request.
Is it possible to recreate correct back button behaviour?
To make the browser aware of the different versions of the HTTP resource depending on the request headers, I added a Vary HTTP header.
Using Vary, you don't need to send no-cache headers any more, and therefore you get your page back quickly again.
In PHP this would look like:
header("Vary: X-PJAX");
Since we sometimes use three representations per URL (regular HTTP, PJAX and AJAX), because we are migrating to a PJAX approach in an already partially ajaxified app, we actually use:
header("Vary: X-PJAX,X-Requested-With");
If you need to support old IE versions (older than IE9), you need to make sure that the Vary header is stripped by your web server, because otherwise old IE will disable caching for all of your resources that send a Vary header.
This could be achieved by the following setting in your .htaccess/vhost config:
BrowserMatch "MSIE" force-no-vary
Edit:
Underlying chrome bug, https://code.google.com/p/chromium/issues/detail?id=94369
This all depends on the server's caching settings. Your browser caches the AJAX response from the server, and when you click the back button it uses the cached version.
To prevent caching, set the following headers on the server:
'Cache-Control' => 'no-cache, no-store, max-age=0, must-revalidate'
'Pragma' => 'no-cache'
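In plain PHP (a sketch; the array notation above is framework-style shorthand), that would be:
<?php
// Send the no-cache headers for the PJAX/AJAX response.
header('Cache-Control: no-cache, no-store, max-age=0, must-revalidate');
header('Pragma: no-cache');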
If you are using Rails, then definitely try Wiselinks https://github.com/igor-alexandrov/wiselinks. It is a Swiss Army knife for browser state management. Here are some details: http://igor-alexandrov.github.io/blog/2013/07/11/the-way-to-wiselinks-1-dot-0/.

iweb pages are downloading rather than viewing

I'm having an issue with a friend's iWeb website - http://www.africanhopecrafts.org. Rather than the pages displaying in the browser, they download instead, even though they're all HTML files. I've tried messing with my .htaccess file to see if that was affecting it, but nothing is working.
Thanks so much
Most likely your friend's web site is dishing up the wrong MIME type. The web server might be misconfigured, but the page can override the Content-Type response header by adding a <meta> tag to the page's <head> like this:
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1" />
(where the charset in use reflects that of the actual web page.)
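If the server itself is at fault and happens to be Apache (the question's mention of .htaccess suggests it may be), a minimal .htaccess sketch for the server-side fix would be:
# Hypothetical .htaccess fix, assuming the server is Apache:
# serve .html files as text/html with the page's charset.
AddType text/html .html
AddDefaultCharset ISO-8859-1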
If the page is being served up with the correct content-type, the browser might be misconfigured to not handle that content type. Does the problem occur for everybody, or just you? Is the problem dependent on the browser in use?
You can sniff the content-type by installing Firefox's Tamper Data plug-in. Fire up Firefox, start Tamper Data and fetch the errant web page via Firefox. Examining the response headers for the request should tell you what content-type the page is being served up with.

How do I figure out which parts of a web page are encrypted and which aren't?

I'm working on a webserver that I didn't totally set up and I'm trying to figure out which parts of a web page are being sent encrypted and which aren't. Firefox tells me that parts of the page are encrypted, but I want to know what, specifically, is encrypted.
The problem is not always bad links in your page.
If you link to resources at an external site using https://, and then the external site does its own HTTP redirect to non-SSL pages, that will break the SSL lock on your page.
BUT, when you view the source or the information in the media tab, you will not see any http://, because your page is properly using only https:// links.
As suggested above, the Firebug Net tab will show this and any other problems. Follow these steps:
Install Firebug add-on into firefox if you don't already have it, and restart FF when prompted.
Open Firebug (F12 or the little insect menu to the right of your search box).
In Firebug, choose the "Net" tab. Hit "Enable" (text link) to turn it on.
Refresh your problem page without using the cache by hitting Ctrl-Shift-R (or Command-Shift-R in OS X). You will see the "Net" tab in Firefox fill up with a list of each HTTP request made.
Once the page is done loading, hover your mouse over the left column of each HTTP request shown in the Net tab. A tooltip will appear showing you the actual link used. It will be easy to spot any that are http:// instead of https://.
If any of your links resulted in an HTTP redirect, you will see "301 Moved Permanently" in the HTTP status column, and another HTTP request will be just below for the new location. If the problem was due to an external redirect, that's where the evidence will be - the new location's request will be HTTP.
If your problem is due to redirects from an external site, you will see "301 Moved Permanently" status codes for the requests that point to their new location.
Expand any of those 301 relocations with the plus sign at the left, and review the response headers to see what is going on. The Location: header will tell you the new location the external server is requesting browsers use.
Make note of this info in the redirect, then send a friendly, polite email to the external site in question and ask them to remove the https:// -> http:// redirects for you. Explain how it's breaking the SSL on your site, and ideally include a link to the page that is broken if possible, so that they can see the error for themselves. (This will spur faster action than if you just tell them about the error.)
Here is sample output from Firebug for the external redirect issue. In my case I found a page calling https:// data feeds was getting the feeds rewritten by the external server to http://.
I've renamed my site to "mysite.example.com" and the external site to "external.example.com", but otherwise left the headers intact. The request headers are shown at the bottom, below the response headers. Note that I'm requesting an https:// link from my site, but getting redirected to an http:// link, which is what was breaking my SSL lock:
Response Headers
Server nginx/0.8.54
Date Fri, 07 Oct 2011 17:35:16 GMT
Content-Type text/html
Content-Length 185
Connection keep-alive
Location http://external.example.com/embed/?key=t6Qu2&width=940&height=300&interval=week&baseAtZero=false
Request Headers
Host external.example.com
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20100101 Firefox/7.0.1
Accept */*
Accept-Language en-gb,en;q=0.5
Accept-Encoding gzip, deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection keep-alive
Referer https://mysite.example.com/real-time-data
Cookie JSESSIONID=B33FF1C1F1B732E7F05A547A9CB76ED3
Pragma no-cache
Cache-Control no-cache
So, the important thing to note is that in the Response Headers (above), you are seeing a Location: that starts with http://, not https://. Your browser will take this into account when figuring out if the lock is valid or not, and report only partially encrypted content! (This is actually an important browser security feature to alert users to potential XSRF and/or phishing attacks.)
The solution in this case is not something you can fix on your site - you have to ask the external site to stop their redirect to http. Often this was done on their side for convenience, without realizing the consequence, and a well-written, polite email can get it fixed.
For each element loaded in the page, check its scheme:
If it starts with https://, it is encrypted.
If it starts with http://, it is not encrypted.
(You can see a relatively complete list in Firefox by right-clicking on the page and selecting "View Page Info", then the "Media" tab.)
EDIT: FF only shows images and multimedia elements. There are also JavaScript and CSS files which have to be checked. Firebug is a good tool to find what you need.
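If you prefer something scriptable, here is a rough PHP sketch (the URL is hypothetical; this only catches insecure URLs that appear in the page source itself, so redirects performed by external servers, as described in the longer answer above, will not show up here):
<?php
// Rough helper: fetch the page over HTTPS and list any http:// URLs
// that appear in its HTML source.
$html = file_get_contents('https://mysite.example.com/real-time-data');
preg_match_all('~http://[^"\'\s<>]+~i', $html, $matches);
foreach (array_unique($matches[0]) as $insecureUrl) {
    echo $insecureUrl, PHP_EOL;
}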
Some elements may not specify http or https; in that case, whichever scheme was used for the page will be used for those items, i.e. if the page was requested over SSL then those images will come encrypted, while if the page was not requested over SSL they will come unencrypted. Fiddler with Internet Explorer may also be useful in tracking down some of this information.
Sniff the packets - that'll tell you really quickly. Wireshark is a good program for such a task.
Can Firebug do this?
Edit: Looks like Firebug will also do this using the "Net" panel, which also gives you some other interesting statistics.
The best tool I have found for detecting http links on an https connection is Fiddler. It's also great for many other troubleshooting efforts.
I use FF plugin HTTPFox for this.
https://addons.mozilla.org/en-us/firefox/addon/httpfox/