Missing Content-Encoding response header for requests with Accept-Encoding: gzip, deflate, br (in that precise order) over HTTPS

We are using IBM Portal (version 8.0.0.xx??). We have two IBM Portal servers sitting behind two Apache HTTP servers.
In one of our portlets, we have a JSP file that includes the following resources:
<link rel="stylesheet" type="text/css" href="<%=request.getContextPath()%>/include/css/jquery-ui.min.css">
<link rel="stylesheet" type="text/css" href="<%=request.getContextPath()%>/include/css/jquery-ui.structure.min.css">
<link rel="stylesheet" type="text/css" href="<%=request.getContextPath()%>/include/css/jquery-ui.theme.min.css">
<link rel="stylesheet" type="text/css" href="<%=request.getContextPath()%>/include/css/include.css">
<script type="text/javascript" src="<%=request.getContextPath()%>/include/js/jquery-1.11.1.min.js"></script>
<script type="text/javascript" src="<%=request.getContextPath()%>/include/js/jquery-ui.min.js"></script>
With the latest versions of Chrome and Firefox, in all cases, the HTTPS request headers include the following token: Accept-Encoding: gzip, deflate, br
When the HTTPS response comes back from the server, the following resources do not come back with a Content-Encoding token in the response header (i.e. Content-Encoding: gzip):
jquery-ui.structure.min.css
jquery-ui.theme.min.css
include.css
The response body then shows compressed content: because the Content-Encoding: gzip response header token is absent, the browser (or any receiving entity) assumes the resource has not been compressed and does not apply any decompression, so the content is of course rendered as garbled text.
Strangely, the other resources:
jquery-ui.min.css
jquery-1.11.1.min.js
jquery-ui.min.js
All come back with a correct Content-Encoding: gzip token for secure requests, and the responses are decompressed correctly.
All HTTP requests come back fine; it's only the HTTPS requests that fail for some resources.
What I've done to diagnose:
I've hit each portal server directly, and there are no problems for both secure and unsecure requests. Even though the browser sends an Accept-Encoding header, both servers respond with no Content-Encoding header, which means they are likely not doing any compression themselves (I'm guessing?).
I've hit each HTTP server that sits in front of the portal servers directly, and both servers show the problems for HTTPS as I've described above.
I've sent HTTPS requests both inside and outside the browser, playing around with the Accept-Encoding header (a sketch for reproducing these tests follows the list below):
Accept-Encoding: gzip works
Accept-Encoding: gzip, deflate works
Accept-Encoding: gzip, br, deflate works
Accept-Encoding: br, deflate, gzip works
Accept-Encoding: deflate, br, gzip works
Accept-Encoding: gzip, deflate, br doesn't work
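For reference, a minimal way to reproduce these tests outside the browser with curl (the URL is a placeholder for one of the failing resources; -D - dumps the response headers and the body is discarded):
# Fails in our setup: no Content-Encoding header comes back
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip, deflate, br" https://portal.example.com/portlet/include/css/include.css | grep -i content-encoding
# Works: any other order or combination
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip, br, deflate" https://portal.example.com/portlet/include/css/include.css | grep -i content-encoding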
I've downgraded my version of Chrome to version 57 and I can see that the Accept-Encoding header for HTTPS was gzip, deflate, br, sdch, which works. Switching back to the latest version, the Accept-Encoding header for HTTPS changes to gzip, deflate, br, which fails. It seems Google used to advertise the SDCH data compression algorithm on all requests. Anything different from 'gzip, deflate, br' works!
SDCH compression was removed from Google Chrome, and other Chromium products, in version 59. As soon as you go to 'About Chrome', the browser auto-updates to version 64, which Google Chrome enforces by default. Since sdch is removed, that leaves the offending Accept-Encoding of 'gzip, deflate, br', which fails.
Firefox has the same problem as Chrome. When I inspected the Accept-Encoding for HTTP/HTTPS, it is the same as Chrome's:
For HTTP requests: Accept-Encoding: gzip, deflate
For HTTPS requests: Accept-Encoding: gzip, deflate, br (br is only sent for HTTPS)
However, Firefox allowed me to change the Accept-Encoding, and if I change the order for HTTPS to anything else, e.g. gzip, br, deflate, IT WORKS!
Also, this all works in IE, because when I inspected the Accept-Encoding for HTTPS requests, it is 'gzip, deflate'. The 'br' (Brotli compression) token is missing, and hence it works.

The problem, as suspected by @covener, was a poisoned cache. All of the portlets/applications within our httpd.conf file had a CacheDisable directive except for the particular portlet that was exhibiting the errors. After adding a CacheDisable entry for that portlet and restarting Apache, the resources all came back with a proper Content-Encoding response header and subsequently decompressed fine.
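For illustration, a hedged sketch of the httpd.conf fix (the context paths are placeholders for our actual portlets):
# mod_cache had cached a gzipped response and was replaying it without
# the Content-Encoding header for 'gzip, deflate, br' requests.
CacheDisable /wps/portletA
CacheDisable /wps/portletB
CacheDisable /wps/problemPortlet   # this was the missing entry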
Thanks for all the help guys.

Related

Vary: Accept-Encoding Response Header

I am trying to understand the response header "Vary: Accept-Encoding".
I am noticing that the response header "Vary: Accept-Encoding" appears for some of the images in the developer tools for our application, but some images don't have this response header.
When I try to hit the same image URL directly in the browser, I don't see the "Vary: Accept-Encoding" header.
Why does this header appear only for selected images in our application? What could be the possibilities?
Either Tomcat or the application could be returning this header. If Tomcat is applying e.g. gzip encoding, then it's essential to respond with Vary: Accept-Encoding, because if the client doesn't specify that it supports gzip, then the origin server (web server), a proxy, etc. must respond with a different form of the data.
If the client requests /myapp/something and advertises that it is willing to accept only responses with encoding gzip, then /myapp/something should really only return a response in the identity or gzip encodings, or reply with a 406 (Not Acceptable) response.
The Vary header is really for proxies. It tells a proxy that clients on the other side might get a different response if they have different Accept-Encoding values in their request headers. So, if two clients request the same resource, but one says Accept-Encoding: identity,gzip and the other says Accept-Encoding: identity,compress, they will (likely) get two responses, and the proxy should understand that it's not just the URL that is important, but also the Accept-Encoding of the client that should govern any caching that the proxy might want to provide.
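For illustration, a minimal exchange showing why a caching proxy needs the hint (host and path are hypothetical):
GET /images/logo.png HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip

HTTP/1.1 200 OK
Content-Type: image/png
Content-Encoding: gzip
Vary: Accept-Encoding
A proxy caching this response must not replay the gzipped body to a later client that did not advertise gzip; Vary: Accept-Encoding tells it to key the cache entry on that request header in addition to the URL.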

How do I know which access-control-allow-headers to allow for CORS?

Given these request headers:
Host: api.example.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:41.0) Gecko/20100101 Firefox/41.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Origin: https://web.example.org
Access-Control-Request-Method: GET
Access-Control-Request-Headers: authorization
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
And these response headers:
Connection: keep-alive
Content-Length: 0
Content-Type: text/plain; charset=utf-8
Date: Tue, 13 Oct 2015 10:57:34 GMT
Server: nginx/1.8.0
access-control-allow-headers: Authorization, Content-Type
access-control-allow-methods: PUT, DELETE, PATCH
access-control-allow-origin: *
This works even though only the Authorization and Content-Type headers are explicitly allowed. Why didn't I have to allow other headers that my browser sends? (like DNT for example)
Update: this MDN page contains an overview of simple headers (default CORS-safelisted request headers):
A simple header (or CORS-safelisted request header) is one of the following HTTP headers:
Accept
Accept-Language
Content-Language
Content-Type, where the MIME type of its parsed value (ignoring parameters) is one of application/x-www-form-urlencoded, multipart/form-data, or text/plain.
Or one of these client hint headers:
DPR
Downlink
Save-Data
Viewport-Width
Width
Without seeing your code that generates the headers, or knowing which system you are serving from (i.e. nginx or apache), the best I can do is refer you to http://client.cors-api.appspot.com/client, which will allow you to test your CORS requests. Also, you should look at http://enable-cors.org/server.html for your specific setup. For instance, on nginx you could have something like this:
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
There is a set of normal headers, and then the set of headers that you have to explicitly call out. See http://www.html5rocks.com/en/tutorials/cors/#toc-adding-cors-support-to-the-server about setting it up on a server.
The Access-Control-Allow-Headers header is attached by the backend; you cannot control that header on the client side. Access-Control-Allow-Headers should be returned in the response object.
So to include other headers in the Access-Control-Allow-Headers header of the response object, you have to configure your web server or update the backend application that serves the requests so that it attaches the desired Access-Control-Allow-Headers value to each response.
To allow arbitrary headers in your client requests, the server should add an Access-Control-Allow-Headers: * header to each response (note that this wildcard does not cover the Authorization header).
There are a lot of articles and info on how you can set up CORS to work the way you want, for example this one: Enabling CORS
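For completeness, a hedged sketch of the Apache equivalent of the nginx example above, using mod_headers (the header list is illustrative, not exhaustive):
Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "GET, PUT, DELETE, PATCH"
Header always set Access-Control-Allow-Headers "Authorization, Content-Type, DNT"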

IIS 7 Dynamic Compression is not consistently compressing responses

I have got Dynamic Compression enabled on IIS for all content types.
Requests to my WCF service are sending the following header:
Accept-Encoding: gzip, deflate
And responses from IIS are nearly always being compressed and show the following content in the header:
Content-Type: application/soap+xml; charset=utf-8
Content-Encoding: gzip
Vary: Accept-Encoding
However, when the response to a request is large, e.g. 14 MB, I'm finding that the response is not being compressed. The request header includes the same Accept-Encoding statement and the content type is the same; however, the response does not include the Content-Encoding or Vary headers.
Is there a limit to the size of response that IIS can compress? My application is working correctly but performs slowly when the responses are big, as a result of this problem.
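No answer is recorded here, but for anyone chasing the same symptom: IIS throttles dynamic compression by CPU usage, silently skipping compression while the CPU is above a threshold, and large responses are exactly the ones likely to coincide with high load. A hedged applicationHost.config sketch (attribute names per the IIS 7 system.webServer schema; the values shown are the documented defaults):
<system.webServer>
    <urlCompression doDynamicCompression="true" />
    <!-- Dynamic compression is switched off above 90% CPU and back on below 50% -->
    <httpCompression dynamicCompressionDisableCpuUsage="90"
                     dynamicCompressionEnableCpuUsage="50" />
</system.webServer>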

Why does the browser not send the "If-None-Match" header?

I'm trying to download (and hopefully cache) a dynamically loaded image in PHP. Here are the headers sent and received:
Request:
GET /url:resource/Pomegranate/resources/images/logo.png HTTP/1.1
Host: pome.local
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.22 (KHTML, like Gecko) Ubuntu Chromium/25.0.1364.160 Chrome/25.0.1364.160 Safari/537.22
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: PHPSESSID=fb8ghv9ti6v5s3ekkmvtacr9u5
Response:
HTTP/1.1 200 OK
Date: Tue, 09 Apr 2013 11:00:36 GMT
Server: Apache/2.2.22 (Ubuntu)
X-Powered-By: PHP/5.3.14 ZendServer/5.0
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Disposition: inline; filename="logo"
ETag: "1355829295"
Last-Modified: Tue, 18 Dec 2012 14:44:55 Asia/Tehran
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: image/png
When I reload the URL, the exact same headers are sent and received. My question is: what should I send in my response to see the If-None-Match header in the subsequent request?
NOTE: I believe these headers were doing just fine not long ago. Though I can't be sure, I think browsers have changed so that they no longer send the If-None-Match header (I used to see that header). I'm testing with Chrome and Firefox, and both fail to send the header.
Same Problem, Similar Solution
I've been trying to determine why Google Chrome won't send If-None-Match headers when visiting a site that I am developing. (Chrome 46.0.2490.71 m, though I'm not sure how relevant the version is.)
This is a different, though very similar, answer from the one the OP ultimately cited (in a comment regarding the accepted answer), but it addresses the same problem:
The browser does not send the If-None-Match header in subsequent requests "when it should" (i.e., server-side logic, via PHP or similar, has been used to send an ETag or Last-Modified header in the first response).
Prerequisites
Using a self-signed TLS certificate, which turns the lock red in Chrome, changes Chrome's caching behavior. Before attempting to troubleshoot an issue of this nature, install the self-signed certificate in the effective Trusted Root Store, and completely restart the browser, as explained at https://stackoverflow.com/a/19102293 .
1st Epiphany: If-None-Match requires an ETag from the server, first
I came to realize rather quickly that Chrome (and probably most or all other browsers) won't send an If-None-Match header until the server has already sent an ETag header in response to a previous request. Logically, this makes perfect sense; after all, how could Chrome send If-None-Match when it's never been given the value?
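For illustration, the handshake looks like this (URL and validator value are hypothetical); the first response supplies the ETag, and only then can a revalidation request echo it back:
GET /app.js HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
ETag: "33a64df5"
Cache-Control: max-age=60

GET /app.js HTTP/1.1
Host: www.example.com
If-None-Match: "33a64df5"

HTTP/1.1 304 Not Modified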
This led me to look at my server-side logic, particularly how headers are sent when I want the user-agent to cache the response, in an effort to determine why the ETag header was not being sent in response to Chrome's very first request for the resource. I had made a calculated effort to include the ETag header in my application logic.
I happen to be using PHP, so @Mehran's (the OP's) comment jumped out at me (he/she says that calling header_remove() before sending the desired cache-related headers solves the problem).
Candidly, I was skeptical about this solution, because a) I was pretty sure that PHP wouldn't send any headers of its own by default (and it doesn't, given my configuration); and b) when I called var_dump(headers_list()); just before setting my custom caching headers in PHP, the only header set was one that I was setting intentionally just above:
header('Content-type: application/javascript; charset=utf-8');
So, having nothing to lose, I tried calling header_remove(); just before sending my custom headers. And much to my surprise, PHP began sending the ETag header all of a sudden!
2nd Epiphany: gzipping the response changes its hash
It then hit me like a ton of bricks: by specifying the Content-Type header in PHP, I was telling NGINX (the web server I'm using) to gzip the response once PHP hands it back to NGINX! To be clear, the Content-Type that I was specifying was on NGINX's list of types to gzip.
For thoroughness, my NGINX GZIP settings are as follows, and PHP is wired-up to NGINX via php-fpm:
gzip on;
gzip_min_length 1;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript image/svg+xml;
gzip_vary on;
I pondered why NGINX might remove the ETag that I had sent in PHP when a "gzippable" content-type is specified, and came up with a now-obvious answer: because NGINX modifies the response body that PHP passes back when NGINX gzips it! This makes perfect sense; there is no point in sending the ETag when it's not going to match the response used to generate it. It's pretty slick that NGINX handles this scenario so intelligently.
I don't know if NGINX has always been smart enough to strip the ETag when it compresses a previously uncompressed response body on the fly, but that seems to be what's happening here.
UPDATE: I found commentary that explains NGINX's behavior in this regard, which in turn cites two valuable discussions regarding this subject:
NGINX forum thread discussing the behavior.
Tangentially-related discussion in a project repository; see the comment Posted on Jun 15, 2013 by Massive Bird.
In the interest of preserving this valuable explanation, should it happen to disappear, I quote from Massive Bird's contribution to the discussion:
Nginx strips the Etag when gzipping a response on the fly. This is according to spec, as the non-gzipped response is not byte-for-byte comparable to the gzipped response.
However, NGINX's behavior in this regard might be considered slightly flawed in that the same spec
... also says there is a thing called weak Etags (an Etag value prefixed with W/), and tells us it can be used to check if a response is semantically equivalent. In that case, Nginx should not mess with it. Unfortunately, that check never made it into the source tree [citation is now filled with spam, sadly].
I'm unsure as to NGINX's current disposition in this regard, and specifically, whether or not it has added support for "weak" Etags.
So, What's the Solution?
So, what's the solution to getting ETag back into the response? Do the gzipping in PHP, so that NGINX sees that the response is already compressed, and simply passes it along while leaving the ETag header intact:
ob_start('ob_gzhandler');
Once I added this call prior to sending the headers and the response body, PHP began sending the ETag value with every response. Yes!
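A minimal sketch of the whole handler under these assumptions (the payload file and the ETag derivation are placeholders, not the original code):
<?php
// Compress in PHP so NGINX passes the response through untouched,
// leaving the ETag header intact instead of stripping it while gzipping.
ob_start('ob_gzhandler');

header('Content-Type: application/javascript; charset=utf-8');
header('ETag: "' . md5_file('app.js') . '"'); // hypothetical validator source

readfile('app.js'); // hypothetical payload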
Other Lessons Learned
Here are some interesting tidbits gleaned from my research. This information is rather handy when attempting to test a server-side caching implementation, whether in PHP or another language.
Chrome, and its Developer Tools "Net" panel, behave differently depending on how the request is initiated.
If the request is "made fresh", e.g., by pressing Ctrl+F5, Chrome sends these headers:
Cache-Control: no-cache
Pragma: no-cache
and the server responds 200 OK.
If the request is made with only F5, Chrome sends these headers:
Pragma: no-cache
and the server responds 304 Not Modified.
Lastly, if the request is made by clicking on a link to the page you're already viewing, or placing focus into Chrome's address bar and pressing Enter, Chrome sends these headers:
Cache-Control: no-cache
Pragma: no-cache
and the server responds 200 OK (from cache).
While this behavior is a bit confusing at first if you don't know how it works, it's ideal behavior, because it allows one to test every possible request/response scenario very thoroughly.
Perhaps most confusing is that Chrome automatically inserts the Cache-Control: no-cache and Pragma: no-cache headers in the outgoing request when in fact Chrome is acquiring the responses from its cache (as evidenced in the 200 OK (from cache) response).
This experience was rather informative for me, and I hope others find this analysis of value in the future.
Your response headers include Cache-Control: no-store, no-cache; these prevent caching.
Remove those values (I think must-revalidate, post-check=0, pre-check=0 could/should be kept – they tell the browser to check with the server if there was a change).
And I would stick with Last-Modified alone (if the changes to your resources can be detected using this criterion alone); ETag is a more complicated thing to handle (especially if you want to deal with it in your PHP script yourself), and Google PageSpeed/YSlow advise against it too.
Posting this for future me...
I was having a similar problem: I was sending an ETag in the response, but the HTTP client wasn't sending an If-None-Match header in subsequent requests (which was strange, because it had been working the day before).
Turns out I was using http://localhost:9000 for development (for which Chrome didn't send If-None-Match); by switching to http://127.0.0.1:9000, Chrome1 automatically started sending the If-None-Match header in requests again.
Additionally - ensure Devtools > Network > Disable Cache [ ] is unchecked.
Chrome: Version 71.0.3578.98 (Official Build) (64-bit)
1 I can't find anywhere this is documented - I'm assuming Chrome was responsible for this logic.
Similar problem
I was trying to obtain a conditional GET request with the If-None-Match header, having supplied a proper ETag header, but to no avail in any browser I tried.
After a lot of trial and error, I realized that browsers treat both a GET and a POST to the same path as the same cache candidate. Thus, the GET with the proper ETag was effectively canceled by an immediate POST to the same path with Cache-Control: "no-cache, private", even though the POST was supplied with X-Requested-With: "XMLHttpRequest".
Hope this might be helpful to someone.
This was happening to me because I had set the cache size to be too small (via Group Policy).
It didn't happen in Incognito, which is what made me realize this might be the case.
Fixing that resolved the issue.
This happened to me for two reasons:
My server didn't send an ETag response header. I updated my Jetty web.xml to return ETags by adding the following (a fuller sketch of the surrounding context follows this list):
<init-param>
    <param-name>etags</param-name>
    <param-value>true</param-value>
</init-param>
The URL I called was for an XML file. When I changed it to an HTML file, Chrome started sending the If-None-Match header!
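For context, a hedged sketch of where that init-param typically lives: inside the declaration of Jetty's DefaultServlet in web.xml (servlet class name as in recent Jetty versions):
<servlet>
    <servlet-name>default</servlet-name>
    <servlet-class>org.eclipse.jetty.servlet.DefaultServlet</servlet-class>
    <init-param>
        <param-name>etags</param-name>
        <param-value>true</param-value>
    </init-param>
</servlet>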
I hope it helps someone

mod_proxy_ajp error: renders html as text/plain, prompts user to "save as..."

We have an odd, intermittent error that occurs with mod_proxy_ajp, i.e. using Apache as a front end to a Tomcat server.
The error
User clicks on a link
Browser prompts the user to "save as..." (e.g. in Firefox: "You have chosen to open thread.jsp, which is an application/octet-stream... What should Firefox do with this file?")
User says "Huh?" and presses "Cancel"
User clicks again on the same link
Browser displays the page correctly
This error occurs intermittently, but unfortunately rarely on our test server and frequently on production.
In Firefox's LiveHttpHeaders I see the following in the above use case:
first page download (i.e. click on link) is "text/plain"
second download is "text/html"
I thought the problem might stem from ProxyPassReverse (i.e. muddling up whether to use http or ajp), but all of these ProxyPassReverse settings resulted in the same error:
ProxyPassReverse /ajp://localhost:8080/
ProxyPassReverse /pe http://localhost/pe
ProxyPassReverse /pe http://forumstest.company.com/pe
Additionally, I've checked the Apache error logs (set to debug) and see no warnings or errors...
But it works with mod_proxy_http??
It appears that switching to mod_proxy_http 'solves' the problem. With limited testing, I have not been able to recreate the problem in the test environment.
Because the problem is intermittent, I'm not 100% sure that mod_proxy_http 'solves' the problem.
Environment
Apache 2.2 Windows
JBoss 4.2.2 back end (Tomcat 6)
One other data point
For better or worse, a servlet filter in Tomcat gzips the HTML before sending it to Apache (which means extra work, as Apache must unzip it before it performs ProxyPassReverse's find-and-replace). I don't know if the gzip step messes things up.
Questions
Has anyone seen this before?
What tools help analyze the cause?
thanks
Addendum 1: Here is the LiveHttpHeaders output
Browser Incorrectly sees html as "text/plain"
http://forums.customer.com/pe/action/forums/displaythread?rootPostID=10842016&channelID=1&portalPageId=1002
GET http://forums.customer.com/pe/action/forums/displaythread?rootPostID=10842016&channelID=1&portalPageId=1002 HTTP/1.1
Host: forums.customer.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 ( .NET CLR 3.5.30729)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Proxy-Connection: keep-alive
Cookie: __utma=156962862.829309431.1260304144.1297956514.1297958674.234; __utmz=156962862.1296760237.232.50.utmcsr=forumstest.customer.com|utmccn=(referral)|utmcmd=referral|utmcct=/pe/action/forums/displaythread; s_vi=[CS]v1|258F5B88051D3FC3-40000105C056085F[CE]; inqVital=xd|0^sesMgr|{"sID":4,"lsts":1292598007}^incMgr|{"id":"755563420055418864","group":"CHAT","ltt":1292598006741,"sid":"755563549194447187","igds":"1290627502757","exempt":false}^inq|{"customerID":"755562378269271622"}^saleMgr|{"state":"UNSOLD","qDat":{},"sDat":{}}; inqState=sLnd|1^Lnd|{"c":4,"flt":1274728016,"lldt":17869990,"pgs":{"201198":{"c":1,"flt":1274728016,"lldt":0},"0":{"c":3,"flt":1274845009,"lldt":17752997}},"pq":["0","0","0","201198"],"fsld":1274728016697}; adv_search_results_page=10; ep_beta=1; visitorID=57307059; JSESSIONID=6jXLNdHRDjR9Th3B5gvTVkw1dZLn1zvhvKLR2r4GTLjylHJgjY3Q!683274050; __utmc=156962862; JSESSIONID=6jXLNdHRDjR9Th3B5gvTVkw1dZLn1zvhvKLR2r4GTLjylHJgjY3Q!683274050; TLTHID=5CCA50304DE99E28DB79A7B3267D4231; TLTSID=9DFCDE8045B374AAB752CC98A30E8311; AreCookiesEnabled=1; s_cc=true; SC_LINKS=%5B%5BB%5D%5D; s_sq=%5B%5BB%5D%5D; __utmb=156962862.64.10.1297958674; memberexists=T; ev1=greywolf%20hdtv%20whmx
Cache-Control: max-age=0
HTTP/1.0 200 OK
Date: Thu, 17 Feb 2011 17:38:42 GMT
Content-Type: text/plain
X-Cache: MISS from samus.company.com
X-Cache-Lookup: MISS from samus.company.com:3128
Via: 1.0 samus.company.com:3128 (squid/2.6.STABLE20)
Proxy-Connection: close
----------------------------------------------------------
Browser Correctly sees html as "text/html"
http://forums.customer.com/pe/action/forums/displaythread?rootPostID=10842016&channelID=1&portalPageId=1002
GET http://forums.customer.com/pe/action/forums/displaythread?rootPostID=10842016&channelID=1&portalPageId=1002 HTTP/1.1
Host: forums.customer.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 ( .NET CLR 3.5.30729)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Proxy-Connection: keep-alive
Cookie: __utma=156962862.829309431.1260304144.1297956514.1297958674.234; __utmz=156962862.1296760237.232.50.utmcsr=forumstest.customer.com|utmccn=(referral)|utmcmd=referral|utmcct=/pe/action/forums/displaythread; s_vi=[CS]v1|258F5B88051D3FC3-40000105C056085F[CE]; inqVital=xd|0^sesMgr|{"sID":4,"lsts":1292598007}^incMgr|{"id":"755563420055418864","group":"CHAT","ltt":1292598006741,"sid":"755563549194447187","igds":"1290627502757","exempt":false}^inq|{"customerID":"755562378269271622"}^saleMgr|{"state":"UNSOLD","qDat":{},"sDat":{}}; inqState=sLnd|1^Lnd|{"c":4,"flt":1274728016,"lldt":17869990,"pgs":{"201198":{"c":1,"flt":1274728016,"lldt":0},"0":{"c":3,"flt":1274845009,"lldt":17752997}},"pq":["0","0","0","201198"],"fsld":1274728016697}; adv_search_results_page=10; ep_beta=1; visitorID=57307059; JSESSIONID=6jXLNdHRDjR9Th3B5gvTVkw1dZLn1zvhvKLR2r4GTLjylHJgjY3Q!683274050; __utmc=156962862; JSESSIONID=6jXLNdHRDjR9Th3B5gvTVkw1dZLn1zvhvKLR2r4GTLjylHJgjY3Q!683274050; TLTHID=5CCA50304DE99E28DB79A7B3267D4231; TLTSID=9DFCDE8045B374AAB752CC98A30E8311; AreCookiesEnabled=1; s_cc=true; SC_LINKS=%5B%5BB%5D%5D; s_sq=%5B%5BB%5D%5D; __utmb=156962862.64.10.1297958674; memberexists=T; ev1=greywolf%20hdtv%20whmx
Cache-Control: max-age=0
HTTP/1.0 200 OK
Date: Thu, 17 Feb 2011 17:38:44 GMT
X-Powered-By: Servlet 2.4; JBoss-4.2.1.GA (build: SVNTag=JBoss_4_2_1_GA date=200707131605)/Tomcat-5.5
Content-Encoding: gzip
Content-Type: text/html;charset=UTF-8
Content-Length: 24739
X-Cache: MISS from samus.company.com
X-Cache-Lookup: MISS from samus.company.com:3128
Via: 1.0 samus.company.com:3128 (squid/2.6.STABLE20)
Proxy-Connection: keep-alive
----------------------------------------------------------
Addendum 2: Additional Information
The browser did receive the gzipped file. I had earlier clicked "save as..." when a few of these errors occurred. gunzip successfully processed the files, and the decompressed output was valid HTML.
The answer is here: How to preserve the Content-Type header of a Tomcat HTTP response sent through an AJP connector to Apache using mod_proxy
Set DefaultType to None in the Apache configuration:
# DefaultType: the default MIME type the server will use for a document
# if it cannot otherwise determine one, such as from filename extensions.
# If your server contains mostly text or HTML documents, "text/plain" is
# a good value. If most of your content is binary, such as applications
# or images, you may want to use "application/octet-stream" instead to
# keep browsers from trying to display binary files as though they are
# text.
#
DefaultType None
The first page download is of MIME type "application/octet-stream". Maybe the gzipped stream is being sent back to the browser? (You can confirm by saving the file and looking inside it.)
It would be really helpful to post the LiveHttpHeaders request/response traces for the problematic and normal cases. If the content is the same in both cases (the response HTML), then forcing the MIME type to text/html (using the ForceType directive) for that particular context may fix the issue.
If it turns out that gzipped content is being sent to the browser in the problematic case, then digging deeper would be necessary. Is this browser-specific (only happens with Firefox), or does it happen with all browsers?
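A hedged sketch of that ForceType suggestion (the location is a placeholder for the affected context):
<Location /pe/action/forums>
    ForceType text/html
</Location>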
OK, based on the new information provided, it looks like Squid is caching the problematic response and not sending the right headers back to the client. So the browsers are doing the right thing here: without the Content-Encoding and the right Content-Type, they cannot do much else.
Missing response headers for the problematic request:
X-Powered-By: Servlet 2.4; JBoss-4.2.1.GA (build: SVNTag=JBoss_4_2_1_GA date=200707131605)/Tomcat-5.5
Content-Encoding: gzip
Content-Type: text/html;charset=UTF-8
X-Powered-By: it sounds like the problematic requests don't actually hit your JBoss server. Can you verify this by checking the access logs? Is Squid caching the response and then sending it back as text/plain?
Reconfiguring Squid not to cache that particular URL should help; see http://www.lirmm.fr/doc/Doc/FAQ/FAQ-7.html section 7.8 for an example (which is specific to Squid 2, but later versions have similar capabilities).
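A hedged squid.conf sketch of that suggestion (the path pattern is a placeholder; syntax per Squid 2.6, which the Via header shows is in use):
# Never cache the forum pages that are coming back mislabeled
acl forum_pages urlpath_regex ^/pe/action/forums
cache deny forum_pages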
This seems like a content-negotiation problem. Apache is guessing the content type using the "magic" bytes and setting the content type incorrectly, which explains why it happens intermittently. Try disabling mod_negotiation and see what happens. See http://httpd.apache.org/docs/2.0/content-negotiation.html for more info.
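A hedged sketch of that suggestion for Apache 2.2 (module path as in the stock Windows httpd.conf):
# Comment out the module to take content negotiation out of the picture
#LoadModule negotiation_module modules/mod_negotiation.so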
I saw the following in your settings
ProxyPassReverse /ajp://localhost:8080/
But port 8080 is not an AJP port; the default AJP port is 8009. Could this be your problem?
There is most likely something wrong with your web application, not Apache. If your web app sends back the correct Content-Type, Apache will gladly forward it to the client. No content negotiation will be done in that case. If you do not return any Content-Type, Apache will almost surely substitute text/plain, which is not what you want.
Test your web app without Apache in the middle, make sure that it sends back the correct Content-Type.
This used to happen when Apache serves secure content over a non-secure channel.