Why is CSS not always gzipped? - gzip

In my Firefox or Chrome, if I check the HTTP headers, the result always includes Content-Encoding: gzip. But I have customers reporting that they see "transfer-encoding: chunked" instead and that the responses are not gzipped.
http://www.example.com/public/css/style.min.css
If I or the customer run an online gzip compression check, it confirms that gzip is active.
https://checkgzipcompression.com = gzip!
But if I use a checker like this one: http://onlinecurl.com/
I also get transfer-encoding: chunked.
Request:
GET /style/css.css HTTP/1.1
Host: www.example.com
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
User-Agent: ...
Accept: */*
Referer: http://www.example.com/
Accept-Encoding: gzip, deflate
Accept-Language: ...
Cookie: ...
Response:
HTTP/1.1 200 OK
Age: 532948
cache-control: public, max-age=604800
Content-Type: text/css
Date: Wed, 28 Jun 2017 12:35:07 GMT
ETag: "5349e8d595dfd21:0"
Last-Modified: Wed, 07 Jun 2017 13:56:17 GMT
Server: Microsoft-IIS/7.5
Vary: X-UA,Accept-Encoding, User-Agent
X-Cache: HIT
X-Cache-Hits: 6327
X-CacheReason: Static-js-css.
X-Powered-By: ASP.NET
X-Served-By: ip-xxx-xxx-xxx-xx.name.xxx
x-stale: true
X-UA-Device: pc
X-Varnish: 993020034 905795837
X-Varnish-beresp-grace: 43200.000
X-Varnish-beresp-status: 200
X-Varnish-beresp-ttl: 604800.000
transfer-encoding: chunked
Connection: keep-alive
Why are some requests not gzipped when they should be? This is my Varnish config (the part relevant to gzip):
if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|flv|swf)$") {
        # No point in compressing these
        remove req.http.Accept-Encoding;
    } elsif (req.http.Accept-Encoding ~ "gzip") {
        set req.http.Accept-Encoding = "gzip";
    } elsif (req.http.Accept-Encoding ~ "deflate") {
        set req.http.Accept-Encoding = "deflate";
    } else {
        # unknown algorithm
        remove req.http.Accept-Encoding;
    }
}
# Enabling GZIP
if (beresp.http.Content-Type ~ "(text/css|application/x-javascript|application/javascript)") {
    set beresp.do_gzip = true;
}
if (beresp.http.Content-Encoding ~ "gzip" ) {
    if (beresp.http.Content-Length == "0") {
        unset beresp.http.Content-Encoding;
    }
}
set beresp.http.Vary = regsub(beresp.http.Vary, "(?i)^(.*?)X-Forwarded-URI,?(.*)$", "\1\2");
set beresp.http.Vary = regsub(beresp.http.Vary, "(?i)^(.*?)User-Agent,?(.*)$", "\1\2");
set beresp.http.Vary = regsub(beresp.http.Vary, "^(.*?),?$", "X-UA,\1");
set beresp.http.Vary = regsub(beresp.http.Vary, "^(.*?),?$", "\1");
Any ideas? Thank you.

Responses will only be gzipped if the request indicates that the client can accept a gzipped response, which it does via the Accept-Encoding request header. So perhaps your online curl tool is not sending that header, and the same may be true for the clients who are seeing this. Do you really have customers reporting that they are not getting gzipped responses?
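You can check both cases yourself with curl by toggling the Accept-Encoding header; for example, against the stylesheet URL from your question:
# Client advertises gzip support: the response should carry Content-Encoding: gzip
curl -sI -H "Accept-Encoding: gzip" http://www.example.com/public/css/style.min.css | grep -iE "content-encoding|transfer-encoding"
# No Accept-Encoding: the server has to send the body uncompressed (often chunked)
curl -sI http://www.example.com/public/css/style.min.css | grep -iE "content-encoding|transfer-encoding"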
Update
Ah, I see what you're doing now. Are you using a recent version of Varnish? There's no need to do all this yourself now. Varnish handles it all natively. All you need to do is set do_gzip to on for the content types where you want it, and Varnish takes care of the rest, including the Accept-Encoding header. See the documentation here.
So just remove all of your gzip/encoding related code except the part directly under # Enabling GZIP:
# Enabling GZIP
if (beresp.http.Content-Type ~ "(text/css|application/x-javascript|application/javascript)") {
    set beresp.do_gzip = true;
}
And that will probably get everything working; it works fine for me that way. The best amount of VCL is as little as possible: Varnish is very good at handling things itself. Don't forget to restart Varnish or otherwise clear the cache for this site after making the change.
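If you want to clear the cache without a full restart, you can ban everything from the command line; on Varnish 3 or later something like this should do it (adjust to your version):
varnishadm "ban req.url ~ ."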
In case it's useful, I use the following VCL for this:
if (
    beresp.status == 200
    && beresp.http.content-type ~ "\b((text/(html|plain|css|javascript|xml|xsl))|(application/(javascript|xml|xhtml\+xml)))\b"
) {
    set beresp.do_gzip = true;
}
This checks for more content types that can benefit from compression, including HTML. I don't bother with application/x-javascript, as it's ancient and no longer used.
On another note, are you sure you need to be modifying the Vary header in the way that you are doing there?

Related

Google safe browsing API not returning threat URLs

I'm sending requests to the Google safe browsing API. I believe I'm following their documentation correctly. I've tried regenerating my key.
I'm sending the request below
POST https://safebrowsing.googleapis.com/v4/threatMatches:find?key=AIxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1
User-Agent: Fiddler
Host: safebrowsing.googleapis.com
Content-Length: 511
{
    "client": {
        "clientId": "yourcompanyname",
        "clientVersion": "1.5.2"
    },
    "threatInfo": {
        "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
        "platformTypes": ["WINDOWS"],
        "threatEntryTypes": ["URL"],
        "threatEntries": [
            {"url": "http://www.urltocheck1.org/"},
            {"url": "http://malware.testing.google.test"},
            {"url": "http://www.urltocheck2.org/"},
            {"url": "http://www.urltocheck3.com/"}
        ]
    }
}
I'm getting back an empty response, which is not what I'm expecting given the URLs supplied and their example.
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Date: Wed, 08 Sep 2021 15:05:59 GMT
Server: scaffolding on HTTPServer2
Cache-Control: private
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Accept-Ranges: none
Vary: Accept-Encoding
Content-Length: 3
{}
https://transparencyreport.google.com/safe-browsing/search?url=malware.testing.google.test
https://developers.google.com/safe-browsing/v4/lookup-api
You need to pass an API key, and you need to pass a URL that is actually flagged as malware ("url": "http://www.urltocheck1.org/" is not). If a URL is not known malware, the result will be empty. Try the following URL with your code:
https://testsafebrowsing.appspot.com/s/malware.html
and also search for and test with other known malware sites.
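For example, putting that test page into the request body from your question should come back with a match instead of {} (the client values are just the placeholders from your example):
{
    "client": {
        "clientId": "yourcompanyname",
        "clientVersion": "1.5.2"
    },
    "threatInfo": {
        "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
        "platformTypes": ["WINDOWS"],
        "threatEntryTypes": ["URL"],
        "threatEntries": [
            {"url": "https://testsafebrowsing.appspot.com/s/malware.html"}
        ]
    }
}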

Why does the alps (or profile) path in spring-data-rest return a JSON body with a non-matching Content-Type: text/html header?

Right now the default Content-Type of my spring-data-rest (spring-boot 1.4.3.RELEASE) provided controllers is application/hal+json, which makes sense. If I use Chrome I get application/hal+json for the root of my application, for instance, since Chrome uses an Accept header of "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8". However, the /profile (formerly /alps) URLs return text/html even though the response body is JSON, so the Content-Type does not match the body. If you specifically ask for only application/json then you get the correct response header.
Here is the incorrectly working case (returns text/html when the document/body returned is NOT text/html):
$ http --verbose "http://localhost:8080/v1/profile/eldEvents" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
GET /v1/profile/eldEvents HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/0.9.2
HTTP/1.1 200
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Location, X-Auth, Authorization
Access-Control-Allow-Methods: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, PATCH
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Location
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: text/html;charset=UTF-8
Date: Fri, 03 Feb 2017 01:16:14 GMT
Expires: 0
Pragma: no-cache
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
Transfer-Encoding: chunked
X-Application-Context: application
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
{
  "alps" : {
    "version" : "1.0",
    "descriptors" : [ {
      "id" : "eldEvent-representation",
      "href" : "http://localhost:8080/v1/profile/eldEvents",
      "descriptors" : [ {
        "name" : "sequenceId",
        "type" : "SEMANTIC"
      }, {
        ...
I cut out the rest of the response; you can see from the above that it is JSON data.
I believe the correct Content-Type for the above request should be something similar to "application/json".
If this is still relevant for you: I've solved this by manually overriding the response content type for all requests against /profile/* that have no content type defined.
import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;
import org.springframework.util.AntPathMatcher;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class ProfileContentTypeFilter extends OncePerRequestFilter
{
    private static final AntPathMatcher matcher = new AntPathMatcher();

    @Override
    protected void doFilterInternal (HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException
    {
        if (request.getContentType() == null && matcher.match("/profile/*", request.getRequestURI()))
        {
            // Override response content type for unspecified requests on profile endpoints
            response.setContentType(MediaType.APPLICATION_JSON_VALUE);
        }
        filterChain.doFilter(request, response);
    }
}

How to correctly handle multiple Set-Cookie headers in Hyper?

I'm using Hyper to send HTTP requests, but when multiple cookies are included in the response, Hyper combines them into one, which then breaks parsing.
For example, here's a simple PHP script
<?php
setcookie("hello", "world");
setcookie("foo", "bar");
Response using curl:
$ curl -sLD - http://local.example.com/test.php
HTTP/1.1 200 OK
Date: Sat, 24 Dec 2016 09:24:04 GMT
Server: Apache/2.4.25 (Unix) PHP/7.0.14
X-Powered-By: PHP/7.0.14
Set-Cookie: hello=world
Set-Cookie: foo=bar
Content-Length: 0
Content-Type: text/html; charset=UTF-8
However for the following Rust code:
let client = Client::new();
let response = client.get("http://local.example.com/test.php")
    .send()
    .unwrap();
println!("{:?}", response);
for header in response.headers.iter() {
    println!("{}: {}", header.name(), header.value_string());
}
...the output will be:
Response { status: Ok, headers: Headers { Date: Sat, 24 Dec 2016 09:31:54 GMT, Server: Apache/2.4.25 (Unix) PHP/7.0.14, X-Powered-By: PHP/7.0.14, Set-Cookie: hello=worldfoo=bar, Content-Length: 0, Content-Type: text/html; charset=UTF-8, }, version: Http11, url: "http://local.example.com/test.php", status_raw: RawStatus(200, "OK"), message: Http11Message { is_proxied: false, method: None, stream: Wrapper { obj: Some(Reading(SizedReader(remaining=0))) } } }
Date: Sat, 24 Dec 2016 09:31:54 GMT
Server: Apache/2.4.25 (Unix) PHP/7.0.14
X-Powered-By: PHP/7.0.14
Set-Cookie: hello=worldfoo=bar
Content-Length: 0
Content-Type: text/html; charset=UTF-8
This seems really weird to me. I used Wireshark to capture the response and there are two Set-Cookie headers in it. I also checked the Hyper documentation but got no clue...
I noticed Hyper internally uses a VecMap<HeaderName, Item> to store the headers. So does it concatenate them into one? And how should I split them into individual cookies afterwards?
I think that Hyper prefers to keep the cookies together in order to make it easier to do some extra stuff with them, like checking a cryptographic signature with CookieJar (cf. this implementation outline).
Another reason might be to keep the API simple. Headers in Hyper are indexed by type and you can only get a single instance of that type with Headers::get.
In Hyper, you'd usually access a header by using a corresponding type. In this case the type is SetCookie. For example:
if let Some(&SetCookie(ref cookies)) = response.headers.get() {
    for cookie in cookies.iter() {
        println!("Got a cookie. Name: {}. Value: {}.", cookie.name, cookie.value);
    }
}
Accessing the raw header value of Set-Cookie makes less sense, because then you'll have to reimplement a proper parsing of quotes and cookie attributes (cf. RFC 6265, 4.1).
P.S. Note that in Hyper 0.10 the cookie is no longer parsed, because the crate that was used for the parsing triggers the openssl dependency hell.
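If you do need the individual Set-Cookie lines (for instance to forward them unchanged), the unparsed values are still available one per header line. A minimal sketch that continues from the response variable in your code, assuming the Headers::get_raw API that hyper 0.9/0.10 exposes:
if let Some(raw) = response.headers.get_raw("Set-Cookie") {
    // get_raw yields one Vec<u8> per physical header line, so
    // "hello=world" and "foo=bar" stay separate here
    for line in raw {
        println!("Set-Cookie: {}", String::from_utf8_lossy(line));
    }
}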

jquery.ajax() POST receives empty response with IE10 on Nginx/PHP-FPM but works on Apache

I use a very simple jquery.ajax() call to fetch some HTML snippet from a server:
// Init add lines button
$('body').on('click', '.add-lines', function(e) {
    $.ajax({
        type     : 'POST',
        url      : $(this).attr('href')+'?ajax=1&addlines=1',
        data     : $('#quickorder').serialize(),
        success  : function(data,x,y) {
            $('#directorderform').replaceWith(data);
        },
        dataType : 'html'
    });
    e.preventDefault();
});
On the PHP side I basically echo out an HTML string. The jQuery version is 1.8.3.
The problem is in IE10: while it works fine on Server A, which runs Apache, it fails on Server B, which runs Nginx + PHP-FPM. If I debug the success handler on Server B, data is undefined. In the Network tab of the IE developer tools I can see the full response and all headers. It may affect other IE versions, but I could only test IE10 so far.
Here are the two response headers:
Server A, Apache (works):
HTTP/1.1 200 OK
Date: Thu, 25 Apr 2013 13:28:08 GMT
Server: Apache
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Vary: Accept-Encoding,User-Agent
Content-Encoding: gzip
Content-Length: 1268
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
Server B, Nginx + PHP-FPM (fails):
HTTP/1.1 200 OK
Server: nginx/1.1.19
Date: Thu, 25 Apr 2013 13:41:43 GMT
Content-Type: text/html; charset=utf8
Transfer-Encoding: chunked
Connection: keep-alive
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Encoding: gzip
The body part looks the same in both cases.
Any idea what could cause this issue?
Please also check the Content-Type header, since Apache and Nginx are sending different values:
Content-Type: text/html; charset=UTF-8
vs.
Content-Type: text/html; charset=utf8
Update your Nginx config and add this line:
charset UTF-8;
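The charset directive can sit in the http, server, or location block and is meant to make Nginx send "charset=UTF-8" in the Content-Type of text responses. A minimal sketch of where it would go (server_name and the PHP-FPM address are placeholders, not your actual config):
server {
    listen 80;
    server_name example.com;    # placeholder

    # Emit "Content-Type: text/html; charset=UTF-8" on text responses
    charset UTF-8;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;    # placeholder PHP-FPM address
    }
}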

Browser cache persists when using Varnish

I think this is related to Varnish.
After I log out, the user status should change, but it doesn't, even though I used "CTRL + F5" to force a cache refresh.
So I am a little confused: did that force a refresh of the browser cache or of the Varnish cache?
If Varnish is caching the correct page (user not logged in), why doesn't the browser display it? Instead it keeps showing the old page, where the user is still logged in.
Any clue?
VCL
backend testserver {
    .host = "127.0.0.1";
    .port = "8080";
}
acl purge {
    "localhost";
    "127.0.0.1";
    "192.168.3.0"/24;
}
sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return(lookup);
    }

    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = client.ip;

    // Remove has_js and Google Analytics cookies
    set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js)=[^;]*","");
    // remove a ";" prefix, if present
    set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
    // remove empty cookies.
    if (req.http.Cookie ~ "^\s*$") {
        unset req.http.Cookie;
    }

    // Skip the Varnish cache for install, update, and cron
    if (req.url ~ "install\.php|update\.php|cron\.php") {
        return (pass);
    }

    # Normalize Accept-Encoding to get better cache coherency
    if (req.http.Accept-Encoding) {
        # No point in compressing media that is already compressed
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
            remove req.http.Accept-Encoding;
            # MSIE 6 JS bug workaround
        } elsif(req.http.User-Agent ~ "MSIE 6") {
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }

    # ... other vcl_recv rules here ...

    # Don't serve cached content to logged-in users
    # Don't cache Drupal logged-in user sessions
    # LOGGED_IN is the cookie that earlier version of Pressflow sets
    # VARNISH is the cookie which the varnish.module sets
    if (req.http.Cookie ~ "(VARNISH|DRUPAL_UID|LOGGED_IN)") {
        return (pass);
    }

    // Let's have a little grace
    // When backend cannot generate refreshed content
    // this time will allow expired content to stay longer in grace
    set req.grace = 0s;

    if (req.http.host ~ "^www.test.com") {
        set req.backend = testserver;
        if (req.request != "GET" && req.request != "HEAD") {
            return(pipe);
        }
        else {
            return(lookup);
        }
    } elsif (req.http.host ~ "^www.test2.com") {
        set req.backend = testserver;
        if (req.request != "GET" && req.request != "HEAD") {
            return(pipe);
        }
        else {
            return(lookup);
        }
    }
    else {
        error 404 "test Cache Server IS Out of Order";
        return(lookup);
    }

    # Drupal js/css doesn't need cookies, cache them
    if (req.url ~ "^/modules/.*\.(js|css)\?") {
        unset req.http.Cookie;
    }

    ## Pass cron jobs and server-status
    if (req.url ~ "cron.php") {
        return (pass);
    }
    if (req.url ~ ".*/server-status$") {
        return (pass);
    }
}
sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}
sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}
sub vcl_fetch {
    if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
        unset beresp.http.set-cookie;
    }
    #if (beresp.http.Pragma ~ "nocache") {
    #    return(pass);
    #}
    if (req.request == "GET" && req.url ~ "\.(txt|js)$") {
        set beresp.ttl = 3600s;
    }
    else {
        set beresp.ttl = 30d;
    }
}
sub vcl_error {
    set obj.http.Content-Type = "text/html; charset=utf-8";
    set obj.http.Retry-After = "5";
    synthetic {"<?xml version="1.0" encoding="utf-8"?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html><head><title>"} obj.status " " obj.response {"</title></head><body><h1>Error "} obj.status " " obj.response {"</h1><p>"} obj.response {"</p><h3>Guru Meditation:</h3><p>XID: "} req.xid {"</p><hr><p>Varnish cache server</p></body></html>"};
    return (deliver);
}
sub vcl_pipe {
    # http://www.varnish-cache.org/ticket/451
    # This forces every pipe request to be the first one.
    set bereq.http.connection = "close";
}
Headers
After Log In
Response Headers
Cache-Control store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection close
Content-Type text/html; charset=utf-8
Date Tue, 21 Feb 2012 04:09:09 GMT
Expires Sun, 11 Mar 1984 12:00:00 GMT
Last-Modified Tue, 21 Feb 2012 04:09:07 GMT
Location http://www.test.com/frontpage_empty
Server nginx/1.0.0
Set-Cookie SESSe3202baa92dbab78a8d1785ee17b05a0=deleted; expires=Mon, 21-Feb-2011 04:09:08 GMT; path=/ SESSe3202baa92dbab78a8d1785ee17b05a0=67d001b0720c9f5a74e5b671fae74d76; expires=Fri, 09-Mar-2012 12:49:09 GMT; path=/; domain=.test.com LOGGED_IN=Y; expires=Fri, 09-Mar-2012 12:49:07 GMT; path=/
Transfer-Encoding chunked
X-Powered-By PHP/5.2.17
Request Headers
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language en-us,en;q=0.5
Authorization Basic amFtZXM6MTIzMTIz
Connection keep-alive
Cookie OAID=e171ed7b31967c95a09c70646433d7b1; has_js=1; SESSe3202baa92dbab78a8d1785ee17b05a0=054b6fa52ce9009198a2160800d04456; __utma=256091342.2121990614.1327109315.1329792135.1329797585.41; __utmz=256091342.1327109315.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); OAID=e171ed7b31967c95a09c70646433d7b1; SESSa395c7767e83fe1b8cd4bf8229e072c3=2bfb1adba208cf29bf17921ce9946bd5; has_js=1; __utmc=256091342; __utmb=256091342.1.10.1329797585
Host www.test.com
Referer http://www.test.com/user/login?destination=frontpage_empty
User-Agent Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2
Response Headers From Cache
Cache-Control store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection close
Content-Type text/html; charset=utf-8
Date Tue, 21 Feb 2012 04:09:09 GMT
Expires Sun, 11 Mar 1984 12:00:00 GMT
Last-Modified Tue, 21 Feb 2012 04:09:07 GMT
Location http://www.test.com/frontpage_empty
Server nginx/1.0.0
Set-Cookie SESSe3202baa92dbab78a8d1785ee17b05a0=deleted; expires=Mon, 21-Feb-2011 04:09:08 GMT; path=/ SESSe3202baa92dbab78a8d1785ee17b05a0=67d001b0720c9f5a74e5b671fae74d76; expires=Fri, 09-Mar-2012 12:49:09 GMT; path=/; domain=.test.com LOGGED_IN=Y; expires=Fri, 09-Mar-2012 12:49:07 GMT; path=/
Transfer-Encoding chunked
X-Powered-By PHP/5.2.17
Request Headers From Upload Stream
Content-Length 61
Content-Type application/x-www-form-urlencoded
After log out
Response Headers
Cache-Control store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection close
Content-Type text/html; charset=utf-8
Date Tue, 21 Feb 2012 09:10:29 GMT
Expires Sun, 11 Mar 1984 12:00:00 GMT
Last-Modified Tue, 21 Feb 2012 09:10:27 GMT
Location http://www.test.com/
Server nginx/1.0.0
Set-Cookie LOGGED_IN=deleted; expires=Mon, 21-Feb-2011 09:10:28 GMT; path=/
Transfer-Encoding chunked
X-Powered-By PHP/5.2.17
Request Headers
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language en-us,en;q=0.5
Authorization Basic amFtZXM6MTIzMTIz
Connection keep-alive
Cookie SESSe3202baa92dbab78a8d1785ee17b05a0=67d001b0720c9f5a74e5b671fae74d76; __utma=256091342.2121990614.1327109315.1329792135.1329797585.41; __utmz=256091342.1327109315.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); OAID=e171ed7b31967c95a09c70646433d7b1; SESSa395c7767e83fe1b8cd4bf8229e072c3=2bfb1adba208cf29bf17921ce9946bd5; has_js=1; __utmc=256091342; LOGGED_IN=Y
Host www.test.com
If-Modified-Since Tue, 21 Feb 2012 03:32:36 GMT
Referer http://www.test.com/
User-Agent Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2
What your VCL is currently doing is removing Cookie from the request header and caching all requests. This causes the exact behavior you describe.
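One common adjustment, sketched here in the same Varnish 2.x-style syntax as your VCL (adapt it to your version before using it), is to stop caching backend responses that set a session cookie, so a login or logout response can never end up in the cache:
sub vcl_fetch {
    # A response that sets a cookie is starting or changing a session
    # (login, logout), so don't store it in the cache.
    # On Varnish 3.x use return (hit_for_pass) instead of return (pass).
    if (beresp.http.Set-Cookie) {
        return (pass);
    }
}
You will also want to make sure that requests which still carry a session cookie are passed in vcl_recv rather than looked up in the cache, so logged-in and anonymous users no longer share cached pages.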