Apache2: Change response headers

I'm running Apache2 on my Ubuntu 12.04 system and playing around with response headers. I want to change the behavior of HTTP response headers, especially the Content-Length header. I've tried adding the following lines to my apache2.conf, inside the IfModule mod_headers.c section:
Header set Static-Header "Static Content with nonsense"
Header set Content-Length "1338"
If I run curl -I localhost I get the expected header field Content-Length: 1338 (curl -I performs a HEAD request).
If I run curl -i, the Content-Length is calculated correctly.
RFC 2616, section 9.4, says that the metainformation sent in response to a HEAD request SHOULD be identical to the information sent in response to a GET request.
Can someone explain this behavior?

Apache2 always calculates the Content-Length from scratch when it actually delivers content. You'll see the same behavior if you change that header using PHP. This is necessary to make sure the Content-Length matches the length of the content that is actually sent, after the server has applied, for example, compression (if mod_deflate is active).
Because of this, on any request that returns content, your change to that header is nullified. But since Apache doesn't even look at the content on a HEAD request (only at its metadata), it never recalculates the Content-Length there, so your manually set value gets through. That is permissible, because a HEAD response carries no body, so the header isn't needed to delimit the message.
Therefore, you should:
a) not modify the content-length header in the first place
b) not send one for HEAD requests
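In configuration terms, the safer version of the snippet from the question keeps the custom header and leaves Content-Length alone; a minimal sketch:
<IfModule mod_headers.c>
    # Harmless: Apache passes custom headers through untouched
    Header set Static-Header "Static Content with nonsense"
    # Deliberately no "Header set Content-Length": Apache recomputes the
    # value whenever it delivers a body, and would only let the override
    # through on HEAD, producing exactly the mismatch described above
</IfModule>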

exclude headers on different path

I am running HAProxy 1.6.3 and I have the X-Frame-Options header set on the frontend. I just came across a situation where the site is loaded in an iframe and the content is blocked because of that header. I have tried setting an ACL rule which looks like the following:
acl is_embeded path_beg /?embeded=1
http-response set-header x-frame-options "SAMEORIGIN" if !is_embeded
When I run haproxy -f /etc/haproxy/haproxy.conf -c I get the following error:
[WARNING] 316/145915 (23701) : parsing [/etc/haproxy/haproxy.cfg:42] : acl 'is_embeded' will never match because it only involves keywords that are incompatible with 'frontend http-response header rule'
Is there a way to get this to work?
That's because you are using a request ACL in the response stage.
You need to store the URL first, like this:
http-request set-var(txn.urlEmbeded) url
acl is_embeded var(txn.urlEmbeded) -m beg /?embeded=1
http-response set-header x-frame-options "SAMEORIGIN" if !is_embeded
Also, you are using path, which does not include the query string. You might need to use url instead, or check the embeded query parameter with the found match method. You get the idea.
There are actually two problems with what you are doing.
First, the path fetch is only available during request processing -- not response processing. This is the reason for the warning. The path isn't allocated a buffer of its own -- the fetch just extracts it from the pending request buffer whenever it's evaluated, and that pending request buffer is released as soon as the request has been sent to the server.
Second, everything beginning with ? is not part of the path. That's the query string.
The capture.req.uri is the correct fetch to use, since it includes both the path and the query string, and, since a memory buffer is allocated for it, it persists beyond request processing into the response stage.
acl is_embeded capture.req.uri -m beg /?embeded=1
capture.req.uri
This extracts the request's URI, which starts at the first slash and ends before the first space in the request (without the host part). Unlike path and url, it can be used in both request and response because it's allocated.
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.6-capture.req.uri
Also note the correct spelling of the word: embedded.
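Putting the pieces together, a minimal frontend sketch (the frontend name, bind address, and backend are placeholders, and the misspelled embeded is kept on purpose, since the match has to fit the site's actual query parameter):
frontend http-in
    bind *:80
    # capture.req.uri is copied into an allocated buffer, so it is
    # still usable when the http-response rule is evaluated
    acl is_embeded capture.req.uri -m beg /?embeded=1
    http-response set-header X-Frame-Options "SAMEORIGIN" if !is_embeded
    default_backend app_servers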

Sending Server Push with Continuation frame in Apache

I'm testing my HTTP/2 parser and currently I'm having trouble testing a push promise with continuation. I'm using Apache as the HTTP/2 server. I managed to push a resource using either Header add Link inside a <Location> block or H2PushResource. But when I tried to check a push promise with continuation, I couldn't modify the headers sent in the pushed request.
I wanted to add a few long headers to the pushed request, but the directives I found didn't affect it:
RequestHeader modifies the request headers before the content is handled. This means the header is modified inside Apache's own request processing; it doesn't affect the pushed request that is sent out.
Header modifies the response headers sent from the server. This directive adds a header to the response, not the request.
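For reference, the Link-header push trigger mentioned above looks roughly like this (mod_http2 initiates a push when a response header announces a rel=preload link; the paths are the ones from this test setup):
<Location /push.html>
    # Triggers a server push of /push.png, like H2PushResource does
    Header add Link "</push.png>;rel=preload"
</Location>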
Edit:
I noticed that the user-agent header is also sent in the pushed request, so I sent a really long user-agent header in my request, but then I got a 431 response (Request Header Fields Too Large).
Any other ideas?
Edit 2:
Here are my HTTP/2 configuration lines:
<Location /push.html>
H2PushResource add "/push.png"
</Location>
Header set MyRespHeader "Testing response"
RequestHeader add MyReqHeader "Testing request"
When I receive a response from Apache I get the header myrespheader, but the pushed request doesn't carry the header myreqheader or myrespheader.

Can Apache Httpd LogFormat log "trailer lines"?

I have read the documentation for configuring a custom LogFormat for the Apache HTTPD server, located here: http://httpd.apache.org/docs/current/mod/mod_log_config.html#formats
In this table these two entries exist:
%{VARNAME}^ti The contents of VARNAME: trailer line(s) in the request sent to the server.
%{VARNAME}^to The contents of VARNAME: trailer line(s) in the response sent from the server.
I've tried to figure out what these two mean, and so far I have been unlucky. What exactly will be logged from the request/response?
It's technically possible for certain types of HTTP requests or responses to have a "trailer" -- that is, a header that is included at the end of the message, instead of at the beginning. For example:
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Trailer: Expires

<response content>

Expires: <date>
The %{}^ti and %{}^to log formats can be used to log those trailers.
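For example, a format that also logs the Expires trailer from responses might look like this (the format nickname and log path are made up):
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Expires}^to\"" with_trailers
CustomLog logs/access_log with_trailers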
Not sure what this is for? Don't worry, you're not alone. Most HTTP clients and servers -- including web browsers -- don't support or use trailers. Unless your application specifically uses HTTP trailers, you can safely ignore this.

How to get mod_python site to allow clients to cache selected image content?

I have a small dynamic site implemented in mod_python. I inherited this, and while I have successfully made relatively minor changes to its content and logic, with HTTP caching I'm out of my depth. The site works fine already, so this isn't "the usual question" about how to disable caching for a dynamic site.
My problem is that there is one large banner image on each page (the same image, from the same URL, on every page) which accounts for ~90% of site bandwidth but which, so far as I can tell, isn't being cached; as I browse the site, every time I click through to a new page (or back to a previously visited one), the image is downloaded yet again.
If I wget the banner's image URL (to see the headers) I see:
$ wget -S http://example.com/site/gif?name=banner.gif
--2012-04-04 23:02:38-- http://example.com/site/gif?name=banner.gif
Resolving example.com... 127.0.0.1
Connecting to example.com|127.0.0.1|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Wed, 04 Apr 2012 22:02:38 GMT
Server: Apache/2.2.14 (Ubuntu)
Content-Location: gif.py
Vary: negotiate
TCN: choice
Set-Cookie: <blah blah blah>
Connection: close
Content-Type: image/gif
Length: unspecified [image/gif]
Saving to: `gif?name=banner.gif'
and the code which is serving it up isn't much more than
req.content_type = 'image/gif'
req.sendfile(fullname)
where fullname is a file-path munged from the request's name parameter.
My question is: is there some quick fix along the lines of setting an Expires: or Vary: field in the image's response which will make clients less keen to repeatedly download it?
The site is hosted on Ubuntu 10.04 and doesn't have any non-default Apache mods enabled other than rewrite.
I note that most (not all) of the site pages' headers themselves do contain
Pragma: no-cache
Cache-Control: no-cache
Expires: -1
Vary: Accept-Encoding
(and the original site author has clearly thought about this, as no-cache is applied selectively to non-static content pages). I don't know enough about caching to know whether this somehow poisons the included .gif IMG into being reloaded every time as well.
I don't know if my answer can help you or not, but I'll post it anyway.
Instead of serving image files from within the Python application, you can create another virtual host within Apache (on the same server) just to serve static files and images. In your Python application, you can embed the image like this:
<img src="http://img.yoursite.com/banner.gif" alt="banner" />
With a separate virtual host, you can add various headers to various content types using mod_headers, or add another caching layer for your static files.
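A minimal sketch of such a virtual host (the hostname and paths are made up, and mod_headers and mod_expires would first have to be enabled, e.g. via a2enmod headers expires):
<VirtualHost *:80>
    ServerName img.yoursite.com
    DocumentRoot /var/www/static
    <FilesMatch "\.(gif|png|jpe?g)$">
        # Let browsers and proxies reuse images for a day
        Header set Cache-Control "public, max-age=86400"
        ExpiresActive On
        ExpiresDefault "access plus 1 day"
    </FilesMatch>
</VirtualHost>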
Hope this helps.

Prevent Apache from chunking gzipped content

When using mod_deflate in Apache2, Apache will chunk gzipped content, setting the Transfer-Encoding: chunked header. While this lets the download start sooner, I cannot display a progress bar.
If I handle the compression myself in PHP, I can gzip it completely first and set the Content-length header, so that I can display a progress bar to the user.
Is there any setting that would change Apache's default behavior, and have Apache set a Content-length header instead of chunking the response, so that I don't have to handle the compression myself?
You could maybe play with SendBufferSize to get a value big enough to contain your whole response in one chunk.
Then, since chunked content is part of the HTTP/1.1 protocol, you could force an HTTP/1.0 response (and therefore no chunking: "A server MUST NOT send transfer-codings to an HTTP/1.0 client.") by setting force-response-1.0 in your Apache configuration. But PHP breaks this setting; it's a long-known bug of PHP, and there's a workaround.
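For example, something along these lines, assuming the gzipped downloads live under a dedicated path (the path is made up; SetEnvIf is used because these special variables must be set early in request processing):
# Treat matching requests as HTTP/1.0 and answer in HTTP/1.0,
# which rules out chunked transfer coding for that path only
SetEnvIf Request_URI "^/gzipped/" downgrade-1.0 force-response-1.0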
We could try to modify the request on the client side with a header preventing the chunked content, but the W3C says: "All HTTP/1.1 applications MUST be able to receive and decode the "chunked" transfer-coding", so I don't think there's any Accept-like header which can prevent the server from chunking content. You could, however, try to make your request in HTTP/1.0; that's not really a header of the request, it's on the first line, and it should be possible with jQuery.
One last thing: HTTP/1.0 lacks one big feature, as the Host header is not mandatory there. Verify that your HTTP/1.0 requests still carry the Host header if you work with name-based virtual hosts.
Edit: by using the technique cited in the workaround, you can see that you can tweak the Apache environment from within the PHP code. This can be used to force 1.0 mode only for your special gzipped content, and you should use it that way, to avoid having your complete application run over HTTP/1.0 (or make the HTTP/1.0 request only for your gzip downloads).
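As a sketch of that edit, assuming the workaround is the usual apache_setenv() call available under mod_php (which exact variables your setup honors may differ):
<?php
// Hypothetical gzip-serving script: ask Apache for an HTTP/1.0
// response so the body is not chunked and Content-Length survives
apache_setenv('downgrade-1.0', '1');
apache_setenv('force-response-1.0', '1');
// ... gzip the payload, set Content-Length, then echo it ...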