My IDS is flagging a lot of HTTP 414 requests, i.e. the request URI is too large. The interesting part is that all of these requests are being sent from my server to an external IP. I looked into the Apache access and error logs, but I couldn't find anything related to a 414 status code. Even my IDS doesn't give much information beyond the following:
HTTP/1.1 414 Request-URI Too Large
Server: Apache/2.2.22 (Debian) PHP/5.4.4-14+deb7u8 mod_ssl/2.2.22 OpenSSL/1.0.1e
Vary: Accept-Encoding
Content-Type: text/html; charset=iso-8859-1
Accept-Ranges: bytes
Date: Thu, 08 Jan 2015 23:02:47 GMT
X-Varnish: 213113893
Age: 0
Via: 1.1 varnish
Connection: keep-alive
X-Cache: MISS
Content-Length: 250
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>414 Request-URI Too Large</title>
</head><body>
<h1>Request-URI Too Large</h1>
<p>The requested URL's length exceeds the capacity
limit for this server.<br />
</p>
</body></html>
I just want to know what my server is trying to send. Is it a configuration issue?
Are you certain your server generates these requests? If it does, you should also be able to figure out what the target server is. Does it hit many external IPs randomly? This should tell you a lot more.
This could be the behavior of a worm that has infected your server and is attempting to find other vulnerable servers. The "Request-URI Too Large" response makes it likely that it's attempting to trigger some buffer overflow based on the URI; I believe there were several security problems related to that some time ago.
At the very least, I've seen a lot of attempts like these in the access logs of my own servers. But in that case it was other infected servers trying to break into mine, not my servers reaching out.
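If you want to confirm where the requests originate, a quick sketch (assuming a Linux host and that the outbound requests go to port 80; the interface name is a guess) would be:
# show established TCP connections together with the owning process; look for the external IP among the peers
ss -ntp
# capture a few outbound HTTP requests to see the full URIs being sent
tcpdump -i eth0 -A -s 0 -c 20 'tcp dst port 80'
Once you know which process owns the connection, you know what to clean up or reconfigure.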
Does anyone know how to serve a web bundle so that it loads, rather than just downloading as a file?
Some disambiguation: there is a format called WebPackage (not to be confused with webpack), also called a Web Bundle. Files typically have the .wbn suffix. A bundle contains HTML and JS files and can be used to view websites offline, which is useful for e.g. archiving websites or making websites that work well with intermittent network access. Download the file once, and you have all the assets you need for at least basic operation of the site.
The standard on how to serve a .wbn file is here:
https://wicg.github.io/webpackage/draft-yasskin-wpack-bundled-exchanges.html
However, when I add the required headers in the web server, the .wbn file is just downloaded. If I drag the downloaded file onto my browser (google-chrome), it is displayed as the website it contains, so unless there is some very subtle bug in there I believe the format of the bundle is OK.
Here is a sample request:
Request URL: http://localhost/bundle/www-signed.wbn
Request Method: GET
Status Code: 200 OK
Remote Address: [::1]:80
Referrer Policy: strict-origin-when-cross-origin
and the server response:
Accept-Ranges: bytes
Connection: keep-alive
Content-Length: 4300
Content-Type: application/webbundle <-- Required by the standard
Date: Thu, 02 Sep 2021 12:00:24 GMT
ETag: "612ef7cb-10cc"
Last-Modified: Wed, 01 Sep 2021 03:47:23 GMT
Server: nginx/1.18.0 (Ubuntu)
X-Content-Type-Options: nosniff <-- required by the standard
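For reference, a minimal nginx sketch of how such headers could be produced (the location and the extension mapping are illustrative assumptions based on the sample request, not a verified recipe):
location /bundle/ {
    # map the .wbn extension to the MIME type required by the draft
    # (note: this replaces the inherited types table for this location)
    types { application/webbundle wbn; }
    # also required by the draft alongside the Content-Type
    add_header X-Content-Type-Options nosniff;
}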
If anyone has this working on a website or knows how to do it, I would love to have a look.
I had the same problem: the .wbn file was just downloaded instead of being displayed.
I had to enable the Web Bundles feature even though my Chrome version is 96+.
Apache seems to be sending back a 400 Bad Request for a simple non-existing collection resource.
I have a resource /test/junit/test.bin. I want to check if the collection /test/junit/test.bin/ exists (i.e. a collection of the same name)---according to RFC 2518, a collection (with a slash) and a non-collection are distinct. When I issue a PROPFIND on /test/junit/test.bin/, Apache responds with a 400 Bad Request.
Now, I understand that many people and implementations have blurred the lines between collections and non-collections, that is, whether a collection has to have a trailing slash. But whatever the case, the collection /test/junit/test.bin/ does not exist, and issuing a PROPFIND on a collection that does not exist is not a "bad request". Shouldn't Apache simply issue a standard 404 Not Found or 410 Gone? What was "bad" about my request?
PROPFIND /test/junit/test.bin/ HTTP/1.1
depth: 1
content-length: 102
authorization: BASIC XXXXX
host: example.com
<?xml version="1.0" encoding="UTF-8"?>
<D:propfind xmlns:D="DAV:">
<D:allprop />
</D:propfind>
HTTP/1.1 400 Bad Request
Date: Mon, 23 Jan 2012 15:30:37 GMT
Server: Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 SVN/1.7.2 mod_jk/1.2.28
Content-Length: 226
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
</body></html>
Here's what Apache puts in the logs:
[Mon Jan 23 14:31:09 2012] [error] [client XX.XXX.XX.XXX] Could not fetch resource information. [400, #0]
[Mon Jan 23 14:31:09 2012] [error] [client XX.XXX.XX.XXX] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0]
Yes, I understand that a resource of the same name exists and that I'm asking for properties of a collection, so we can say "that's why Apache is doing this". But that doesn't explain anything; it is simply a prediction of what Apache will do. I want to know why Apache considers it more appropriate to send back a 400 rather than a 404.
I was getting the same error with Apache 2.4 running as a WebDAV server on Windows 2012 and resolved it by disabling mod_negotiation:
#LoadModule negotiation_module modules/mod_negotiation.so
Here's a guess:
Apache will actually allow sub-paths to be sent along to resources. An example with PHP:
http://example.org/index.php/foobar
foobar will be sent as PATH_INFO along to index.php. My guess is that it's the same functionality that is now incorrectly sending back the HTTP/1.1 400.
An appropriate response would indeed be 404 Not Found, although because it's just an added slash, I would personally probably just map /test.bin/ to /test.bin.
A redirect to /test.bin would imho also be fine.
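Something like this with mod_alias would be the redirect variant (a sketch only, untested, and the path is just the one from the question):
# send the slash-suffixed URL back to the plain resource
RedirectMatch 301 ^/test/junit/test\.bin/$ /test/junit/test.bin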
Just so you know I'm not just anyone, I spend 90% of my professional time on HTTP and WebDAV, CalDAV, etc.
Please note: this is not a complaint about a shoddy CMS.
I was just toying with ApacheBench and got terrible results with our custom CMS; more exactly, I got:
Requests per second: 0.37 [#/sec] (mean)
When I ran another test with a plain PHP file, I got:
Requests per second: 4786.07 [#/sec] (mean)
Another test with a previous version of the CMS:
Requests per second: 6068.66 [#/sec] (mean)
The website(s) are working fine, with no problems detected; Google's Webmaster Tools reports our sites as faster than 80% of pages, which is fine, I think.
The test was:
ab -t 30 -c 10 http://example.com/
Maybe some kind of Apache problem? Bad .htaccess config, or similar?
Update:
I just ran a simple test with sockets and the results are similar: the page loads very, very slowly. If I run my script against another website, everything is fine.
Also, there's a small hint of a chunk-length problem. (Bad Apache headers, or line endings?)
The site is gzipped, and with verbose logging turned on, I see these lines in the response:
LOG: Response code = 200
LOG: header received:
HTTP/1.1 200 OK
Date: Tue, 04 Oct 2011 13:10:49 GMT
Server: Apache
Set-Cookie: PHPSESSID=ibnfoqir9fee2koirfl5mhm633; path=/
Expires: Sat, 26 Jul 1997 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Cache-Control: post-check=0, pre-check=0
Vary: Accept-Encoding
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
2ef6
This always appears at the same place, in the middle of the HTML source, followed by <!DOCTYPE HTML> again.
Please, help.
Update #2:
Just checked my HTTP headers with Rex Swain's HTTP Viewer and got these results:
HTTP/1.1·200·OK(CR)(LF)
Date:·Wed,·05·Oct·2011·08:33:51·GMT(CR)(LF)
Server:·Apache(CR)(LF)
Set-Cookie:·PHPSESSID=n88g3qcvv9p6irm1fo0qfse8m2;·path=/(CR)(LF)
Expires:·Sat,·26·Jul·1997·05:00:00·GMT(CR)(LF)
Cache-Control:·no-store,·no-cache,·must-revalidate(CR)(LF)
Pragma:·no-cache(CR)(LF)
Cache-Control:·post-check=0,·pre-check=0(CR)(LF)
Vary:·Accept-Encoding(CR)(LF)
Connection:·close(CR)(LF)
Transfer-Encoding:·chunked(CR)(LF)
Content-Type:·text/html;·charset=UTF-8(CR)(LF)
(CR)(LF)
Do you notice anything unusual?
If it works well with ordinary web browsers (as you mentioned in the comments), the CMS must be handling the requests from ApacheBench differently.
A quick checklist:
AFAIK ApacheBench just sends simple requests without any cookie handling, so try setting -C with a valid cookie (copy the values from a web browser).
Try to send exactly the same headers to the CMS as the web browser sends. Save a dump of a valid request with netcat, HttpFox or a packet sniffer and set the missing headers with -H.
Profile the CMS on the server while you're sending it requests with ApacheBench; maybe you'll find the bottleneck. Two poor man's error_log calls with a timestamp in the first and last lines of index.php (or the tested script's entry point) could show how fast the PHP script is and help calculate the overhead of the Apache HTTP Server and the network (see the sketch after this list).
If you run socket tests and browser tests from different machines, it could be a DNS issue (turn off HostnameLookups in Apache). Try running them from the same machine.
Try ab -k ... or ab -H "Connection: close" ....
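A minimal sketch of the poor man's profiling mentioned above (assuming index.php is the entry point; the log messages are arbitrary):
<?php
// first line of index.php: record when PHP starts handling the request
error_log('enter ' . $_SERVER['REQUEST_URI'] . ' ' . microtime(true));

// ... CMS bootstrap and page rendering ...

// last line of index.php: record when the script has finished
error_log('leave ' . $_SERVER['REQUEST_URI'] . ' ' . microtime(true));
The difference between the two timestamps is the time spent in PHP; the rest of the time ab measures is Apache plus the network.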
My guess is that the CMS does some costly initialization when it creates the session, and that happens when it processes the first request. Since ApacheBench does not send the cookies back to the CMS, it creates a new session for every request, and that is the cause of the slow responses.
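For example, to reuse an existing session instead of creating a new one on every request (the session ID here is just the one from the dump in the question; copy a fresh one from a browser):
ab -t 30 -c 10 -C "PHPSESSID=ibnfoqir9fee2koirfl5mhm633" http://example.com/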
A second guess is that the CMS handles the incoming HTTP headers differently, and the headers sent (or not sent) by ApacheBench trigger some costly/slow processing. This looks more likely given the report from Google's Webmaster Tools.
ApacheBench sends HTTP/1.0 requests, for example:
GET / HTTP/1.0
Host: localhost:9100
User-Agent: ApacheBench/2.3
Accept: */*
It looks to me like your server does not send any HTTP header about keep-alive settings, yet it assumes that the client uses keep-alive even when the client uses HTTP/1.0. That is not RFC-compliant behaviour:
From RFC 2616, 19.6.2 Compatibility with HTTP/1.0 Persistent Connections:
Some clients and servers might wish to be compatible with some
previous implementations of persistent connections in HTTP/1.0
clients and servers. Persistent connections in HTTP/1.0 are
explicitly negotiated as they are not the default behavior.
By default ApacheBench doesn't use keep-alive, so after the response arrives it waits for the socket to be closed. The server closes it after 15 seconds of idle time. Downloading the main page with wget also takes 15 seconds, and wget also uses HTTP/1.0 in its request.
I think it's a bug in the PHP code of the CMS, since ab works well on the same server with a plain PHP file. Anyway, you can work around it by using keep-alive connections (-k):
ab -k -t 30 -c 10 http://example.com/
or with explicitly disabling persistent connections:
ab -H "Connection: close" -t 30 -c 10 http://example.com/
but it's still a server-side issue, and your original ab command was right.
Please note that this bug probably affects only HTTP/1.0 clients (like ApacheBench and wget); users with regular browsers will not notice it.
We have a Tomcat server where we're trying to log the HTTP version the response is sent with. We've seen a few times that it seems to be HTTP/0.9, which kills the content (not supported, I guess?). We would like to get some stats on this using the access log. However, since the status line isn't prefixed by a header name, we cannot use %{xxx}o logging.
Is there a way to get this?
An example:
Response is:
HTTP/1.1 503 This application is not currently available
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=utf-8
Content-Length: 1090
Date: Wed, 12 May 2010 12:53:16 GMT
Connection: close
And we'd like to catch HTTP/1.1 (or alternatively the full line, HTTP/1.1 503 This application is not currently available).
Is this possible? We do not have access to the application being served, so we need to do this either as a Java filter or in the Tomcat access log, preferably the access log.
Enabling the <Valve className="org.apache.catalina.valves.RequestDumperValve"/> in server.xml writes out the request and response headers for each request.
Example:
19-May-2010 12:26:18 org.apache.catalina.valves.RequestDumperValve invoke
INFO: protocol=HTTP/1.1
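If you want it in the access log rather than the catalina log, it may also be worth trying the AccessLogValve: as far as I remember its pattern supports %H for the request protocol (verify this against your Tomcat version, and note it reflects the request line rather than the response's status line). A sketch, with an arbitrary log prefix:
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="protocol_access" suffix=".log"
       pattern="%h %t &quot;%r&quot; %s %H" />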
I've come across an issue where a web application has managed to create a cookie on the client, which, when submitted by the client to Apache, causes Apache to return the following:
HTTP/1.1 400 Bad Request
Date: Mon, 08 Mar 2010 21:21:21 GMT
Server: Apache/2.2.3 (Red Hat)
Content-Length: 7274
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
Size of a request header field exceeds server limit.<br />
<pre>
Cookie: ::: A REALLY LONG COOKIE ::: </pre>
</p>
<hr>
<address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address>
</body></html>
After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Now, don't ask me how the web application was able to do this; I was under the impression browsers were supposed to prevent this from happening. I've managed to come up with a solution to prevent the cookies from growing out of control again.
The issue I'm trying to tackle is how to reset the large cookie on the client if, every time the client tries to submit a request, Apache returns a 400 client error. I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.
Oh dear! I think you'll have to increase the LimitRequestFieldSize configuration option in Apache's httpd.conf to go any further, so that you can at least get as far as running the server-side script. Make sure it cleans up the cookies as quickly as possible before they start to grow again!
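A sketch of what that could look like (the value is arbitrary, just comfortably above the oversized cookie; the directive's default is 8190 bytes):
# httpd.conf: allow larger request header fields so the request reaches the application,
# which can then expire/replace the oversized cookie
LimitRequestFieldSize 16380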