Apache mod_dav 400 Bad Request for non-existent collection resource

Apache seems to be sending back a 400 Bad Request for a simple non-existent collection resource.
I have a resource /test/junit/test.bin. I want to check whether the collection /test/junit/test.bin/ exists (i.e. a collection of the same name); according to RFC 2518, a collection (with a trailing slash) and a non-collection are distinct resources. When I issue a PROPFIND on /test/junit/test.bin/, Apache responds with a 400 Bad Request.
Now, I understand that many people and implementations have blurred the line between collections and non-collections, that is, whether a collection has to end with a slash. But whatever the case, the collection /test/junit/test.bin/ does not exist, and issuing a PROPFIND on a collection that does not exist is not a "bad request". Shouldn't Apache simply return a standard 404 Not Found or 410 Gone? What was "bad" about my request?
PROPFIND /test/junit/test.bin/ HTTP/1.1
depth: 1
content-length: 102
authorization: BASIC XXXXX
host: example.com
<?xml version="1.0" encoding="UTF-8"?>
<D:propfind xmlns:D="DAV:">
<D:allprop />
</D:propfind>
HTTP/1.1 400 Bad Request
Date: Mon, 23 Jan 2012 15:30:37 GMT
Server: Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 SVN/1.7.2 mod_jk/1.2.28
Content-Length: 226
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
</body></html>
Here's what Apache puts in the logs:
[Mon Jan 23 14:31:09 2012] [error] [client XX.XXX.XX.XXX] Could not fetch resource information. [400, #0]
[Mon Jan 23 14:31:09 2012] [error] [client XX.XXX.XX.XXX] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0]
Yes, I understand that a non-collection resource of the same name exists and that I'm asking for the properties of a collection. So we can say "that's why Apache is doing this". But that doesn't explain anything; it is simply a prediction of what Apache will do. I want to know why Apache considers it more appropriate to send back a 400 rather than a 404.
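For reference, the request above can be reproduced from the command line; a cURL sketch (the host, credentials, and path are the placeholders from the capture above):
curl -v -X PROPFIND --user "user:password" \
  -H "Depth: 1" \
  -H "Content-Type: application/xml; charset=UTF-8" \
  --data '<?xml version="1.0" encoding="UTF-8"?>
<D:propfind xmlns:D="DAV:"><D:allprop /></D:propfind>' \
  http://example.com/test/junit/test.bin/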

I was getting the same error with Apache 2.4 running as a WebDAV server on Windows 2012, and resolved it by disabling mod_negotiation.so:
#LoadModule negotiation_module modules/mod_negotiation.so

Here's a guess:
Apache allows sub-paths to be passed along to resources. An example with PHP:
http://example.org/index.php/foobar
Here foobar is passed to index.php as PATH_INFO. My guess is that the same mechanism is what now incorrectly sends back the HTTP/1.1 400.
An appropriate response would indeed be 404 Not Found, although because it's just an added slash, I would personally probably just map /test.bin/ to /test.bin.
A redirect to /test.bin would IMHO also be fine.
Just so you know I'm not just anyone: I spend 90% of my professional time on HTTP, WebDAV, CalDAV, etc.
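A sketch of the trailing-slash mapping suggested above, done as a redirect with mod_rewrite (per-directory/.htaccess context; this is the common recipe, and whether DAV requests traverse such rewrites depends on the setup):
# Redirect /anything/ to /anything when the target is not a directory
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [R=301,L]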

Related

Serving static files in Apache returns 500 if file doesn't exist

I am running on CentOS 7 and having a strange but very minor issue: if a static file such as a CSS file or an image is missing from the filesystem, Apache returns a 500 instead of a 404.
I've tried a few things to narrow down the issue, such as temporarily disabling SELinux, mod_security, and mod_pagespeed, but the logs give no indication as to what rule would be causing it to return a 500 instead of gracefully returning a 404.
Does anyone have any ideas for finding the cause of the 500 errors being thrown?
Edit (added log samples):
modsec_audit.log
--f9f2f74b-F--
HTTP/1.0 500 Internal Server Error
Content-Length: 0
Connection: close
Content-Type: text/html; charset=UTF-8
access_log
[09/Jun/2015:19:38:37 -0400] "GET /subfolder/images/BTN_red_bullet.gif HTTP/1.1" 500 - "http://example.com/subfolder/" "Serf/1.1.0 mod_pagespeed/1.9.32.3-4448"
error_log
[Tue Jun 09 19:09:28.612497 2015] [pagespeed:warn] [pid 18574] [mod_pagespeed 1.9.32.3-4448 #18574] Fetch timed out: http://example.com/subfolder/images/BTN_red_bullet.gif (connecting to:x.x.x.x) (1) waiting for 50 ms
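Note that the error_log entry above shows mod_pagespeed itself fetching the missing image, and the access_log line shows a Serf/mod_pagespeed user agent. One way to test whether mod_pagespeed is at fault (a sketch, using the module's own directive in the vhost config):
# Disable mod_pagespeed, reload Apache, then re-request the missing
# file and see whether the status changes from 500 to 404
ModPagespeed off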

Strange HTTP 414 request captured by IDS

My IDS is flagging a lot of HTTP 414 responses, i.e. the request URI is too large. The interesting part is that all of these requests are being sent from my server to an external IP. I looked into the Apache access and error logs and couldn't find anything related to a 414 status code; even my IDS doesn't give much info beyond the following:
HTTP/1.1 414 Request-URI Too Large
Server: Apache/2.2.22 (Debian) PHP/5.4.4-14+deb7u8 mod_ssl/2.2.22 OpenSSL/1.0.1e
Vary: Accept-Encoding
Content-Type: text/html; charset=iso-8859-1
Accept-Ranges: bytes
Date: Thu, 08 Jan 2015 23:02:47 GMT
X-Varnish: 213113893
Age: 0
Via: 1.1 varnish
Connection: keep-alive
X-Cache: MISS
Content-Length: 250
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>414 Request-URI Too Large</title>
</head><body>
<h1>Request-URI Too Large</h1>
<p>The requested URL's length exceeds the capacity
limit for this server.<br />
</p>
</body></html>
I just want to know what my server is trying to send. Is it a configuration issue?
Are you certain your server generates these requests? If it does, you should also be able to figure out what the target server is. Does it hit many external IPs randomly? That should tell you a lot more.
This could be the behavior of a worm that infected your server and is attempting to find other vulnerable servers. "Request-URI too large" makes it likely that it's attempting to trigger some buffer overflow based on the URI; I believe there were several security problems related to that some time ago.
At the very least, I've seen a lot of attempts like those against my own servers in my access logs. But in that case it was other infected servers trying to break into mine, not my servers reaching out.
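To see exactly what the server is sending, you can capture its outbound traffic on the box itself; a minimal tcpdump sketch (the interface and IP are placeholders):
# Print full packet contents (-A, snaplen 0) for HTTP traffic from
# this host to the suspicious external IP seen by the IDS
tcpdump -i eth0 -A -s 0 'tcp port 80 and dst host 203.0.113.10'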

LOCK PUT UNLOCK with cURL / WebDAV

My idea is to LOCK a file on an Apache/WebDAV server, PUT an updated version of it on the server, and UNLOCK it afterwards.
I just tried the following with cadaver:
create a file A.txt with content "a file"
GET file A.txt, which yields "a file"
edit A.txt to be "updated file" and save it (in cadaver)
GET file A.txt, which still yields "a file"
close the edit (VIM) in cadaver
GET file A.txt, which now yields "updated file"
I guess that internally cadaver LOCKs the file, GETs it, and changes it locally. Then it PUTs it and UNLOCKs it.
QUESTION: how can I do this with curl?
PROBLEM: When the connection is slow and I GET a file whose PUT is not yet complete, I only get the part uploaded so far. I would like to get the old version as long as the new one isn't complete.
TRIED: I tried the following to LOCK the file by hand (i.e. with cURL):
curl -v -X LOCK --user "user:password" http://myServer/newFile
What I get is:
* About to connect() to myServer port 80 (#0)
* Trying xx.xx.xxx.xxx... connected
* Connected to myServer (xx.xx.xxx.xxx) port 80 (#0)
* Server auth using Basic with user 'user'
> LOCK /newFile HTTP/1.1
> Authorization: Basic xxxxxxxxxxxxxxxxx
> User-Agent: curl/7.21.6 (x86_64-pc-linux-gnu) libcurl/7.21.6 OpenSSL/1.0.0e zlib/1.2.3.4 libidn/1.22 librtmp/2.3
> Host: myServer
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Date: Wed, 02 May 2012 15:20:55 GMT
< Server: Apache/2.2.3 (CentOS)
< Content-Length: 226
< Connection: close
< Content-Type: text/html; charset=iso-8859-1
<
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
</body></html>
* Closing connection #0
Looking at the apache log file I find:
[Wed May 02 15:20:55 2012] [error] [client xx.xx.xxx.xxx] The lock refresh for /newFile failed because no lock tokens were specified in an "If:" header. [400, #0]
[Wed May 02 15:20:55 2012] [error] [client xx.xx.xxx.xxx] (20)Not a directory: No locktokens were specified in the "If:" header, so the refresh could not be performed. [400, #103]
Thanks for any hint!!
UPDATE: I added my problem description. Cheers!
The LOCK method requires a body which contains an XML description of the lock you want to take out. Your cURL test didn't include this body, hence the 400 error response.
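For reference, a well-formed LOCK request carries a lockinfo body; a cURL sketch (the server name, credentials, and timeout are placeholders):
curl -v -X LOCK --user "user:password" \
  -H "Content-Type: application/xml; charset=utf-8" \
  -H "Depth: 0" \
  -H "Timeout: Second-600" \
  --data '<?xml version="1.0" encoding="utf-8"?>
<D:lockinfo xmlns:D="DAV:">
  <D:lockscope><D:exclusive/></D:lockscope>
  <D:locktype><D:write/></D:locktype>
  <D:owner><D:href>user@example.com</D:href></D:owner>
</D:lockinfo>' \
  http://myServer/newFile
The response returns the lock token in a Lock-Token header; the subsequent PUT must present it in an If header, e.g. -H 'If: (<opaquelocktoken:...>)', and the final UNLOCK presents it in a Lock-Token header.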
But if I understand your question correctly, you want to:
LOCK
PUT
UNLOCK
If that's true, why would you bother with the LOCK and UNLOCK? Just do the PUT! Locks would only be useful if you want to carry out multiple operations while you are holding the lock and avoid having another client see the object in its partially modified state or (perhaps worse) modify the object concurrently with you.
A typical case where locking can be useful is a read-modify-write cycle: you want to GET the object, modify it locally, and PUT it back, but disallow another client from making a competing change between the time you GET it and the time you PUT it. However, for dealing with this specific case, HTTP offers a different method of resolving the issue, without using locks (which are ill-suited for a stateless protocol like HTTP):
GET the object
Modify it locally
PUT the object back with an If-Match header that contains the original ETag returned in step 1
If the PUT results in a 412 error, go back to step 1. Otherwise, you are done.
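A sketch of that cycle with cURL (the URL is a placeholder; assumes the server returns an ETag on GET):
# 1. GET the object and note the ETag response header, e.g. "abc123"
curl -sD - -o A.txt http://myServer/A.txt
# 2. Modify A.txt locally, then PUT it back conditionally
curl -X PUT -H 'If-Match: "abc123"' --data-binary @A.txt http://myServer/A.txt
# A 412 Precondition Failed response means a competing change won; retry from step 1.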
UPDATE: based on your updated question, I see that you get a truncated, half-uploaded version of the new file if you do a GET concurrently with a PUT. This is unfortunate. The server should treat the PUT as atomic with respect to other requests: other clients should see either the old version or the new version, never a state in between. There's nothing you should need to do on the client end to make this true; it should be fixed in the server.

Is it possible to log the first line of the response in apache?

We have a Tomcat server where we're trying to log the HTTP version the response is sent with. We've seen a few times that it seems to be HTTP/0.9, which kills the content (not supported, I guess?). We would like to get some stats on this using the access log in Apache. However, since the status line isn't a header field prefixed by a name, we cannot use the %{xxx}o logging.
Is there a way to get this?
An example:
Response is:
HTTP/1.1 503 This application is not currently available
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=utf-8
Content-Length: 1090
Date: Wed, 12 May 2010 12:53:16 GMT
Connection: close
And we'd like to catch HTTP/1.1 (or alternatively the whole line, HTTP/1.1 503 This application is not currently available).
Is this possible? We do not have access to the application being served, so we need to do this either as a Java filter or in the Tomcat access log, preferably in the access log.
Enabling the <Valve className="org.apache.catalina.valves.RequestDumperValve"/> in server.xml writes out the request and response headers for each request.
Example:
19-May-2010 12:26:18 org.apache.catalina.valves.RequestDumperValve invoke
INFO: protocol=HTTP/1.1
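For completeness, a sketch of where the valve goes (RequestDumperValve exists up to Tomcat 6; later versions replaced it with RequestDumperFilter):
<!-- inside the <Host> or <Engine> element of conf/server.xml -->
<Valve className="org.apache.catalina.valves.RequestDumperValve"/>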

How to delete a large cookie that causes Apache to 400

I've come across an issue where a web application has managed to create a cookie on the client which, when submitted by the client to Apache, causes Apache to return the following:
HTTP/1.1 400 Bad Request
Date: Mon, 08 Mar 2010 21:21:21 GMT
Server: Apache/2.2.3 (Red Hat)
Content-Length: 7274
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
Size of a request header field exceeds server limit.<br />
<pre>
Cookie: ::: A REALLY LONG COOKIE ::: </pre>
</p>
<hr>
<address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address>
</body></html>
After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Now, don't ask me how the web application was able to do this; I was under the impression browsers were supposed to prevent it from happening. I've since come up with a fix to prevent the cookies from growing out of control again.
The issue I'm trying to tackle is how to reset the large cookie on the client if every request the client submits makes Apache return a 400 client error. I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.
Oh dear! I think you'll have to increase the LimitRequestFieldSize option in Apache's httpd.conf to get any further, so that the request can reach the server-side script. Make sure it cleans up the cookies as quickly as possible, before they start to grow again!
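A sketch of that workaround (16380 is an example value; Apache's default LimitRequestFieldSize is 8190 bytes):
# httpd.conf: accept larger request header fields so the request can
# reach the application instead of being rejected with a 400
LimitRequestFieldSize 16380
Once requests get through, the application can expire the oversized cookie by re-setting it with a date in the past (the cookie name here is hypothetical):
Set-Cookie: bigcookie=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/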