My idea is to LOCK a file on an Apache/WebDAV server, PUT an updated version of it on the server, and UNLOCK it afterwards.
I just tried the following with cadaver:
create a file A.txt with content "a file"
GET file A.txt, which yields "a file"
edit A.txt to be "updated file" and save it (in cadaver)
GET file A.txt, which still yields "a file"
close the edit (VIM) in cadaver
GET file A.txt, which yields "updated file"
I guess internally cadaver LOCKs the file, GETs it and changes it locally. Then it PUTs it and UNLOCKs it.
QUESTION: how can I do this with curl?
PROBLEM: When the connection is slow and a PUT of a file has not yet completed, a GET returns only the part that has been uploaded so far. I would like to get the old version as long as the new one isn't complete.
TRIED: I tried the following to LOCK the file by hand (i.e. with cURL):
curl -v -X LOCK --user "user:password" http://myServer/newFile
What I get is:
* About to connect() to myServer port 80 (#0)
* Trying xx.xx.xxx.xxx... connected
* Connected to myServer (xx.xx.xxx.xxx) port 80 (#0)
* Server auth using Basic with user 'user'
> LOCK /newFile HTTP/1.1
> Authorization: Basic xxxxxxxxxxxxxxxxx
> User-Agent: curl/7.21.6 (x86_64-pc-linux-gnu) libcurl/7.21.6 OpenSSL/1.0.0e zlib/1.2.3.4 libidn/1.22 librtmp/2.3
> Host: myServer
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Date: Wed, 02 May 2012 15:20:55 GMT
< Server: Apache/2.2.3 (CentOS)
< Content-Length: 226
< Connection: close
< Content-Type: text/html; charset=iso-8859-1
<
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
</body></html>
* Closing connection #0
Looking at the Apache log file I find:
[Wed May 02 15:20:55 2012] [error] [client xx.xx.xxx.xxx] The lock refresh for /newFile failed because no lock tokens were specified in an "If:" header. [400, #0]
[Wed May 02 15:20:55 2012] [error] [client xx.xx.xxx.xxx] (20)Not a directory: No locktokens were specified in the "If:" header, so the refresh could not be performed. [400, #103]
Thanks for any hints!
UPDATE: I added my problem description. Cheers!
The LOCK method requires a body which contains an XML description of the lock you want to take out. Your cURL test didn't include this body, hence the 400 error response.
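For reference, here is a sketch of the full LOCK / PUT / UNLOCK cycle with cURL, reusing the placeholders from the question. The lock.xml file name is arbitrary, and the opaquelocktoken values are stand-ins: the real token comes back in the Lock-Token header of the LOCK response.

curl -v -X LOCK --user "user:password" \
     -H "Content-Type: text/xml; charset=utf-8" \
     --data-binary @lock.xml \
     http://myServer/newFile

where lock.xml contains:

<?xml version="1.0" encoding="utf-8"?>
<D:lockinfo xmlns:D="DAV:">
  <D:lockscope><D:exclusive/></D:lockscope>
  <D:locktype><D:write/></D:locktype>
  <D:owner><D:href>user</D:href></D:owner>
</D:lockinfo>

Then PUT with the returned token in an If header, and UNLOCK with the token in a Lock-Token header:

curl -v -T newFile --user "user:password" \
     -H "If: (<opaquelocktoken:...>)" \
     http://myServer/newFile

curl -v -X UNLOCK --user "user:password" \
     -H "Lock-Token: <opaquelocktoken:...>" \
     http://myServer/newFile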
But if I understand your question correctly, you want to:
LOCK
PUT
UNLOCK
If that's true, why would you bother with the LOCK and UNLOCK? Just do the PUT! Locks would only be useful if you want to carry out multiple operations while you are holding the lock and avoid having another client see the object in its partially modified state or (perhaps worse) modify the object concurrently with you.
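For example, a plain upload with the question's placeholders is just:

curl -v -T A.txt --user "user:password" http://myServer/A.txt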
A typical case where locking can be useful is a read-modify-write cycle: you want to GET the object, modify it locally, and PUT it back, but disallow another client from making a competing change between the time you GET it and the time you PUT it. However, for dealing with this specific case, HTTP offers a different method of resolving the issue, without using locks (which are ill-suited for a stateless protocol like HTTP):
GET the object
Modify it locally
PUT the object back with an If-Match header that contains the original ETag returned in step 1
If the PUT results in a 412 error, go back to step 1. Otherwise, you are done.
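A rough sketch of that cycle with cURL, reusing the question's placeholders (the header parsing below is just one way to pull out the ETag):

# 1. GET the object and keep its ETag
curl -s -D headers.txt -o A.txt --user "user:password" http://myServer/A.txt
ETAG=$(grep -i '^ETag:' headers.txt | cut -d' ' -f2 | tr -d '\r')

# 2. modify A.txt locally ...

# 3. PUT it back only if nobody changed it in the meantime
curl -v -T A.txt --user "user:password" -H "If-Match: $ETAG" http://myServer/A.txt

# a 412 Precondition Failed response means a competing change happened: go back to step 1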
UPDATE: Based on your updated question, I see that you get a partial, truncated, or half-uploaded version of the new file if you do a GET concurrently with a PUT. This is unfortunate. The server should treat the PUT as atomic with respect to other requests: other clients should see either the old version or the new version, never a state in between. There's nothing you should need to do from the client end to make this true; it should be fixed in the server.
I can't access a specific file type on my customer's server (production).
Here are the results with cURL:
curl "http://domain.tld/fonts/glyphicons-halflings-regular.eot" -I
HTTP/1.1 200 OK
Date: Tue, 28 Jul 2015 12:06:23 GMT
Server: Apache/2.2.15 (Red Hat)
Last-Modified: Tue, 19 May 2015 15:32:20 GMT
ETag: "14023-4f42-516710421e900"
Accept-Ranges: bytes
Content-Length: 20290
Connection: close
Content-Type: application/vnd.ms-fontobject
So the file is there.
But when I try to get the file content:
curl "http://domain.tld/fonts/glyphicons-halflings-regular.eot"
curl: (56) Recv failure: Connection was reset
I can't (yet) access the customer server, so I'm trying to guess what's wrong here.
What is working so far:
curl "https://domain.tld/fonts/glyphicons-halflings-regular.eot" --insecure
It works over HTTPS, even though there is no valid certificate (which is why I use --insecure). I get the file content.
The customer can get the file if they access it from a local URL.
I can access all other files on the server, even in the fonts directory.
I can't access any .eot files, even in other directories.
So I think it is one of these two problems:
- Apache configuration / .htaccess problem.
- Proxy / reverse proxy problem.
What do you think?
What other tests should I run?
What information should I ask the customer for?
Thanks.
OK, here is the cause:
The customer's firewall blocks .eot file content.
A vulnerability in Embedded Web Fonts Could Allow Remote Code Execution.
http://www.checkpoint.com/defense/advisories/public/2006/cpai-2006-010.html
As the .eot files are used by IE8 and lower, and those browser versions are not required by the customer, I've simply removed all references to .eot files.
Another solution would be to ask the customer's firewall admins to add an exception, as the severity is low.
I can download a test.pdf file using the GET request below, sent from C code I've written for a microcontroller to communicate with a server:
GET /TestFolder/test.pdf HTTP/1.1\r\n Host: www.xyz.com\r\n\r\n
The file test.pdf is located in the folder TestFolder on the host xyz.com.
I wanted to test the program against Amazon S3, so I created an account, uploaded the data, made the file and folder public, and added a policy to the S3 bucket so the objects can be accessed. When I send the GET request above to the S3 host s3-us-west-2.amazonaws.com, I get an error after the socket is connected and I have the server's IP. The error message from S3 says:
Response error: HTTP/1.1 400 Bad Request
Transfer-Encoding: chunked
Date: Mon, .. 2015 04:15:02 GMT
Connection: close
Server: AmazonS3
I thought to remove the extra \r\n from the GET request, and sent this to S3:
GET /TestFolder/test.pdf HTTP/1.1\r\n Host: s3-us-west-2.amazonaws.com\r\n
This time, the request hangs with no response. I don't get any error message; the socket is connected as usual and I see the server's IP.
I'd appreciate any suggestion or input on where the problem could be. Obviously the GET request works for file downloads when the file is public on other sites. Has anyone encountered this kind of issue with an HTTP/1.1 GET request? I can access the file from AWS S3 in my browser by typing the link.
You need \r\n twice at the end of the request, otherwise the server thinks it's still waiting for more headers. The problem here is that you are not specifying the bucket, which you can do either in the path or in the Host: header; you have to do it in one or the other, but not both.
GET /your-bucket-name/TestFolder/test.pdf HTTP/1.1\r\n
Host: s3-us-west-2.amazonaws.com\r\n\r\n
...or...
GET /TestFolder/test.pdf HTTP/1.1\r\n
Host: your-bucket-name.s3-us-west-2.amazonaws.com\r\n\r\n
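As a quick sanity check from a workstation, the same request can be made with curl (your-bucket-name remains a placeholder, as above):

curl -v "http://your-bucket-name.s3-us-west-2.amazonaws.com/TestFolder/test.pdf" -o test.pdf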
Please note: this is not a complaint about a shoddy CMS.
I was just toying with ApacheBench and got terrible results with our custom CMS. More exactly, I got:
Requests per second: 0.37 [#/sec] (mean)
When I ran another test with a plain PHP file, I got:
Requests per second: 4786.07 [#/sec] (mean)
Another test with a previous version of the CMS:
Requests per second: 6068.66 [#/sec] (mean)
The website(s) are working fine, no problems detected; Google's Webmaster Tools reports our sites as faster than 80% of pages, which is fine, I think.
The test was:
ab -t 30 -c 10 http://example.com/
Maybe some kind of Apache problem? Bad .htaccess config, or similar?
Update:
I just ran a simple test with sockets and the results are similar: the page loads very, very slowly. If I run my script against another website, everything is fine.
Also, there's a small hint of a chunk-length problem. (Bad Apache headers, or line endings?)
The site is gzipped, and with verbose logging turned on, I see these lines in the response:
LOG: Response code = 200
LOG: header received:
HTTP/1.1 200 OK
Date: Tue, 04 Oct 2011 13:10:49 GMT
Server: Apache
Set-Cookie: PHPSESSID=ibnfoqir9fee2koirfl5mhm633; path=/
Expires: Sat, 26 Jul 1997 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Cache-Control: post-check=0, pre-check=0
Vary: Accept-Encoding
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
2ef6
This always appears at the same place, in the middle of the HTML source, then <!DOCTYPE HTML> again.
Please, help.
Update #2:
Just checked my HTTP headers with Rex Swain's HTTP Viewer and got these results:
HTTP/1.1·200·OK(CR)(LF)
Date:·Wed,·05·Oct·2011·08:33:51·GMT(CR)(LF)
Server:·Apache(CR)(LF)
Set-Cookie:·PHPSESSID=n88g3qcvv9p6irm1fo0qfse8m2;·path=/(CR)(LF)
Expires:·Sat,·26·Jul·1997·05:00:00·GMT(CR)(LF)
Cache-Control:·no-store,·no-cache,·must-revalidate(CR)(LF)
Pragma:·no-cache(CR)(LF)
Cache-Control:·post-check=0,·pre-check=0(CR)(LF)
Vary:·Accept-Encoding(CR)(LF)
Connection:·close(CR)(LF)
Transfer-Encoding:·chunked(CR)(LF)
Content-Type:·text/html;·charset=UTF-8(CR)(LF)
(CR)(LF)
Do you notice anything unusual?
If it works well with ordinary web browsers (as you mentioned in the comments), then the CMS handles requests from ApacheBench differently.
A quick checklist:
AFAIK ApacheBench just sends simple requests without any cookie handling, so try setting -C with a valid cookie (copy the values from a web browser); see the sketch after this checklist.
Try to send exactly the same headers to the CMS as the web browser sends. Save a dump of a valid request with netcat, HttpFox or a packet sniffer, and set the missing headers with -H.
Profile the CMS on the server while you're sending it a request with ApacheBench; maybe you'll find the bottleneck. Two poor man's error_log calls with a timestamp in the first and last lines of index.php (or the tested script's entry point) can show how fast the PHP script is and help calculate the overhead of the Apache HTTP Server and the network.
If you run socket tests and browser tests from different machines, it could be a DNS issue (turn off HostnameLookups in Apache). Try to run them from the same machine.
Try ab -k ... or ab -H "Connection: close" ....
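For example (a sketch; the cookie value and extra headers are placeholders you would copy from a real browser session):

ab -t 30 -c 10 \
   -C "PHPSESSID=ibnfoqir9fee2koirfl5mhm633" \
   -H "Accept-Encoding: gzip, deflate" \
   -H "User-Agent: Mozilla/5.0" \
   http://example.com/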
I guess the CMS does some costly initialization when it creates the session, and this happens while it processes the first request. Since ApacheBench does not send cookies back to the CMS, it creates a new session for every request, and that is the cause of the slow answers.
A second guess is that the CMS handles the incoming HTTP headers differently, and the headers sent (or not sent) by ApacheBench trigger some costly/slow processing. That seems more plausible given the report from Google's Webmaster Tools.
ApacheBench sends HTTP/1.0 requests, for example:
GET / HTTP/1.0
Host: localhost:9100
User-Agent: ApacheBench/2.3
Accept: */*
It looks to me like your server does not send any HTTP header about keep-alive settings, but assumes the client uses keep-alive even though the client speaks HTTP/1.0. That is not RFC-compliant behaviour:
From RFC 2616, 19.6.2 Compatibility with HTTP/1.0 Persistent Connections:
Some clients and servers might wish to be compatible with some
previous implementations of persistent connections in HTTP/1.0
clients and servers. Persistent connections in HTTP/1.0 are
explicitly negotiated as they are not the default behavior.
By default ApacheBench doesn't use keep-alive, so after the response arrives it waits for the socket to be closed. The server closes it after 15 seconds of idle time. Downloading the main page with wget also takes 15 seconds, and wget also uses HTTP/1.0 for the request.
I think it's a bug in the PHP code of the CMS, since ab works well on the same server with a plain PHP file. Anyway, you can work around it by using keep-alive connections (-k):
ab -k -t 30 -c 10 http://example.com/
or with explicitly disabling persistent connections:
ab -H "Connection: close" -t 30 -c 10 http://example.com/
but it's still a server-side issue, and your original ab command is right.
Please note that this bug probably affects only HTTP/1.0 clients (like ApacheBench and wget); users with regular browsers will not notice it.
We have a Tomcat server where we're trying to log the HTTP version the response is sent with. We've seen a few times that it seems to be HTTP/0.9, which kills the content (not supported, I guess?). We would like to get some stats on this by using the access log in Apache. However, since the status line isn't prefixed by anything, we cannot use the %{xxx}o logging.
Is there a way to get this?
An example:
Response is:
HTTP/1.1 503 This application is not currently available
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=utf-8
Content-Length: 1090
Date: Wed, 12 May 2010 12:53:16 GMT
Connection: close
And we'd like to catch the HTTP/1.1 part (or, alternatively, the whole HTTP/1.1 503 This application is not currently available line).
Is this possible? We do not have access to the application being served, so we need to do this either as a Java filter or in the Tomcat access log, preferably in the access log.
Enabling the <Valve className="org.apache.catalina.valves.RequestDumperValve"/> in server.xml writes out the request and response headers for each request.
Example:
19-May-2010 12:26:18 org.apache.catalina.valves.RequestDumperValve invoke
INFO: protocol=HTTP/1.1
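A sketch of where the valve goes in conf/server.xml (the Host attributes shown are just the stock defaults, not anything specific to your setup):

<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
    <!-- dumps request and response details, including the protocol, to the Tomcat log -->
    <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
</Host>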
I need to either find a file in which the version is encoded, or a way of polling the server across the web so that it reveals its version. The server is running at a host who will not provide me command-line access, although I can browse the install location via FTP.
I have tried HEAD and do not get a version number reported.
If I try a missing page to get a 404 it is intercepted, and a stock page is returned which has no server information on it. I guess that points to the server being hardened.
Still no closer...
I put a PHP file up as suggested, but I can't browse to it and can't quite figure out the URL path that would load it. In any case I am getting plenty of access denied messages and the same stock 404 page. I am taking some comfort from knowing that the server is quite robustly protected.
The method
Connect to port 80 on the host and send it
HEAD / HTTP/1.0
This needs to be followed by carriage-return + line-feed twice
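If you'd rather not type it interactively, the same request can be scripted, for example with netcat (your.webserver.com is a placeholder):

printf 'HEAD / HTTP/1.0\r\n\r\n' | nc your.webserver.com 80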
You'll get back something like this
HTTP/1.1 200 OK
Date: Fri, 03 Oct 2008 12:39:43 GMT
Server: Apache/2.2.9 (Ubuntu) DAV/2 SVN/1.5.0 PHP/5.2.6-1ubuntu4 with Suhosin-Patch mod_perl/2.0.4 Perl/v5.10.0
Last-Modified: Thu, 02 Aug 2007 20:50:09 GMT
ETag: "438118-197-436bd96872240"
Accept-Ranges: bytes
Content-Length: 407
Connection: close
Content-Type: text/html; charset=UTF-8
You can then extract the Apache version from the Server: header.
Typical tools you can use
You could use the HEAD utility which comes with a full install of Perl's LWP library, e.g.
HEAD http://your.webserver.com/
Or, use the curl utility, e.g.
curl --head http://your.webserver.com/
You could also use a browser extension that lets you view server headers, such as Live HTTP Headers or Firebug for Firefox, or Fiddler for IE.
Stuck with Windows?
Finally, if you're on Windows and have nothing else at your disposal, open a command prompt (Start Menu -> Run, type "cmd" and press return), and then type this:
telnet your.webserver.com 80
Then type (carefully, your characters won't be echoed back)
HEAD / HTTP/1.0
Press return twice and you'll see the server headers.
Other methods
As mentioned by cfeduke and Veynom, the server may be set to return limited information in the Server: header. Try uploading a PHP script to your host with this in it:
<?php phpinfo() ?>
Request the page with a web browser and you should see the Apache version reported there.
You could also try using PHPShell to have a poke around; try a command like:
/usr/sbin/apache2 -V
httpd -v will give you the version of Apache running on your server (if you have SSH/shell access).
The output should be something like this:
Server version: Apache/2.2.3
Server built: Oct 20 2011 17:00:12
As has been suggested, you can also run apachectl -v, which will give you the same output but is supported by more flavours of Linux.
Warning: some Apache servers do not always send their version number in response to HEAD, as in this case:
HTTP/1.1 200 OK
Date: Fri, 03 Oct 2008 13:09:45 GMT
Server: Apache
X-Powered-By: PHP/5.2.6RC4-pl0-gentoo
Set-Cookie: PHPSESSID=a97a60f86539b5502ad1109f6759585c; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Connection: close
Content-Type: text/html
Connection to host lost.
If PHP is installed then indeed, just use the phpinfo command:
<?php phpinfo(); ?>
Rarely, a hardened HTTP server is configured to give no server information, or misleading server information. In those scenarios, if the server has PHP enabled, you can add:
<?php phpinfo(); ?>
in a file and browse to it and look for the
_SERVER["SERVER_SOFTWARE"]
entry. This is susceptible to the same hardening (lack of information or misleading information), though I would imagine it's not altered often, because this method first requires access to the machine to create the PHP file.
The level of version information given out by an Apache server can be configured by the ServerTokens setting in its configuration.
I believe there is also a setting that controls whether the version appears in server error pages, although I can't remember what it is off the top of my head. If you don't have direct access to the server, and the server administrator is competent and doesn't want you to know the version they're running... I think you may be SOL.
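For reference, these are the directives involved on the server side (a sketch of httpd.conf; ServerSignature is, as far as I know, the setting that controls whether the version shows up on error pages):

# report only "Server: Apache", with no version details
ServerTokens Prod
# no version footer on server-generated error pages
ServerSignature Off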
Telnet to the host at port 80.
Type:
get / http1.1
::enter::
::enter::
It is kind of an HTTP request, but it's not valid, so the 500 error it gives you will probably contain the information you want. The blank lines at the end are important; otherwise it will just seem to hang.
If they have error pages enabled, you can go to a non-existent page and look at the bottom of the 404 page.
Your best option is through PHP:
Version requests from the client side cannot be trusted, since your Apache could be configured with ServerTokens Prod and ServerSignature Off. See: http://www.petefreitag.com/item/419.cfm
In the default installation, call a page that doesn't exist and you get an error with the version at the end:
Object not found!
The requested URL was not found on this server. If you entered the URL manually please
check your spelling and try again.
If you think this is a server error, please contact the webmaster.
Error 404
localhost
10/03/08 14:41:45
Apache/2.2.8 (Win32) DAV/2 mod_ssl/2.2.8 OpenSSL/0.9.8g mod_autoindex_color PHP/5.2.5
Simply use something like the following; the server string should already be there:
<?php
if (isset($_SERVER['SERVER_SOFTWARE'])) {
    echo $_SERVER['SERVER_SOFTWARE'];
}
?>
Use this PHP script:
<?php
$version = apache_get_version();
echo "$version\n";
?>
See apache_get_version() in the PHP manual.