Apache mod_cache_disk and AH00717: Premature end of cache headers

I'm using apache-2.4.53 and having problems with caching. Periodically I see errors related to "premature end of cache headers" and don't know how to troubleshoot them. This is on Fedora 34. The site is behind Cloudflare.
[Wed Jul 06 04:23:49.577237 2022] [cache_disk:error] [pid 3202400:tid 3202451] (70014)End of file found: [client 162.158.190.138:47866] AH00717: Premature end of cache headers.
[Wed Jul 06 04:23:49.577247 2022] [cache_disk:debug] [pid 3202400:tid 3202451] mod_cache_disk.c(883): [client 162.158.190.138:47866] AH02987: Error reading response headers from /var/cache/httpd/W_#/Ro6/7ihAG5M_Eyw0t7jA.header.vary/#Iu/98u/9#ot3lTARaKl3p8g.header for https://example.com:443/index.php?
The 162.158.190.138 is a cloudflare address.
There seems to be an ongoing Apache bug related to this issue, open since 2016, but I don't know whether it's the same thing. I don't know how to reproduce it. Where do I start looking?
I can correlate the lines from the error_log with the access_log based on time, but I can't be sure they're directly related. There were three requests during that same second, all of which were bots. One was a 200, one was a 301 and one was a 404 for a file that was never there.
The file the error_log references is there on the filesystem:
find . -name \*7ihAG5M_Eyw0t7jA\*
./W_#/Ro6/7ihAG5M_Eyw0t7jA.header.vary
./W_#/Ro6/7ihAG5M_Eyw0t7jA.header
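For what it's worth, the stored URL and response headers inside those binary .header files can usually be recovered as plain text, which helps confirm which request a truncated entry belongs to. A sketch using a stand-in file (on the server you would read the real path from the error_log; strings(1) works too):

```shell
# AH00717 means mod_cache_disk hit end-of-file while reading a .header
# entry, i.e. the file on disk is truncated. The file is binary, but the
# cached URL and headers survive as printable text. The file below is a
# stand-in built for illustration, not a real cache entry.
printf 'https://example.com:443/index.php?\0\0Cache-Control: max-age=800\0' > demo.header

# Replace every non-printable byte with a newline and drop empty lines,
# leaving just the readable fragments.
tr -c '[:print:]' '\n' < demo.header | grep -v '^$'
```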
Here is the bug report from 2016.
https://bz.apache.org/bugzilla/show_bug.cgi?id=59744
Here are the cache options from the virtual host config:
CacheQuickHandler off
CacheLock on
CacheLockPath /tmp/mod_cache-lock
CacheLockMaxAge 5
CacheIgnoreHeaders Set-Cookie
CacheRoot "/var/cache/httpd"
# Enable the X-Cache-Detail header
CacheDetailHeader on
CacheEnable disk "/"
CacheHeader on
CacheDefaultExpire 800
CacheMaxExpire 64000
CacheIgnoreNoLastMod On
CacheDirLevels 2
CacheDirLength 3
I also notice the cache directory (/var/cache/httpd) grows boundlessly. At one time htcacheclean was running from systemd, but that no longer appears to be the case.
Should I be investigating the HTTP cache control headers? Is that related or helpful?
Do you have any recommendations for optimal disk cache sizes?
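On the unbounded growth: mod_cache_disk never prunes its own store, so htcacheclean has to do it. A sketch of re-enabling it (the unit name and the exact flags are assumptions based on typical Fedora packaging; check what your httpd package actually ships):

```shell
# Re-enable the packaged cleaner, if the httpd package provides a unit:
systemctl enable --now htcacheclean.service

# Or run it directly as a daemon: wake every 30 minutes (-d30), run
# niced (-n), delete empty directories (-t), and keep the tree under
# /var/cache/httpd (-p) below a 512 MB limit (-l).
htcacheclean -d30 -n -t -p /var/cache/httpd -l 512M
```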

Related

How to remove "allowmethods:error" entry in apache error_log

I have allowed only the GET and POST methods on my Apache server. It logs errors like the one below many times, and they are of no use to me. How can I stop these errors from appearing in the Apache error log?
[Mon Aug 22 18:43:27.232168 2016] [allowmethods:error] [pid 19314:tid 139797637039872] [demowebsite.com] [client 224.0.0.0:80] AH01623: client method denied by server configuration: 'PURGE' to /var/www/demowebsite/
I also want to know what is causing it. I am using apache 2.4 + php 5.5 + mod_pagespeed + varnish.
Please help me.
Since you seem to be using Apache 2.4.x, just set:
LogLevel allowmethods:crit
This raises the level that module must reach before logging to the error log to critical, so these messages will no longer show up.
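Per-module log levels can also be scoped to the affected virtual host, so the rest of the server keeps its normal verbosity. A sketch (the server name is taken from the log line above; everything else is illustrative):

```apacheconf
<VirtualHost *:80>
    ServerName demowebsite.com
    # Keep the global level at warn, but require crit or worse
    # before mod_allowmethods writes anything to the error log.
    LogLevel warn allowmethods:crit
</VirtualHost>
```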

Seemingly random header errors with mod_perl - bad requests or something else?

We're running mod_perl on Apache 2 and get seemingly random header-related errors that we just can't figure out. Due to the nature of the site we get hit by a ton of bots, so I'm thinking these are caused by bad or malformed requests from bots, but I'd like to figure it out for sure one way or another so I know where to go from here. Here's an example of the 2 most common errors we see in the logs:
[Thu Nov 13 21:40:48 2014] [warn] /whatever did not send an HTTP header
[Thu Nov 13 21:40:48 2014] [error] [client x] malformed header from script. Bad header=\x86z\x03d\x86z\x03d\x86z\x03d\x86z\x03d\x86z\x03d\x86z\x03d\x86z\x03d\x86z: index.cgi
[Fri Nov 14 00:04:17 2014] [warn] /whatever did not send an HTTP header
[Fri Nov 14 00:04:17 2014] [error] [client x] Premature end of script headers: index.cgi
We get 100s of 1,000s of requests to these same URLs daily, and they work fine 99.999% of the time. I don't believe it's our scripts - we always output correct headers. No real users have ever complained about any errors on our site, etc. so I'm hoping this is just caused by some bad requests from bots.
And if so, what if anything can we do to make these stop? It's a real pain because these errors trip our monitoring systems and my techie gets about 20-30 fake error alerts every day.
Turns out it was a problem with Safari browsers and mod_deflate compression.
The simple solution:
BrowserMatch Safari gzip-only-text/html
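In context, the workaround sits alongside the mod_deflate configuration. A sketch (the filter list is illustrative; note that a bare "Safari" match also catches Chrome, whose user-agent string contains the word Safari):

```apacheconf
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
    # For user agents matching "Safari", compress only text/html,
    # sidestepping the decompression problem seen in those browsers.
    BrowserMatch Safari gzip-only-text/html
</IfModule>
```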

Magento .htaccess file RewriteBase setting?

I've resolved a problem I ran into when I set up a staging environment for my existing live Magento store, but I don't understand why the fix worked or why I didn't have the problem on my live site.
This was the error I was getting: whenever I tried to navigate off my site's staging homepage, I got a 500 Internal Server Error.
In the error logs I got this:
[Tue Dec 17 01:12:52 2013] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Tue Dec 17 12:56:17 2013] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://localhost.mysite.com/
With a little research online, I solved it by changing the .htaccess file RewriteBase setting to RewriteBase /. On my live site this setting is commented out as #RewriteBase /magento/.
Why is this setting only needed in my staging environment?
Should it be on the live environment too, or should it be avoided entirely?
I'm running the site locally on an Apache 2 server on an Ubuntu machine; maybe it has something to do with my local server setup?
Why is this setting only needed in my staging environment?
Probably because your staging environment is in a directory called /magento/ inside your document root. When you have checks like:
RewriteCond %{REQUEST_FILENAME} !-f
and the base is wrong, the check to see whether a file exists fails. The base is prepended to relative URL-paths, so if your files are in /magento/ then, without the proper base, your checks will fail and your rules will loop until the internal recursion limit is reached. On your production environment the files are probably in your document root, so the base isn't strictly necessary: the rules are in the same directory as the files you are rewriting to.
As for the other two questions, I can't answer without seeing all of your rules and your full setup.
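To make the looping concrete, here is a sketch with the stock Magento front-controller rules (the /magento/ directory name is an assumption for illustration):

```apacheconf
# .htaccess in /magento/ — without the matching RewriteBase, the relative
# target "index.php" resolves against / instead of /magento/, the !-f test
# never finds a real file, and the rule keeps re-firing until
# LimitInternalRecursion (default 10) aborts the request.
RewriteEngine on
RewriteBase /magento/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule .* index.php [L]
```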

mod_proxy 502 Proxy Error when upload a file

I'm trying to configure the following environment: a VPS running apache and mod_proxy to proxy another server running at home (the backend). I'm able to download files but when I try to upload files the POST request fails with this error:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /upload/upload.php.
Reason: Error reading from remote server
What I don't understand is why it works, and quickly, for files as small as 500 bytes, yet when I try to upload even a tiny 4 KB file it takes forever until the error is reached. As expected, the upload works flawlessly when the backend is accessed directly, without the VPS. I tried many configurations on both sides and also tried increasing the timeout, but I don't think that's the way to go. The backend has mod_access installed and it doesn't log anything when the file upload fails.
Apache logs the following:
[Thu Nov 07 22:26:03.044309 2013] [proxy_http:error] [pid 9173] (70007)The timeout specified has expired: [client 177.148.252.99:54097] AH01102: error reading status line from remote server myhome.com, referer: http://frontend.com/upload/
[Thu Nov 07 22:26:03.044423 2013] [proxy:error] [pid 9173] [client 177.148.252.99:54097] AH00898: Error reading from remote server returned by /upload/upload.php, referer: http://frontend.com/upload/
The VPS is running Apache 2.4.6 and the server running at home is a Lighttpd 1.4.32 with SSL.
The virtual host redirecting to the backend is configured as follows:
<VirtualHost *:80>
ServerAdmin webmaster@frontend.com
ServerName frontend.com
ProxyPass / http://backend.com/
ProxyPassReverse / http://backend.com/
</VirtualHost>
Front-end:
http://frontend.com/upload/
Back-end:
http://backend.com/upload/
Do you have any ideas?
The error you're seeing is due to a timeout of the proxy connection to the back-end system. You need to set the ProxyTimeout value to something larger than the default. I would recommend that you start with a value of 60 seconds and see how that works.
ProxyTimeout 60
In addition, I agree with Varghese that you want to set an environment variable to configure the connection to send the data in chunks. Unfortunately, there is some confusion over what the correct setting should be, so you can try either of these:
SetEnv proxy-sendchunked 1
or
SetEnv proxy-sendchunks 1
Good luck. It's a frustrating problem.
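Putting both suggestions into the virtual host from the question, the result might look like this (the 60-second value is just a starting point, and both spellings of the chunking variable are set deliberately because of the ambiguity noted above):

```apacheconf
<VirtualHost *:80>
    ServerAdmin webmaster@frontend.com
    ServerName frontend.com
    # Give the home backend more time before mod_proxy gives up on the POST.
    ProxyTimeout 60
    # Ask mod_proxy_http to forward the request body chunked rather than
    # spooling it first; both variable spellings are set on purpose.
    SetEnv proxy-sendchunked 1
    SetEnv proxy-sendchunks 1
    ProxyPass / http://backend.com/
    ProxyPassReverse / http://backend.com/
</VirtualHost>
```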
Environment variables available in mod_proxy:
https://httpd.apache.org/docs/2.4/mod/mod_proxy_http.html
I faced a similar kind of issue and resolved it with a directive placed right after the ProxyPassReverse statement.
The directive to use is: SetEnv proxy-sendchunks 1

How do I fix this apache error log issue? Mod Deflate

I'm getting the following errors in my error.log file on every request:
[Fri Jan 29 14:44:17 2010] [debug] mod_deflate.c(619): [client 10.128.99.99] Zlib: Compressed 6025 to 1847 : URL
about 2 GB worth (it's a high-load server)
Any idea what this error is referring to?
Make sure you only have LogLevel specified once, or that you're changing it for the correct virtual host. And you'll need to restart Apache, of course.
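A quick way to find where a stray debug-level LogLevel is coming from is to grep every loaded config file. A sketch, demonstrated here on a throwaway config tree (on a real server you would point the grep at /etc/httpd/ or /etc/apache2/):

```shell
# Build a small stand-in config tree; the real one lives under
# /etc/httpd/ or /etc/apache2/ depending on the distribution.
tmp=$(mktemp -d)
printf 'LogLevel warn\n' > "$tmp/httpd.conf"
printf '<VirtualHost *:80>\n  LogLevel debug\n</VirtualHost>\n' > "$tmp/vhost.conf"

# Recursively list every LogLevel directive with file name and line
# number; a per-vhost "LogLevel debug" stands out immediately.
grep -Rn 'LogLevel' "$tmp"
```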
Doh! Just found it... someone had set a specific error log for this particular virtual host, and its LogLevel was set to debug.