Apache 2 503 Error MaxRequestWorkers - apache2.4

It looks like my site is down... I'm getting a 503 error.
Apache2 Error Logs say:
[Sun Apr 17 01:07:21.301617 2016] [mpm_event:error] [pid 11504:tid 140299448981376] AH00485: scoreboard is full, not at MaxRequestWorkers
I found no setting in apache2.conf to change this.
Thanks a lot for any help.
Regards,

This might help you: https://stackoverflow.com/a/46273227/4435083
The solution might be pretty simple: set MaxConnectionsPerChild to 0, or comment it out, in mpm_event.conf.
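For reference, a minimal sketch of what the relevant block in mpm_event.conf could look like; the worker numbers are illustrative defaults rather than values taken from the question:
<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
    # 0 means child processes are never recycled, avoiding the constant
    # process turnover behind the "scoreboard is full" message
    MaxConnectionsPerChild   0
</IfModule>
After changing it, reload or restart Apache so the new MPM settings take effect.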

Related

Apache mod_cache_disk and AH00717: Premature end of cache headers

I'm using apache-2.4.53 and having problems with caching. Periodically I see errors related to "premature end of cache headers" and don't know how to troubleshoot them. This is on Fedora 34. The site is behind Cloudflare.
[Wed Jul 06 04:23:49.577237 2022] [cache_disk:error] [pid 3202400:tid 3202451] (70014)End of file found: [client 162.158.190.138:47866] AH00717: Premature end of cache headers.
[Wed Jul 06 04:23:49.577247 2022] [cache_disk:debug] [pid 3202400:tid 3202451] mod_cache_disk.c(883): [client 162.158.190.138:47866] AH02987: Error reading response headers from /var/cache/httpd/W_#/Ro6/7ihAG5M_Eyw0t7jA.header.vary/#Iu/98u/9#ot3lTARaKl3p8g.header for https://example.com:443/index.php?
162.158.190.138 is a Cloudflare address.
There seems to be an ongoing Apache bug related to this issue, open since 2016, but I don't know whether it's the same thing. I don't know how to reproduce it. Where do I start looking?
I can correlate the lines from the error_log with the access_log based on time, but I can't be sure they're directly related. There were three requests during that same second, all of which were bots. One was a 200, one was a 301 and one was a 404 for a file that was never there.
The file the error_log references is there on the filesystem:
find . -name \*7ihAG5M_Eyw0t7jA\*
./W_#/Ro6/7ihAG5M_Eyw0t7jA.header.vary
./W_#/Ro6/7ihAG5M_Eyw0t7jA.header
Here is the bug report from 2016.
https://bz.apache.org/bugzilla/show_bug.cgi?id=59744
Here are the cache options from the virtual host config:
CacheQuickHandler off
CacheLock on
CacheLockPath /tmp/mod_cache-lock
CacheLockMaxAge 5
CacheIgnoreHeaders Set-Cookie
CacheRoot "/var/cache/httpd"
# Enable the X-Cache-Detail header
CacheDetailHeader on
CacheEnable disk "/"
CacheHeader on
CacheDefaultExpire 800
CacheMaxExpire 64000
CacheIgnoreNoLastMod On
CacheDirLevels 2
CacheDirLength 3
I also notice the cache directory (/var/cache/httpd) grows without bound. At one time htcacheclean was running from systemd, but that no longer appears to be the case.
Should I be investigating the HTTP cache control headers? Is that related or helpful?
Do you have any recommendations for optimal disk cache sizes?

Apache mod_wsgi test error (Windows environment)

I have seen many solutions to problems similar to mine, but none of them works for me. Please help me solve it.
First, I want to test Apache's mod_wsgi.
httpd.conf:
LoadFile "e:/zt_6.27/python/python35.dll"
LoadModule wsgi_module "e:/zt_6.27/python/lib/site-
packages/mod_wsgi/server/mod_wsgi.cp35-win_amd64.pyd"
WSGIPythonHome "e:/zt_6.27/python"
<VirtualHost *:80>
WSGIScriptAlias /myapp E:\zt_6.27\Apache24\htdocs\myapp.wsgi
<Directory 'E:\zt_6.27\Apache24\htdocs'>
Require all granted
Require host ip
Allow from all
</Directory>
</VirtualHost>
myapp.wsgi:
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello wsgi!'
    response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
I restarted my Apache service and opened "http://localhost/myapp".
It returns a 500 Internal Server Error.
I opened the Apache error.log.
It shows:
[Fri Aug 10 10:33:35.803119 2018] [wsgi:error] [pid 9252:tid 1408] [client ::1:57261] mod_wsgi (pid=9252): Exception occurred processing WSGI script 'E:/zt_6.27/Apache24/htdocs/myapp.wsgi'.
[Fri Aug 10 10:33:35.803119 2018] [wsgi:error] [pid 9252:tid 1408] [client ::1:57261] TypeError: sequence of byte string values expected, value of type str found\r
I checked the code carefully and can't find a problem.
What did I do wrong?
Also, this code ran successfully yesterday and only started failing today.
Please help!
If using Python 3, you must use:
def application(environ, start_response):
    status = '200 OK'
    output = b'Hello wsgi!'
    response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
That is, change to:
output = b'Hello wsgi!'
The returned list must contain byte strings, not unicode strings.
To return unicode strings you would need to convert them to byte strings. I cheated here by marking the value as a byte string to start with, but in general you should encode unicode strings to byte strings as UTF-8.
I would suggest using a micro framework like Flask as it will handle these sorts of details for you.

How to remove "allowmethods:error" entries from the Apache error_log

I have only allowed the GET and POST methods on my Apache server. It logs errors like the one below many times, and they are of no use to me. How can I stop these errors from appearing in the Apache error log?
[Mon Aug 22 18:43:27.232168 2016] [allowmethods:error] [pid 19314:tid 139797637039872] [demowebsite.com] [client 224.0.0.0:80] AH01623: client method denied by server configuration: 'PURGE' to /var/www/demowebsite/
I also want to know what is causing it. I am using Apache 2.4 + PHP 5.5 + mod_pagespeed + Varnish.
Please help me.
Since you seem to be using Apache 2.4.x, just set:
LogLevel allowmethods:crit
This raises the logging threshold for that module to crit, so these messages will no longer show up in the error log.
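A minimal sketch of how the pieces might fit together in the server config; the Directory path and the warn level are illustrative, not taken from the question:
# The mod_allowmethods restriction that produces AH01623 whenever a client
# sends anything other than GET or POST (the 'PURGE' in the log line above, for example):
<Directory "/var/www/demowebsite">
    AllowMethods GET POST
</Directory>
# Keep the usual global log level, but only record crit and above for that module:
LogLevel warn allowmethods:crit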

Seemingly random header errors with mod_perl - bad requests or something else?

We're running mod_perl on Apache 2 and get seemingly random header-related errors that we just can't figure out. Due to the nature of the site we get hit by a ton of bots, so I'm thinking these are caused by bad or malformed requests from bots, but I'd like to figure it out for sure one way or another so I know where to go from here. Here are examples of the two most common errors we see in the logs:
[Thu Nov 13 21:40:48 2014] [warn] /whatever did not send an HTTP header
[Thu Nov 13 21:40:48 2014] [error] [client x] malformed header from script. Bad header=\x86z\x03d\x86z\x03d\x86z\x03d\x86z\x03d\x86z\x03d\x86z\x03d\x86z\x03d\x86z: index.cgi
[Fri Nov 14 00:04:17 2014] [warn] /whatever did not send an HTTP header
[Fri Nov 14 00:04:17 2014] [error] [client x] Premature end of script headers: index.cgi
We get hundreds of thousands of requests to these same URLs daily, and they work fine 99.999% of the time. I don't believe it's our scripts: we always output correct headers. No real users have ever complained about any errors on our site, so I'm hoping this is just caused by bad requests from bots.
And if so, what, if anything, can we do to make these stop? It's a real pain because these errors trip our monitoring systems, and my techie gets about 20-30 false error alerts every day.
Turns out it was a problem with Safari browsers and mod_deflate compression.
The simple solution:
BrowserMatch Safari gzip-only-text/html
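In context, that line sits alongside the rest of the mod_deflate configuration; a minimal sketch, where the list of compressed content types is illustrative:
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
    # Workaround for the affected Safari versions: only send them gzipped text/html
    BrowserMatch Safari gzip-only-text/html
</IfModule>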

How do I fix this apache error log issue? Mod Deflate

I'm getting the following errors in my error.log file on every request:
[Fri Jan 29 14:44:17 2010] [debug] mod_deflate.c(619): [client 10.128.99.99] Zlib: Compressed 6025 to 1847 : URL
There's about 2 GB worth of these (it's a high-load server).
Any idea what this error is referring to?
Make sure you only have LogLevel specified once, or that you're changing it for the correct virtual host. And you'll need to restart Apache, of course.
Doh! Just found it... someone had set a specific error log for this particular virtual host, and its LogLevel was set to debug.
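For anyone hitting the same thing, the offending virtual host likely contained something along these lines (ServerName and paths are illustrative):
<VirtualHost *:80>
    ServerName example.com
    # A per-vhost ErrorLog plus LogLevel overrides the global setting;
    # at debug, mod_deflate logs every compression result
    ErrorLog /var/log/apache2/example-error.log
    LogLevel debug
</VirtualHost>
Setting that LogLevel back to warn (and restarting Apache) stops the flood of debug entries.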