httpd memory release is slow - apache

I am using httpd with mod_jk 1.2.28 on RHEL 5.
Behind httpd I have Tomcat running; the connection from httpd to Tomcat is made via the AJP connector using mod_jk.
I am load testing httpd with JMeter. When I create 2000 users in 120 seconds from two different JVMs simultaneously, httpd memory usage climbs, and once all the connections are released the memory comes back only very slowly. Sometimes I need to restart the httpd process; when I restart httpd, memory usage drops immediately.
What should I do to speed up httpd memory release without restarting the httpd process?
Is there any attribute in workers.properties or httpd.conf for achieving this?
Please provide some help.
Thanks in advance :)

One interesting observation of mine:
With a low maxThreads value (say 200) in Tomcat's server.xml, httpd becomes unresponsive and I need to restart it after running my JMeter load test. Memory is not released automatically after the test completes;
connections on port 8009 are in CLOSE_WAIT.
With a high maxThreads value (say 2000) in Tomcat's server.xml, there is no need to restart httpd after the JMeter load test, as memory is released automatically after the test completes;
connections on port 8009 are in LAST_ACK.
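The CLOSE_WAIT sockets on 8009 suggest mod_jk is holding pooled AJP connections that Tomcat has already closed. A hedged sketch of the kind of workers.properties settings that are usually tuned for this (the worker name "ajp13w" is a placeholder for whatever your worker is called; values are examples only):
# workers.properties - example values; "ajp13w" is a placeholder worker name
worker.ajp13w.type=ajp13
worker.ajp13w.host=localhost
worker.ajp13w.port=8009
# close idle pooled connections after 60 seconds instead of keeping them open forever
worker.ajp13w.connection_pool_timeout=60
# enable TCP keepalive so half-closed connections get cleaned up
worker.ajp13w.socket_keepalive=true
The usual companion change is to set connectionTimeout="60000" on Tomcat's AJP connector in server.xml, so both ends agree on when an idle connection is closed.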
My httpd.conf is as follows. I do not understand which of these two MPMs is in use, and accordingly which MaxClients value I need to modify. Please help; I do not want to restart httpd and I do not want a high maxThreads value.
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
</IfModule>
<IfModule worker.c>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>
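To see which of the two <IfModule> blocks above is actually in effect, you can ask the binary itself (paths assume the stock RHEL 5 httpd package):
/usr/sbin/httpd -V | grep -i "server mpm"    # prints "Server MPM: Prefork" or "Server MPM: Worker"
/usr/sbin/httpd -l                           # lists compiled-in modules: look for prefork.c or worker.c
Only the block whose MPM matches is applied; the other <IfModule> section is ignored. On RHEL 5 the worker MPM usually ships as a separate binary, httpd.worker, selected via the HTTPD= line in /etc/sysconfig/httpd, so the default is prefork unless that has been changed.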

Related

Issues with Apache proxy after Amazon Linux 2 upgrade

We have a microservices architecture using AWS Elastic Beanstalk. With the deprecation of Amazon Linux, we upgraded our Beanstalk EC2 instances from Amazon Linux to Amazon Linux 2. We had been using Apache on Amazon Linux 1 with the default config and the mpm_worker module:
<IfModule worker.c>
StartServers 10
MinSpareThreads 240
MaxSpareThreads 240
ServerLimit 10
MaxRequestWorkers 250
MaxConnectionsPerChild 1000000
</IfModule>
With Amazon Linux 2 the default out-of-the-box MPM is mpm_event, and since the default config didn't work, it was changed to:
<IfModule mpm_event_module>
ServerLimit 16
StartServers 10
MinSpareThreads 75
MaxSpareThreads 250
ThreadLimit 64
ThreadsPerChild 32
MaxRequestWorkers 512
MaxConnectionsPerChild 10000
</IfModule>
This setting was applied on a c5.large EC2 instance using the AWS solution stack 64bit Amazon Linux 2 v4.2.15 running Tomcat 8.5 Corretto 8.
We are using c5.large instances for Amazon Linux 2 with an Application Load Balancer, and c4.large instances for Amazon Linux with a Classic Load Balancer. Amazon Linux is able to handle the load very well, but Amazon Linux 2 is not. Even after updating the configuration we see high CPU from Apache on the Amazon Linux 2 instances, and they handle only about 30% of the traffic that Amazon Linux can handle. We contacted AWS support, which was of no help.
One noteworthy observation is that even though Amazon Linux 2 has the mpm_event module loaded, the default Apache config reads like this:
<IfModule worker.c>
StartServers 10
MinSpareThreads 240
MaxSpareThreads 240
ServerLimit 10
MaxRequestWorkers 250
MaxConnectionsPerChild 1000000
</IfModule>
which is the setting for the mpm_worker module. I verified that these values are applied to Apache at runtime by using the command:
sudo apachectl -DDUMP_CONFIG | grep -vE "^[ ]*#[ ]*[0-9]+:$"
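If the goal is to keep the old worker tuning, one option is to load the worker MPM instead of event. A sketch, assuming the RHEL-style layout Amazon Linux 2 uses (the file name 00-mpm.conf is an assumption, and Elastic Beanstalk may regenerate it on deploy):
# /etc/httpd/conf.modules.d/00-mpm.conf - exactly one MPM may be loaded
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
LoadModule mpm_worker_module modules/mod_mpm_worker.so
#LoadModule mpm_event_module modules/mod_mpm_event.so
After sudo apachectl configtest && sudo systemctl restart httpd, httpd -V | grep -i "server mpm" should report worker, and the <IfModule worker.c> block will then apply.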
Is the issue because of:
1. The Apache config settings?
2. The Application Load Balancer? (Which I doubt.)
3. Anything else?
Has anyone experienced a similar issue during the Amazon Linux upgrade? Any insight into resolving this is really appreciated.

apache httpd high cpu 100% on centos 6.8

I'm using Apache 2.2 on CentOS 6.8. Sometimes some httpd processes use 100% CPU, as in the image below:
(top monitoring screenshot)
I have already checked all the websites' source code and all of it is OK (no loops, no errors, ...).
And here is my httpd.conf:
Timeout 30
KeepAlive Off
<IfModule worker.c>
StartServers 8
MaxClients 1000
MinSpareThreads 8
MaxSpareThreads 100
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>
RLimitCPU 60 80
but I cannot find any reason for this issue. Please help, thanks so much ^^
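One way to see what those 100% workers are actually doing is mod_status with ExtendedStatus. A minimal sketch for Apache 2.2 (the /server-status path and the allowed address are placeholders; restrict access to hosts you trust):
# httpd.conf - requires mod_status, which the stock CentOS httpd.conf loads
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
Then apachectl fullstatus (or browsing /server-status from an allowed host) shows each worker's state and the request it is handling, which usually narrows down where the CPU is going.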

Tuning AWS Apache Server

I am trying to tune my AWS apache server. I have looked in:
$: httpd/conf/httpd.conf
And I cannot find this section to edit:
<IfModule prefork.c>
StartServers 10
MinSpareServers 10
MaxSpareServers 25
ServerLimit 128
MaxClients 128
MaxRequestsPerChild 0
</IfModule>
Where will I find the above settings if they are not in httpd/conf/httpd.conf?
Be sure the prefork module is enabled (unless you disabled it, it's enabled by default).
The section you're looking for is in the prefork configuration file, which in this case is in the enabled-modules directory:
/etc/apache2/mods-enabled/mpm_prefork.conf
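On a Debian-style layout like this, you can check and switch the active MPM with the a2* helpers. A quick sketch (Apache 2.4 allows only one MPM to be enabled at a time):
sudo apache2ctl -M | grep mpm     # shows e.g. mpm_event_module or mpm_prefork_module
sudo a2dismod mpm_event           # disable whichever MPM is currently enabled
sudo a2enmod mpm_prefork
sudo systemctl restart apache2
After that, /etc/apache2/mods-enabled/mpm_prefork.conf is the file that holds the prefork settings shown in the question.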
It depends on how Apache was built. If Apache was built with the event MPM instead of prefork, that section might not exist in httpd.conf at all. On Red Hat/CentOS distros the default location for httpd.conf is /etc/httpd/conf; on Debian/Ubuntu distros the main configuration file is /etc/apache2/apache2.conf.
There are multiple factors which impact Apache performance:
1. Tune the JVM
2. Log rotation policy
3. Tune Linux kernel parameters
4. Tune the Apache MPM
I assume you have done the first three steps and want to understand the fourth.
Step 4: check which MPM you are using with this command:
[root@mohitm ~]# apachectl -V | grep "Server MPM:"
Server MPM: prefork
[root@mohitm ~]#
To locate the httpd config file:
$ /usr/sbin/apache2 -V | grep SERVER_CONFIG_FILE
-D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"
#
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers 16
MinSpareServers 10
MaxSpareServers 20
ServerLimit 5024
MaxClients 5024
MaxRequestsPerChild 10000
</IfModule>
#
The Apache main configuration file on Amazon Linux:
/etc/httpd/conf/httpd.conf
Other configuration files that are loaded:
- conf.modules.d/*.conf
- conf.d/*.conf
I could not find the section, so I added it to the main file, which solved my problem.
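Instead of editing the main file, a drop-in usually works as well, since conf.d/*.conf is loaded automatically. A sketch (the file name mpm_tuning.conf is arbitrary and the values are examples, not recommendations):
# /etc/httpd/conf.d/mpm_tuning.conf - values are examples only
<IfModule prefork.c>
    StartServers         16
    MinSpareServers      10
    MaxSpareServers      20
    ServerLimit          1024
    MaxClients           1024
    MaxRequestsPerChild  10000
</IfModule>
Run apachectl configtest before restarting to catch any value the active MPM rejects.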

Update MaxClients setting in Apache configuration file of AWS Elastic Beanstalk instance

I have updated the /etc/httpd/conf/httpd.conf file of my Elastic Beanstalk instance with this:
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 1000
MaxClients 1000
MaxRequestsPerChild 4000
</IfModule>
After that I restarted my httpd service using:
sudo service httpd restart
Now if I have 300 clients running at a time, Apache still throws this error:
[error] server reached MaxClients setting, consider raising the MaxClients setting.
What should I do to update the MaxClients setting effectively?
Try this, this, or this. I have not tried these myself yet; you can try them out.
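A few things are worth checking before raising the numbers further (a diagnostic sketch; nothing Beanstalk-specific is assumed beyond the stock /etc/httpd layout):
httpd -V | grep -i "server mpm"                  # the prefork block is ignored under worker/event
grep -riE "maxclients|serverlimit" /etc/httpd/   # is another file setting these after yours?
watch 'ps -C httpd --no-headers | wc -l'         # how many children actually run when the error appears
If a later definition wins, or the instance is actually running a threaded MPM, the edited prefork block never takes effect no matter how often httpd is restarted.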

apache mpm in Fedora Linux

What is the ideal configuration of Apache - prefork or worker - on Fedora Linux running on an 8-core CPU with 60 GB of RAM for heavy-traffic sites? Currently configured in httpd.conf:
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 3500
MaxClients 3500
MaxRequestsPerChild 40000
</IfModule>
<IfModule worker.c>
StartServers 4
MaxClients 3500
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 40000
</IfModule>
Currently prefork is active.
Web servers do not scale up well. Unless you have a very unusual workload, you will get better throughput and capacity, and much, much more resilience, out of 2 servers with 2 cores and 8 GB of RAM each than out of this box - but a lot depends on what your workload looks like and what else is running on the machine.
If you have a look around the site you'll see that questions about capacity planning are closed as too broad, or moved to Server Fault (where they are closed as duplicates of a question to which there is no definitive answer).
Hence the answer is to measure what traffic your hardware can cope with and set the limits accordingly.
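For the prefork case, a common starting point (sketched below) is to measure the average resident size of an httpd child under real load and divide your memory budget by it; the 48 GB budget and 30 MB per child used here are made-up numbers for illustration:
# average resident set size of the running httpd children, in MB
ps -ylC httpd | awk 'NR>1 {sum+=$8; n++} END {if (n) printf "avg RSS: %.1f MB over %d processes\n", sum/n/1024, n}'
If a child averages around 30 MB and you are willing to give httpd 48 GB of the 60 GB box, that puts MaxClients somewhere near 48000 / 30 ≈ 1600; the measured numbers, not the guesses, are what matter.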