How do I spawn a specific number of processes in Apache when using mod_wsgi with the WSGIDaemonProcess setting?
I have my VirtualHost set up with the following (as a test):
WSGIDaemonProcess webserver user=webserver group=webserver processes=24 threads=8 display-name=%{GROUP} python-path=/var/virtualenv/lib/python2.6/site-packages
While my httpd.conf is set up as follows:
<IfModule prefork.c>
StartServers 8
MinSpareServers 1
MaxSpareServers 8
ServerLimit 24
MaxClients 24
MaxRequestsPerChild 4000
</IfModule>
Note that I'm running a very constrained 256MB server, with a PostgreSQL database installed as well.
However, the system shows far more than 24 Apache processes (more than 30). I expected that if I set ServerLimit to the same value as processes in WSGIDaemonProcess, it would run at a constant 24. Instead, there seem to be a bunch of spare processes running for unknown reasons.
The ServerLimit directive has nothing to do with mod_wsgi daemon mode. The 'processes' option to WSGIDaemonProcess is what specifies how many daemon processes mod_wsgi will create. It is a static number, not a dynamic one, so just set it to how many you need. For that number of threads per process, there is no point setting it to more than 'processes=3' to start with: you are limited to 24 concurrent requests by the Apache child worker processes which proxy requests to the mod_wsgi daemon processes, so it is not possible to handle any more requests than that.
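For illustration, here is that arithmetic as a sketch of a matched configuration (not a recommendation): 3 processes x 8 threads = 24 daemon threads, which lines up with the 24 concurrent requests the prefork workers above can proxy.
WSGIDaemonProcess webserver user=webserver group=webserver processes=3 threads=8 display-name=%{GROUP} python-path=/var/virtualenv/lib/python2.6/site-packages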
In general, if you are running in a memory-constrained environment, you should not be using the prefork MPM but the worker MPM. Is there a reason you must use prefork, such as needing to run PHP code as well? If not, change the MPM.
How else you could configure things really depends on your code, response times and request throughput, which only you know.
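For illustration only, a worker MPM block sized for a small server might look something like this; the numbers are assumptions you would need to tune against what your 256MB box can actually hold:
<IfModule worker.c>
StartServers 1
ServerLimit 2
ThreadsPerChild 16
MinSpareThreads 8
MaxSpareThreads 32
MaxClients 32
MaxRequestsPerChild 0
</IfModule>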
Related
I use Apache/2.4.12 (Unix) and mod_wsgi-4.4.11 with the below configuration in apache/conf/extra:
# httpd-mpm.conf
<IfModule mpm_worker_module>
StartServers 3
MinSpareThreads 75
MaxSpareThreads 250
ThreadsPerChild 25
MaxRequestWorkers 400
MaxConnectionsPerChild 0
</IfModule>
# httpd-vhosts.conf
WSGIRestrictEmbedded On
<VirtualHost *:443>
ServerName form.xxx.com
WSGIScriptAlias / /usr/local/apache/services/form/form.wsgi
WSGIDaemonProcess paymentform user=test processes=10 threads=5 display-name=%{GROUP} maximum-requests=100
WSGIApplicationGroup %{RESOURCE}
WSGIProcessGroup form
DocumentRoot /usr/local/apache/services/form
SSLEngine On
# certificate file directives omitted
<Directory /usr/local/apache/services/form>
Require all granted
</Directory>
</VirtualHost>
With this configuration, I used Apache JMeter for testing.
GET : form.xxx.com (only returns the "index" string)
Number of Threads (users): 100
Ramp-up Period : 0
Loop count : 10
But the result is:
samples: 1000
Average: 3069
Min : 13
Max : 22426
Std.Dev: 6671.693614549157
Error %: 10.0%
Throughput : 24.1/sec
KB/sec : 10.06/sec
AvgBytes : 428.5
During testing, connection refused or connection timeout errors are raised, and the server stops receiving requests after 400~500 requests. The server's CPU and memory are not full.
How can I improve performance?
Should I fix the worker MPM configuration, or the WSGI configuration in httpd-vhosts?
I modified httpd-mpm.conf as below, but it made no difference.
<IfModule mpm_worker_module>
StartServers 10
ServerLimit 32
MinSpareThreads 75
MaxSpareThreads 250
ThreadsPerChild 25
MaxRequestWorkers 800
MaxConnectionsPerChild 0
</IfModule>
You have a number of things which are wrong in your configuration. One may be a cut and paste error. Another is a potential security issue. And one will badly affect performance.
The first is that you have:
WSGIProcessGroup form
If that is really what you have, then web requests wouldn't even be getting to the WSGI application and should return a 500 error response. If it isn't giving an error, then your requests are being delegated to a mod_wsgi daemon process group not even mentioned in the above configuration. This all comes about because the value of WSGIProcessGroup doesn't match the name of the daemon process group defined by the WSGIDaemonProcess directive.
What you would have to have is:
WSGIProcessGroup paymentform
I suspect you have simply mucked up the configuration when you pasted it into the question.
A related issue with delegation is that you have:
WSGIApplicationGroup %{RESOURCE}
This is the default anyway, so there would usually never be a need to set it explicitly. What one would normally use when delegating only one WSGI application to a daemon process group is:
WSGIApplicationGroup %{GLOBAL}
This particular value forces the use of the main Python interpreter context of each process which avoids problems with some third party extension modules that will not work properly in sub interpreter contexts.
The second issue is a potential security issue. You have:
DocumentRoot /usr/local/apache/services/form
When using the WSGIScriptAlias directive, there is no need to set DocumentRoot to a parent directory of your WSGI script file or your application's source code.
The danger in doing this is that if WSGIScriptAlias was accidentally disabled, or changed to a sub URL, all your source code then becomes downloadable.
In short, let DocumentRoot default to the empty default directory for the whole server, or create an empty directory just for the VirtualHost and set it to that.
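For example, a hypothetical safe layout keeps the application code outside the document root entirely:
# Empty directory created just for this VirtualHost (hypothetical path)
DocumentRoot /usr/local/apache/htdocs-empty
WSGIScriptAlias / /usr/local/apache/services/form/form.wsgi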
The final thing, and the one which will drastically affect your performance, is the use of the maximum-requests option to WSGIDaemonProcess. You should never use maximum-requests in a production system unless you understand the implications and have a specific temporary need.
Setting this option, and to a low value at that, means that the daemon processes will be killed off and restarted every 100 requests. Under a high volume of requests, as with a benchmark, you would be constantly restarting your application processes.
The result would be increased CPU load and much slower response times, with the potential for backlogging to the point of very long response times, due to the server being overloaded by everything restarting all the time.
So, the absolute first thing you should do is remove maximum-requests; you should see some immediate improvement.
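Putting these fixes together, the relevant directives would read as follows (a sketch; the remaining options are kept from your original configuration):
WSGIDaemonProcess paymentform user=test processes=10 threads=5 display-name=%{GROUP}
WSGIProcessGroup paymentform
WSGIApplicationGroup %{GLOBAL}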
You also have issues with process restarts in your Apache MPM settings. This is not as major, as it only affects the Apache worker processes which proxy the requests, but it will also cause extra CPU usage, plus a potential need for a higher number of worker processes.
I have talked about the issue of Apache process churn due to MPM settings before in:
http://lanyrd.com/2013/pycon/scdyzk/
One final problem with your benchmarking: if all your test returns is the 'index' string from some simple hello world type program, it bears no relationship to your real world application.
Real applications are not usually so simple, and the time spent within the WSGI application is going to be much greater due to template rendering, database access, etc. This means the performance profile of a real application is going to be completely different, which changes how you should configure the server.
In other words, testing with a hello world program is going to give you the completely wrong idea of what you need to do to configure the server appropriately. You really need to understand what the real performance profile of your application is under normal traffic loads and work from there. That is, hammering the server to the point of breaking is also wrong and not realistic.
I have been blogging recently about how the typical hello world tests people use are flawed, giving some examples of specific tests which show how the performance of different WSGI servers and configurations can be markedly different. The point of that is to show that you can't base decisions on one simple test, and you do need to understand what your WSGI application is doing.
In all of this, to truly understand what is going on and how to tune the server properly, you need to use a performance monitoring solution that is built into the WSGI server, so it can give insight into the different aspects of how it works and therefore which knobs to adjust. The blog posts cover this as well.
I encountered a similar problem to the one ash84 described: I used JMeter to test performance and found the error % becomes non-zero when the JMeter thread count is set beyond some value (50 in my case).
After I watched Graham Dumpleton's talk, I realized it happens mainly because there are not enough spare MPM threads prepared to serve the incoming burst of JMeter requests. In that case, some JMeter requests are not served at the beginning, even though the number of MPM threads catches up later.
In short, setting MinSpareThreads to a larger value fixed my problem; I raised the JMeter threads from 50 to 100 and got a 0% error rate with these settings:
MinSpareThreads 120
MaxSpareThreads 150
MaxRequestWorkers 200
The number of WSGIDaemonProcess processes times the number of WSGIDaemonProcess threads doesn't have to be greater than the number of JMeter threads, but you may need to set them to higher values to make sure WSGIDaemonProcess can handle the requests quickly enough.
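As a hypothetical illustration of that sizing: with 100 JMeter threads, 10 daemon processes of 10 threads each give 100 concurrent WSGI slots, so a burst of 100 requests can be handled without queueing in the daemon.
# Illustrative pairing for a 100-thread test run (values are assumptions)
WSGIDaemonProcess example processes=10 threads=10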
I have fronted Tomcat6 with Apache2.
On an Ubuntu instance I have Apache2 running with 8GB RAM, so I decided on the following apache2.conf configuration.
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 550
ServerLimit 550
MaxRequestsPerChild 0
</IfModule>
The above configuration was arrived at using the parameters below and this blog post (how to configure Apache MPM).
Apache Memory Usage (MB): 611.719
Average Process Size (MB): 8.26647
On another instance I have Tomcat6 running with 8GB RAM. In the Tomcat6 server.xml the following configuration is used.
<Connector port="8009" protocol="AJP/1.3" redirectPort="8080" maxThreads="500"/>
My questions are:
What is the process to calculate/decide the maxThreads parameter in Tomcat6?
How should memory allocation be done?
Tomcat6 is a Java application, so memory allocation is done by the JVM. I assume you intend to proxy Tomcat through Apache; if so, usually one Apache client ends up as one Apache thread, so having a lower number of threads in Tomcat than the MaxClients directive in Apache is advisable. That said, calculating the maxThreads parameter can be difficult: depending on your application, each thread may vary in its memory usage. Working from an average may be useful, but you also have to take into account the other JVM memory spaces: eden, permgen, ...
Take a look at the JVM memory settings and the per-thread stack settings; I think this is what you might be looking for.
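As a rough, illustrative sizing sketch (the numbers are assumptions, not recommendations): with the JVM's default per-thread stack of around 1MB (-Xss), 500 threads cost roughly 500MB of stack on top of the heap and permgen spaces, and keeping maxThreads at or below Apache's MaxClients of 550 follows the advice above.
<!-- AJP connector sized just below Apache's MaxClients (550) -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8080" maxThreads="500"/>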
We have a very powerful server (32-core CPU, 96GB RAM) and have Apache running in prefork mode. Our apache2.conf file includes these settings:
<IfModule mpm_prefork_module>
StartServers 2
MinSpareServers 5
MaxSpareServers 20
ServerLimit 3000
MaxClients 3000
MaxRequestsPerChild 1000
</IfModule>
The problem is, when our website is under heavy load (when the Apache process count reaches 1000, to be precise), or when StartServers is set beyond 1000, apache2 freezes and needs to be restarted. Yet there is still plenty of RAM, the CPU is underused, and the Apache process count is far below MaxClients.
My question is, what should I do to allow Apache to reach the MaxClients configured in the conf file?
Please note we have already adjusted /etc/security/limits.conf to set max open files and nproc to 5000 (ulimit -a showed these values were taken into account).
No errors are shown in /var/log/apache2/error.log
Your Apache server may have a compiled-in hard limit. To change it you would need to recompile your webserver. The default is 200000, which should be high enough, but packages from your Linux distribution may differ.
I would rather recommend offloading static file serving from your webserver. Put an nginx or lighttpd server in front of your Apache: let it serve static content (images, CSS, JavaScript, etc.) and forward dynamic requests to your Apache.
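A minimal sketch of that layout, with hypothetical paths and ports (nginx listening on port 80, Apache moved to 8080):
server {
    listen 80;
    # Serve static files directly (hypothetical docroot)
    location /static/ {
        root /var/www;
    }
    # Forward dynamic requests to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}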
I am running Apache2/PHP on my CentOS 5.8 VPS server (2GB RAM, 2GHz processor) and I tried to do a basic load test. Since I am using the standard installation of Apache, I assume the prefork MPM is being used. Here is the config:
<IfModule prefork.c>
StartServers 20
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
</IfModule>
I did a short test with ab:
ab -q -c 100 -n 10000 mysite.com
At the same time I was checking mysite.com/server-status, and I never saw the number of requests currently being processed exceed 10. How is this possible?
According to my calculations the number of concurrent requests should have been more than 10, ideally 100. Am I missing something here, or is Apache's server-status reporting incorrectly?
Thank you all.
You are correct that you could see more than 10 requests. You could in fact get up to 256 concurrent requests being processed.
However, there's not enough information here to say why you didn't see more than 10 connections. Here are some possibilities:
A slow connection to the host being tested could reduce the number of parallel connections.
A slow application behind the URL being tested could reduce the number of parallel connections.
Limitations in the client could limit the number of parallel connections; "ab" should provide some reporting on what level of concurrency it was able to achieve.
You could accidentally be setting "MaxClients" to a lower value elsewhere in your Apache configuration.
... or you could have some other Apache configuration problem.
To provide a more specific answer, you could consider posting complete-as-possible copies of your "ab" output and the entire Apache configuration.
I am using the following Apache config to forward requests to a Tomcat server:
ProxyPass /myapp ajp://localhost:8009/myapp max=2
This is a simplified config, but it is enough to reproduce the issue, which is that the max parameter has no effect. If I throw 10 concurrent requests at Apache, all 10 are forwarded to Tomcat at the same time, while I would like them forwarded 2 by 2. Should I use something other than the max parameter for this?
The max=2 failed to limit the number of requests concurrently forwarded to Tomcat because I was running this on UNIX, and my Apache came preconfigured with the prefork MPM, which creates one process per request. The max parameter applies per process, hence doesn't have the desired effect.
If you are in this situation and need to limit the number of concurrent requests forwarded to Tomcat, you'll need to switch your Apache to the worker or event MPM. In the config, set ServerLimit to 1, and set ThreadsPerChild and MaxClients to the same value; that value will be the total number of concurrent connections your Apache is able to process. You can find more information in this section documenting the recommended Apache configuration for Orbeon Forms.
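Concretely, the kind of configuration described above might look like this sketch (illustrative values; with a single worker process, max=2 now effectively caps the connections to Tomcat):
<IfModule mpm_worker_module>
StartServers 1
ServerLimit 1
ThreadsPerChild 16
MaxClients 16
</IfModule>
ProxyPass /myapp ajp://localhost:8009/myapp max=2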
Then restart Apache for the new configuration to take effect:
service apache2 restart