Many php-fpm: pool www processes running but Apache isn't - apache

I'm using CentOS 6.4 (x86) VPS with Nginx.
In Webmin's Running Processes table I found up to 8 "php-fpm: pool www" processes owned by "Apache", but Apache isn't running!
This consumes a lot of RAM.
Are these processes necessary for nginx or not? Sorry for this (stupid?) question, but I'm a newbie at server management.
Thank you in advance.

The processes you are seeing are needed and are not being wasted.
One of the first things that should be defined in your PHP-FPM config file is what user and group PHP-FPM should be running under.
Presumably your config file says to run PHP-FPM under the user 'Apache'. You can change this to whatever you like, as long as you get the file permissions right for PHP-FPM to access your PHP files.
However, if PHP-FPM is taking up a lot of memory, you should tweak the values for the number of worker processes and how much memory each one can use. In particular, you could reduce these settings:
pm.start_servers = 4
pm.min_spare_servers = 2
so that you don't have as many PHP-FPM processes sitting around idle when there is no load.
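For context, those directives live in the pool file (often /etc/php-fpm.d/www.conf on CentOS). A hypothetical sketch of the relevant part of such a file; the values here are illustrative and should be tuned to your own RAM budget:

```ini
; Run the pool under a user/group with read access to your PHP files.
user = apache
group = apache

; Dynamic process management: keep fewer idle workers around.
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```

With pm.max_children bounding the total, the pool can never grow past 5 workers even under load.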

PHP-FPM has its own separate process manager and really isn't connected to anything other than itself. Other software, e.g. nginx or Apache, connects to it. You probably see the "Apache" user running the processes because of the pool configuration you have. You can easily change the configuration and then restart the FPM process.
If you do not wish to have idle processes hanging around while they are not being used, then I would recommend that you change the pm option in the pool configuration from static/dynamic to ondemand. This way, FPM will only spawn workers when they are needed.
Many people use the static or dynamic options when they need precise control over the processes they are running, e.g. for a site that receives a lot of constant traffic.
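A minimal sketch of an ondemand pool configuration (the directive names are standard PHP-FPM options; the values are illustrative):

```ini
; Spawn workers only when requests arrive, and kill any worker that
; has been idle for 10 seconds.
pm = ondemand
pm.max_children = 5
pm.process_idle_timeout = 10s
```

With this, an idle server holds no PHP-FPM workers at all, at the cost of a small spawn delay on the first request after a quiet period.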
Depending on your FPM installation you'll normally find the configurations in /etc/php. I keep my configurations in /usr/local/etc/php-fpm/ or /usr/local/etc/fpm.d/

Related

Apache mod_wsgi python variable persistence

For some time I have been using mod_wsgi with global variables. The global vars used to be persistent across sessions. Suddenly they are not: each request loads a fresh instance and persistence is lost.
For now, I want WSGI to keep remembering the variables from previous requests. Is there an Apache config option, such as a daemon option or middleware, that can enforce the behavior I had previously?
It sounds like the issue may be that you were previously using daemon mode of mod_wsgi, with its default of a single process, and then mucked up the Apache/mod_wsgi configuration and have fallen back to embedded mode of mod_wsgi, which means you are subject to whatever the Apache configuration is. The Apache configuration is generally multi-process though.
See:
http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html
So confirm whether you are working in embedded mode or daemon mode.
http://modwsgi.readthedocs.io/en/develop/user-guides/checking-your-installation.html#embedded-or-daemon-mode
You can also verify whether in multi process or multi threaded configuration.
http://modwsgi.readthedocs.io/en/develop/user-guides/checking-your-installation.html#single-or-multi-threaded
You can also do:
import mod_wsgi
print(mod_wsgi.maximum_processes)
print(mod_wsgi.threads_per_process)
to confirm what configuration you are running under.

mass-restarting httpd on lots of EC2 instances

I am running a variable number of EC2 instances (CentOS, 64-bit) that each run an Apache web server that caches a bunch of code in production mode.
Now every time I make some changes to the code (generally on a weekly basis) I have to log into each of those instances, "su", and then run "service httpd restart".
Is there a way to automate this, so that I can run a single command on one of the instances and it would connect to all the others and restart httpd? It's getting really time-consuming, especially when the application has spawned some 20-30 instances on its own (which happens on days when we get high traffic).
Thanks!
Dancer's shell, dsh, is provided specifically to do this. No 'scripting' required. As #tix3 suggests, you should probably also configure sudo on those machines (edit /etc/sudoers using visudo) so that they accept your restart command.
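As a sketch of the idea with no extra tooling, a plain SSH loop does the same job. The hostnames below are placeholders, and the leading echo makes this a dry run; remove it to actually restart httpd (this assumes key-based SSH and passwordless sudo are set up):

```shell
# Placeholder host list; in practice you could generate this from your
# own inventory or from the EC2 API.
HOSTS="web1.example.com web2.example.com web3.example.com"

for host in $HOSTS; do
  # Drop the leading "echo" to run for real. With passwordless sudo
  # configured via visudo, each instance restarts httpd in turn.
  echo ssh "$host" sudo service httpd restart
done
```

With dsh installed and the hosts placed in a machine group, the equivalent would be a single command along the lines of `dsh -M -g webservers -c -- 'sudo service httpd restart'`.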

What are the most effective tools to manage multiple apache httpd instances?

We have many Apache instances all over our intranet. Some instances run on the same machine. Some instances run on different machines.
I need a tool that can manage these instances from one central location.
Get CPU stats
Get Connection stats
Stop/start Apache instances
Get access to error log
I looked at Webmin, but the documentation isn't too clear on how it works, and without installing it I'd have trouble judging whether it would do the job.
Any recommendations?
I've never used it myself, but I've seen people with monitoring requirements be very happy with Cacti. Besides general health monitoring like CPU stats it has an extremely simple Apache stats plugin that might do what you need:
Script to get the requests per second and the requests currently being processed from
an Apache webserver.
Maybe you can put something together with that.
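If you only need the raw numbers, a small script against Apache's mod_status machine-readable page gets you most of the way. This is a sketch that parses sample text standing in for a real fetch of http://yourserver/server-status?auto (mod_status must be enabled on each instance):

```python
# Parse the "key: value" lines that mod_status emits in ?auto mode.
# The sample string below stands in for an actual HTTP fetch
# (e.g. with urllib.request.urlopen).
sample = """\
Total Accesses: 12345
Total kBytes: 6789
ReqPerSec: 4.2
BusyWorkers: 3
IdleWorkers: 7
"""

def parse_status(text):
    """Return the mod_status ?auto output as a dict of strings."""
    stats = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            stats[key.strip()] = value.strip()
    return stats

stats = parse_status(sample)
print(stats["ReqPerSec"], stats["BusyWorkers"])  # → 4.2 3
```

Run against each instance in turn and you have connection and request stats from one central location; field names like ReqPerSec and BusyWorkers are what mod_status actually reports.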

WSGI / Apache clarification

I have a Pyramid application running on apache with mod_wsgi.
What exactly is the lifecycle of my application when a request is made?
Does my application get created (which entails loading the configuration and creating the database engine) every time a request comes in? When using paste serve, this isn't the case. But with mod_wsgi, how does it work? When does the application "terminate"?
For a start, read:
http://blog.dscpl.com.au/2009/03/python-interpreter-is-not-created-for.html
http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usage.html
http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
Initialisation is not done on a per-request basis. In general, the application should persist in memory between requests. If you are using embedded mode, you may be at the mercy of Apache as to when it recycles processes.
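For illustration, a vhost fragment that pins the application to daemon mode, where process recycling is under your control rather than Apache's (the directives are standard mod_wsgi; the process group name and paths are placeholders):

```apache
# One persistent daemon process: the Pyramid app is loaded once and
# module-level state survives between requests until the process is
# restarted or recycled.
WSGIDaemonProcess myapp processes=1 threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /var/www/myapp/app.wsgi
```

Without WSGIDaemonProcess/WSGIProcessGroup, the app runs in embedded mode inside the Apache worker processes themselves.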

apache low memory error

mod_python(?) is eating a lot of RAM (about 9 MB per worker process). If I open several Trac pages at once, many of them will fail with an error due to lack of RAM (64 MB virtual limit). If I limit the worker processes to 3, I can get by all right. The problem is that if no one is accessing Trac, I have A LOT of RAM sitting unused.
Is there a way I can either:
Limit the number of worker processes that can use Python?
Limit the number of worker processes on my Trac path?
Have Apache spawn as many worker processes or threads as it wants, but only spawn when X amount of RAM is free (or when X amount or less is in use by Apache)?
Something else?
You could configure a second Apache instance with mod_python and minimal worker threads, running only on the local interface and on a different port, e.g. http://127.0.0.1:9000/. Then, for your public Apache instance on port 80, disable mod_python and tune for optimal RAM utilization. Proxy all Trac and other Python app requests to the local mod_python instance.
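A hypothetical fragment for the public-facing instance, using mod_proxy to forward Trac traffic to the local mod_python instance (the path and port are placeholders matching the example above):

```apache
# Requires mod_proxy and mod_proxy_http to be loaded.
ProxyPass        /trac http://127.0.0.1:9000/trac
ProxyPassReverse /trac http://127.0.0.1:9000/trac
```

Everything outside /trac is then served directly by the lean public instance, so only Trac requests touch the heavyweight mod_python workers.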
If the public-facing Apache is left to serve only static content, then consider replacing it with something lightweight such as nginx or lighttpd.