mod_python(?) is eating a lot of RAM (about 9 MB per worker process). If I open several Trac pages at once, many of them fail with an error because there is no RAM left (64 MB virtual limit). If I limit the worker threads to 3 I can get by all right. The problem is that when no one is accessing Trac, I have a lot of RAM sitting unused.
Is there a way I can either:
Limit the number of worker processes that can use Python?
Limit the number of worker processes on my Trac path?
Have Apache spawn as many worker processes or threads as it wants, but only spawn them when X amount of RAM is free (or when X amount or less is in use by Apache)?
Something else?
You could configure a second mod_python Apache instance with minimal worker threads to run only on the local interface and on a different port, e.g. http://127.0.0.1:9000/. Then, for your public Apache instance on port 80, disable mod_python and tune it for optimal RAM utilization. Proxy all Trac and other Python app requests to the local mod_python instance.
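A minimal sketch of that proxy wiring, assuming Trac is served under /trac and the second instance listens on port 9000 (both values are illustrative, and mod_proxy/mod_proxy_http must be loaded):

    # Public Apache on port 80: hand Trac requests to the local mod_python instance
    ProxyPass        /trac http://127.0.0.1:9000/trac
    ProxyPassReverse /trac http://127.0.0.1:9000/trac

    # Private mod_python Apache: bind to the loopback interface only
    Listen 127.0.0.1:9000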
If the public-facing Apache is left to serve only static content, then consider replacing it with something lightweight such as nginx or lighttpd.
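If you go that route, the nginx side would be along these lines (again only a sketch; the document root and backend port are assumptions):

    server {
        listen 80;
        root   /var/www/html;                   # static content served directly

        location /trac {
            proxy_pass http://127.0.0.1:9000;   # Trac handled by the mod_python Apache
        }
    }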
I am doing a load test to tune my Apache to serve the maximum number of concurrent HTTPS requests. Below are the details of my test.
System
I dockerized my httpd and deployed it in OpenShift; the pod configuration is 4 CPU, 8 GB RAM.
I am running load from JMeter with 200 threads, a 600-second ramp-up time, an infinite loop, and a long-run duration (JMeter is running in the same network, on a VM with 16 CPU, 32 GB RAM).
I compiled httpd with the worker MPM and deployed it in OpenShift.
Issue
1. httpd is not scaling beyond 90 TPS, even after trying multiple worker MPM configurations (there is no difference between the default and higher settings).
2. After 90 TPS, the average response time increases and the TPS drops.
Please let me know what the issue could be; I can provide any further information that is required, and suggestions are welcome.
I don't have the answer, but I do have questions.
1/ What does your Dockerfile look like?
2/ What does your OpenShift cluster look like? How many nodes? Separate control plane and workers? What version?
2b/ Specifically, how is traffic entering the pod? (If you are going in via a route, you'll want to look at your load balancer; if you want to exclude OpenShift from the equation, then for the short term expose a NodePort and have JMeter hit that directly.)
3/ Do I read correctly that your single pod was assigned an 8 GB RAM limit? Or did you mean the worker node has 8 GB RAM?
4/ How did you deploy the app -- raw pod, deployment config? Any CPU/memory limits set, or assumed? Assuming a deployment, how many pods does it spawn? What happens if you double it? Doubled TPS or not - that'll help point to whether the problem is inside httpd or inside the ingress route.
5/ What's the nature of the test request? Does it make use of any files stored on the network, or "local" files provisioned on a network PV?
And,
6/ What are you looking to achieve? Maximum concurrent requests in one container, or maximum requests in the cluster? If you've not already look to divide and conquer -- more pods on more nodes.
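For example (a hedged sketch: it assumes the app was rolled out as a Deployment named httpd, so adjust the resource name to yours), doubling the pod count for that comparison is a one-liner:

    # Scale to two replicas, then rerun the same JMeter test;
    # roughly doubled TPS points at httpd, flat TPS points at the route/ingress.
    oc scale deployment/httpd --replicas=2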
Most likely you have run into a bottleneck/limitation at the system under test (SUT). See the following post for a detailed answer:
JMeter load is not increasing when we increase the threads count
Will a web server (WS) (like apache2 or nginx) or a container (like Tomcat (TC)) create a new process to handle each incoming request? My concern is about servers that support a high number of parallel users (say 20K+ parallel users).
I think load balancing happens on the other side of the web server (if it is used to front Tomcat, etc.). So, in theory, a single web server should be accepting all of the (20K+) incoming requests before it can distribute the load to the other servers behind it.
So the question is: does the web server (WS) handle all these requests in a single process, or does it smartly spawn other processes to help share the work? (I know the client-server binding happens, though: client_host:random_port plus server_host:fixed_port.)
Reference: prior to reading the article Fronting Tomcat with Apache, I was thinking it is a single process doing all the smart work. But the article mentions MPM (Multi-Processing Module):
It combines the best from two worlds, having a set of child processes each having a set of separate threads. There are sites that are running 10K+ concurrent connections using this technology.
And, as it goes, it is getting more sophisticated, with threads also being spawned as mentioned above. (These are not the Tomcat threads that serve each individual request by calling the service method, but threads on the Apache WS that handle requests and distribute them to nodes for processing.)
If anyone has used MPM, a little further explanation of how all this works would be great.
Questions like -
(1) As child processes are spawned, what exactly is their role? Is the child process just mediating the request to Tomcat, or does it do anything more? If so, after the child process gets the response from TC, does it forward the response to the parent process or directly to the client (since it can learn client_host:random_port from the parent process)? I am not sure whether this is allowed in theory, though the child process cannot accept any new requests, because the fixed_port, which can bind to only one process, is already tied to the parent process.
(2) What kind of load is shared with a thread by the child or parent process? Again, it must be almost the same as in (1). But what I am not sure about is whether, even in theory, a thread can send the response directly to the client.
Apache historically uses the prefork model of processing. In this model, each request == a separate operating system (OS) process. It is called "prefork" because Apache forks some spare processes in advance and handles requests within them; if the number of preforked processes is not enough, Apache forks new ones. Pros: a process can execute other modules or processes without worrying about what they do. Cons: each request = one process, so a lot of memory is used, and the OS fork itself can also be slow for your requests.
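For reference, the knobs for this model live in the prefork MPM section of the configuration; a hedged sketch with purely illustrative numbers:

    <IfModule mpm_prefork_module>
        # children forked at startup / kept idle; MaxRequestWorkers caps
        # the number of simultaneous request-handling processes
        StartServers           5
        MinSpareServers        5
        MaxSpareServers       10
        MaxRequestWorkers    150
    </IfModule>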
The other Apache model is the worker MPM. It is almost the same as prefork, but it uses OS threads instead of OS processes. A thread is like a lightweight process: one OS process can run many threads sharing one memory space. The worker MPM uses much less memory, and new threads are created quickly. Cons: modules need to be thread-safe, and a crash in one module can bring down all threads of that OS process (but this is not important for you, because you are using Apache as a reverse proxy only). Another con: the CPU switches context when switching between threads.
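Again as an illustration only (the directive names are standard, the numbers are not a recommendation), the worker MPM is tuned with a few process/thread counts:

    <IfModule mpm_worker_module>
        # up to ServerLimit child processes, each running ThreadsPerChild threads;
        # MaxRequestWorkers caps the total (<= ServerLimit * ThreadsPerChild)
        ServerLimit           16
        ThreadsPerChild       25
        MinSpareThreads       25
        MaxSpareThreads       75
        MaxRequestWorkers    400
    </IfModule>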
So yes, worker is much better than prefork in your case, but...
But we have Nginx :) Nginx uses another model (by the way, Apache has an event MPM too). In this case you have only one process (well, it can be a few processes, see below). Here is how it works: a new request raises a special event, the OS process wakes up, receives the request, prepares the answer, writes the answer, and goes back to sleep.
You could say "wow, but this is not multitasking", and you would be right. But there is one big difference between this model and simple sequential request processing. What happens if you need to write a lot of data to a slow client? In the synchronous way, your process has to wait for acknowledgement that the data has been received, and only then can it process a new request. Nginx and the Apache event model use an asynchronous approach instead: Nginx tells the OS to send some piece of data, writes this data to the OS buffer and... goes to sleep, or processes new requests. When the OS has sent that piece of data, a special event is delivered to Nginx. So the main difference is that Nginx does not wait on I/O (connect, read, write); Nginx tells the OS what it wants, and the OS sends an event to Nginx when the task is ready (the socket is connected, the data has been written, or new data is ready to read in the local buffer). Also, a modern OS can work asynchronously with the disk (read/write) and can even send files from disk to a TCP socket directly.
Sure, any heavy computation in this Nginx process will block it and stop it from processing new and existing requests. But when the main workload is network work (reverse proxying, forwarding requests to FastCGI or another backend server) plus sending static files (asynchronous too), Nginx can serve thousands of simultaneous requests in one OS process! Also, because it is one OS process (and one thread), the CPU executes it in a single context.
As I said before, Nginx can start a few OS processes, and each of these processes will be assigned by the OS to a separate CPU core. There is almost no reason to fork more Nginx OS processes than that (there is only one reason to do so: if you need to perform some blocking operations, but a simple reverse proxy with backend balancing is not that case).
So, pros: less CPU context switching, less memory (compared with the worker MPM too), and fast connection processing. More pros: Nginx was created as an HTTP load balancer and has a lot of options for that (and even more in the commercial Nginx Plus). Cons: if you need some heavy math inside the OS process, that process will be blocked (but all your math is in Tomcat, so Nginx is only the balancer).
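A hedged sketch of that model in nginx configuration terms (the directives are standard nginx; the upstream name and ports are assumptions):

    worker_processes auto;            # one worker process per CPU core
    events {
        worker_connections 4096;      # simultaneous connections per worker
    }
    http {
        upstream tomcat {
            server 127.0.0.1:8080;    # backend Tomcat doing the heavy work
        }
        server {
            listen 80;
            location / {
                proxy_pass http://tomcat;
            }
        }
    }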
PS: typo fixes will come later, I'm out of time. Also, my English is bad, so fixes are always welcome :)
PPS: To answer the question about the number of TC threads asked in the comments (it was too long to post as a comment):
The best way to find out is to test it with stress-loading tools, because this number depends on the application profile. Response time alone is not enough to answer, because, for example, there is a big difference between 200 ms of pure math (100% CPU bound) and 50 ms of math plus 150 ms of sleeping while waiting for a database answer.
If the application is 100% CPU bound, then probably one thread per core; but in real cases all applications also spend some time in I/O (receiving the request, sending the answer to the client).
If the application does I/O and needs to wait for answers from other services (a database, for example), it spends some time in a sleep state and the CPU can process other tasks.
So the best solution is to create a request mix close to the real load and run a stress test, increasing the number of concurrent requests (and the number of TC workers, of course). Find an acceptable response time and fix that number of threads. Of course, check first that it is not the database's fault.
Of course, I'm talking about dynamic content only here; requests for static files from disk should be handled before Tomcat (by Nginx, for example).
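The thread count being tuned here is the Tomcat connector's maxThreads attribute in conf/server.xml; a hedged sketch (the number is only a starting point to iterate on under the stress test):

    <!-- conf/server.xml: HTTP connector thread pool -->
    <Connector port="8080" protocol="HTTP/1.1"
               maxThreads="200"
               connectionTimeout="20000" />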
I have about 4-6 files (per user), mostly static ones being requested from an AWS instance. Peak traffic is about 1K visitors at a time (via Google Analytics Real Time).
The instance is powerful enough: it rarely peaks at 100% CPU and has free RAM. mpm_prefork is tuned for 1480 simultaneous connections. After restarting Apache2, there are around 200 processes running during peaks.
However, over a couple of days the number of processes seems to swell, and I have to restart Apache in order to get rid of the timeout errors users see.
Am I missing anything? Is this expected?
You could set the MaxSpareServers directive to something very low like 10 or 20. When instances are not needed anymore (not serving anything or in a keep-alive state), you should only get 10 or 20 idle workers.
If the number of workers stays the same even when serving no clients, then you've got some problem. Check your keep-alive timeout setting, or disable keep-alive altogether and see whether it has any effect. I've noticed keep-alive is sometimes counterproductive, especially in busy environments.
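As a hedged sketch, the directives in question would sit in the prefork and keep-alive sections of the configuration (numbers illustrative):

    <IfModule mpm_prefork_module>
        # reap idle children above MaxSpareServers; MaxRequestWorkers is your
        # current simultaneous-connection cap
        MinSpareServers        5
        MaxSpareServers       20
        MaxRequestWorkers   1480
    </IfModule>

    # try KeepAlive Off if idle workers keep piling up;
    # KeepAliveTimeout is how long (seconds) a worker is held after a response
    KeepAlive          On
    KeepAliveTimeout   2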
I'm using a CentOS 6.4 (x86) VPS with Nginx.
In Webmin's "Running processes" table I found up to 8 "php-fpm: pool www" processes running whose owner is "Apache", but Apache isn't running!
This consumes a lot of RAM.
Are these necessary for nginx or not? Sorry for this (stupid?) question, but I'm a newbie at server management.
Thank you in advance.
The processes you see running are needed and aren't being wasted.
One of the first things that should be defined in your PHP-FPM config file is what user and group PHP-FPM should be running under.
Presumably your config file says to run PHP-FPM under the user 'Apache'. You can change this to whatever you like, as long as you get the file permissions right for PHP-FPM to access your PHP files.
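In the pool file (often something like www.conf; the exact path depends on the install) that is just two lines, for example (this assumes you want it to match the user nginx runs as):

    ; run the www pool under the same account nginx uses,
    ; if your file permissions allow it
    user  = nginx
    group = nginx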
However, if PHP-FPM is taking up a lot of memory, then you should tweak the values for the number of pool processes and how much memory each one can use. In particular, you could reduce these settings:
pm.start_servers = 4
pm.min_spare_servers = 2
To not have as many PHP-FPM processes sitting around idle when there is no load.
PHP-FPM has its own separate process manager and really isn't connected to anything other than itself. Other software connects to it, e.g. nginx / Apache. You probably see the "Apache" user running the processes because of the pool configuration you have. You can easily change the configuration and then restart the FPM process.
If you do not wish to have idle processes hanging around while they are not used, then I would recommend that you change the pm option in the pool configuration from static/dynamic to ondemand. That way, FPM will only spin workers up when they are needed.
Many people use the static/dynamic options when they need specific behaviour for the processes they are running, e.g. a site that receives a lot of constant traffic.
Depending on your FPM installation you'll normally find the configurations in /etc/php. I keep my configurations in /usr/local/etc/php-fpm/ or /usr/local/etc/fpm.d/
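A hedged example of the ondemand setup in a pool file (the directive names are standard PHP-FPM; the numbers are placeholders):

    ; spawn workers only on demand, cap them at pm.max_children,
    ; stop a worker after 10s idle, recycle it after 500 requests
    pm = ondemand
    pm.max_children = 8
    pm.process_idle_timeout = 10s
    pm.max_requests = 500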
I have been googling this question for some time but got no answers. What's the Apache process model?
By process model, I mean how Apache manages processes or threads to handle HTTP requests.
Does it fork one process for each HTTP request?
Does it have a process/thread pool?
Can we configure it?
Is there any online documentation for such Apache details?
This depends on your system and configuration: see Core Features and Multi-Processing Modules. You could use, for instance:
Apache MPM winnt on Windows -- that one uses threads
Or Apache MPM prefork -- that one uses processes
Or even Apache MPM worker -- which uses both several processes and threads.
Quoting the page of the last one, Apache MPM worker:
This Multi-Processing Module (MPM) implements a hybrid multi-process multi-threaded server. By using threads to serve requests, it is able to serve a large number of requests with fewer system resources than a process-based server. However, it retains much of the stability of a process-based server by keeping multiple processes available, each with many threads.
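To see which MPM your own build is running, something along these lines should work (the exact command name varies by distribution, e.g. httpd, apachectl or apache2ctl):

    # Report the MPM the server was built with / is using
    apachectl -V | grep -i mpm

    # List loaded modules (on Apache 2.4, MPMs are loadable modules)
    apachectl -M | grep -i mpm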