Can you help me find the best Apache configuration?
I own servers used for file downloads via direct links,
e.g. domain.com/files.rar
without any programming or PHP functions involved.
The problem: sometimes I get very high load, or the servers stop responding.
Can you help me with the best Apache configuration for this?
Such as:
Server Limit
Max Clients
Max Requests Per Child
Keep-Alive
Keep-Alive Timeout
Max Keep-Alive Requests
Etc.
My servers have 4 GB RAM and HDD drives, on 100 Mbps and 1 Gbps connections.
Thanks.
Separate Static and Dynamic Content
Use separate servers for static and dynamic content. Apache processes serving dynamic content carry extra overhead and swell to the size of the content being served, never decreasing in size. Each process also incurs the size of any loaded PHP or Perl libraries. A 6 MB-30 MB process size [or 10% of the server's memory] is not unusual, and becomes a waste of resources for serving static content.
For a more efficient use of system memory, either use mod_proxy to pass specific requests onto another Apache Server, or use a lightweight server to handle static requests:
Nginx
lighttpd
Or use a front-end caching proxy such as Squid-Cache or Varnish-Cache
The server handling the static content goes up front.
Note that the configuration settings will be quite different between a dynamic-content server and a static-content server.
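As a minimal sketch (the back-end host and port here are assumptions, not from the question), the front-end static server could hand dynamic requests to a second Apache via mod_proxy:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
# Serve static files locally; pass anything under /app/ to a
# back-end Apache tuned for dynamic content.
ProxyPass        /app/ http://127.0.0.1:8080/app/
ProxyPassReverse /app/ http://127.0.0.1:8080/app/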
mod_deflate
Reduce bandwidth [often by around 75% for text content] and improve response time by using mod_deflate.
LoadModule deflate_module modules/mod_deflate.so
<Location />
AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/x-javascript
</Location>
Loaded Modules
Reduce memory footprint by loading only the required modules.
Some also advise statically compiling the needed modules into Apache rather than building DSOs (Dynamic Shared Objects). That is very bad advice: you would need to manually rebuild Apache every time a new version of, or security advisory for, a module is put out, creating more work, more build-related headaches, and more downtime.
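As an illustration only (which modules you can drop depends entirely on your setup), trimming the footprint is a matter of commenting out LoadModule lines in httpd.conf:

# Keep only what you use; these particular modules are examples.
#LoadModule status_module modules/mod_status.so
#LoadModule autoindex_module modules/mod_autoindex.so
#LoadModule cgi_module modules/mod_cgi.so
LoadModule deflate_module modules/mod_deflate.so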
mod_expires
Include mod_expires for the ability to set expiration dates for specific content, utilizing the 'If-Modified-Since' cache-control header sent by the user's browser/proxy. This will save bandwidth and drastically speed up your site for [repeat] visitors.
Note that this can also be implemented with mod_headers.
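A minimal sketch of such a setup (the one-month lifetime and the .rar MIME type are arbitrary examples, not recommendations):

LoadModule expires_module modules/mod_expires.so
<IfModule mod_expires.c>
    ExpiresActive On
    # Let browsers/proxies cache downloaded archives for a month
    ExpiresByType application/x-rar-compressed "access plus 1 month"
    ExpiresDefault "access plus 1 hour"
</IfModule>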
KeepAlive
Enable HTTP persistent connections to improve latency times and reduce server load significantly [25% of original load is not uncommon].
prefork MPM:
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100
worker and winnt MPMs:
KeepAlive On
KeepAliveTimeout 15
MaxKeepAliveRequests 100
With the prefork MPM, it is often recommended to set 'KeepAlive' to 'Off', since a client will otherwise tie up an entire process for that span of time. In my experience, though, it is more useful to simply set the 'KeepAliveTimeout' value very low [2 seconds seems to be the ideal value]. This is not a problem with the worker MPM [thread-based], or under Windows [which only has the thread-based winnt MPM].
With the worker and winnt MPMs, the default 15-second timeout is set up to keep the connection open for the next page request, to better handle a client going from link to link. Check your logs to see how long clients remain on each page before moving on to another link, and set the value appropriately [do not set it higher than 60 seconds].
SymLinks
Make sure 'Options +FollowSymLinks -SymLinksIfOwnerMatch' is set for all directories. Otherwise, Apache issues an extra system call per filename component to verify that the filename is NOT a symlink, and more system calls to match an owner.
<Directory />
Options FollowSymLinks
</Directory>
AllowOverride
Set a default 'AllowOverride None' for your filesystem. Otherwise, for a given URL to path translation, Apache will attempt to detect an .htaccess file under every directory level of the given path.
<Directory />
AllowOverride None
</Directory>
ExtendedStatus
If mod_status is included, make sure that directive 'ExtendedStatus' is set to 'Off'. Otherwise, Apache will issue several extra time-related system calls on every request made.
ExtendedStatus Off
Timeout
Lower the amount of time the server will wait before failing a request.
Timeout 45
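To tie this back to the original question: on a 4 GB machine serving static files with the prefork MPM, a starting point might look like the sketch below. The numbers are illustrative assumptions only; measure your actual per-process memory and divide your free RAM by it to pick MaxClients.

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    ServerLimit         150
    MaxClients          150
    MaxRequestsPerChild 10000
</IfModule>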
If you are having load problems with your Apache setup, you could also consider migrating to another server. From my personal experience, I would suggest you try nginx for serving static files.
I use Apache/2.4.12 (Unix) with mod_wsgi-4.4.11 and the below configuration in apache/conf/extra:
# httpd-mpm.conf
<IfModule mpm_worker_module>
StartServers 3
MinSpareThreads 75
MaxSpareThreads 250
ThreadsPerChild 25
MaxRequestWorkers 400
MaxConnectionsPerChild 0
</IfModule>
# httpd-vhosts.conf
WSGIRestrictEmbedded On
<VirtualHost *:443>
ServerName form.xxx.com
WSGIScriptAlias / /usr/local/apache/services/form/form.wsgi
WSGIDaemonProcess paymentform user=test processes=10 threads=5 display-name=%{GROUP} maximum-requests=100
WSGIApplicationGroup %{RESOURCE}
WSGIProcessGroup form
DocumentRoot /usr/local/apache/services/form
SSLEngine On
# certificate file directives go here
<Directory /usr/local/apache/services/form>
Require all granted
</Directory>
</VirtualHost>
With this configuration, I used Apache JMeter for testing.
GET : form.xxx.com  # only returns the "index" string
Number of Threads(users):100
Ramp-up Period : 0
Loop count : 10
But the result is:
samples: 1000
Average: 3069
Min : 13
Max : 22426
Std.Dev: 6671.693614549157
Error %: 10.0%
Throughput : 24.1/sec
KB/sec : 10.06/sec
AvgBytes : 428.5
During testing, connection-refused or connection-timeout errors are raised, and the server stops receiving requests after 400-500 requests. Server CPU and memory are not exhausted.
How can I improve performance?
Should I fix the MPM worker configuration, or the WSGI configuration in httpd-vhosts?
I modified httpd-mpm.conf as below, but it made no difference.
<IfModule mpm_worker_module>
StartServers 10
ServerLimit 32
MinSpareThreads 75
MaxSpareThreads 250
ThreadsPerChild 25
MaxRequestWorkers 800
MaxConnectionsPerChild 0
</IfModule>
You have a number of things which are wrong in your configuration. One may be a cut and paste error. Another is a potential security issue. And one will badly affect performance.
The first is that you have:
WSGIProcessGroup form
If that is really what you have, then the web request wouldn't even be getting to the WSGI application and should return a 500 error response. If it isn't giving an error, then your request is being delegated to a mod_wsgi daemon process group not even mentioned in the above configuration. This would all come about because the value of WSGIProcessGroup doesn't match the name of the daemon process group defined by the WSGIDaemonProcess directive.
What you would have to have is:
WSGIProcessGroup paymentform
I suspect you have simply mucked up the configuration when you pasted it into the question.
A related issue with delegation is that you have:
WSGIApplicationGroup %{RESOURCE}
This is the default anyway; there would usually never be a need to set it explicitly. What one would normally use when delegating only one WSGI application to a daemon process group is:
WSGIApplicationGroup %{GLOBAL}
This particular value forces the use of the main Python interpreter context of each process which avoids problems with some third party extension modules that will not work properly in sub interpreter contexts.
The second issue is a potential security issue. You have:
DocumentRoot /usr/local/apache/services/form
When using WSGIScriptAlias directive, there is no need to set DocumentRoot to be a parent directory of where your WSGI script file or source code for your application is.
The danger in doing this is that if WSGIScriptAlias were accidentally disabled, or changed to a sub-URL, all your source code would become downloadable.
In short, let DocumentRoot default to the empty default directory for the whole server, or create an empty directory just for the VirtualHost and set it to that.
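For example (the empty directory path here is hypothetical):

# Empty directory; nothing under it can be served directly
DocumentRoot /usr/local/apache/htdocs-empty
WSGIScriptAlias / /usr/local/apache/services/form/form.wsgi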
The final thing and which would drastically affect your performance is the use of maximum-requests option to WSGIDaemonProcess. You should never use maximum-requests in a production system unless you understand the implications and have a specific temporary need.
Setting this value and to a low value, means that the daemon processes will be killed off and restarted every 100 requests. Under a high volume of requests as with a benchmark, you would be constantly restarting your application processes.
The result of this would be increased CPU load and much slower response times, with potential for backlogging to the extent of very long response times due to overloading the server due to everything restarting all the time.
So, the absolute first thing you should do is remove maximum-requests, and you should see some immediate improvement.
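That is, the daemon definition from the question would become simply:

WSGIDaemonProcess paymentform user=test processes=10 threads=5 display-name=%{GROUP}
WSGIProcessGroup paymentform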
You also have issues with process restarts in your Apache MPM settings. It is not as major as this only affects the Apache worker processes which are proxying requests, but it will also cause extra CPU usage, plus a potential need for a higher number of worker processes being required.
I have talked about the issue of Apache process churn due to MPM settings before in:
http://lanyrd.com/2013/pycon/scdyzk/
One final problem with your benchmarking: if all your test returns is the 'index' string from a simple hello-world type program, it bears no relationship to your real-world application.
Real applications are not usually so simple and time within the WSGI application is going to be much more due to template rendering, database access etc etc. This means the performance profile of a real application is going to be completely different and changes how you should configure the server.
In other words, testing with a hello world program is going to give you the completely wrong idea of what you need to do to configure the server appropriately. You really need to understand what the real performance profile of your application is under normal traffic loads and work from there. That is, hammering the server to the point of breaking is also wrong and not realistic.
I have been blogging on my blog site recently about how typical hello world tests people use are wrong, and give some examples of specific tests which show out how the performance of different WSGI servers and configurations can be markedly different. The point of that is to show that you can't base things off one simple test and you do need to understand what your WSGI application is doing.
In all of this, to really truly understand what is going on and how to tune the server properly, you need to use a performance monitoring solution which is built into the WSGI server and so can give insights into the different aspects of how it works and therefore what knobs to adjust. The blog posts are covering this also.
I encountered a similar problem to the one ash84 described. I used JMeter to test performance and found that the error % becomes non-zero when the JMeter thread count is set beyond some value (50 in my case).
After I watched Graham Dumpleton's talk, I realized it happens mainly because there are not enough spare MPM threads prepared to serve the incoming burst of JMeter requests. In that case, some JMeter requests are not served at the start, even though the number of MPM threads catches up later.
In short, setting MinSpareThreads to a larger value fixed my problem: I raised the JMeter threads from 50 to 100 and got a 0% error rate.
MinSpareThreads 120
MaxSpareThreads 150
MaxRequestWorkers 200
The number of WSGIDaemonProcess processes times the number of WSGIDaemonProcess threads doesn't have to be greater than the number of JMeter threads, but you may need to set them to higher values to make sure the daemon processes can handle the requests quickly enough.
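Putting both halves together, a configuration sized for a 100-thread JMeter run might look like the sketch below. The process and thread counts are assumptions chosen to illustrate the arithmetic (10 x 15 = 150 concurrent WSGI requests, above the 100 test threads), not tuned values.

# MPM: enough spare threads to absorb the initial burst
MinSpareThreads 120
MaxSpareThreads 150
MaxRequestWorkers 200
# Daemon: capacity above the expected concurrency
WSGIDaemonProcess paymentform user=test processes=10 threads=15 display-name=%{GROUP}
WSGIProcessGroup paymentform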
I'm using mod_proxy_fcgi with Apache 2.4 on Debian Jessie, with a C++ application that does Server-Sent Events via libfcgipp.
My problem is that Apache still buffers my response data. Using Wireshark, I confirmed the data isn't being buffered by the libfcgipp library: after starting the FastCGI application via spawn-fcgi, the data gets sent to the Apache web server as soon as possible. But in my browser (which I use for testing; later there will be a C++ client), it only shows up after I kill/close the sending request in the server application.
So I assume I need to disable buffering for either Apache or mod_proxy_fcgi (or both), but I cannot find the appropriate documentation on how to do this.
As the result of a subsequent discussion on the httpd-dev mailing list, support for flushpackets and flushwait was added to mod_proxy_fcgi in r1802040 and backported for Apache 2.4.31 in r1825765. If you are using Apache 2.4.31 or later, you can disable buffering using <Proxy flushpackets=on> as described in the BigPipe documentation:
<FilesMatch "\.php$">
# Note: The only part that varies is /path/to/app.sock
SetHandler "proxy:unix:/path/to/app.sock|fcgi://localhost/"
</FilesMatch>
# Define a matching worker.
# The part that is matched to the SetHandler is the part that
# follows the pipe. If you need to distinguish, "localhost" can
# be anything unique.
<Proxy "fcgi://localhost/" enablereuse=on flushpackets=on max=10>
</Proxy>
Note: flushpackets and flushwait are currently only included in the Apache mod_proxy_fcgi documentation for trunk because r1808129 has not been backported to the 2.4.x branch.
A few notes, since I just spent the past few hours experimenting to find the answer to this question:
It's not possible to entirely disable output buffering when using mod_proxy/mod_proxy_fcgi; however, you can still have responses streamed in chunks.
It seems, based on my experimentation, that chunks have to be at least 4096 bytes before the output will be flushed to the browser.
You can disable output buffering with the mod_fastcgi or mod_fcgi module, but those mods aren't as popular/widely used with Apache 2.4.
If you have mod_deflate enabled and don't set SetEnv no-gzip 1 for the virtualhost/directory/etc. that's streaming data, gzip will not allow the buffer to flush until the request is complete (see the sketch after these notes).
I was testing things out to see how to best use Drupal 8's new BigPipe functionality for streaming requests to the client, and I posted some more notes in this GitHub issue.
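A minimal sketch of the no-gzip exemption mentioned above (the /stream path is a hypothetical example):

<Location /stream>
    # Keep mod_deflate from holding back partial responses
    SetEnv no-gzip 1
</Location>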
I'm trying to configure Apache to respond faster. Currently I'm experiencing heavy lag and huge response times. When I googled for answers, there were articles mentioning KeepAlive, MaxClients and AllowOverride, so my focus is on them for now, I guess. I just can't seem to find them.
Here is the phpinfo() output:
apache2handler
**************
Apache Version Apache/2.4.12 (Win64) PHP/5.6.8
Apache API Version 20120211
Server Administrator admin@example.com
Hostname:Port
Max Requests Per Child: 0 - Keep Alive: on - Max Per Connection: 100
Timeouts Connection: 60 - Keep-Alive: 5
Virtual Server No
Server Root C:/Apache24
Loaded Modules core mod_win32 mpm_winnt http_core mod_so mod_access_compat
mod_actions mod_alias mod_allowmethods mod_asis mod_auth_basic mod_authn_core mod_authn_file
mod_authz_core mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi
mod_dir mod_env mod_include mod_isapi mod_log_config mod_mime mod_negotiation mod_php5
mod_rewrite mod_setenvif
Directive Local Value Master Value
engine 1 1
last_modified 0 0
xbithack 0 0
Maybe somebody can explain this output to me? In particular:
"Timeouts" = "Connection: 60" setting
"Per Child" = "0" setting
If I understand this right:
there are 60 connections allowed simultaneously
every connection has a maximum of 100 requests (why so many?)
the server allows a client to load all the resources in one request for 5 seconds
maybe those settings are to be found in httpd.conf and not in php.ini? (right now I don't have access to those files)
As far as I know, the Timeout relates to how long the server will wait for a connection, with 60 seconds being the default.
The Per Child bit is how many requests a child process will handle before being recycled; 0 means it is never recycled.
I'm a bit vague on this stuff, but have a read through the docs and you should find all the explanations you need!
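On the "where do I find them" part: on Windows these settings live in C:/Apache24/conf/httpd.conf (and the extra/httpd-*.conf files it includes), not in php.ini. A sketch of the relevant block, with illustrative values only:

# mpm_winnt runs a single child process; ThreadsPerChild caps
# concurrent connections, and 0 means the child is never recycled.
<IfModule mpm_winnt_module>
    ThreadsPerChild        150
    MaxConnectionsPerChild   0
</IfModule>
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100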
I'm working with Apache 2.4.2 and I need to change the LimitRequestFieldSize. Supposedly (according to some Google research) that can be done in the httpd.conf file, but I can't find LimitRequestFieldSize in httpd.conf or in any other file within the Apache installation. Any idea how I can do it?
In the end I solved it by simply adding LimitRequestFieldSize 500000 to the file httpd-default.conf.
What you just did is open the door to a DoS attack.
Take a look at the LimitRequestFieldSize directive in the Apache documentation:
Quoting from that source:
This directive specifies the number of bytes that will be allowed in
an HTTP request header.
The LimitRequestFieldSize directive allows the server administrator to
set the limit on the allowed size of an HTTP request header field. A
server needs this value to be large enough to hold any one header
field from a normal client request. The size of a normal request
header field will vary greatly among different client implementations,
often depending upon the extent to which a user has configured their
browser to support detailed content negotiation. SPNEGO authentication
headers can be up to 12392 bytes.
This directive gives the server administrator greater control over
abnormal client request behavior, which may be useful for avoiding
some forms of denial-of-service attacks.
The documentation also specifies that the context of that directive is server config (which means server-wide) and virtual host (you can apply this directive on a per-vhost basis).
In addition, you do not mention what your OS is. In case it's Linux (which I'm more familiar with):
The configuration file, httpd.conf, is found in /etc/httpd/conf/httpd.conf (RHEL, CentOS, Fedora, Scientific Linux).
In Debian, and derivatives like Ubuntu (I don't think that is the case here, but I am mentioning it anyway just for the record), the configuration file is apache2.conf and can be found in /etc/apache2/apache2.conf.
Hope it helps.
And last but not least, you may want to check out the Unix and Linux Q&A here in StackExchange for questions like this (assuming Linux or other *Nix OS). You may have better luck at getting an answer.
This issue can be solved by updating the LimitRequestFieldSize directive either in the Apache httpd.conf or in the virtual hosts.
How to add the directive in a virtual host:
<VirtualHost 10.10.50.50:80>
ServerName www.mysite.com
LimitRequestFieldSize 16384
RewriteEngine On
...
...
</VirtualHost>
How to add it in httpd.conf (located at apache2/conf/httpd.conf):
LimitRequestFieldSize 16384
But even after doing this, I am still getting a bad request error.
I basically have two questions:
How do you set the RequestReadTimeout (in mod_reqtimeout) header and body time to unlimited?
and
How do I apply that to a specific folder?
The default reqtimeout.conf is:
<IfModule reqtimeout_module>
RequestReadTimeout header=10-20,minrate=500
RequestReadTimeout body=10,minrate=500
</IfModule>
So that it would be something like:
<IfModule reqtimeout_module>
#Apply this to the /var/www/unlimitedtime folder
<Directory /var/www/unlimitedtime>
RequestReadTimeout header=unlimited,MinRate=0 body=unlimited,MinRate=0
</Directory>
</IfModule>
This doesn't work, but it's just an example that may make my question clearer.
Thanks.
Several tips from the official documentation of RequestReadTimeout:
Context: server config, virtual host
That means this is a quite high-level directive; you do not have the Location or Directory context here. In fact, the timeouts are applied long before the web server can make any per-directory decision on the request (the request has not even been received yet), so this is expected. What it means is that you cannot apply this directive in a Directory, and there is nothing you can do about that, sorry.
type=timeout
The time in seconds allowed for reading all of the request headers or
body, respectively. A value of 0 means no limit.
So instead of using the 10-20 form, simply set 0 and it becomes an unlimited timeout. Or at least that's what the documentation seems to imply. But that's a really good way of making your web server DoS-prone: a few HTTP requests on the right URL and you will get a nice denial of service. I hope some other timeout setting will override it (but maybe not, so be careful) :-)
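For completeness, a sketch of the unlimited variant (shown only to illustrate the syntax; as noted above, it widens your exposure to slow-client attacks such as Slowloris):

<IfModule reqtimeout_module>
    # 0 disables both the timeout and the minimum-rate enforcement
    RequestReadTimeout header=0 body=0
</IfModule>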