I am hosting multiple sites on a server with 7.5 GB RAM, using apache2 with mpm_prefork.
The following command gives me a value of 200-300 in production:
ps aux | grep -c 'apache2'
Using top I see that only a few hundred megabytes of RAM are free. The error log shows nothing unusual. Is this many apache2 processes normal?
MaxRequestWorkers is set to 512
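To sanity-check the memory side, here is a sketch that averages the resident size of the apache2 children (assuming the processes are named apache2, as on Debian/Ubuntu):

ps -o rss= -C apache2 | awk '{sum+=$1; n++} END {printf "children: %d  avg RSS: %.0f MB  total: %.0f MB\n", n, sum/n/1024, sum/1024}'

If each child averages, say, ~25 MB (an assumed figure), then 512 workers could need well over 7.5 GB, so 200-300 children with only a few hundred MB free would be consistent with MaxRequestWorkers being set too high.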
Update:
Now I am using mod_status to check Apache activity.
I have a row like this:
Srv  PID    Acc     M  CPU   SS    Req  Conn  Child  Slot  Client  VHost  Request
0-0  29342  2/2/70  W  0.07  5702  0    3.0   0.00   1.67  XXX     XXX    /someurl
If I check again after some time, the PID has not changed and SS shows a greater value than before. The M of this request stays in the 'W' (sending reply) state. Does that mean the apache2 process is stuck on that request?
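To watch the worker states over time, one can poll the machine-readable scoreboard (a sketch, assuming mod_status is reachable at /server-status on localhost):

watch -n 5 'curl -s "http://localhost/server-status?auto" | grep -E "BusyWorkers|IdleWorkers|Scoreboard"'

A worker whose SS keeps growing while M stays 'W' usually means the reply is stuck, e.g. on a slow backend or a client that stopped reading.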
On my VPS and root servers the situation is partially similar. AFAIK the OS tries to give most of the processing power/RAM to running processes and frees the resources for other processes as the need arises.
I run a very simple website (basically a database-backed redirect in PHP) which gets on average 5 visits per second throughout the day, but at peak times (usually 2-3 times a day) this can go up to 300 visits/s or more. I've modified the default Apache settings as follows, based on various info found online, as I'm not an expert (a config sketch follows the list):
Start Servers: 5 (default) / 25 (now)
Minimum Spare Servers: 5 (default) / 50 (now)
Maximum Spare Servers: 10 (default) / 100 (now)
Server Limit: 256 (default) / 512 (now)
Max Clients: 150 (default) / 450 (now)
Max Requests Per Child: 10000 (default)
Keep-Alive: On (default) / Off (now)
Timeout: 300 (default)
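For reference, here is roughly how those values map onto the prefork config block; this is a sketch using the Apache 2.2-era directive names (MaxClients was renamed MaxRequestWorkers in 2.4), and the exact file layout depends on the distro:

<IfModule prefork.c>
    StartServers          25
    MinSpareServers       50
    MaxSpareServers      100
    ServerLimit          512
    MaxClients           450
    MaxRequestsPerChild 10000
</IfModule>
KeepAlive Off
Timeout 300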
Server (VPS) specs:
4x 3199.998 MHz, 512 KB Cache, QEMU Virtual CPU version (cpu64-rhel6)
8 GB RAM (dmesg: Memory: 8042676k/8912896k available)
70GB SSD disk
CentOS 6.5 x86_64, KVM server
During average load the server handles traffic just fine. Problems occur almost every day during peak traffic, in the form of HTTP timeouts or extremely long response/load times.
The question is: do I need a better server, or can I improve response times during peak traffic by further tuning the Apache config? Any help would be appreciated. Thank you!
Maybe you need to enable mod_cache with mod_mem_cache. Another thing I always configure is ulimits (see the sketch after the link below):
nofile to get more sockets
nproc to get more processes
http://www.webperformance.com/load-testing/blog/2012/12/setting-apache2-ulimit-for-maximum-prefork-performance/
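For example, a quick way to inspect and raise those limits; the PID lookup and the values here are illustrative assumptions:

# check the limits of a running Apache child
cat /proc/$(pgrep -o apache2)/limits | grep -iE 'open files|processes'
# raise them before starting httpd, e.g. in the init script
ulimit -n 65535    # nofile: more sockets/file descriptors
ulimit -u 4096     # nproc: more processes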
Finally, TCP and network tuning: check the net.core and net.ipv4 parameters to reduce latency (examples after the link below).
http://fasterdata.es.net/host-tuning/linux/
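As a starting point, these are a few commonly tuned keys; the values are illustrative, not recommendations, so test before deploying:

sysctl -w net.core.somaxconn=1024           # larger accept queue for listen()
sysctl -w net.core.netdev_max_backlog=2500  # more packets queued off the NIC
sysctl -w net.ipv4.tcp_fin_timeout=15       # recycle FIN-WAIT sockets sooner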
Can someone please walk me through the process of how I can load test my website using apache bench tool (ab)?
I want to know the following:
How many people per minute can the site handle?
Please walk me through the commands I should run to figure this out.
I tried every tutorial, and they are confusing.
The Apache benchmark tool is very basic, and while it will give you a solid idea of some performance, it is a bad idea to depend only on it if you plan to expose your site to serious stress in production.
Having said that, here are the most common and simplest parameters:
-c: ("Concurrency"). Indicates how many clients (people/users) will be hitting the site at the same time. While ab runs, there will be -c clients hitting the site. This is what actually decides the amount of stress your site will suffer during the benchmark.
-n: Indicates how many requests are going to be made. This just decides the length of the benchmark. A high -n value with a -c value that your server can support is a good idea to ensure that things don't break under sustained stress: supporting stress for 5 seconds is not the same as supporting it for 5 hours.
-k: This enables the "KeepAlive" functionality that browsers use by nature. You don't need to pass a value for -k, as it is "boolean" (it simply indicates that the test should use the HTTP Keep-Alive header and sustain connections). Since browsers do this, and you likely want to simulate the stress and flow your site will get from browsers, it is recommended to benchmark with it.
The final argument is simply the host. It defaults to the http:// protocol if you don't specify one.
ab -k -c 350 -n 20000 example.com/
By issuing the command above, you will be hitting http://example.com/ with 350 simultaneous connections until 20,000 requests have been made, using the Keep-Alive header.
After the process finishes the 20,000 requests, you will receive statistics as feedback. They will tell you how well the site performed under the stress you put on it with the parameters above.
To find out how many people the site can handle at the same time, just check whether the response times (mean, min, and max response times, failed requests, etc.) are numbers your site can accept (different sites may require different speeds). You can run the tool with different -c values until you hit the point where you say: "if I increase it further, requests start to fail and things break".
Depending on your website, you will expect an average number of requests per minute. This varies so much that you won't be able to simulate it exactly with ab. However, think about it this way: if your average user makes 5 requests per minute and an acceptable average response time is 2 seconds, then each user spends 10 seconds out of every minute on requests, i.e. is hitting the site only 1/6 of the time. This also means that with 6 users hitting the site through ab simultaneously, you are likely simulating about 36 users, even though your concurrency level (-c) is only 6 (a worked version of this arithmetic follows).
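A sketch of that calculation, using the assumptions from the paragraph above (5 requests/minute per user, 2-second acceptable response time):

# busy seconds per user per minute: 5 req * 2 s = 10 s -> busy fraction 10/60 = 1/6
# simulated users = concurrency / busy fraction
echo $(( 6 * 60 / 10 ))   # -> 36 users simulated at -c 6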
This depends on the behavior you expect from your users, but you can derive it from "I expect my users to make X requests per minute and I consider a 2-second average response time acceptable". Then adjust your -c level until you hit a 2-second average response time (while making sure the max response time and standard deviation are still acceptable) and see how high you can push -c.
Please walk me through the commands I should run to figure this out.
The simplest test you can do is to perform 1000 requests, 10 at a time (which approximately simulates 10 concurrent users getting 100 pages each - over the length of the test).
ab -n 1000 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://www.example.com/
-n 1000 is the number of requests to make.
-c 10 tells AB to do 10 requests at a time, instead of 1 request at a time, to better simulate concurrent visitors (vs. sequential visitors).
-k sends the KeepAlive header, which asks the web server to not shut down the connection after each request is done, but to instead keep reusing it.
I'm also sending the extra header Accept-Encoding: gzip, deflate because mod_deflate is almost always used to compress text/html output by 25%-75%, and its effect on the overall performance of the web server should not be dismissed (i.e., you can transfer 2x the data in the same amount of time, etc.).
Results:
Benchmarking www.example.com (be patient)
Completed 100 requests
...
Finished 1000 requests
Server Software: Apache/2.4.10
Server Hostname: www.example.com
Server Port: 80
Document Path: /
Document Length: 428 bytes
Concurrency Level: 10
Time taken for tests: 1.420 seconds
Complete requests: 1000
Failed requests: 0
Keep-Alive requests: 995
Total transferred: 723778 bytes
HTML transferred: 428000 bytes
Requests per second: 704.23 [#/sec] (mean)
Time per request: 14.200 [ms] (mean)
Time per request: 1.420 [ms] (mean, across all concurrent requests)
Transfer rate: 497.76 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 5 14 7.5 12 77
Waiting: 5 14 7.5 12 77
Total: 5 14 7.5 12 77
Percentage of the requests served within a certain time (ms)
50% 12
66% 14
75% 15
80% 16
90% 24
95% 29
98% 36
99% 41
100% 77 (longest request)
For the simplest interpretation, ignore everything BUT this line:
Requests per second: 704.23 [#/sec] (mean)
Multiply that by 60 and you have your requests per minute (704.23 × 60 ≈ 42,250 here).
To get real-world results, you'll want to test WordPress instead of static HTML or an index.php file, because you need to know how everything performs together: the complex PHP code, the multiple MySQL queries, and so on...
For example, here are the results of testing a fresh WordPress install on the same system and WAMP environment (I'm using WampDeveloper, but there are also XAMPP, WampServer, and others)...
Requests per second: 18.68 [#/sec] (mean)
That's 37x slower now!
After the load test, there are a number of things you can do to improve overall performance (requests per second) and make the web server more stable under greater load (e.g., increasing -n and -c tends to crash Apache). You can read about them here:
Load Testing Apache with AB (Apache Bench)
Steps to set up Apache Bench (ab) on Windows (IMO recommended):
Step 1 - Install XAMPP.
Step 2 - Open CMD.
Step 3 - Go to the Apache Bench directory (cd C:\xampp\apache\bin) from CMD.
Step 4 - Paste the command (ab -n 100 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://localhost:yourport/).
Step 5 - Wait for it. You're done.
Load testing your API with just ab is not enough. However, I think it's a great tool to give you a basic idea of how your site performs.
If you want to use the ab command to test multiple API endpoints, with different data, all at the same time in the background, you can use the nohup command. It keeps a command running even after you close the terminal (a sketch follows).
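For example, two endpoints hammered in parallel; the hosts and paths are assumptions for illustration:

nohup ab -k -c 50 -n 5000 http://api.example.com/v1/users/ > users.log 2>&1 &
nohup ab -k -c 50 -n 5000 http://api.example.com/v1/orders/ > orders.log 2>&1 &
wait   # block until both background runs finish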
I wrote a simple script that automates the whole process, feel free to use it: http://blog.ikvasnica.com/entry/load-test-multiple-api-endpoints-concurrently-use-this-simple-shell-script
I was also curious whether I could measure the speed of my script with ApacheBench (ab/abs), with a PHP construct/destruct timing script, or with a PHP extension.
The last two failed for me: they are only approximate.
After that, I thought I'd try ab and abs.
The command "ab -k -c 350 -n 20000 example.com/" is beautiful because it makes everything easier!
But has anyone thought of benchmarking against localhost on an Apache stack, e.g. XAMPP (www.apachefriends.org)?
You should create a folder such as "bench" in the document root containing two files: the test script bench.php and an empty reference script void.php.
And then: benchmark it!
bench.php
<?php
// produce ~1.7 MB of output so the server has measurable work to do
for ($i = 1; $i < 50000; $i++) {
    print('qwertyuiopasdfghjklzxcvbnm1234567890');
}
?>
void.php
<?php
?>
On your desktop you can use a .bat file (on Windows) like this:
bench.bat
"c:\xampp\apache\bin\abs.exe" -n 10000 http://localhost/bench/void.php
"c:\xampp\apache\bin\abs.exe" -n 10000 http://localhost/bench/bench.php
pause
Now, if you pay close attention...
the void script does not produce zero results!
So the conclusion is: the first (void.php) result should be subtracted from the second (bench.php) result to isolate the script's own cost.
Here is what I got:
c:\xampp\htdocs\bench>"c:\xampp\apache\bin\abs.exe" -n 10000 http://localhost/bench/void.php
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: Apache/2.4.33
Server Hostname: localhost
Server Port: 80
Document Path: /bench/void.php
Document Length: 0 bytes
Concurrency Level: 1
Time taken for tests: 11.219 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 2150000 bytes
HTML transferred: 0 bytes
Requests per second: 891.34 [#/sec] (mean)
Time per request: 1.122 [ms] (mean)
Time per request: 1.122 [ms] (mean, across all concurrent requests)
Transfer rate: 187.15 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 1
Processing: 0 1 0.9 1 17
Waiting: 0 1 0.9 1 17
Total: 0 1 0.9 1 17
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 2
98% 2
99% 3
100% 17 (longest request)
c:\xampp\htdocs\bench>"c:\xampp\apache\bin\abs.exe" -n 10000 http://localhost/bench/bench.php
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: Apache/2.4.33
Server Hostname: localhost
Server Port: 80
Document Path: /bench/bench.php
Document Length: 1799964 bytes
Concurrency Level: 1
Time taken for tests: 177.006 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 18001600000 bytes
HTML transferred: 17999640000 bytes
Requests per second: 56.50 [#/sec] (mean)
Time per request: 17.701 [ms] (mean)
Time per request: 17.701 [ms] (mean, across all concurrent requests)
Transfer rate: 99317.00 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 1
Processing: 12 17 3.2 17 90
Waiting: 0 1 1.1 1 26
Total: 13 18 3.2 18 90
Percentage of the requests served within a certain time (ms)
50% 18
66% 19
75% 19
80% 20
90% 21
95% 22
98% 23
99% 26
100% 90 (longest request)
c:\xampp\htdocs\bench>pause
Press any key to continue . . .
90 − 17 = 73 ms: the result I expected (subtracting the longest void.php request from the longest bench.php request).
Steps to perform load testing:
Step 1 - Install XAMPP.
Step 2 - Open CMD.
Step 3 - Go to the Apache Bench directory (cd C:\xampp\apache\bin) from CMD.
Step 4 - Paste the command:
ab -n 100 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://localhost:port/
Step 5 - Wait for it. You're done.
Main command (GET only):
ab -k -c 25 -n 2000 http://192.168.1.113:3001/api/v1/filters/3
One of the best solutions for POST request load testing: click here. (The core of it is ab's -p and -T flags; a sketch follows.)
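A sketch of a POST load test against the same example API as above; the payload file and content type are assumptions for illustration:

echo '{"name":"test"}' > payload.json
ab -k -c 25 -n 2000 -p payload.json -T 'application/json' http://192.168.1.113:3001/api/v1/filters/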
I have a 1 GB VPS and Apache slows to a crawl almost from startup. I ran ApacheBench on a static .html file and the results don't differ. However, the site will have both MySQL and PHP and a high volume of AJAX requests, so I'd like to tune for that.
When I restart, error logs show this almost immediately:
[error] server reached MaxClients setting, consider raising the MaxClients setting
ab -n 1000 -c 1000
shows:
Document Path: /static.html
Document Length: 7 bytes
Concurrency Level: 1000
Time taken for tests: 57.784 seconds
Complete requests: 1000
Failed requests: 64
(Connect: 0, Receive: 0, Length: 64, Exceptions: 0)
Write errors: 0
Total transferred: 309816 bytes
HTML transferred: 6552 bytes
Requests per second: 17.31 [#/sec] (mean)
Time per request: 57784.327 [ms] (mean)
Time per request: 57.784 [ms] (mean, across all concurrent requests)
Transfer rate: 5.24 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 25 13.4 25 48
Processing: 1070 16183 15379.4 9601 57737
Waiting: 0 14205 15176.5 9591 42516
Total: 1070 16208 15385.0 9635 57783
Percentage of the requests served within a certain time (ms)
50% 9635
66% 20591
75% 20629
80% 36357
90% 42518
95% 42538
98% 42556
99% 42560
100% 57783 (longest request)
If I run ab against a PHP file, it sometimes finishes, but most of the time it doesn't, and sometimes I get errors like
apr_socket_recv: Connection reset by peer (104)
and
socket: No buffer space available (105)
httpd.conf items:
Timeout 10
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 1
<IfModule prefork.c>
StartServers 3
MinSpareServers 5
MaxSpareServers 9
ServerLimit 40
MaxClients 40
MaxRequestsPerChild 5000
</IfModule>
Top... (CPU and Load 1min are very erratic during testing):
top - 10:44:51 up 11:50, 3 users, load average: 0.17, 0.42, 0.90
Tasks: 84 total, 2 running, 82 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.8%us, 3.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1793072k total, 743604k used, 1049468k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21831 mysql 18 0 506m 71m 6688 S 0.7 4.1 4:03.18 mysqld
1828 root 15 0 113m 52m 2052 S 0.0 3.0 0:02.85 spamd
1830 popuser 18 0 113m 51m 956 S 0.0 2.9 0:00.00 spamd
8012 apache 15 0 327m 35m 17m S 3.7 2.0 0:11.83 httpd
8041 apache 15 0 320m 28m 15m S 0.0 1.6 0:11.83 httpd
8022 apache 15 0 321m 27m 14m S 2.3 1.6 0:11.05 httpd
8033 apache 15 0 320m 27m 14m S 1.7 1.6 0:10.06 httpd
Is there something obviously wrong here? Or what would be my next step in troubleshooting?
It sounds like you don't have enough memory: 1 GB isn't much when you're running PHP with prefork and MySQL on the same server. Your MaxClients should probably be 10-20, not 40 (a rough calculation follows).
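A sketch of that arithmetic, using the top output above: each httpd child shows roughly 27-35 MB resident, and MySQL, spamd, and the OS need their share of the 1 GB (the 300 MB and 250 MB set-asides below are assumptions):

# memory left for Apache ≈ 1024 MB - ~300 MB (MySQL + spamd) - ~250 MB (OS headroom)
echo $(( (1024 - 300 - 250) / 30 ))   # ≈ 15 -> MaxClients in the 10-20 range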
A few weeks ago I wrote a script to tune Apache httpd that should help determine the maximum values for your server. You can find the weblog entry here: http://surniaulula.com/2012/11/09/check-apache-httpd-mpm-config-limits/ and the script is on Google Code as well.
Enjoy!
js.
One of my sites is growing and I'm having scalability issues. My knowledge of this new software is rather limited, and my hosting company doesn't have a clue either.
Shared memory is not working, since variables are not cached between requests. Is there a way to make this work? At the moment the script relies on Memcached, but that adds TCP/IP overhead.
Will PHP-FPM automatically solve this problem? Since it comes bundled with PHP 5.3.3, maybe a PHP upgrade will suffice?
== OUTPUT FROM apc.php ==
General Cache Information
APC: 3.0.19
PHP: 5.2.14
Server: Apache/2.2.16 (Unix) mod_ssl/2.2.16 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.5
Shared Memory: 1 Segment(s) with 128.0 MBytes
(mmap memory, pthread mutex locking)
File Cache Information
Cached Files 91 ( 4.2 MBytes)
Hits 245
Misses 92
Request Rate (hits, misses) 0.41 cache requests/second
Hit Rate 0.30 cache requests/second
Miss Rate 0.11 cache requests/second
Insert Rate 0.11 cache requests/second
Cache full count 0
User Cache Information (PROBLEM!!!!)
Cached Variables 0 ( 0.0 Bytes)
Hits 0
Misses 0
Request Rate (hits, misses) 0.00 cache requests/second
Hit Rate 0.00 cache requests/second
Miss Rate 0.00 cache requests/second
Insert Rate 0.00 cache requests/second
Cache full count 0
Runtime Settings
apc.cache_by_default 1
apc.coredump_unmap 0
apc.enable_cli 0
apc.enabled 1
apc.file_update_protection 2
apc.filters
apc.gc_ttl 3600
apc.include_once_override 0
apc.max_file_size 10M
apc.mmap_file_mask /tmp/apc.RqsiCE
apc.num_files_hint 1024
apc.report_autofilter 0
apc.rfc1867 0
apc.rfc1867_freq 0
apc.rfc1867_name APC_UPLOAD_PROGRESS
apc.rfc1867_prefix upload_
apc.shm_segments 1
apc.shm_size 128
apc.slam_defense 0
apc.stat 1
apc.stat_ctime 0
apc.ttl 7200
apc.user_entries_hint 4096
apc.user_ttl 7200
apc.write_lock 1
What are you using in your PHP code to store and fetch variables? Are you sure you're using APC to store them? Try a small sample script to check that you're using everything properly.
Try enabling apc.enable_cli; a one-line check follows.
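A minimal check from the shell, assuming the APC extension is loaded for the CLI as well (the key and value are arbitrary):

php -d apc.enable_cli=1 -r 'var_dump(apc_store("test_key", 42)); var_dump(apc_fetch("test_key"));'

If apc_store() returns true but apc_fetch() fails, the user cache itself is broken. Also note that under mod_fcgid (which your Server string shows), each php-cgi process gets its own APC shared-memory segment, which is often why variables appear not to survive between requests.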
I've worked with APC under FastCGI (php-cgi) and php-fpm.