New server from GoDaddy running really slow - Apache

I purchased a new dedicated server from GoDaddy yesterday. The website pages are loading really slowly. I have 16 GB of RAM and an i7 processor. I am trying to optimize my Apache server for high traffic (10K+ active users). Here are the old and new settings:
Old:
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 0
</IfModule>
New:
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 2500
ServerLimit 2500
MaxRequestsPerChild 0
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
Timeout 30
</IfModule>
What are the best settings to solve my problem? Note the website is a PHP/MySQLi application. It also has about 3 to 5 images on each page.
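As a rough sanity check on those numbers: prefork capacity is bounded by memory, roughly MaxClients ≈ (RAM spared for Apache) / (average Apache process size). A minimal sketch, assuming the processes are named apache2 (httpd on some distros) and that about 12 GB of the 16 GB remains for Apache after MySQL and the OS:
ps -ylC apache2 | awk 'NR>1 {sum += $8; n++} END {if (n) printf "%.1f MB avg\n", sum/n/1024}'
# If the average were, say, 30 MB: 12288 MB / 30 MB ≈ 400 workers,
# which suggests MaxClients 2500 is far beyond what 16 GB can sustain with mod_php.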

Have a look at this page - I use it all the time as a reference.
I also strongly recommend having a play with Apache's benchmark tool, ab, once you've got it up and running (a sample run is sketched below). Keep an eye on the traffic for a few days and adjust your settings accordingly. Every setup is different: the applications you run can use variable amounts of memory. Maybe the most commonly used page is a low-resource page? Maybe people spend a lot of time on single pages? It's all conditional.
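For example, an illustrative ab run against a busy page (the URL is a placeholder):
ab -n 1000 -c 100 http://www.example.com/
# -n is the total number of requests, -c the concurrency; compare
# "Requests per second" and "Time per request" as you vary the MPM settings.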
Good luck!

Related

mod_evasive not working in Apache 2.4.48 on Ubuntu 18.04

I installed mod_evasive per the instructions at https://www.atlantic.net/vps-hosting/how-to-install-and-configure-modevasive-with-apache-on-ubuntu-18-04/, but with the configuration below:
DOSHashTableSize 3097
DOSPageCount 1
DOSSiteCount 10
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 10
But when I run the test perl script, I don't see the IP being blacklisted: all requests get a 200 OK response when I am expecting 403 Forbidden. :(
What am I getting wrong?
Additional details: when I restart Apache, I see 6 instances of it. When I run the test perl script and immediately check the number of Apache instances, I see around 30 before the count comes down to 10 after a while.
My Apache config looks like below:
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
My mpm_prefork_module config looks like below:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxRequestWorkers 80
MaxConnectionsPerChild 1
Is this an issue with the Apache configuration?
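Incidentally, a quick way to watch the Apache process count while the test script runs (assuming the processes are named apache2, as on Ubuntu):
watch -n1 'pgrep -c apache2'
# Prints the current number of Apache processes once per second.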
The issue was not with mod_evasive or its configuration per se.
In my case I had to tweak the mpm_prefork_module configuration as below to get mod_evasive to work:
StartServers 10
MinSpareServers 10
MaxSpareServers 10
MaxRequestWorkers 80
MaxConnectionsPerChild 0
Basically, fix the number of server processes to a constant by setting StartServers = MinSpareServers = MaxSpareServers = {your_magic_number}, and set MaxConnectionsPerChild=0 so that child processes are never recycled. mod_evasive keeps its hit-count hash table per child process, so when children are constantly killed and respawned (as with MaxConnectionsPerChild 1), the counts never accumulate far enough to trigger a 403.
I lost a day fixing this one; I hope that with this answer and its formula, you don't lose yours. :)
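To verify the fix without the perl script, a minimal sketch (assumes the site answers on localhost and the thresholds from the question):
for i in $(seq 1 20); do curl -s -o /dev/null -w '%{http_code}\n' http://localhost/; done
# Once DOSPageCount hits arrive within DOSPageInterval, the responses
# should flip from 200 to 403.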

Apache optimization for images or static files

I have several pages with about 200 images each. When I access them, Apache spawns a lot of processes, using more than 1 GB of RAM. I can see several "httpd" entries in top, each using 0.6% of RAM.
All files are static, small JPG files. I'm using .htaccess for client-side caching, but this is not enough, since I get several new (uncached) users each hour.
My config:
KeepAlive On
MaxKeepAliveRequests 200
KeepAliveTimeout 30
StartServers 1
MinSpareServers 2
MaxSpareServers 4
ServerLimit 300
MaxClients 300
MaxRequestsPerChild 0
MaxRequestWorkers 300
What is the best way to serve lots of static files per page with low memory usage? It's CentOS 7 with Apache 2.4.6, almost in default config except for the directives above.
Thanks.
1 GB of RAM used is not much, but Apache performance also depends a lot on your CPU specs.
I am not sure how many CPUs you have, but I think the following settings need to be increased:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
You can also check loading times using the browser's developer tools, which show how long each image takes to load.
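Since the question already uses .htaccess for client-side caching, a minimal mod_expires sketch for long-lived image caching (assumes mod_expires is enabled; adjust the lifetime to taste):
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/jpeg "access plus 30 days"
ExpiresByType image/png "access plus 30 days"
</IfModule>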

Configure Apache MPM worker module for 1000+ users at an instant

I have a YouTube proxy site, http://playit.pk. The issue is that whenever concurrent users go above 500, the server gets really slow. I am using the MPM worker module and have tried several configs.
Current one is:
<IfModule mpm_worker_module>
ServerLimit 40
StartServers 10
MaxClients 2000
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 50
MaxRequestsPerChild 0
</IfModule>
Other configuration settings are:
Timeout 20
KeepAlive On
MaxKeepAliveRequests 100
The main server is only responsible for request handling; no streaming takes place here.
With the above configuration there is still delay, and Apache shows around 1000 requests currently being processed.
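That "requests currently being processed" line comes from mod_status; a quick way to watch the worker counts while tuning (assuming /server-status is enabled and reachable from localhost):
watch -n2 "curl -s 'http://localhost/server-status?auto' | grep -E 'BusyWorkers|IdleWorkers'"
# BusyWorkers pinned near MaxClients means requests are queueing for threads.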
Use FastCGI (mod_fcgid):
FcgidMinProcessesPerClass 0
FcgidMaxProcessesPerClass 700
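For context, a minimal sketch of how those directives fit into a mod_fcgid PHP setup (assumes mod_fcgid and php-cgi are installed; the wrapper path varies by distro):
AddHandler fcgid-script .php
FcgidWrapper /usr/bin/php-cgi .php
FcgidMinProcessesPerClass 0
FcgidMaxProcessesPerClass 700
# PHP runs in separate FastCGI processes, so Apache's worker threads are not
# tied up holding a PHP interpreter for the life of each request.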

Configuring HAProxy+nginx+PHP-FPM to out perform Apache+mod_php

Edit
Running my OS in VirtualBox was the issue. As soon as I ran the OS natively on the disk, I saw the expected performance boost.
Original
I've read a lot of people recommending ditching Apache+mod_php for HAProxy+nginx+PHP-FPM. I'm trying to verify that it's a more efficient setup, but I'm not seeing the results people describe. Both siege and ab (ApacheBench) show that Apache gives better responses per second at any number of concurrent connections, and can support more connections.
I'm running Ubuntu 11.04 Server on VirtualBox, with 10 GB of disk and 1,344 MB of memory. I installed the programs mentioned above with apt-get. Here are the related config files, with just the important parts included.
haproxy.cfg
global
maxconn 4096
user haproxy
group haproxy
daemon
stats socket /var/run/haproxy.sock mode 0600 level admin
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 2000
contimeout 5000
clitimeout 50000
srvtimeout 50000
listen tcpcluster *:80
mode tcp
option tcplog
balance roundrobin
server tcp01 192.168.1.199:8080 check
nginx.conf
worker_processes 2;
events {
worker_connections 768;
}
www.conf
pm = dynamic
pm.max_children = 10
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 500
apache.conf
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 10
MaxRequestsPerChild 0
</IfModule>
<IfModule mpm_worker_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxClients 10
MaxRequestsPerChild 0
</IfModule>
<IfModule mpm_event_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxClients 10
MaxRequestsPerChild 0
</IfModule>
Given that PHP-FPM and Apache both have a maximum of 10 children, I would expect any speed advantage to be visible. In every test I've run (always waiting until load is 0.01 before I run the test), Apache is always able to handle more requests, more efficiently.
Is there some other optimization I can make so that the setup best suited to scale actually outperforms the setup that should be less efficient?
Use haproxy as a connection concentrator: use "mode http" instead of "mode tcp", enable "option http-server-close", and set a per-server maxconn value well below nginx's worker_connections value (sketched below). You should cross a point where the lower concurrency brings much more performance with much lower RAM usage and better cache efficiency along the whole chain.
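A minimal sketch of those changes applied to the listen section above (the maxconn value is illustrative and should be tuned):
listen httpcluster *:80
mode http
option httplog
option http-server-close
balance roundrobin
server web01 192.168.1.199:8080 maxconn 100 check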
BTW, what are the numbers we're talking about? Are they in the hundreds or in the thousands of requests per second? Clearly, the application server will make a real difference only at higher loads. Obviously, if the application itself runs very slowly, there is no reason to see a difference when replacing the server.

OSQA apache memory footprint

I have an OSQA (Python/Django Q&A application) installation serving 8 different sites. The sites are all in development, receiving minimal traffic. The server is a virtual private server with 512 MB of RAM.
Apache only serves dynamic pages via mod_wsgi behind nginx. I can't stop Apache from consuming more and more memory with each request until the server chokes.
I experimented with the configuration parameters to minimize the memory footprint, without much luck. With the following mpm_prefork parameters in apache2.conf:
StartServers 2
MinSpareServers 1
MaxSpareServers 4
MaxClients 4
MaxRequestsPerChild 100
Two Apache processes start out using 4 MB each; after the first request there are 4 processes, each at nearly 50 MB, and with each new request those 4 processes climb steadily to nearly 200 MB each.
I feel like something is going wrong. Any suggestions are greatly appreciated.
What worked for me was keeping Apache itself small and moving the app into a mod_wsgi daemon process group that gets recycled via maximum-requests:
KeepAlive Off
MaxSpareThreads 3
MinSpareThreads 1
ServerLimit 3
SetEnvIf X-Forwarded-SSL on HTTPS=1
ThreadsPerChild 2
WSGIDaemonProcess osqaWSGI processes=2 python-path=/web/osqa_server:/web/osqa_server/lib/python2.6 threads=1 maximum-requests=550
WSGIProcessGroup osqaWSGI
I ran httperf against this with 10,000 concurrent hits and it was still standing.
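For reference, an illustrative httperf invocation along those lines (host and rate are placeholders):
httperf --server example.com --uri / --num-conns 10000 --rate 200
# Opens 10,000 connections in total at a rate of 200 new connections per second.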