How to make Apache slow and unreliable?

I'm writing some code on a mobile device that uses a REST service to retrieve data from a host. That REST service is being proxied by Apache. In test mode I would like to be able to simulate network outages (as if the device has lost its cell connection) to test the application's handling of intermittent failures. I also need to validate its behavior with slow network connections.
I'm currently using Traffic Shaper XP to slow the network connection, but now I need something to make the Apache server send connection resets both randomly and in predefined sequences (to set up and repeat specific test scenarios).

I highly recommend https://github.com/Shopify/toxiproxy from Shopify:
Download the CLI and server from https://github.com/Shopify/toxiproxy/releases
Run the server:
./toxiproxy-server-linux-amd64
Using the CLI, set up a proxy to Apache on another port, e.g. 8080:
./toxiproxy-cli create apache -l localhost:8080 -u localhost:80
Make connection slow and unreliable:
./toxiproxy-cli toxic add apache -t latency -a latency=3000
./toxiproxy-cli toxic add apache -t limit_data -a bytes=1000 --tox=0.01
This adds 3 seconds of latency and, for 1% of connections, closes the connection after 1000 bytes. There are other toxics for bandwidth and so on, and you can add or remove them while tests are running. Lots of other features and client libraries there.
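For the connection-reset part of the question, newer toxiproxy releases also ship a reset_peer toxic. A sketch of using it (check toxiproxy-cli toxic --help to confirm your build includes it; toxic names default to <type>_downstream):
# send a TCP RST instead of a clean close on 5% of connections
./toxiproxy-cli toxic add apache -t reset_peer -a timeout=0 --toxicity=0.05
# remove it again when the scenario is over
./toxiproxy-cli toxic remove apache -n reset_peer_downstream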

In Apache 2 you can make it slow by adjusting the prefork settings in apache2.conf. The settings below ought to make Apache painfully slow; they made my local web application take 700% longer to load.
<IfModule mpm_prefork_module>
StartServers 2
MinSpareServers 2
MaxSpareServers 2
MaxClients 4
MaxRequestsPerChild 0
</IfModule>

It looks like DummyNet is the closest thing, but it’s still not quite there. For repeatable testing it would be good to have some control over dropped packets and resets.

Write a little proxy that forwards TCP connections from your app to the apache server and that you can set up in your test to cut the connection after x number of bytes or milliseconds.
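A minimal sketch of such a proxy in Python, assuming the app is pointed at localhost:8888 and Apache listens on localhost:80; CUT_AFTER_BYTES is a made-up knob that drops the connection after that many response bytes (a millisecond-based cut could be added the same way with a deadline check in the copy loop):
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8888)   # where the app connects (assumption)
UPSTREAM_ADDR = ("127.0.0.1", 80)   # the real Apache server (assumption)
CUT_AFTER_BYTES = 1000              # drop the connection after this many response bytes

def pipe(src, dst, limit=None):
    """Copy bytes from src to dst; close both sockets on EOF or once limit is reached."""
    sent = 0
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
            sent += len(data)
            if limit is not None and sent >= limit:
                break
    except OSError:
        pass  # the other direction may already have torn the sockets down
    finally:
        src.close()
        dst.close()

def handle(client):
    upstream = socket.create_connection(UPSTREAM_ADDR)
    # request direction is unrestricted; response direction is cut after the limit
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client, limit=CUT_AFTER_BYTES)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen(5)
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()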

On a different (or the same) computer, use the command-line tool ab (ApacheBench, shipped with Apache) to put some load on Apache.
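For example, to fire 1000 requests with 50 in flight at a time (the URL and numbers are placeholders):
ab -n 1000 -c 50 http://localhost/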

Is this a Unix or Linux environment? nice Apache down to a lower priority, then run a high-CPU task like listening to music, playing a movie, calculating pi, etc. The low priority for Apache should create problems similar to what you're looking for.
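For example, assuming the worker processes are named apache2 (use httpd on Red Hat-style systems), this drops them to the lowest scheduling priority:
sudo renice -n 19 -p $(pgrep apache2)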

Related

1 server, multiple cPanel: how does Apache work?

I want to ask something about a dedicated server.
I have a dedicated server and a cPanel website with heavy load. When I check the server load, no parameter goes above 60% usage, but the Apache workload is high.
So I wonder if I can do this:
I buy a dedicated server (DS) and install 2 cPanel instances on the same DS. I know that cPanel needs an IP to bind the license, so I add 1 additional IP to my DS.
What I am trying to achieve here is to split the workload of the same website, and to split the traffic I use a load balancer from CF.
So I have abc.com with 2 different IPs and use the load balancer to split the load.
Here is why I need to do this:
Server load is relatively low (under 80%)
Apache load is relatively high (3-10 req/s)
There is a problem in your problem definition:
What do you mean by "Apache work"?
If you want more threads and processes of Apache httpd on the same server, you don't need to install two cPanel instances; you could tune your Apache httpd worker configuration for better performance and resource utilization, as sketched below.
You can even use LiteSpeed or nginx web servers on cPanel.
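As a rough illustration of that tuning (the numbers are placeholders to be sized against your RAM and traffic, and on cPanel/EasyApache these values are normally managed through its own configuration files), a worker/event MPM block looks like this:
<IfModule mpm_event_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestWorkers 400
MaxConnectionsPerChild 0
</IfModule>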

Why would Apache be slow when application server is quick?

We are using Apache as the web server, and it proxies requests to Jboss (think Tomcat) Java application server using AJP.
We have logging on for Apache and for our web application in Jboss.
We are seeing, not always but sometimes, cases where the processing time for a request in Jboss is less than half a second, but in the Apache log for the same request it is taking over 8 seconds to complete the request.
I can't even think where to start looking and I have not come up with a good Google search to try and work out why Apache is sitting on the request for so long. Any help appreciated.
Disclaimer: Educated guess taken from my experience with running such setups.
Preface
Apache can be configured to allow only a limited number of connections at the same time. In fact this is a prudent way to configure Apache since every connection uses a certain amount of resources and having no upper limit puts you at risk to run into a situation, where your main memory is exhausted and your server becomes unresponsive.
Resource exhaustion
That being said, Apache is usually configured as shown below, your numbers and modules may be different though. The principle still applies.
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 0
</IfModule>
This indicates that Apache can process at most 150 concurrent connections.
If a client initiates the 151st connection, the operating system kernel tries to forward it to the Apache process, but Apache won't accept any more connections. The kernel then queues the connection until another connection is closed by Apache.
The time it takes until the kernel can successfully hand over the connection looks to the user as if the request itself takes longer to complete.
The application server, on the other hand, doesn't know about the delay and receives the request only after the connection has been established. To the application server, therefore, everything looks normal.
If you don't have enough resources to increase the concurrent connections in Apache, consider switching to a more resource-efficient web-proxy, like nginx or Varnish.
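One way to check whether this is what's happening (the log path and port below assume a Debian-style install listening on port 80): Apache logs a warning when it hits the limit, and you can compare the number of established client connections against MaxClients:
grep -i maxclients /var/log/apache2/error.log
ss -tan state established '( sport = :80 )' | wc -l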
I don't think Apache is actually slow in your case. I guess you are using keep-alive connections between Apache and JBoss. Under some circumstances, for example when the connector uses a blocking IO strategy and meanwhile the number of Apache httpd processes is higher than the number of executor threads configured in the JBoss connector, it can cause the JBoss container thread to stay blocked after it has served a request. You should post your Apache and JBoss configurations in order to get more specific answers.

Monitor Bandwidth on Apache Individual Virtual Websites (MRTG?)

How do I monitor bandwidth usage of individual virtual sites on Apache? (Ubuntu 14).
On our IIS server, we use the performance monitor, save to csv file and have MRTG parse the data and display it as graphs.
Can I do this with MRTG? I read of an unsupported module for Apache (mod_monitor??) that some had tried to use, but I really don't want to go with unsupported software.
The short answer is that you probably cannot do it without a little additional work.
The longer answer is that, while MRTG can graph anything in theory, you have to provide it with a way to obtain the data. The throughput of a network interface is already provided via SNMP, but the network traffic per virtual server is a little harder to come by, and you need to convince Apache to hand this data over in a format you can use.
You are clearly already aware of much of this, since under IIS you used the performance monitor to obtain the data from the perfstats. In fact, with IIS, instead of dumping the stats to a file and parsing it, you can use a plugin like mrtg-nsclient to query the perfstats directly via the Nagios nsclient++ agent. However, you are using Apache...
One way to achieve it would be to run each virtual server on a separate TCP port, and then use iptables logging rules to count the bytes passed. The output of iptables -L can then be parsed by MRTG to get the counters.
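A sketch of what those counting rules might look like, assuming two vhosts bound to ports 8081 and 8082 (the port numbers are placeholders). Rules without a -j target do nothing except increment their counters, and the byte counts show up in the verbose, exact-count listing that MRTG can parse:
iptables -I INPUT -p tcp --dport 8081
iptables -I OUTPUT -p tcp --sport 8081
iptables -I INPUT -p tcp --dport 8082
iptables -I OUTPUT -p tcp --sport 8082
# read the per-rule packet/byte counters
iptables -L -n -v -x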
If you want to use name virtual hosts, though, only Apache's internals have the relevant data.
I have an MRTG data collection plugin that obtains total traffic counts via the mod_status URL. This allows graphing of the number of active Apache threads and the total traffic. However, it is not split by virtual server, so you cannot get individual statistics. Even with ExtendedStatus on, you only see the activity of the current threads: you can tell how many threads are active per vhost, but not the total bytes transferred by each vhost.
The output you want appears to exist in mod_watch which will output one line of statistics per vhost on the URL /watch-list. However, this is an older module and may require modification in order for it to compile against Apache 2.4. It is also very hard to come by as the author has apparently tried to bury it. It used to be on github but vanished in 2012.
Try here: https://github.com/pld-linux/apache-mod_watch for the source,
Try here: http://fossies.org/windows/www/httpd-modules-2.4-win64-VC11.zip/index_o.html for the windows binary for Apache 2.4

Service Temporarily Unavailable under load in OpenShift Enterprise 2.0

Using OpenShift Enterprise 2.0, I have a simple jbossews (tomcat7) + mysql 5.1 app that uses JSP files connected to a mysql database. The app was created as a non-scaled app (fwiw the same issue happens when scaling is enabled).
Using a JMeter driver with only a single concurrent user and no think time, it will chug along for about 2 minutes (at about 200 req/sec) and then start returning "503 Service Temporarily Unavailable" in batches (a few seconds at a time), on and off, for the remainder of the test. Even if I change nothing (I don't restart the app), waiting a moment and trying again gives the same result: first it seems fine, but then the errors start.
The gear is far from fully-utilized (memory/cpu), and the only log I can find that shows a problem is the /var/log/httpd/error_log, which fills up with these entries:
[Tue Mar 25 15:51:13 2014] [error] (99)Cannot assign requested address: proxy: HTTP: attempt to connect to 127.8.162.129:8080 (*) failed
Looking at the 'top' command on the node host at the time that the errors start to occur, I see several httpd processes surge to the top on and off.
So it looks like I am somehow running out of proxy connections or something similar. However, I'm not sure how that is happening with only a single concurrent user. Any ideas of how to fix this? I couldn't find any similar posts.
The core problem is that the system is running out of ephemeral ports due to connections stuck in TIME_WAIT. Check using:
netstat -pan --tcp | less
or
netstat -pan --tcp | grep -c ".*TIME_WAIT"
to just count the number of connections in time wait state.
These are connections made by the node port proxy (httpd) to the tomcat backend. There are several ways to change TCP settings in order to lessen the problem. First attempt is to enable reuse. Append the following to /etc/sysctl.conf:
# allow reuse of time_wait connections
net.ipv4.tcp_tw_reuse=1
This will allow connections in TIME_WAIT state to be reused if there are no ephemeral ports available.
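Then reload the sysctl settings, or apply the value to the running kernel without editing the file:
sysctl -p
# or, as a one-off:
sysctl -w net.ipv4.tcp_tw_reuse=1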
However, the problem largely remains that these connections are not being properly pooled. I do not run into this issue outside of a gear with the same app + driver, meaning that there the connections are properly pooled and don't have to sit in TIME_WAIT state at all. Something in the proxy must be interfering with how the connections are closed.
It looks like mod_proxy / mod_rewrite is not configured for connection pooling/keep-alive, or is not compatible with it.
If you're hitting this issue, you should first try moving to vhost routing, but tcp_tw_reuse can help if vhost connections are still so high that you run out of ports.
https://access.redhat.com/articles/1203843 also has a lot of good information on the topic, including this on possible causes of error 503:
Understanding that HAProxy performs health checks on gear (application) contexts is important because if these checks fail you can see 502 or 503 errors when trying to access your application, because the proxy disables the route to the application (i.e. puts the gear in maintenance mode).
...and...
...if you are seeing 502 or 503 errors when trying to access your application, it could be because the proxy is disabling the routes to the application (i.e. puts the gear in maintenance mode), because it is failing health checks...

Can I use Apache mod_proxy as a connection pool, under the Prefork MPM?

Summary/Question:
I have Apache running with Prefork MPM, running php. I'm trying to use Apache mod_proxy to create a reverse proxy that I can re-route my requests through, so that I can use Apache to do connection pooling. Example impl:
in httpd.conf:
SSLProxyEngine On
ProxyPass /test_proxy/ https://destination.server.com/ min=1 keepalive=On ttl=120
but when I run my test, which is the following command in a loop:
curl -G 'http://localhost:80/test_proxy/testpage'
it doesn't seem to re-use the connections.
After some further reading, it sounds like I'm not getting connection pool functionality because I'm using the Prefork MPM rather than the Worker MPM. So each time I make a request to the proxy, it spins up a new process with its own connection pool (of size one), instead of using the single worker that maintains its own pool. Is that interpretation right?
Background info:
There's an external server that I make requests to, over https, for every page hit on a site that I run.
Negotiating the SSL handshake is getting costly, because I use php and it doesn't seem to support connection pooling - if I get 300 page requests to my site, they have to do 300 SSL handshakes to the external server, because the connections get closed after each script finishes running.
So I'm attempting to use a reverse proxy under Apache to function as a connection pool, to persist the connections across php processes so I don't have to do the SSL handshake as often.
Sources that gave me this idea:
http://httpd.apache.org/docs/current/mod/mod_proxy.html
http://geeksnotes.livejournal.com/21264.html
First of all, your test method cannot demonstrate connection pooling, since for every call a curl client is born and then it dies. Just as dead people don't talk much, a dead process cannot keep a connection alive.
You have clients that bother your proxy server.
Client ====== (A) =====> ProxyServer
Let's call this connection A. Your proxy server does nothing; it is just a show-off. The handsome and hardworking web server is so humble that it hides behind it.
Client ====== (A) =====> ProxyServer ====== (B) =====> WebServer
Here, if I am not wrong, the secured connection is A, not B, right?
Repeating my first point: in your test you are creating a separate client for each request. Every client needs a separate connection. A connection is something that happens between at least two parties; if one side leaves, the connection is lost.
Okay, let's forget curl now and look together at what we really want to do.
We want to have SSL on A and we want A side of traffic to be as fast as possible. For this aim, we have already separated side B so it will not make A even slower, right?
Connection pooling? There is no such thing as connection pooling at A. Every client comes and goes, making a lot of noise. The only thing that can help you reduce this noise is "Keep-Alive", which means keeping a client's connection alive for some short period of time so that the very same client can ask for the other files required by the request. When we are done, we are done.
Connections on B will be pooled; but this will not bring you any performance gain, since in a one-server setup you did not have this part of the noise production anyway.
How do we help this system run faster?
If these two servers are on the same machine, we should get rid of the show-off server and continue with our hardworking webserver. It adds a lot of unnecessary work to the system.
If these are separate machines, then you are being nice to the web server by taking at least the encryption (SSL) load off this poor guy. However, you can be even nicer.
If you want to continue on Apache, switch to mpm_worker from mpm_prefork. In case of 300+ concurrent requests, this will work much better. I really have no idea about the capacity of your hardware; but if handling 300 requests is difficult, I believe this little change will help your system a lot.
If you want an even more lightweight system, consider nginx as an alternative to Apache. It is very easy to set up to work with PHP and it will perform better.
Beyond the front-end side of things, also consider checking your database server. Connection pooling will make a real difference there. Make sure your PHP installation is configured to reuse connections to the database.
In addition, if you are hosting static files on the same system, then move them out either on another web server or do even better by moving static files to a cloud system with CDN like AWS's S3+CloudFront or Rackspace's CloudFiles. Even without CloudFront, S3 will make you happy. Rackspace's solution comes with Akamai!
Taking out static files will make your web server go "oh, what happened, what is this silence? ohhh, heaven!", since you mentioned this is a website, and web pages usually have many static files for each dynamically generated HTML page.
I hope you can save the poor guy from the killer work.
Prefork can still pool 1 connection per backend server per process.
Prefork doesn't necessarily create a new process for each frontend request; the server processes are "pooled" themselves, and the behavior depends on e.g. MinSpareServers/MaxSpareServers and friends.
To maximise how often a prefork process will have a backend connection for you, avoid very high or low MaxSpareServers or a very high MinSpareServers, as these will result in "fresh" processes accepting new connections.
You can log %P in your LogFormat directive to help get an idea of how often processes are being reused.
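For instance (the format name and log path are just examples), something along these lines records the PID of the child that served each request; seeing the same PIDs repeat across consecutive requests suggests the processes, and hence their backend connections, are being reused:
LogFormat "%h %l %u %t \"%r\" %>s %b pid:%P" withpid
CustomLog logs/access_log withpid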
The problem in my case was that connection pooling between the reverse proxy and the backend server was not taking place, because the backend Apache server was closing the SSL connection at the end of each HTTPS request.
The backend Apache server was doing this because of the following directive being present in httpd.conf:
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
This directive does not make sense when the backend server is reached via a reverse proxy, and it can be removed from the backend server config.
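After removing it, one rough way to confirm that pooling now happens (assuming the backend serves HTTPS on port 443) is to watch, on the proxy host, how many established connections there are to the backend; with keep-alive working the count should stay small and stable under load instead of growing with every request:
# note: the first line of output is a header
ss -tan state established '( dport = :443 )' | wc -l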