I'm setting up a new Apache server.
If I set the MaxClients directive to, let's say, one - does it then block anyone else who tries to connect to my Apache server except for that one person?
Thanks for answers
Yes, only one request will be handled by Apache at a time. If you set quite a long KeepAlive timeout, the process will be held for that amount of time by the user. The second user will get his request served only when the first one is finished (or when the KeepAlive timeout ends, if the connection is in keep-alive mode). So with a small KeepAlive setting, if the requests are fast to handle, you could serve a lot of users, one after the other, without them ever knowing that Apache handles only one request in parallel.
If there is continued activity on the first connection, it may block the second request for a very long time.
That said, you shouldn't consider this a way to restrict access to one dedicated person; it's not a security feature. You rely on TCP/IP connections, and if the connection breaks, the second user may get access to the server.
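As a rough illustration, the relevant httpd.conf directives would look something like this (prefork MPM assumed; on Apache 2.4 the directive is called MaxRequestWorkers, and the timeout value is only an example):

    # Only one child process serves requests, so only one request at a time
    MaxClients       1
    # With KeepAlive on, an idle client can hold that single slot open
    KeepAlive        On
    # Keep the keep-alive window short so other users are not starved for long
    KeepAliveTimeout 5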
We are running multiple Tomcat JVMs behind a single Apache cluster. If we shut down all the JVMs except one, we sometimes get 503s. If we increase the retry interval to 180 (from retry=10), the problem goes away. That brings me to this question: how does Apache detect a stopped Tomcat JVM? If I have a cluster that contains multiple JVMs and some of them are down, how does Apache find that out? Somewhere I read that Apache uses a real request to determine the health of a backend JVM. In that case, will that request fail (with a 5xx) if the JVM is stopped? Why does the higher retry value make the difference? Do you think introducing ping might help?
If someone can explain a bit or point me to some docs, that would be awesome.
We are using Apache 2.4.10, mod_proxy, the byrequests LB algorithm, sticky sessions, keepalive on, and ttl=300 for all balancer members.
Thanks!
Well, let's examine what your configuration is actually doing in practice and then move on to what might help.
[docs]
retry - Whether you set it to 10 or 180, what you specify is how long Apache will consider your backend server down and thus won't send it requests. The higher the value, the more time your backend gets to come up completely, but you put more load on the others, since you are one server short for longer.
stickysession - If you lose a backend server for whatever reason, all the sessions that are on it get an error.
All right, now that we have described the parameters relevant to your situation, let's make clear that Apache mod_proxy does not have an embedded health-check mechanism: it updates the status of your backends based on the responses to real requests.
So your current configuration works as following:
A request arrives at Apache
Apache sends it to a live backend
If the request gets an error HTTP code as a response, or gets no response at all, Apache puts that backend in ERROR state
After the retry time has passed, Apache starts sending requests to that backend server again
So, reading the above, you can see that the first request to reach a backend server which is down will get an error page.
One of the things you can do is indeed ping, which according to the docs will check the backend before sending any request. Consider, of course, the overhead that produces.
I would also suggest configuring mod_proxy_ajp, which offers extra functionality (and configuration, of course) for your Tomcat backend failover detection.
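As a rough sketch, combining retry and ping in a balancer definition could look something like the following (the hostnames, ports and routes are made up, and the values are only illustrative):

    <Proxy "balancer://tomcats">
        # retry: seconds a member stays in ERROR state before Apache tries it again
        # ping: probe the backend (CPING/CPONG for AJP) before forwarding the real request
        BalancerMember "ajp://tomcat1.example.com:8009" route=jvm1 retry=10 ping=3 ttl=300 keepalive=On
        BalancerMember "ajp://tomcat2.example.com:8009" route=jvm2 retry=10 ping=3 ttl=300 keepalive=On
        ProxySet lbmethod=byrequests stickysession=JSESSIONID|jsessionid
    </Proxy>
    ProxyPass        "/app" "balancer://tomcats/app"
    ProxyPassReverse "/app" "balancer://tomcats/app"

The ping probe is what avoids burning a real user request on a dead member, at the cost of one extra round trip per request.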
How would you monitor a server's performance in the sense of:
Count requests that have timed out without being processed at all (the client was starved)
Count requests that have timed out while being processed
Count requests that failed because of an error, at least at the Apache level
Thanks
Count requests that have timed out without being processed at all (the client was starved)
It depends on what platform you are operating and what the Apache server is used for. In case the Apache server is used as a back end for some website, you could add timestamps to each request made by the client (website user), or let the client keep track of the requests it performed along with their associated timestamps. Send this data to the server and let the server compare it to its own logs.
Thus I would advise keeping track, both client-side and server-side, of all requests sent and received, together with their status (success or failure) and timestamp.
For more specific info I think more context on the actual implementation is a must.
As far as I know, Apache does not support this kind of feature beyond server-status, and that doesn't include enough metrics to match your requirements.
nginx, however, provides more metrics, which almost cover what you need.
The nginx open source version supports the following metrics:
accepts / accepted
handled
dropped
active
requests / total
Please refer to this article. If you are hosting a PHP web app, you could move to nginx in that case.
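For reference, those counters are what the stub_status module reports in the open source version (dropped being accepts minus handled); a minimal sketch of exposing it, with an arbitrary location path, would be:

    server {
        listen 8080;
        location /nginx_status {
            # reports active connections, accepts, handled and total requests
            stub_status;
            allow 127.0.0.1;
            deny  all;
        }
    }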
I am not an expert on this, but here is my take.
A request timeout generates a 408 error in the logs, which is countable, and Apache provides the %D variable to measure processing time.
Count requests that have timed out without being processed at all (the client was starved)
If there is no processing time logged, or only a minimal one, you can assume the request was not processed at all.
Count requests that have timed out while being processed
The opposite of the previous case: you will see some processing time logged.
Count requests that failed because of an error, at least at the Apache level
You will surely get an error log entry for whatever error Apache has encountered.
What role keep-alive plays in this case is another matter.
Logging methods differ between Apache 2 and 2.4, keep that in mind, but a common log format will lead you to a result.
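For example, a custom log format that records both the final status code and the time taken to serve each request might look like the following (the nickname and log path are just placeholders):

    # %>s = final status code (e.g. 408), %D = time taken to serve the request, in microseconds
    LogFormat "%h %l %u %t \"%r\" %>s %b %D" timing
    CustomLog "logs/access_log" timing

Counting 408s with a near-zero %D versus 408s with a large %D then gives a rough split between the two timeout cases above.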
Edit:
If you are looking for tools to give you some insight, try the ones below; the Apache httpd server does provide all the necessary insight that nginx and the other servers out there can provide.
https://logz.io/
http://goaccess.prosoftcorp.com/
http://awstats.sourceforge.net/
References:
http://httpd.apache.org/docs/current/mod/mod_log_config.html
https://httpd.apache.org/docs/2.4/mod/mod_reqtimeout.html
https://httpd.apache.org/docs/2.4/logs.html
For a benchmarking test, I have a very basic setup wherein a single user loops 100 times (loop delay 100 ms), hitting an HTTPS endpoint (GET) with the HttpClient4 implementation; keep-alive has been turned on.
In the test results, I have observed a pattern wherein every 5th/6th request the connect metric is higher, as if a full SSL handshake were occurring (see the image below). I am a bit confused by this; any ideas on what's going on here and why the connect times are higher every nth request?
[UPDATE]
I was able to troubleshoot this issue a bit further today after turning on access logs on the load balancer (the target of this test), and I can see a pattern wherein JMeter seems to be switching ports on the client side every few requests; the frequency matches the pattern observed previously in the JMeter test results.
This probably explains the elevated connect times; now the question is why JMeter switches the port.
This could be keep-alive; it certainly was for my issue. First, make sure it's enabled on the sampler. Then there's also this JMeter setting that controls how long to keep connections alive for:
httpclient4.time_to_live
I've set it to 120000 in jmeter.properties, but looking at the docs, the user.properties file should be used. I know jmeter.properties with a setting of 120000 worked for me.
I set the value high to see if it was HTTP keep-alive causing the port switch. Whatever you set it to, you need to ensure the client you are emulating does the same.
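In other words, something like the following in user.properties (120000 is just the value I used; pick whatever matches the client you are emulating):

    # keep reusing a kept-alive connection for up to 120 seconds before opening a new one
    httpclient4.time_to_live=120000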
As you get some quick results, I would guess it is a short timer somewhere rather than the server side not allowing keep-alive at all. Wireshark can help you pinpoint this, as it could be the server side resetting the connection after a certain time. The above config extends the client-side time, which may get you the information you need; if not, have a look at the server-side equivalent, which will vary depending on what serves the endpoint.
I keep coming across certain terms used in the Apache settings. While trying to understand the various discussions and Apache's docs, I need some help figuring out what some of these terms mean:
What is a Client?
What is the difference between a client and a child process? Are they the same?
If MaxClient = 255, does it mean that Apache will process up to 255 page loads in parallel and the rest are queued?
When is a KeepAlive request used?
What is the relationship between a child process and the requests it handles?
First, note that these answers apply either to Apache 1.x, or Apache 2.x only when using the prefork mode.
The machine that opens an HTTP connection and sends a request.
No, they are not the same. An Apache child can handle one request/client at a time, but when that one is finished, the same child can handle a new one.
Yes.
It is used to keep the HTTP connection open in case the client wants to issue another request. A client can remain connected, for example, to download images and such that are associated with a web page. Having KeepAlive On improves performance for the client (user), but having it off reduces memory usage by the server. It is a trade-off.
The Apache process launches a bunch of children. When a request comes in, the parent (root) process picks an idle child to handle that request. When that request is finished, the child is now idle and can handle a new request.
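For reference, the pool of children described above is governed by the prefork directives, roughly like this (the values are only illustrative; in Apache 2.4, MaxClients was renamed MaxRequestWorkers):

    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        # upper bound on simultaneous children, i.e. simultaneous requests
        MaxClients          255
        MaxRequestsPerChild   0
    </IfModule>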
First, I hope you understand that apache 1.3 is very very old, and therefore the documentation will generally be somewhat harder to understand than the newer documentation (i.e. maybe you should upgrade if you have the choice).
I'm not sure where "Client" is referred to by itself in the Apache docs, but I would assume it refers to anything connecting to an open port and communicating.
Again, not sure where "child" is referred to by itself, so I can't help you there.
MaxClients is the number of processes Apache will start to handle requests. It sounds like for Apache 1.3 what you said is accurate: Apache will only handle MaxClients requests in parallel (queuing the rest, up to some other maximum for the queue).
KeepAlive is not really a request. It is sent in the request header to tell the server that the browser supports KeepAlive. It has to do with a feature of HTTP that allows one connection to be used for more than one access. If you allow KeepAlive, your server will probably get fewer TCP connections.
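As an illustration, an HTTP/1.0 client that wants the connection kept open asks for it explicitly in the request headers (with HTTP/1.1, connections are persistent by default); the host below is just an example:

    GET /index.html HTTP/1.0
    Host: www.example.com
    Connection: Keep-Alive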
I'm not even sure what you're asking here so you'll need to be more specific.
Occasionally when a user tries to connect to the myPHP web interface on one of our web servers, the request will time out before they're prompted to login.
Is the timeout time configured on the server side or within their web browser?
Can you tell me how to increase the amount of time it waits before timing out when this happens?
Also, what logs can I look at to see why their request takes so long from time to time?
This happens on all browsers. They are connecting to myPHP in a LAMP configuration on CentOS 5.6.
Normally when you hit a limit on execution time with LAMP, it's actually PHP's own execution timeout that needs to be adjusted, since both Apache's default and the browsers' defaults are much higher.
Edit: There are a couple more settings of interest for avoiding certain other problems regarding memory use and parsing time; they can be found at this link.
Typically speaking, if PHP is timing out on the defaults, you have larger problems than the timeout itself (problems connecting to the server itself, poor coding with long loops).
Joachim is right concerning the PHP timeouts though, you'll need to edit the php.ini to increase the timeout of PHP itself before troubleshooting anything else on the server; however, I would suggest trying to find out why people are hitting the timeout in the first place.
max_execution_time = 30;
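For example, raising it in php.ini might look like this (60 is only an illustrative value; the line above shows the 30-second default):

    ; maximum time in seconds a PHP script may run before it is terminated
    max_execution_time = 60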