How can I increase the amount of time Apache waits before timing out an HTTP request?

Occasionally when a user tries to connect to the myPHP web interface on one of our web servers, the request will time out before they're prompted to log in.
Is the timeout time configured on the server side or within their web browser?
Can you tell me how to increase the amount of time it waits before timing out when this happens?
Also, what logs can I look at to see why their request takes so long from time to time?
This happens on all browsers. They are connecting to myPHP in a LAMP configuration on CentOS 5.6.

Normally when you hit a limit on execution time with LAMP, it's actually PHP's own execution timeout that needs to be adjusted, since both Apache's default and the browsers' defaults are much higher.
Edit: There are a couple more settings of interest to avoid certain other problems regarding memory use and parsing time; they can be found at this link.

Generally speaking, if PHP is timing out on the defaults, you have larger problems than the timeout itself (problems connecting to the server itself, or poor coding with long loops).
Joachim is right concerning the PHP timeouts, though: you'll need to edit php.ini to increase PHP's own timeout before troubleshooting anything else on the server. However, I would suggest trying to find out why people are hitting the timeout in the first place.
max_execution_time = 30;
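That line is the php.ini default of 30 seconds. A minimal sketch of raising it, with illustrative values only (the memory and parsing settings mentioned in the edit above are presumably memory_limit and max_input_time):

    ; php.ini - illustrative values, not recommendations
    max_execution_time = 60    ; seconds a script may run
    max_input_time = 60        ; seconds allowed for parsing the request input
    memory_limit = 128M        ; per-script memory cap

Or, equivalently, for a single slow script, at the top of the PHP file:

    set_time_limit(60);

Remember to restart Apache after editing php.ini so mod_php picks up the change.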

Related

How does Apache detect a stopped Tomcat JVM?

We are running multiple Tomcat JVMs behind a single Apache cluster. If we shut down all the JVMs except one, we sometimes get 503s. If we increase the retry interval to 180 (from retry=10), the problem goes away. That brings me to this question: how does Apache detect a stopped Tomcat JVM? If I have a cluster that contains multiple JVMs and some of them are down, how does Apache find that out? Somewhere I read that Apache uses a real request to determine the health of a backend JVM. In that case, will that request fail (with a 5xx) if the JVM is stopped? Why does a higher retry value make the difference? Do you think introducing ping might help?
If someone can explain a bit or point me to some docs, that would be awesome.
We are using Apache 2.4.10, mod_proxy, the byrequests LB algorithm, sticky sessions, keepalive on, and ttl=300 for all balancer members.
Thanks!
Well, let's first examine a little what your configuration is actually doing in action, and then move on to what might help.
From the mod_proxy docs:
retry - Whether you set it to 10 or 180, what you specify is how long Apache will consider your backend server down and thus won't send it requests. The higher the value, the more time you give your backend to come up completely, but the more load you put on the others, since you are one server short for longer.
stickysession - If you lose a backend server for whatever reason, all the sessions that are on it get an error.
All right, now that we have described the relevant variables for your situation, let's make clear that Apache mod_proxy does not have an embedded health-check mechanism; it updates the status of your backends based on the responses to real requests.
So your current configuration works as follows:
A request arrives at Apache.
Apache sends it to a live backend.
If the request gets an error HTTP code in response, or gets no response at all, Apache puts that backend in ERROR state.
After the retry time has passed, Apache sends requests to that backend server again.
So, reading the above, you can see that the first request to reach a backend server which is down will get an error page.
One of the things you can do is indeed ping, which according to the docs checks the backend before sending any request. Consider, of course, the overhead that produces.
I would also suggest configuring mod_proxy_ajp, which offers extra functionality (and configuration, of course) for your Tomcat backend failover detection.
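To make that concrete, here is a minimal sketch of a balancer with both retry and ping set. Hostnames, ports, and values are assumptions for illustration; note that ping needs an AJP backend (where it does a CPing/CPong) or an HTTP/1.1 backend that answers 100-continue:

    # httpd.conf - illustrative only
    <Proxy balancer://tomcats>
        # retry=60 keeps a failed member out of rotation for 60 seconds;
        # ping=3 probes the backend (CPing/CPong over AJP) and waits up to
        # 3 seconds before forwarding each request.
        BalancerMember ajp://tomcat1.example.com:8009 retry=60 ping=3 ttl=300
        BalancerMember ajp://tomcat2.example.com:8009 retry=60 ping=3 ttl=300
        ProxySet lbmethod=byrequests stickysession=JSESSIONID
    </Proxy>
    ProxyPass "/app" "balancer://tomcats/app"

The trade-off is the extra round-trip the probe adds to every proxied request.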

JMeter and Connect Times for SSL Connections

For a benchmarking test, I have a very basic setup wherein a single user loops 100 times (loop delay 100 ms) hitting an https endpoint (GET) with the HttpClient4 implementation; keep-alive has been turned on.
In the test results, I have observed a pattern wherein every 5th/6th request the connect metric is higher, as if a full SSL handshake is occurring; see the image below. I am a bit confused by this. Any ideas on what's going on here and why the connect times are higher every nth request?
[UPDATE]
I was able to troubleshoot this issue a bit further today after turning on access logs on the load balancer (the target of this test), and I can see a pattern wherein JMeter seems to be switching ports on the client side every few requests - the frequency matches the pattern observed previously in the JMeter test results.
This should probably explain the elevated connect times; now the question is, why does JMeter switch the port?
This could be keep-alive; it certainly was for my issue. Firstly, make sure it's enabled on the sampler. Then there's also this JMeter setting to say how long to keep connections alive for:
httpclient4.time_to_live
I've set it to 120000 in jmeter.properties, but looking at the docs, the user.properties file should be used. I know jmeter.properties with a setting of 120000 worked for me.
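For example, the single line in user.properties would be (120000 ms is just the value that happened to work for me, not a recommendation):

    # user.properties - keep idle HttpClient4 connections alive for up to 2 minutes
    httpclient4.time_to_live=120000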
I set the value high to see if it was an HTTP keep-alive issue causing the port switch. Whatever you set it to, you need to ensure the client you are emulating does the same.
As you get some quick results, I would guess it is a short timer somewhere rather than the server side disallowing keep-alive entirely. Wireshark can help you pinpoint this, as it could be the server side resetting the connection after a certain time. The above config extends the client-side time, which may get you the information you need; if not, have a look at the server-side equivalent, which will vary depending on what serves the endpoint.

Controlling the number of concurrent connections on an Apache server

I'm using Apache with ldirectord. I'm facing some issues during load spikes: when the Google and Bing crawlers hit my site, Apache chokes, and the server's CPU usage goes to 100% utilization. After this I have to stop Apache and monitor the load manually. I want to automate this whole scenario: whenever load hits Apache, the server is normalized according to the given settings, and CPU usage must never exceed a given limit.
I want to control all this via a shell script; please give suggestions.
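One rough sketch of such a watchdog, run from cron every minute (the threshold, paths, and the choice of a graceful restart are all assumptions to adapt; also consider lowering MaxClients in httpd.conf first, so Apache cannot spawn more workers than the box can handle):

    #!/bin/sh
    # Hypothetical sketch: gracefully restart Apache when CPU usage
    # crosses an assumed ceiling.
    LIMIT=90                                            # CPU ceiling, in %
    IDLE=$(vmstat 1 2 | tail -1 | awk '{print $15}')    # idle % from a 1-second sample
    USED=$((100 - IDLE))
    if [ "$USED" -gt "$LIMIT" ]; then
        /usr/sbin/apachectl graceful                    # finishes open requests, sheds the rest
        logger "apache-watchdog: CPU at ${USED}%, Apache restarted gracefully"
    fi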

Is PHP scalable with reverse ajax long polling?

I am working on a website that displays some data from a DB that changes frequently (the status of a queue and a chat conversation). My current setup is Apache/PHP/MySQL. Naturally I would like to avoid polling the server every x seconds, since this does not scale well. I would like to do reverse ajax long polling; however, I've read that Apache does not work well with this, since it quickly runs out of worker threads. There are many other web servers out there that get around this problem: nginx, tornado, etc. However, my problem is, PHP is the ONLY server-side scripting language I know. Also, I've already written some PHP scripts, so I'd like to keep them if I can. I am OK with switching servers so long as I can still use PHP.
But after doing some more research, I've read that people say PHP (PHP-FPM?) also creates a process for every request made, which means that if I have hundreds or thousands of open connections, there will be hundreds or thousands of processes, which will be a problem as well.
Can I conclude that there's no good, scalable way to make long-polling websites using PHP? Should I abandon PHP and learn another server-side scripting language? I can continue developing long polling on my current setup (Apache/PHP) for now, but I don't want the choice of scripting language to pose any limitation on the scalability of my system when I deploy. So what should I do? I am not very experienced with web programming, so if any gurus out there can give me some pointers, I'd appreciate it. Thank you!
PHP run in php-fpm mode will still have limitations, especially if your code eats a lot of memory: you won't be able to run thousands of parallel processes without the available memory to match. But it usually performs faster than mod_php, and at least HTTP requests that do not need PHP are handled by the webserver; if that webserver is nginx, you'll get a lot more HTTP requests served in parallel.
With php-fpm you will also have a queue of waiting requests, which may be useful in case of a temporary traffic spike, as requests are at least queued, not rejected.
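For illustration, the knobs that bound those processes and that queue live in the php-fpm pool file; the values below are placeholders, not recommendations:

    ; /etc/php-fpm.d/www.conf - illustrative values only
    pm = dynamic
    pm.max_children = 50      ; hard cap on parallel PHP processes (memory-bound)
    pm.start_servers = 5
    pm.min_spare_servers = 5
    pm.max_spare_servers = 10
    listen.backlog = 511      ; requests beyond max_children wait here instead of being rejected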
Now, long-polling operations are OK with nginx (or others; that's just one example), but not with PHP. PHP is not built to be a long-running server: each request is a new process, so it's really not the right choice for a keep-alive thing. But "divide ut regnes" (divide and rule): your long-polling tasks could run next to your PHP application, but without your PHP application.
As an example, look at the jappix project, which is a PHP project. But you need to put an XMPP server somewhere (like ejabberd), and a BOSH server with nginx as a proxy on port 80 in front of it (so you have the XMPP chat protocol on port 80, via nginx and ejabberd, and nothing on the PHP side for that). The problem is then to connect your application's authentication, identification, and such, and this will have to be done by extending the XMPP server configuration (so that it uses the same LDAP server as your PHP app, for example).
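As a sketch of that proxy piece (ejabberd serves BOSH on port 5280 at /http-bind by default; the hostname here is made up):

    # nginx - expose the BOSH endpoint on port 80, beside the PHP app
    server {
        listen 80;
        server_name chat.example.com;              # hypothetical hostname
        location /http-bind {
            proxy_pass http://127.0.0.1:5280/http-bind;
        }
    }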
Your second long-polling problem is the status of a queue. You may find some XMPP extensions for that, maybe. Or you may perform regular ajax queries on the queue. One useful technique to avoid a big number of ajax requests hitting your PHP application is to reschedule the next ajax check in the ajax callback of the current check, based on the Fibonacci numbers (as an example). So the first ajax call is scheduled 1 minute after page load, the next one 2 minutes later, then 3m, 5m, 8m, 13m, 21m, 34m, 55m, 89m, 144m, etc. The idea is that it is probably important to check for new incoming messages 1 minute after a page load; but if the user is still on the same page (or drinking a coffee, talking to a friend, leaving for holidays without switching off his computer, etc.), we can delay the next checks more and more. It's a way of assuming the user is not really active. Note that you could detect user activity by other means and alter the rescheduling.
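A minimal client-side sketch of that rescheduling (plain JavaScript; the /queue-status.php endpoint and renderQueue are made-up names):

    // Fibonacci back-off for the queue poll: 1m, 2m, 3m, 5m, 8m, ...
    var prev = 1, delay = 2;                        // delays in minutes
    function checkQueue() {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/queue-status.php');       // hypothetical endpoint
        xhr.onload = function () {
            renderQueue(xhr.responseText);          // hypothetical UI update
            setTimeout(checkQueue, delay * 60000);  // reschedule in the callback
            var next = prev + delay;                // advance the sequence
            prev = delay;
            delay = next;
        };
        xhr.send();
    }
    setTimeout(checkQueue, prev * 60000);           // first check 1 minute after load

Any detected user activity could simply reset prev and delay to start the sequence over.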
PHP is not right for long polling, Comet, and reverse ajax technologies. You should use Node.js.

Apache - MaxClients

I'm setting up a new Apache server.
If I set the MaxClients directive to, let's say, one - does it then block anyone else who tries to connect to my Apache server, except for that one person?
Thanks for any answers.
Yes, you'll get only one request handled by Apache at a time. If you set a quite long KeepAliveTimeout, the process will be held for that amount of time by one user, and the second user will get his request served by Apache only when the first one has finished (or when the KeepAliveTimeout expires, if the connection is in keep-alive mode). So with a small KeepAlive setting, if the requests are fast to handle, you could serve a lot of users, one after the other, without them knowing that Apache handles only one request in parallel.
If you generate some activity on the first connection, you may be able to block the second request for a very long time.
Now, you shouldn't consider this a way to restrict access to one dedicated person; it's not a security feature. You rely on TCP/IP connections, and if a connection breaks, the second user may get access to the server.
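For reference, a sketch of the directives in play with the prefork MPM (values purely to illustrate the single-slot scenario above):

    # httpd.conf - prefork MPM, illustrative only
    <IfModule prefork.c>
        MaxClients       1     # exactly one request handled at a time
    </IfModule>
    KeepAlive        On
    KeepAliveTimeout 5         # short, so the single slot is freed quickly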