How do I monitor how many connections Apache is getting? Something like http://www.cyberciti.biz/faq/apache-server-status/. Basically I need a tool that will send an email when the number of connections exceeds a specified limit. I haven't been able to find anything that gives me server-side statistics for the live server; everything I've found is about simulating load rather than monitoring a real instance. Please help if you can.
mod_status makes the information available - you just need something to poll the page and report.
Nagios provides a great platform for implementing monitoring (scheduling, alerting, reporting, escalation, automatic responses), and there are at least two plugins (check_apachestatus.pl and check_apache2.sh) that will report on concurrent connections.
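For the "email me when the connection count exceeds a limit" requirement, something as small as the sketch below can sit in cron and poll mod_status's machine-readable page. It is only a sketch, not a finished tool: it assumes mod_status is enabled and /server-status?auto is reachable, and the threshold, SMTP relay and addresses are placeholders to replace with your own.

```python
# Poll Apache mod_status and email when busy connections exceed a limit (sketch).
# Assumes mod_status is enabled (/server-status?auto); the threshold, SMTP host
# and addresses below are placeholders for your environment.
import smtplib
from email.message import EmailMessage
from urllib.request import urlopen

STATUS_URL = "http://localhost/server-status?auto"
CONNECTION_LIMIT = 150                  # alert when BusyWorkers exceeds this
SMTP_HOST = "smtp.example.com"          # placeholder SMTP relay
MAIL_FROM = "apache-monitor@example.com"
MAIL_TO = "you@example.com"

def busy_workers() -> int:
    """Read the BusyWorkers figure from mod_status's machine-readable output."""
    with urlopen(STATUS_URL, timeout=10) as response:
        for line in response.read().decode().splitlines():
            if line.startswith("BusyWorkers:"):
                return int(line.split(":", 1)[1].strip())
    raise RuntimeError("BusyWorkers not found in server-status output")

def send_alert(count: int) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Apache busy connections: {count} (limit {CONNECTION_LIMIT})"
    msg["From"] = MAIL_FROM
    msg["To"] = MAIL_TO
    msg.set_content(f"BusyWorkers is {count}, above the limit of {CONNECTION_LIMIT}.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    count = busy_workers()
    if count > CONNECTION_LIMIT:
        send_alert(count)
```

Run it from cron every minute or so. If mod_status itself becomes unreachable you probably also want an alert, which is where the Nagios plugins mentioned above earn their keep.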
Related
We use JMeter for performance testing. I configured 200 threads (200 users), and we have two servers, Server A and Server B. When I test each server individually with 200 users, it works. We also have a load-balancing server, Server C, so each request goes to either Server A or Server B. But when I point the same JMX script (200 threads) at Server C, it gives the error below (with 50 users it works fine, no error).
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:61)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
at org.apache.jmeter.protocol.http.sampler.MeasuringConnectionManager$MeasuredConnection.receiveResponseHeader(MeasuringConnectionManager.java:201)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
If the issue can be reproduced only at higher loads, it's almost certainly a server (or load balancer) issue, so congratulations on finding the first bottleneck.
Now you can investigate the cause and suggest fixes. The next steps could be:
Inspect the application under test / load balancer logs; you may find a clue there.
Inspect the application under test / load balancer / database / other middleware configuration. In the majority of cases the default configuration is fine for development and debugging, but you will need to do some performance tuning before running a production-like load test.
Collect the main health metrics on the application under test side (CPU, RAM, network, disk, swap usage, etc.). It may be that your application simply lacks hardware resources. You can use the operating system's built-in tools, an APM tool, or the JMeter PerfMon Plugin (see the metric-sampling sketch after this list).
Re-run your test with profiler telemetry enabled on the application under test side. This will show you where the application spends the most time and which functions are the heaviest or the most frequently called, so you know what to optimise.
Make sure the load balancer distributes requests across the backend servers evenly (or according to whatever algorithm is configured). It may be that you're hitting only one server; if so, consider adding the DNS Cache Manager to your Test Plan and re-running the test to see if it helps (see the distribution-check sketch after this list).
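If you don't have an APM tool or the PerfMon Plugin to hand, a minimal health-metric sampler is easy to sketch. This is just an illustration, not part of the original advice: the sampling interval and output format are arbitrary, and it assumes the cross-platform psutil package is installed on the machine under test.

```python
# Minimal health-metric sampler (sketch): prints CPU, RAM, swap, disk and
# network counters every few seconds while the load test is running.
# Assumes `pip install psutil`; the interval and fields are arbitrary choices.
import time
import psutil

INTERVAL_SECONDS = 5  # sampling interval, adjust to taste

def sample():
    cpu = psutil.cpu_percent(interval=1)      # % CPU averaged over 1 second
    mem = psutil.virtual_memory().percent     # % RAM used
    swap = psutil.swap_memory().percent       # % swap used
    disk = psutil.disk_usage("/").percent     # % of root filesystem used
    net = psutil.net_io_counters()            # cumulative bytes sent/received
    print(f"cpu={cpu}% ram={mem}% swap={swap}% disk={disk}% "
          f"net_sent={net.bytes_sent} net_recv={net.bytes_recv}")

if __name__ == "__main__":
    while True:
        sample()
        time.sleep(INTERVAL_SECONDS)
```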
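To check whether requests are actually being spread across both backends, a quick distribution check outside of JMeter is to fire a batch of requests at the balanced hostname and tally which backend answered. This sketch assumes the backends can be told apart by a response header; the header name "X-Backend-Server" and the URL are hypothetical and depend on how your load balancer is configured.

```python
# Rough check of load-balancer distribution (sketch).
# Assumes each backend is identifiable via a response header; the header name
# "X-Backend-Server" and the URL below are placeholders for your setup.
from collections import Counter
from urllib.request import urlopen

BALANCER_URL = "http://server-c.example.com/"   # hypothetical balanced endpoint
REQUESTS = 100

counts = Counter()
for _ in range(REQUESTS):
    with urlopen(BALANCER_URL) as response:
        backend = response.headers.get("X-Backend-Server", "unknown")
        counts[backend] += 1

for backend, hits in counts.items():
    print(f"{backend}: {hits} requests")
```

If one backend gets nearly all the hits, the DNS Cache Manager suggestion above is the first thing to try.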
I'm using Apache with ldirectord and I'm facing issues under load: when the Google and Bing crawlers hit my site, Apache chokes and my server's CPU usage goes to 100%. After that I have to stop Apache and monitor the load manually, and I want to automate this whole scenario. Here is what I want: whenever load hits Apache, the server should normalise itself according to the given settings, and CPU usage should not be allowed to exceed a given limit.
I want to control all this via a shell script; please give suggestions.
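A minimal watchdog along the lines described above could look like the sketch below. It is written in Python rather than shell purely for illustration (the same loop translates to a few lines around top and apachectl); it assumes psutil is installed, that a graceful restart is an acceptable way to shed load on your setup, and that the 90% threshold and 30-second interval are arbitrary placeholders.

```python
# Watchdog sketch: gracefully restart Apache when CPU usage exceeds a limit.
# Assumptions: psutil is installed, apachectl is on PATH, the 90% threshold and
# 30-second check interval are arbitrary, and a graceful restart is acceptable.
import subprocess
import time
import psutil

CPU_LIMIT_PERCENT = 90       # act once CPU goes above this
CHECK_INTERVAL_SECONDS = 30  # how often to check

def main():
    while True:
        cpu = psutil.cpu_percent(interval=5)  # average CPU over 5 seconds
        if cpu > CPU_LIMIT_PERCENT:
            print(f"CPU at {cpu}%, restarting Apache gracefully")
            # 'apachectl graceful' lets current requests finish before
            # restarting the worker processes
            subprocess.run(["apachectl", "graceful"], check=False)
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```

In practice you would also want to tune Apache itself (e.g. MaxRequestWorkers / MaxClients) so a crawler burst can't push the box to 100% in the first place; a watchdog restart is a blunt last resort.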
As a recovery mechanism, I need to write software that sends an email notification if my Tomcat instance fails. Are there any APIs supported by Tomcat through which I can receive critical events?
Any help in this regard would be very useful to me.
thanks
Lokesh
It depends: What do you consider a critical event?
Response time above 2 seconds per page?
Out of memory?
A crash?
The database not available?
...
You should look for generic monitoring tools. Nagios is a good starting point, and there are lots of equally good alternatives, open source as well as commercial.
Then monitor your Tomcat installation, e.g. over standard HTTP, via JMX, or at the process/OS level. Include your infrastructure: database, Apache, and the rest.
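If all you need right away is the "email me when Tomcat is down" part, the plain-HTTP approach can be sketched in a few lines. This is only an illustration, not a Tomcat API: the URL, SMTP relay and addresses are placeholders, and in practice you would let a monitoring tool run the check on a schedule rather than a bare cron job.

```python
# Tomcat liveness check (sketch): request a known URL and email on failure.
# The URL, SMTP server and addresses below are placeholders, not real values.
import smtplib
from email.message import EmailMessage
from urllib.request import urlopen

TOMCAT_URL = "http://localhost:8080/"   # any page that proves Tomcat is up
SMTP_HOST = "smtp.example.com"          # placeholder SMTP relay
MAIL_FROM = "monitor@example.com"
MAIL_TO = "ops@example.com"

def tomcat_is_up() -> bool:
    try:
        with urlopen(TOMCAT_URL, timeout=10) as response:
            return response.status == 200
    except OSError:  # URLError, timeouts and connection errors all derive from OSError
        return False

def send_alert(reason: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Tomcat appears to be down"
    msg["From"] = MAIL_FROM
    msg["To"] = MAIL_TO
    msg.set_content(reason)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    if not tomcat_is_up():
        send_alert(f"No healthy response from {TOMCAT_URL}")
```

Run it every minute or so, or wire the same check into Nagios so you get scheduling, escalation and flap handling for free.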
We have many Apache instances all over our intranet. Some instances run on the same machine. Some instances run on different machines.
I need a tool that can manage these instances from one central location.
Get CPU stats
Get Connection stats
Stop/start Apache instances
Get access to error log
I looked at Webmin, but the documentation isn't very clear on how it works, and I'd have trouble getting it going without just installing it and trying.
Any recommendations?
I've never used it myself, but I've seen people with monitoring requirements be very happy with Cacti. Besides general health monitoring like CPU stats, it has an extremely simple Apache stats plugin that might do what you need:
Script to get the requests per second and the requests currently being processed from an Apache webserver.
Maybe you can put something together with that.
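If you end up rolling your own instead, the raw numbers that plugin reads come from mod_status's machine-readable page. A rough sketch of pulling them from several instances at once follows; it assumes each instance exposes /server-status?auto, and the hostnames are placeholders for your intranet servers.

```python
# Pull BusyWorkers / IdleWorkers / ReqPerSec from several Apache instances (sketch).
# Assumes mod_status is enabled and /server-status?auto is reachable from here;
# the hostnames are placeholders for your intranet servers.
from urllib.request import urlopen

SERVERS = ["http://web1.example.intra", "http://web2.example.intra"]

def status_fields(base_url: str) -> dict:
    """Parse the key: value lines of mod_status's machine-readable output."""
    with urlopen(f"{base_url}/server-status?auto", timeout=5) as response:
        text = response.read().decode("utf-8", errors="replace")
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

if __name__ == "__main__":
    for server in SERVERS:
        f = status_fields(server)
        print(f"{server}: busy={f.get('BusyWorkers')} "
              f"idle={f.get('IdleWorkers')} req/s={f.get('ReqPerSec')}")
```

BusyWorkers, IdleWorkers and ReqPerSec are the field names mod_status actually emits in its ?auto output, so the same parsing works whether Cacti, Nagios or your own script consumes it. Stop/start and error-log access would still need something like SSH or Webmin on top.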
I'm running into an issue in an OS X app that creates multiple persistent connections to the same host using NSURLConnection. I create a separate connection for each room, and it stays connected the entire time the room is open in order to consume a streaming API. When many rooms are open, it stops working correctly.
I created a separate sample app that opens 10 connections, and it seems only 6 of them are allowed to work; the others are queued. Does anyone know if there is a way to override this limit? I can't find it documented anywhere. The only workaround I've found is that the limit seems to be per host name, so testing with "localhost" and "127.0.0.1" allows 6 connections to each host. I uploaded a sample project with client and server here: http://cl.ly/1x3K0D1F072V3U2T0C0I.
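For anyone wanting to reproduce this without the linked sample project, a throwaway streaming server that logs how many connections are currently open makes the client-side cap easy to see. This is not the poster's server, just an illustrative stand-in using only the standard library; the port is arbitrary and the response is a bare never-ending stream.

```python
# Throwaway streaming server (sketch): logs how many client connections are
# currently open, which makes a per-host connection cap on the client visible.
# Not the sample project from the question; port and payload are arbitrary.
import socketserver
import threading
import time

open_connections = 0
lock = threading.Lock()

class StreamingHandler(socketserver.BaseRequestHandler):
    def handle(self):
        global open_connections
        with lock:
            open_connections += 1
            print(f"client connected, now {open_connections} open")
        try:
            # Minimal streaming HTTP response: headers, then a trickle of data
            # forever so the client keeps the connection open.
            self.request.sendall(
                b"HTTP/1.1 200 OK\r\n"
                b"Content-Type: text/plain\r\n"
                b"Connection: close\r\n\r\n"
            )
            while True:
                self.request.sendall(b"tick\n")
                time.sleep(1)
        except OSError:
            pass  # client went away
        finally:
            with lock:
                open_connections -= 1
                print(f"client disconnected, now {open_connections} open")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("127.0.0.1", 8080), StreamingHandler) as server:
        print("listening on 127.0.0.1:8080")
        server.serve_forever()
```

Pointing the client at this and opening ten "rooms" should show the open-connection count plateau at the cache limit rather than reaching ten.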
I filed a Radar for something that seems like the same issue but on iOS. I found that you can't have more than 5 connections open at once. The connections don't have to be pointing to the same domain. Anything after that would be queued. So if you have 5 connections open to an extremely slow endpoint, any other connections will not go through.
Radar: http://openradar.appspot.com/radar?id=2542401
Apple's reply:
This is the effect of our NSURLConnection connection cache. It is expected. We expect to address this type of configuration with new API.
I asked if they could give me any more information (does it vary? does the type of connection affect it?) and they said:
Unfortunately, we can't give details about the connection limit behavior.
User agents in general (Chrome, Firefox, Safari) use six simultaneous TCP connections per hostname, with potential one-offs.
You could work around this limitation by using the CFNetwork API (CFHTTPMessage).
Here is the CFNetwork Programming Guide.
https://developer.apple.com/library/mac/documentation/Networking/Conceptual/CFNetwork/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001132
BTW, if you decide to use CFNetwork, you'll need to handle proxies and authentication yourself.
Hope this helps!