I am using Apache HTTP Server 1.3.29, and the server is experiencing this error:
Internal Server Error 500
Exception: EWebBrokerException
Message: Maximum number of concurrent connections exceeded. Please try again later
This message appears when many users are using the system, but I don't know how many connections it takes to trigger it.
I need help optimizing the server to support more connections / accesses.
Here is the link to the server httpd.conf view (only the important parts):
http://www.codesend.com/view/8fd87e7d6cc1c94eee30a8c45981e162/
Thanks!
It's not a lack of machine resources: the server has 16GB of RAM and a fast processor, and the problem occurs when consumption isn't even at 30%. Maybe some adjustment in Apache is needed; that is the help I'm seeking here.
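For reference, the knobs that usually matter in an Apache 1.3 httpd.conf look like this (illustrative values, not taken from the linked config). Note also that EWebBrokerException is raised by the WebBroker application itself rather than by Apache, so the cap may be an application-side connection limit rather than an Apache directive:

```apache
# Illustrative Apache 1.3 (preforking) tuning -- values are examples only
KeepAlive On
KeepAliveTimeout 5           # release children back to the pool quickly
MaxKeepAliveRequests 100
StartServers        16
MinSpareServers     16
MaxSpareServers     32
MaxClients          256      # upper bound is HARD_SERVER_LIMIT (256 unless recompiled)
MaxRequestsPerChild 10000    # recycle children to contain memory growth
```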
Related
I have a simple Tomcat API and my goal is to handle the highest possible number of req/sec.
My problem is the following :
Scenario 1: When the client is using some persistent connections I manage to reach around 20000 req/sec using a single instance of the API. The server is loaded and the CPU of the server is almost fully used.
Scenario 2: When the client closes the connection after each request, the API only manages 600 req/sec and the server resources are not used at all. So I guess there is a bottleneck either on the global number of connections, or on the number of connections the server is able to set up per second.
What I want to know is if there is a configuration (on tomcat or on the server) that I can change to improve performance during scenario 2.
If not, which kind of resources is limiting? Can I address the problem by deploying many 1 CPU servers?
What I have looked at so far:
The number of threads and connections in the Tomcat config:
I adjusted these numbers from the defaults to 200 threads and 2000 connections; I don't see any effect during scenario 2.
Ulimit is set to unlimited
JVM is configured as follow : JAVA_OPTS: -Xmx8g
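For reference, here is where those numbers typically live, assuming an HTTP/1.1 NIO connector in `server.xml`; `maxThreads` and `maxConnections` are the values from this post, while `acceptCount` is an illustrative addition:

```xml
<!-- server.xml: connector sketch with the values described above.
     acceptCount (backlog of pending connects) is an illustrative addition;
     with clients that close after every request, connection setup/teardown
     often becomes the bottleneck rather than maxThreads. -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           maxConnections="2000"
           acceptCount="1000"
           connectionTimeout="20000" />
```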
It would be better if you provided more information about your deployment, but generally there is some work that can help you achieve better performance.
First of all, you should measure the cost of each request and optimize it as much as you can. For example, if your API executes a query on the local database for each request and this query consumes a lot of CPU, you should optimize that query. By doing this, your server can tolerate more requests before its CPU reaches 100%.
Note that some tools like JProbe can help you optimize your API.
Secondly, monitor your resources during the test and find which of them becomes fully used. You should check network connections, disk, memory, and CPU load during the test and identify the weakest of your resources. Track thread blocks and deadlocks, as they are important to performance.
You can scale up your server resources based on this information, or decide to implement a distributed architecture, add a load balancer to your solution, or add a caching strategy to your project.
In your Tomcat configuration there are some settings which can improve your performance, such as:
Configuring connectors
set maxThreads to a high enough value
set acceptCount to a high enough value
Configuring cache
set the cacheMaxSize attribute to an appropriate value.
Configuring content compression
turning content compression on and using GZIP compression
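A sketch of what those three adjustments could look like, with illustrative values only (attribute spelling and placement vary across Tomcat versions, e.g. `compressableMimeType` vs `compressibleMimeType`, and `cacheMaxSize` moved to the `<Resources>` element in Tomcat 8):

```xml
<!-- server.xml: connector with worker threads, accept backlog and GZIP -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="400"
           acceptCount="300"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/css,application/json" />

<!-- context.xml (Tomcat 8+): static resource cache size, in KB -->
<Context>
  <Resources cacheMaxSize="102400" />
</Context>
```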
Wall of text (my apologies, but you'll need to read it all):
Error Message: Database Error: Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement. This could be because the pre-login handshake failed or the server was una
Environment:
Virtual VMware Server 2008R2 SP1, running SQL 2008 SP3
32GB RAM - about 50 Databases
10Gb LAN connection, datastore storage provided by SSD SAN.
Application is CSTS connecting to SQL Server "DIRGE".
The application is configured to connect to another application for document retrieval, "Onbase", whose database is also stored on DIRGE.
Throughout the day, CSTS will get connection time-outs. It's usually in spurts, so if one user is getting a timeout, usually someone else is getting one as well.
SQL has 28GB of the 32GB allocated. Memory utilization is consistently above 95%.
We cannot add more RAM as 2008R2 standard doesn't see more than 32GB.
CPU utilization was very high at times and the trend was it was getting more and more utilization, so we added a second CPU (2 sockets, 4 cores per socket).
I've scoured the event logs and the SQL logs, and the CSTS error logs looking for a commonality. I'm finding very little. I've resolved all the event log errors, no joy.
NOTE: Onbase server also gets connection time outs to SQL, so I don't believe it's application specific.
Scheduled Events:
Logs are backed up at 8am, 11a, 2pm, 5pm.
There's a SSIS package that runs every 15 minutes and takes about 8 minutes to run. However, I did not find any correlation to the timeouts.
There are maintenance plans that run after hours as well.
IP4 and 6 are both enabled.
Clients are referencing the database server by IP, so it's not a name resolution issue.
IP Protocols are enabled, static set to port 1433.
I ran a portqry from the Onbase server to TCP 1433 and UDP 1434 and it IS listening.
We have a Solarwinds Database Analyzer running and watching this server; it says CPU and RAM are issues. I can get more details from it if anyone is interested.
I've google-fu'd the heck out of this and I just can't seem to find a good answer. From my searching, it seems to be a networking issue, but we've watched the network and I'm not seeing anything that would be the cause. Overall throughput is very low.
I will say this: the ONBASE server is on a different subnet than DIRGE, but I've run a test DB connection using the name, the named pipe, and the IP, and they all work without issue.
The problem is I'm not a DBA, so I'm learning this on the fly (I'm a Sr Systems Engineer).
I'm curious if someone has a suggestion on how to hunt this down.
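One low-tech way to hunt this down is to log timestamped connect attempts from an affected subnet and line FAIL spurts up against the backup/SSIS schedule. A minimal sketch in Python; the hostname and port are the ones from the post, and the interval/count are placeholders:

```python
import datetime
import socket
import time

def probe(host, port, timeout=5.0):
    """Attempt one TCP connect; return (ok, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start

def watch(host, port, interval=30.0, count=10):
    """Print timestamped connect results so spurts of FAILs can be
    correlated with the log-backup and SSIS schedule."""
    for _ in range(count):
        ok, elapsed = probe(host, port)
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        print(f"{stamp} {'OK' if ok else 'FAIL'} {elapsed:.3f}s")
        time.sleep(interval)

# Usage sketch (names from the post):
# watch("DIRGE", 1433)
```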
We have a problem with WebLogic 10.3.2. We installed a standard domain with default parameters. In this domain we have only one managed server, running a single web application.
After installation we faced performance problems. Sometimes a user waits 1-2 minutes for an application response. (For example, the user clicks a button and it takes 1-2 minutes for the GUI to refresh. It's not a complicated task.)
To overcome these performance problems we defined parameters like:
configuraion->server start->arguments
-Xms4g -Xmx6g -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=500
We also changed the datasource connection pool parameters of the application on the WebLogic side, as below.
Initial Capacity:50
Maximum Capacity:250
Capacity Increment: 10
Statement Cache Type: LRU
Statement Cache Size: 50
We run WebLogic on a server with 32 GB RAM and 16 CPUs; 25% of the machine's resources are dedicated to WebLogic. But we still have performance problems.
Our target is to serve 300-400 concurrent users while avoiding the 1-2 minute wait for each application request.
Could defining a work manager solve the performance issue?
Is my datasource or managed bean definition incorrect?
Can anyone help me?
Thanks for your replies
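On the work-manager question, a minimal sketch of what one could look like; the names (`AppWorkManager`, `AppMaxThreads`, `ManagedServer1`) are my own placeholders, and the exact element placement should be checked against the WebLogic 10.3 schema:

```xml
<!-- config.xml: work manager with a max-threads-constraint (sketch) -->
<self-tuning>
  <max-threads-constraint>
    <name>AppMaxThreads</name>
    <target>ManagedServer1</target>
    <count>100</count>
  </max-threads-constraint>
  <work-manager>
    <name>AppWorkManager</name>
    <target>ManagedServer1</target>
    <max-threads-constraint>AppMaxThreads</max-threads-constraint>
  </work-manager>
</self-tuning>

<!-- weblogic.xml: route the web app's requests to that work manager -->
<wl-dispatch-policy>AppWorkManager</wl-dispatch-policy>
```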
I have two Rails apps running on a Linode (Ubuntu, nginx). The subdomain instance is giving trouble: it goes down after just one day. On restarting the server, it works fine again.
The error log says- "*1 upstream timed out (110: Connection timed out) while reading response header from upstream,".
I googled the problem and found that increasing the proxy_read_timeout value should solve it, but I am unable to find the underlying reason.
Is it an issue of over-utilization of resources? I have 24 GB of storage and 512 MB of RAM, as shown in the Linode manager. I have 10 cron jobs in total (5 in each app), and they all start at the same time. Could that be the issue?
Please tell me the reason and solution for it.
It definitely sounds like a resource issue... or perhaps something else is killing / hogging your app. Generally the upstream request is a request from the web server to the app server, so if your app is doing something wonky, that would cause the timeout to occur. I'm not sure what the default timeout is, but I'm guessing it's rather short. Increasing the timeout would at least buy you time to look at the system resources and the process stack to try to figure out what's going on.
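Along those lines, the timeout can be raised in the vhost's proxy block. A sketch with illustrative values (`app_upstream` is a placeholder for the real upstream name):

```nginx
location / {
    proxy_pass http://app_upstream;    # placeholder upstream name
    proxy_read_timeout    120s;        # default is 60s
    proxy_connect_timeout 10s;
    proxy_send_timeout    60s;
}
```

This only buys diagnosis time; the underlying cause of the slow upstream still needs to be found.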
How do I monitor how many connections Apache is getting? Something like http://www.cyberciti.biz/faq/apache-server-status/. Basically I need a tool that will send an email when the number of connections exceeds a specified limit. I am not able to find any that would give me server-side statistics from the live server; all I found relates to simulating a real instance. Please help if you can.
mod_status makes the information available - you just need something to poll the page and report.
Nagios provides a great platform for implementing monitoring (scheduling / alerting / reporting / escalation / automatic responses), while there are at least 2 plugins (check_apachestatus.pl and check_apache2.sh) which will report on concurrent connections.
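A minimal sketch of the polling side, assuming mod_status is enabled and its machine-readable `?auto` view is reachable; the URL and the limit of 150 are placeholders:

```python
import urllib.request

def busy_workers(status_text):
    """Parse mod_status ?auto output and return the BusyWorkers count
    (the field is named BusyServers on very old Apache versions)."""
    for line in status_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() in ("BusyWorkers", "BusyServers"):
            return int(value.strip())
    raise ValueError("no BusyWorkers line found")

def check(url, limit):
    """Fetch server-status and return (busy, exceeded)."""
    with urllib.request.urlopen(url) as resp:
        busy = busy_workers(resp.read().decode())
    return busy, busy > limit

# Usage sketch -- run from cron or a Nagios plugin wrapper:
# busy, exceeded = check("http://localhost/server-status?auto", limit=150)
# if exceeded:
#     ...send the alert email, e.g. via smtplib...
```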