Found "Thread Exhausted" in Gemfire Server Log - gemfire

I checked the GemFire server log and found the following statements in my log file:
Rejected connection from Server connection from [client host address=XXX.XXX.XXX.XX; client port=XXXX1] because incoming request was rejected by pool possibly due to thread exhaustion
Rejected connection from Server connection from [client host address=XXX.XXX.XXX.XX; client port=XXXX2] because incoming request was rejected by pool possibly due to thread exhaustion
....
What are the possible causes? How do I find the root cause?
I am using GemFire 9.8.6, and most of the regions are replicated. Clients connect to the server regions through CACHING_PROXY client regions created with Spring Data GemFire.
gemfire.properties [Server]
Based on the cache server log file, I found that my handshaker max pool size is 4, with max-connections=800 and max-threads=0:
Handshaker max Pool size: 4
CacheServer Configuration: port=51XX max-connections=800 max-threads=0 notify-by-subscription=true socket-buffer-size=1250000
On Red Hat, I raised the file descriptor soft limit to 8192 and the hard limit to 81920, and set the number-of-processes (nproc) soft limit to 501408 with an unlimited hard limit.
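For reference, limits like these are typically set in /etc/security/limits.conf (or a drop-in file under /etc/security/limits.d/); a minimal sketch, assuming the cache server runs under an account named gemfire, which is only a placeholder:

# /etc/security/limits.d/gemfire.conf -- sketch; "gemfire" is an assumed account name
gemfire  soft  nofile  8192
gemfire  hard  nofile  81920
gemfire  soft  nproc   501408
gemfire  hard  nproc   unlimited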
Total Number of Servers: 2
Total Number of Locators: 2
Total Number of Clients: 15
Thank you for your help

This message is generally logged by the GemFire server whenever it doesn't have enough resources to handle the volume of incoming requests. I'd suggest having a look at Fine-Tuning Your Client/Server Configuration and Making Sure You Have Enough Sockets.
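As a rough illustration of the kind of tuning those documents describe, the cache server's request-processing threads can be bounded explicitly instead of left at max-threads=0 (which gives every client connection its own dedicated thread); a cache.xml sketch, where the thread count is a placeholder and not a recommendation:

<!-- cache.xml sketch: max-threads > 0 caps the request-processing threads
     and switches the server to a selector-based pool; values are placeholders -->
<cache-server port="51XX" max-connections="800" max-threads="16"
              notify-by-subscription="true" socket-buffer-size="1250000" />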
Hope this helps. Cheers.

Related

What are concurrent connections, and does every server have a limit on concurrent connections?

I'm confused. I've tried to understand this, I've googled for days, and every day I get more confused about concurrent connections.
What I know:
Concurrent connections: the number of open TCP connections; every visitor who sends a request to my website counts as one concurrent connection. Please bear with me and correct me if I'm wrong, thanks.
What I want to know:
Does every server have its own concurrent connection limit, related to the server's capability (CPU, RAM, drive, ...)?
I read that the nginx HTTP server can handle up to 1024 concurrent connections.
Does that mean every server can only handle 1024 simultaneous connections? That doesn't make any sense to me; servers today handle C10K, and there are servers said to handle up to 100M connections. Does this 1024 have anything to do with the file descriptor limit?
I read that nginx works with event-driven, non-blocking I/O.
I also read that nginx creates one thread that handles requests with an event loop (the thread loops over file descriptors looking for ready I/O; instead of blocking while waiting for I/O to become ready, it goes on to handle another request).
About the 1024 concurrent connections: does it mean nginx can only create 1024 threads per process?
I read that a server's maximum number of concurrent connections is 65535 because ports are 16-bit.
Please bear with me and correct me if I'm wrong.
If you know, can you explain to me in detail how nginx handles a request, from creating processes, threads, and thread pools, to how it handles non-blocking I/O? Does nginx use select() with FD_SETSIZE = 1024?
And what is worker_connections, and why is it 1024? What is this 1024?
And please also tell me in detail how Apache works, from process to thread, to handle every request.
Thanks for the help; this is really going to help me with my project to create a non-blocking HTTP server.
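Regarding the 1024 that keeps coming up: in practice it usually comes from the worker_connections 1024; line in the stock nginx.conf, which is a per-worker-process setting that can be raised, not a hard limit of nginx or of the machine; a minimal sketch of the relevant part of nginx.conf, with illustrative values only:

# nginx.conf sketch -- illustrative values, not tuning advice
worker_processes  auto;           # one worker process per CPU core
worker_rlimit_nofile  8192;       # raise the per-worker file descriptor limit

events {
    use  epoll;                   # on Linux nginx normally uses epoll, not select(), so FD_SETSIZE does not cap it
    worker_connections  1024;     # maximum simultaneous connections PER worker process
}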

How to check if clients are starved in an Apache server?

How would you monitor server performance in the sense of:
Counting requests that timed out without being processed at all (the client was starved)
Counting requests that timed out while being processed
Counting requests that failed because of an error, at least at the Apache level
Thanks
Count requests that timed out without being processed at all (client was starved)
It depends on what platform you are operating on and what the Apache server is used for. If the Apache server is used as a back end for some website, you could add timestamps to each request made by the client (website user), or let the client keep track of the requests it performed along with their timestamps. Send this data to the server and let the server compare it against its own logs.
Thus I would advise keeping track, both client-side and server-side, of all requests sent and received together with their status (success or failure) and timestamp.
For more specific advice, I think more context on the actual implementation is a must.
As far as I know, Apache does not support this kind of feature beyond its server status page, and that does not include enough metrics to match your requirements.
nginx, however, provides more metrics, which almost cover what you need.
The open source version of nginx supports the following metrics:
accepts / accepted
handled
dropped
active
requests / total
Please refer to this article. If you are hosting a PHP web app, you could move to nginx in that case.
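Those counters are typically exposed through nginx's stub_status module; a minimal sketch of a location block (the path and allowed address are arbitrary choices), where "dropped" is derived as accepts minus handled:

# nginx.conf sketch (inside an http { server { ... } } block)
location /nginx_status {
    stub_status;          # requires ngx_http_stub_status_module
    allow 127.0.0.1;      # restrict the status page to localhost
    deny  all;
}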
I am not an expert on this, but here is my take.
A request timeout generates a 408 error in the logs, which is countable, and Apache provides the %D log variable to measure processing time.
Count requests that timed out without being processed at all (client was starved)
If there is no processing time, or only a minimal one, you can assume the request was not processed at all.
Count requests that timed out while being processed
The opposite of the previous case: you will see some processing time logged.
Count requests that failed because of an error, at least at the Apache level
You will get an error log entry for whatever error Apache has encountered.
What role keep-alive plays in this case is another matter.
Logging differs between Apache 2 and 2.4, so keep that in mind, but the common log format will lead you to a result.
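A log format along these lines (a sketch using mod_log_config; the format name is arbitrary) records the final status code and the %D processing time in microseconds, so 408s and zero- or near-zero-duration requests can be counted straight from the access log:

# httpd.conf sketch -- requires mod_log_config
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog "logs/access_log" timed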
Edit:
If you are looking for tools that give you some insight, try the ones below; the Apache httpd server does provide all the necessary insights that nginx and other servers out there can provide.
https://logz.io/
http://goaccess.prosoftcorp.com/
http://awstats.sourceforge.net/
References:
http://httpd.apache.org/docs/current/mod/mod_log_config.html
https://httpd.apache.org/docs/2.4/mod/mod_reqtimeout.html
https://httpd.apache.org/docs/2.4/logs.html

Maximum number of concurrent connections exceeded

I am using Apache HTTP Server 1.3.29
I am currently running an Apache server that is experiencing this error:
Internal Server Error 500
Exception: EWebBrokerException
Message: Maximum number of concurrent connections exceeded. Please try again later
This message appears when many users are using the system, but I do not know how many connections it takes to cause it.
I need help optimizing the server to support more connections/accesses.
Here is the link to the server httpd.conf view (only the important parts):
http://www.codesend.com/view/8fd87e7d6cc1c94eee30a8c45981e162/
Thanks!
It's not a lack of machine resources. The server has 16 GB of RAM and a good processor, and the problem occurs when resource usage is not even at 30%. Perhaps some adjustment in Apache is needed; that is the help I am seeking here.

WebLogic server fails to respond under load

We have quite a strange situation at my site. Under load, our WebLogic 10.3.2 server fails to respond. We are using RESTEasy with HttpClient version 3.1 to communicate with a web service deployed as a WAR.
What we have is a calculation process that runs in 4 containers on 4 physical machines, and each of them sends requests to WebLogic during the calculation.
On each run we see messages from HttpClient like this:
[THREAD1] INFO I/O exception (org.apache.commons.httpclient.NoHttpResponseException) caught when processing request: The server OUR_SERVER_NAME failed to respond
[THREAD1] INFO Retrying request
HttpClient makes several requests until it gets the necessary data.
I want to understand why WebLogic can refuse connections. I read about the WebLogic thread pool that processes HTTP requests and found that WebLogic allocates a separate thread to process each web request, and that the number of threads is not bounded in the default configuration. Also, our server is configured with Maximum Open Sockets: -1, which means the number of open sockets is unlimited.
Given all this, I'd like to understand where the issue is. Is it on the WebLogic side, or is it a problem in our business logic? Can you help me investigate the situation more deeply?
What else should I check to confirm that our WebLogic server is configured to handle as many requests as we need?

Tomcat - Configuring maxThreads and acceptCount in Http connector

I currently have an application deployed on Tomcat that interacts with a Postgres database via JDBC. The queries are very expensive, so what I'm seeing is a timeout caused by Tomcat or Apache (Apache sits in front of Tomcat in my configuration). I'm trying to limit the database to 20-30 simultaneous connections so that it is not overwhelmed. I've done this through the connection pool configuration, setting maxActive to 30 and maxIdle to 20. I also bumped up maxWait.
In this scenario I'm limiting the USE of the database, but I want the connections/requests to be POOLED within Tomcat. Apache can accept 250 simultaneous requests, so I need to ensure Tomcat can also accept that many, but handle them appropriately.
Tomcat has two settings in the HTTP Connector config file:
maxThreads - "Max number of request processing threads to be created by the Http Connector, which therefore determines the max number of simultaneous requests that can be handled."
acceptCount - "The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused."
So I'm guessing that if I set maxThreads to the max number of JDBC connections (30), then I can set acceptCount to 250-30 = 220.
I don't quite understand the difference between a thread that is WAITING for a JDBC connection to open up in the pool versus a request that is queued by the connector... My thought is that a queued request consumes fewer cycles, whereas a running thread waiting on the JDBC pool spends cycles checking the pool for a free connection...?
Note that the HTTP connector is for incoming HTTP requests and is unrelated to JDBC. You probably want to configure the JDBC connection pool separately, such as via the connectionProperties of the JDBC connector:
http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
Unless your application handles requests in a way where it connects directly to the database on a per-HTTP-connection basis, you should configure your JDBC connection pool based on what your database software is set to / can handle, and maxThreads based on what your application and hardware can handle.
Keeping the maxActive value (of the DB connection pool) lower than maxThreads (i.e. the number of concurrent request threads) makes sense in most cases. You can set acceptCount to a higher value depending on the traffic you expect on your website and how fast a single request can be processed.
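Putting the answers above together, a sketch of the two separate pieces of configuration (the values are illustrative only, and the resource name, URL, and credentials are placeholders rather than recommendations):

<!-- server.xml sketch: the HTTP connector sizes request handling;
     maxThreads is the request-processing pool, acceptCount the overflow queue -->
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000"
           maxThreads="250" acceptCount="100" />

<!-- context.xml sketch: the JDBC pool is configured separately and kept smaller than maxThreads -->
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/mydb"
          username="app" password="secret"
          maxActive="30" maxIdle="20" maxWait="10000" />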