I currently have an application deployed using Tomcat that interacts with a Postgres database via JDBC. The queries are very expensive, so what I'm seeing is a timeout caused by Tomcat or Apache (Apache sits in front of Tomcat in my configuration). I'm trying to limit the connections to the database to 20-30 simultaneous connections, so that the database is not overwhelmed. I've done this using the connection pool configuration, setting maxActive to 30 and maxIdle to 20. I also bumped up the maxWait.
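For reference, a minimal sketch of what that pool configuration might look like using the Tomcat JDBC pool's programmatic API (org.apache.tomcat.jdbc.pool). The JDBC URL, credentials, and the 30-second maxWait are placeholders, since the question only gives exact values for maxActive and maxIdle:

    import org.apache.tomcat.jdbc.pool.DataSource;
    import org.apache.tomcat.jdbc.pool.PoolProperties;

    public class PostgresPool {
        public static DataSource create() {
            PoolProperties p = new PoolProperties();
            p.setUrl("jdbc:postgresql://localhost:5432/mydb");  // placeholder URL
            p.setDriverClassName("org.postgresql.Driver");
            p.setUsername("app");                               // placeholder credentials
            p.setPassword("secret");
            p.setMaxActive(30);    // at most 30 simultaneous connections to Postgres
            p.setMaxIdle(20);      // keep at most 20 idle connections around
            p.setMaxWait(30000);   // "bumped up" wait before a borrower gives up (value assumed)
            DataSource ds = new DataSource();
            ds.setPoolProperties(p);
            return ds;
        }
    }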
In this scenario I'm limiting the USE of the database, but I want the connections/requests to be POOLED within Tomcat. Apache can accept 250 simultaneous requests. So I need to ensure Tomcat can also accept this many, but handle them appropriately.
Tomcat has two settings in the HTTP Connector config file:
maxThreads - "Max number of request processing threads to be created by the Http Connector, which therefore determines the max number of simultaneous requests that can be handled."
acceptCount - "The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused."
So I'm guessing that if I set maxThreads to the max number of JDBC connections (30), then I can set acceptCount to 250-30 = 220.
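In a standard deployment these are the maxThreads and acceptCount attributes on the Connector element in server.xml; purely as an illustration, the same sizing can be sketched with the embedded Tomcat API (the port is arbitrary, and the two numbers come from the guess above):

    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;

    public class ConnectorSizing {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);                          // arbitrary port
            Connector connector = tomcat.getConnector();   // the default HTTP/1.1 connector
            connector.setProperty("maxThreads", "30");     // match the JDBC pool's maxActive
            connector.setProperty("acceptCount", "220");   // queue the remaining 250 - 30 requests
            tomcat.start();
            tomcat.getServer().await();
        }
    }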
I don't quite understand the difference between a thread that is WAITING on a JDBC connection to open up from the pool, versus a request that is queued... My thought is that a queued request consumes fewer cycles, whereas a running thread waiting on the JDBC pool will be spending cycles checking the pool for a free connection...?
Note that the HTTP connector is for incoming HTTP requests and is unrelated to JDBC. You probably want to configure the JDBC connection pool separately, for example via the connectionProperties of the JDBC connection pool:
http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
Unless your application handles requests in a way that connects directly to the database on a per-HTTP-connection basis, you should configure your JDBC connection pool based on what your database software can handle, and your maxThreads based on what your application and hardware can handle.
Keeping the maxActive value (of the DB connection pool) lower than maxThreads (i.e. the number of concurrent threads) makes sense in most cases. You can set acceptCount to a higher value depending on the traffic you expect on your website and how fast a single request can be processed.
I'm confused. I've tried to understand this and googled for days, and every day I get more confused about concurrent connections.
What I know:
Concurrent connections: the number of open TCP connections. Every visitor who sends a request to my website counts as one concurrent connection. Please bear with me and correct me if I'm wrong.
What I want to know:
Does every server have its own concurrent-connection limit, related to the server's capability (CPU, RAM, disk, ...)?
I read that the nginx HTTP server can handle up to 1024 concurrent connections.
Does that mean every server can only handle 1024 simultaneous connections? That doesn't make any sense to me; nowadays servers can handle C10K, and there are servers that can handle up to C100M connections. Does this 1024 have anything to do with a file-descriptor limit?
I read that nginx works with event-driven, non-blocking I/O.
I also read that nginx creates one thread that handles requests with an event loop: the thread loops over file descriptors looking for ready I/O instead of blocking while waiting for I/O to become ready; in the meantime it goes off and handles another request.
About the 1024 concurrent connections: can nginx only create 1024 threads per process?
I read that a server's maximum number of concurrent connections is 65535, because of the 16-bit port range?
Please bear with me and correct me if I'm wrong.
If you know, can you explain to me in detail how nginx handles a request, from creating processes, threads, and thread pools, to how nginx handles non-blocking I/O? Does nginx use select() with FD_SETSIZE = 1024?
And what is worker_connections, and why is it 1024? What is this 1024?
And please also tell me in detail how Apache works: the processes and threads it uses to handle each request.
Thanks for the help; this will really help me with my project of creating a non-blocking HTTP server.
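nginx itself is written in C and uses epoll or kqueue rather than select(), but the event-loop idea described in the question can be sketched in Java with NIO: one thread blocks in a selector, and every ready socket is handled in turn instead of each connection getting its own thread. The port and the canned response below are arbitrary:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;
    import java.util.Iterator;

    public class EventLoopServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));      // arbitrary port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buf = ByteBuffer.allocate(8192);
            byte[] response = ("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                    + "Connection: close\r\n\r\nok").getBytes(StandardCharsets.US_ASCII);

            while (true) {
                selector.select();                          // one thread blocks until any channel is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {               // new connection: register it, no dedicated thread
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {          // data is ready: read without blocking
                        SocketChannel client = (SocketChannel) key.channel();
                        buf.clear();
                        if (client.read(buf) == -1) {       // peer closed the connection
                            key.cancel();
                            client.close();
                            continue;
                        }
                        client.write(ByteBuffer.wrap(response));
                        client.close();
                    }
                }
            }
        }
    }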
I have a requirement where I have configured 'min-connections = number of client threads * number of Geode cache servers',
so that under load threads do not have to wait for a connection, and no new connection needs to be created even if all threads start accessing a single server.
The problem is that it is the locator that decides how many connections the client may create to a given server JVM, based on its load probe.
I want to ignore all of that; I just want an equal number of connections to every cache server JVM.
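For what it's worth, here is a sketch (with a hypothetical locator address and made-up thread/server counts) of applying that formula to the pool's min-connections via Geode's ClientCacheFactory. Note that the locator's load-based balancing still decides which server each connection actually lands on, so this alone does not guarantee an equal spread:

    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;

    public class GeodeClientPool {
        public static void main(String[] args) {
            int clientThreads = 10;   // assumed number of worker threads in this client
            int cacheServers  = 3;    // assumed number of cache server JVMs

            ClientCache cache = new ClientCacheFactory()
                    .addPoolLocator("locator-host", 10334)               // hypothetical locator
                    .setPoolMinConnections(clientThreads * cacheServers) // pre-create threads * servers connections
                    .setPoolMaxConnections(-1)                           // -1 = no upper bound
                    .create();

            // ... use regions from the cache ...
            cache.close();
        }
    }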
We have a legacy cluster of servers running Apache 2.4 that run our application sitting behind an ELB. This ELB has two listeners, one HTTP, and one HTTPS which terminates at the ELB and sends regular HTTP traffic to the instances behind it. This ELB also has pre-open turned off (it was causing a busy worker buildup). Under normal load we have 1-3 busy workers per instance.
We have a new cluster of servers we are trying to migrate to behind a new ELB. The purpose of this migration is to allow for SNI – serving TLS traffic to thousands of domains. As such this cluster uses mod_proxy_protocol which has been enabled at the ELB level. For the purposes of testing we’ve been weighting traffic at the DNS (Route 53) level to send 30% of our traffic to the new load balancer. Under even this small load we see 5 – 10 busy workers and that grows as traffic does.
As a further test we took one of these new instances, disabled proxy_protocol, and moved it from the new ELB to the old ELB; the worker count drops to average levels of 1-3 busy workers. This seems to indicate that there is an issue either with the ELB (differences between HTTP and TCP handling?) or with mod_proxy_protocol.
My question: Why is it that we have twice the busy Apache workers when using proxy protocol and the new ELB? I would think that since TCP listeners are dumb and don’t do any processing on the traffic, they would be faster and as a result consume less worker time than HTTP listeners, which actively ‘modify’ the traffic going through them.
Any guidance to help us diagnose this issue is appreciated.
The difference is simple and significant:
An ELB in HTTP mode takes care of holding the idle keep-alive connections from browsers without holding open corresponding connections to the instance. There's no necessary correlation between browser connections and back-end connections -- a backend connection can be reused.
In TCP mode, it's 1:1. It has to be, because the ELB can't reuse a back-end connection for a different browser connection on the front end -- it's not interpreting what's going down the pipe. That's always true for TCP, but if the reason isn't intuitive, it should be particularly obvious with the proxy protocol enabled. The PROXY "header" is not in fact a "header" in the usual sense -- it's a preamble. It can only be sent at the very beginning of a connection, identifying the source address and port. The connection persists until the browser or server closes it, or it times out. It's 1:1.
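To make the "preamble, not a header" point concrete, here is a rough sketch of what a PROXY protocol v1 sender emits: a single plain-text line at the very start of the TCP connection, before any HTTP bytes (the backend host and the addresses below are made up):

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class ProxyProtocolPreamble {
        public static void main(String[] args) throws Exception {
            // Connect to a hypothetical backend that expects the proxy protocol
            // (e.g. HAProxy with "accept-proxy", or Apache with mod_proxy_protocol enabled).
            try (Socket socket = new Socket("backend.example.com", 8080)) {
                OutputStream out = socket.getOutputStream();
                // The v1 preamble: one text line, sent exactly once, before any HTTP bytes,
                // carrying the original client's source address/port and the destination.
                out.write("PROXY TCP4 203.0.113.7 10.0.0.5 51234 8080\r\n"
                        .getBytes(StandardCharsets.US_ASCII));
                // Everything after this point is the ordinary byte stream for this one connection.
                out.write(("GET / HTTP/1.1\r\nHost: backend.example.com\r\n"
                        + "Connection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
                out.flush();
            }
        }
    }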
This is not likely to be viable with Apache.
Back to HTTP mode, for a minute.
This ELB also has pre-open turned off (it was causing a busy worker buildup).
I don't know how you did that -- I've never seen it documented, so I assume this must have been through a support request.
This seems like a case of solving entirely the wrong problem. Instead of having a number of connections that seems to you to be artificially high, all you've really accomplished is keeping the number of connections artificially low -- ultimately, you're probably actually impairing your performance and ability to scale. Those spare connections are for the purpose of handling bursts of demand. If your instance is too small to handle them, then I would suggest that the real problem is just that: your instance is too small.
Another approach -- which is exactly the solution I use for my dreaded legacy Apache-based applications (one of which has a single Apache server sitting behind a total of about 15 to 20 different ELBs -- necessary because each ELB is offloading SSL using a certificate provided by one of the old platform's customers) -- is HAProxy between the ELBs and Apache. HAProxy can handle literally hundreds of connections and millions of requests per day on tiny instances (I'm talking tiny -- t2.nano and t2.micro), and it has no problem keeping the connections alive from all of the ELBs yet closing the Apache connection after each request... so it's optimizing things in both directions.
And of course, you can also use HAProxy with a TCP balancer and the proxy protocol -- the author of HAProxy was also the creator of the proxy protocol standard. You can also just run it on the instances with Apache rather than on separate instances. It's lightweight in memory and CPU and doesn't fork. I'm not affiliated with the project, other than having submitted occasional bug reports during the development of the Lua integration.
I am using Jedis client for connecting to my Redis server. The following are the settings I'm using for connecting with Jedis (using apache common pool):
JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setTestOnBorrow(true);
poolConfig.setTestOnReturn(true);
poolConfig.setMaxIdle(400);
// Tests whether connections are dead during idle periods
poolConfig.setTestWhileIdle(true);
poolConfig.setMaxTotal(400);
// configure a generous max wait so that timeouts don't occur
poolConfig.setMaxWaitMillis(120000);
So far with these settings I'm not facing any issues in terms of reliability (I can always get the Jedis connection whenever I want), but I am seeing a certain lag in Jedis performance.
Can anyone suggest further optimizations for achieving high performance?
You have 3 tests configured:
TestOnBorrow - Sends a PING request when you ask for the resource.
TestOnReturn - Sends a PING when you return a resource to the pool.
TestWhileIdle - Sends periodic PINGs from idle resources in the pool.
While it is nice to know your connections are still alive, those onBorrow PING requests are wasting an RTT before your request, and the other two tests are wasting valuable Redis resources. In theory, a connection can go bad even after the PING test so you should catch a connection exception in your code and deal with it even if you send a PING. If your network is stable, and you do not have too many drops, you should remove those tests and handle this scenario in your exception catches only.
Also, by setting MaxIdle == MaxTotal, there will be no eviction of resources from your pool (whether that's good or bad depends on your usage). And when your pool is exhausted, an attempt to get a resource will end up timing out after 2 minutes of waiting for a free one.
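As a rough illustration of the above (not a drop-in replacement; the host, port, and 5-second wait are assumptions): the same pool without the PING tests, with a shorter wait so callers fail fast, and with the dead-connection case handled in a catch block:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;
    import redis.clients.jedis.exceptions.JedisConnectionException;

    public class LeanJedisPool {
        public static void main(String[] args) {
            JedisPoolConfig poolConfig = new JedisPoolConfig();
            poolConfig.setMaxTotal(400);
            poolConfig.setMaxIdle(400);
            // Fail fast instead of letting callers wait up to 2 minutes for a resource.
            poolConfig.setMaxWaitMillis(5000);
            // No testOnBorrow / testOnReturn / testWhileIdle: skip the extra PING round trips.
            JedisPool pool = new JedisPool(poolConfig, "localhost", 6379);

            try (Jedis jedis = pool.getResource()) {
                jedis.set("key", "value");
            } catch (JedisConnectionException e) {
                // The connection turned out to be dead anyway: discard it and retry/log here.
            }
            pool.close();
        }
    }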
I am trying to simulate a slow http read attack against apache server running on my localhost.
But it seems like, the server does not complain and simply waits forever for the client to read.
This is what I do:
Request a huge file (say ~1MB) from the http server
Read the response from the server in a loop waiting 100 secs before successive reads
Since the file is huge and the client receive buffer is small, the server has to send the file in multiple chunks. But on the client side I wait 100 seconds between successive reads. As a result, the server often polls the client and finds that the client's receive window size is zero, since the client has not yet drained its receive buffer.
But it looks like the server does not bother to break the connection; it silently keeps polling the client. The server sends data whenever the client's window size is > 0 and then goes back to waiting for the client.
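For reference, a bare-bones sketch of the slow-read client described above (the URL, port, and buffer sizes are arbitrary): it requests a large file and then sleeps 100 seconds between tiny reads, so the advertised receive window stays at zero most of the time:

    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class SlowReadClient {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket();
            socket.setReceiveBufferSize(256);              // tiny receive buffer so the window fills quickly
            socket.connect(new InetSocketAddress("localhost", 80));
            socket.getOutputStream().write(
                ("GET /huge-file.bin HTTP/1.1\r\n"         // hypothetical ~1 MB resource
                 + "Host: localhost\r\nConnection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[128];
            while (in.read(buf) != -1) {
                Thread.sleep(100_000);                     // wait 100 s between successive reads
            }
            socket.close();
        }
    }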
I want to know whether there are any Apache config parameters I can set to break the connection from the server side after waiting some time for the client to read the data.
Perhaps this would be more useful to you (simpler and saves you time): http://ha.ckers.org/slowloris/ is a Perl script that sends partial HTTP requests; the Apache server leaves each such connection open (now unavailable to new users), and if it is run from a Linux environment (Linux does not limit threads beyond hardware capability) you can effectively tie up all open sockets and in turn prevent other users from accessing the server. It uses minimal bandwidth because it does not "flood" the server with requests; it simply takes the sockets hostage, slowly. You can download the script here: http://ha.ckers.org/slowloris/slowloris.pl
To prevent an attack like this (well, mitigate) see here: https://serverfault.com/questions/32361/how-to-best-defend-against-a-slowloris-dos-attack-against-an-apache-web-server
You could also use a load-balancer or round-robin setup.
Try slowhttptest to test the slow read attack you're describing. (It can also be used to test slow sending of headers.)