My application is built in Java and uses a SQL Server database.
Roughly 50 people use the application, but I see up to 900 connection objects in the database, and the DB goes down during peak times. How can I use connection objects effectively and keep the number of connections under control?
To control the DB connections, follow the points below.
Check the code to confirm that every connection that is created is also
destroyed after use. Every DB transaction should follow the pattern
below; if you are using an ORM, make sure you are opening and closing
connections as the ORM provider recommends.
openDBConnection();
try {
    performDBTransaction();
} finally {
    closeDBConnection();
}
Check whether you have enabled connection pooling. If so, check the
minimum connections property in the connection string or ORM
configuration: as soon as a single transaction runs against SQL Server,
the pool opens as many connections as the minimum specifies, so your
application may well be specifying something close to 900.
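As a sketch, the minimum (and maximum) pool size is typically set with ADO.NET-style connection-string keywords; the server, database, and user names below are placeholders:

```
Server=myDbHost;Database=myAppDb;User Id=appUser;Password=...;Min Pool Size=10;Max Pool Size=50;
```

With Min Pool Size=10, each pool keeps at least 10 connections open; if that value were 900, every pool would hold 900 connections even when mostly idle.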
If your application is a web app or talks to web services, check
whether you have enabled web gardening (multiple processes handling web
requests on the same server) or web farming (multiple servers handling
requests behind a load balancer). If either or both are in place, be
aware of the formula for connection creation against SQL Server:
Total Pooled Connections = Number of Processes (web garden size) *
Number of Web/App Servers (web farm size) * Min Pool Size value in the
connection string.
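As a worked example with hypothetical numbers, a web garden of 3 processes across a farm of 6 servers, each pool with a minimum of 50 connections, already accounts for 900 connections:

```java
public class PoolMath {
    public static void main(String[] args) {
        int processesPerServer = 3; // web garden size (hypothetical)
        int servers = 6;            // web farm size (hypothetical)
        int minPoolSize = 50;       // Min Pool Size in the connection string
        // Each process on each server maintains its own pool of at least minPoolSize.
        int totalPooledConnections = processesPerServer * servers * minPoolSize;
        System.out.println(totalPooledConnections); // 900
    }
}
```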
For Effective Use:
Connection pooling is already an effective way to use connections, but the right pool size depends on your requirements and your DB server's hardware capabilities. If you are sure that only 50 users will be transacting, one transaction each per unit of time, then simply configure the connection pool so that at most 50 connections can be created.
This is my understanding after reading the documentation:
Pooling: like many other DBs, we have only a limited number of allowed connections, so callers line up and wait for a free connection to be returned to the pool (a connection is like a token, in a sense).
At any given time, the number of active and/or available connections stays in the range 0 to max.
idleTimeoutMillis is said to be "milliseconds a client must sit idle in the pool and not be checked out before it is disconnected from the backend and discarded." I'm not clear on this. My general assumption: when a client (say, a web app) has finished its CRUD but has not voluntarily returned the connection, the connection is considered idle; node-postgres starts the clock, and once it reaches that number of milliseconds it takes the connection back into the pool for the next client. So what does "not be checked out before it is disconnected from the backend and discarded" mean?
Say idleTimeoutMillis: 100. Does that mean the connection will literally be disconnected (logged out) after being idle for 100 milliseconds? If so, it is not returned to the pool, which would mean frequent reconnects, as the doc says below:
Connecting a new client to the PostgreSQL server requires a handshake
which can take 20-30 milliseconds. During this time passwords are
negotiated, SSL may be established, and configuration information is
shared with the client & server. Incurring this cost every time we
want to execute a query would substantially slow down our application.
Thanks in advance for the stupid questions.
Sorry this question was not answered for so long but I recently came across a bug which questioned my understanding of this library too.
Essentially, when you're pooling, you're telling the library it can have at most X connections to the database open simultaneously. Every request that comes into a CRUD API, for example, "checks out" a connection from the pool, so at most X requests can hold connections at once. While a request holds a checked-out connection, no other request can use that connection.
So in order to "reuse" a connection, the request that holds it has to release it back to the pool when it is done (the opposite of checking out). Then another request can check out that same connection and run the aforementioned query.
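The checkout/release cycle above can be sketched with a toy pool (plain Java, not the node-postgres API; class and names are made up for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy pool: a fixed set of "connections" (here just string tokens),
// illustrating checkout (blocking take) and release (put back for reuse).
class ToyPool {
    private final BlockingQueue<String> idle;

    ToyPool(int max) {
        idle = new ArrayBlockingQueue<>(max);
        for (int i = 0; i < max; i++) idle.add("conn-" + i);
    }

    // Check out: blocks until an idle connection is available.
    String checkout() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    // Release: return the connection to the pool so the next caller can use it.
    void release(String conn) {
        idle.add(conn);
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        ToyPool pool = new ToyPool(2);
        String a = pool.checkout();
        String b = pool.checkout();      // pool is now empty; a third checkout would block
        pool.release(a);                 // a is idle again and reusable
        String c = pool.checkout();      // gets the released connection back
        System.out.println(c.equals(a)); // true
    }
}
```

A real pool closes idle connections after idleTimeoutMillis instead of keeping tokens around forever, but the blocking checkout and explicit release are the same idea.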
idleTimeoutMillis was very confusing to me too and took a while to get my head around. When an open DB connection has been released back to the pool, it is in an IDLE state, meaning any incoming request can check it out, since it is not currently in use. This variable says how long we wait before closing a connection that is sitting idle. This can matter for various reasons. Open DB connections consume memory and other resources, so closing them can be beneficial. It also matters when autoscaling: say you've been at peak requests/second and were using every DB connection, then it's useful to keep idle connections open for a bit; but if the timeout is too long and you scale down, each lingering idle connection keeps holding memory.
The benefit of a pooled idle connection is that when you send a query on it, you don't need to re-authenticate with the DB: it's already authenticated and ready to go.
I'm relatively new to this, but I have a Tomcat cluster set up (using mod_proxy from httpd) with session replication (separate Redis server) for fault tolerance.
I have a couple of questions about this setup:
My application (spring/hibernate) has a different database per user. So the problem here is that the data source (using spring along with hibernate for persistence) is created at Tomcat level. Thus, whatever connection pooling I do will be at server level.
As per the cluster configuration the Tomcat instances will create their own Connection Pool.
I'd like to know if connection pooling is possible at a cluster level using Tomcat i.e. is there a way to make sure that all the servers in the cluster are using the shared Connection Pool?
I do not want to configure a DataSource on every Tomcat instance because of performance issues. Before the cluster setup, the application was deployed on a single server, and the DataSource was configured to allow only a few (50) connections in the connection pool per DataSource.
Now, in a clustered environment, I cannot afford to duplicate or split that number of connections across every Tomcat instance, and dynamic registration of nodes will create further problems. I'd also like to know whether there is an alternative solution if cluster-wide connection pooling is not possible or is inefficient.
I'm going to handle your questions in reverse order, since the second one is more simple.
Database connection pooling in Tomcat cannot be configured cluster-wide: you have to configure a separate pool for each node in the cluster. But this doesn't have to be bad news... there's nothing wrong with configuring a node to have 5 or 10 or 100 connections in the connection pool on each node.
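A per-node pool is just the standard Tomcat JNDI DataSource declared on each node; a sketch using DBCP2 attribute names (the host, database, credentials, and resource name are placeholders):

```xml
<!-- conf/context.xml on each node in the cluster -->
<Resource name="jdbc/appDb" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
          url="jdbc:sqlserver://dbhost:1433;databaseName=appdb"
          username="appUser" password="..."
          initialSize="2" maxTotal="10" maxIdle="5" maxWaitMillis="10000"/>
```

Note that with N nodes the database sees up to roughly N * maxTotal connections, so size the per-node limit with the whole cluster in mind.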
It's true, you might end up with a situation where too many users connect to the database at once and overwhelm it, but that can happen with a single node too. There isn't anything conceptually different about multiple nodes that wouldn't also be true for a single node.
The key is to make sure that your cluster balances users appropriately. If each node is limited to, say, 5 database connections, you don't want 100 users landing on one node while the other nodes get 5 users each: on the popular node, 100 users have to share those 5 connections, while on the other nodes each user gets a connection all to themselves.
Back to your first item, which is more complicated. If you have a separate database per user, then connection-pooling is an impossible thing to accomplish because you will absolutely have to establish a new connection for every user every time. Those connections aren't poolable, at least not without being quite careful about it. It sounds like you have an architectural issue that you might have to solve before you can identify a technical solution to that issue.
I'm new to Azure SQL Database; this is my first project migrating from an on-premises setup to Azure. The first thing that concerns me is that there is a limit on concurrent logins to Azure SQL Database, and once requests exceed that number the subsequent ones start getting dropped. The current service tier (S0) caps at 60 concurrent logins, and I have already hit that multiple times, judging by the SQL failures in my application log.
So my question is:
Is it normal to exceed that number of concurrent login? I'd like to get an idea of whether or not my application is having some issue, or my current service tier is too low.
I've looked through our database code to make sure we are not leaving database connections open. We use Enterprise Library; every use of DBCommand and IDataReader is wrapped in a using block, so they are disposed once they go out of scope.
Note: my web application consists of a front-end app plus multiple web services supporting the underlying features, and each service connects to the same database for a specific collection of data. That makes me think hitting 60 concurrent logins might be normal: a single page or action can involve multiple calls behind the scenes, hence multiple database connections from a few APIs, and with more than one user on the application, 60 is easy to reach.
Again, with the on-prem setup in the past, I never noticed this kind of limitation.
Thanks.
To answer your question: yes, the limit is 60 on an S0.
http://gavinstevens.com/2016/11/30/sql-server-vs-azure-sql/
I'm using Entity Framework 6 in multiple applications. All of these applications (around 10) use the same database. In some cases (when there are lots of simultaneous connections, I believe) the applications stop working with the exception "underlying provider failed on open".
I did some research and found that Entity Framework's default max pool size is 100, so I increased that number to 1,000 in most of the applications.
Is it possible that if I leave one application at the default of 100, my other applications will stop working too?
As I understand it, Entity Framework tells SQL Server how many connections will be available, but are these connections per application, or shared globally?
As I suspected, every application connecting to the same SQL instance must change its connection string to allow more than Entity Framework's default of 100 pooled connections.
Scenario:
Application 1 and 2 have MaxPoolSize = 1000.
Application 3 has MaxPoolSize = 100.
Application 1 and 2 are making connections to SQL, the current connections are 200... Everything works fine.
Application 3 tries to connect to SQL; its configuration caps its own pool at 100, so once that pool is exhausted its requests are blocked until connections time out and the count falls back under 100.
In other words, make sure MaxPoolSize is changed from the default 100 everywhere to avoid this problem; in this case I changed all of them to allow a maximum of 1,000 connections.
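For EF6 on SQL Server the cap lives in the ADO.NET connection string; a sketch with placeholder server and database names:

```
Server=mySqlHost;Database=sharedDb;Integrated Security=True;Max Pool Size=1000;
```

Note that the pool is maintained per application process (and per distinct connection string), so each of the ~10 applications needs this change in its own configuration.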
I have created 2 distinct data source connections (to MS SQL Server 2008) in the ColdFusion Administrator that have exactly the same settings except for the actual name of the connection. My question is will this create two distinct connection pools or will they share one?
They will have different pools. The pools are defined at the data source level and you have two distinct data sources as far as ColdFusion is concerned. Why would you have two different data sources with the exact same settings? I guess if you wanted to force them to use different connection pools. I can't think of a reason why though.
I found this page that documents how database connections are handled in ColdFusion. Note that the "Maintain Database Connections" setting is set for each data source.
Here is the section related to connection pooling from that page (in case it goes away):
If the "Maintain Database Connections" is set for a data source, how does ColdFusion Server maintain the connection pool?
When "Maintain Database Connections" is set for a data source, ColdFusion keeps the connection open after its first connection to the database. It does not log out of the database after this first connection. You can change this setting according to the instructions in step d above. Another setting in the ColdFusion Administrator, called "Limit cached database connection inactive time to X minutes," closes a "maintained" database connection after X inactive minutes. This setting is server-wide and determines when a "maintained" connection is finally closed. You can modify it on the "Caching" tab within the ColdFusion Administrator.
If a request is using a data source connection that is already opened, and another request to the data source comes in, a new connection is established. Since only one request can use a connection at any time, the simultaneous request will open up a new connection because no idle cached connections are available. The connection pool can increase up to the setting for simultaneous connections limit which is set for each data source. This setting, called, "Limit Connections," is in the ColdFusion Administrator. Click on one of the data source tabs and then click on one of your data sources. Click on "CF Settings" and put a check next to "Limit Connections" and enter a number in the sentence, "Enable the limit of X simultaneous connections." Please note that if you do not set this under the data source setting, ColdFusion Server will use the server wide "Simultaneous Requests" setting.
At this point, there is a pool of two database connections that ColdFusion Server maintains. Each connection remains in the pool until either the "Connection Timeout" period is reached or exceeds the inactivity time. If neither of the first two options are implemented, the connections remain in the pool until ColdFusion is restarted.
The "Connection Timeout" setting closes the connection and eliminates it from the pool whether it has been active or inactive, although it will not terminate a connection while a process is actively using it. You can change this setting under "CF Settings" for your data source in the ColdFusion Administrator. Note: only the "Cached database connection inactive time" setting will end a connection and eliminate it from the pool when it hasn't been used. You can also use "Connection Timeout" to override the "Cached database connection inactive" setting, as it applies only to a single data source rather than all data sources.
They have different pools. Pooling is implemented by ColdFusion's Java code (or was that part in the JRun code?). It doesn't use any JDBC-based pooling. CF10 could have switched to JDBC-based pooling, but I doubt it.
As a test
Set the 'verify connection' SQL to waitfor delay '00:01:00' or similar (wait for one minute) on both pools. Pool access is single-threaded for each pool, including the time taken to run the verify, so create 2 pages, each accessing a different data source, and request both. If both complete after 1 minute, there are 2 pools; if one page takes 1 minute and the other takes 2 minutes, it's one pool.
As a side note, if during this 1-minute verify you yank out the network cable (causing the JDBC socket to stay open forever waiting for a response), your thread pool is now dead and you need to restart CF.
Try creating a temporary table through the two different data sources: if you get an error on the second query, they are using the same pool; if both run fine, the pools are different.