I'm using Entity Framework 6 in multiple applications. All of these applications (around 10) use the same database. In some cases (when there are lots of connections at the same time, I believe), the applications stop working with the exception "underlying provider failed on open".
I did some research and found that Entity Framework's default maximum pool size is 100, so I increased that number to 1,000 in most of the applications.
Is it possible that if I leave one application at the default of 100, my other applications will stop working too?
As I understand it, Entity Framework tells SQL how many connections will be available, but are these connections specific to that application, or shared across all of them?
As I suspected, all the applications connecting to the same SQL instance must change their connection strings to allow more than Entity Framework's default of 100 connections.
Scenario:
Application 1 and 2 have MaxPoolSize = 1000.
Application 3 has MaxPoolSize = 100.
Application 1 and 2 are making connections to SQL; the current connection count is 200... Everything works fine.
Application 3 tries to make a connection to SQL. Its configuration tells SQL that the max pool size is only 100, so SQL blocks the pooling until connections time out and the count drops below 100.
In other words, you have to make sure MaxPoolSize is changed from the default of 100 in every application to avoid this problem; in this case I changed all of them to allow a maximum of 1,000 connections.
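For reference, this is roughly what the setting looks like in an app.config connection string. The server, database, and context names here are placeholders, not taken from the question:

```xml
<connectionStrings>
  <add name="MyDbContext"
       connectionString="Data Source=MYSERVER;Initial Catalog=MyDatabase;Integrated Security=True;Max Pool Size=1000"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

If Max Pool Size is omitted, SqlClient defaults to 100 pooled connections per distinct connection string.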
Related
My application is built on Java and uses a SQL Server database.
Roughly 50 people use the application, but I see up to 900 connection objects in the database, and the DB goes down during peak times. How can I use the connection objects effectively and keep the number of connections under control?
To control the DB connections, follow the points below.
Check that every connection your code creates is destroyed after use. Every DB transaction should follow the pattern below; if you are using an ORM, make sure you open and close connections as the ORM provider recommends.
openDBConnection();
try {
    performDBTransaction();
} finally {
    closeDBConnection();
}
Check whether you have enabled connection pooling. If so, check the minimum-connections property in the connection string or ORM configuration: whenever a single transaction runs against SQL Server, the pool creates as many connections as that property specifies, so perhaps your application has 900 minimum connections configured.
If your application is a web app or communicates with web services, check whether you have enabled web gardening (multiple processes handling web requests on the same server) or web farming (multiple servers handling requests behind a load balancer). If one or both exist, be aware of the following formula for the number of connections created against SQL Server: total pooled connections = number of processes (web gardening number) × number of web/app servers (web farming number) × minimum-connections value in the connection string.
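As a quick sanity check of that formula, here is the arithmetic with purely illustrative numbers (three worker processes, four servers, and a minimum pool size of 75 are examples, not values from the question):

```java
public class PooledConnectionEstimate {
    // Formula from above: processes (web gardening) * servers (web farming) * min pool size.
    static int totalPooledConnections(int processesPerServer, int servers, int minConnections) {
        return processesPerServer * servers * minConnections;
    }

    public static void main(String[] args) {
        // 3 processes * 4 servers * 75 minimum connections = 900 pooled connections,
        // which would explain seeing ~900 connection objects with only 50 users.
        System.out.println(totalPooledConnections(3, 4, 75)); // prints 900
    }
}
```

This shows how a modest minimum-connections setting multiplies out quickly once gardening and farming are involved.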
For Effective Use:
Connection pooling is already an effective way to use connections, but the right pool size depends on your requirements and on your DB server's hardware capabilities. If you are sure that only 50 users will be active, each performing one transaction per unit of time, then you simply need to configure the connection pool so that only 50 connections can be created.
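To make the sizing idea concrete, here is a minimal sketch of a fixed-size pool. It is not a real JDBC pool (the pooled type and factory are stand-ins); in practice you would use an established library, but the capping behavior is the point:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal fixed-size pool: at most `size` connections ever exist.
// Callers block until one is free, which caps database load at the pool size.
public class SimplePool<C> {
    private final BlockingQueue<C> idle;

    public SimplePool(int size, Supplier<C> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // create all connections up front
        }
    }

    public C acquire() {
        try {
            return idle.take(); // blocks when all connections are in use
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("interrupted while waiting for a connection", e);
        }
    }

    public void release(C connection) {
        idle.offer(connection); // return the connection for reuse
    }
}
```

With 50 expected concurrent users, `new SimplePool<>(50, factory)` guarantees the database never sees more than 50 open connections, no matter how many requests arrive.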
I'm relatively new to this, but I have a Tomcat cluster set up (using mod_proxy from httpd) with session replication (separate Redis server) for fault tolerance.
I have a couple of questions about this setup:
My application (spring/hibernate) has a different database per user. So the problem here is that the data source (using spring along with hibernate for persistence) is created at Tomcat level. Thus, whatever connection pooling I do will be at server level.
As per the cluster configuration the Tomcat instances will create their own Connection Pool.
I'd like to know if connection pooling is possible at a cluster level using Tomcat i.e. is there a way to make sure that all the servers in the cluster are using the shared Connection Pool?
I do not want to configure a DataSource on every Tomcat instance because of performance issues. Before the cluster setup, the application was deployed on a single server and the DataSource was configured such that it allowed only a few (50) connections in a connection pool per DataSource.
Now in a clustered environment, I cannot afford to create or split those number of connections on every Tomcat, and also dynamic registration of nodes will create further problems. I'd also like to know is there some alternative solution to this problem if connection pooling is not possible or inefficient?
I'm going to handle your questions in reverse order, since the second one is simpler.
Database connection pooling in Tomcat cannot be configured cluster-wide: you have to configure a separate pool for each node in the cluster. But this doesn't have to be bad news... there's nothing wrong with configuring each node to have 5 or 10 or 100 connections in its own pool.
It's true, you might end up with so many users connecting to the database at once that they overwhelm it, but that could also happen with a single node. There isn't anything conceptually different about multiple nodes that wouldn't also be true for a single node.
The key is to make sure that your cluster balances users appropriately. If you have a limit of e.g. 5 database connections per node, you don't want 100 users ending up on one node while the other nodes have only 5 users each: the popular node's 100 users would have to share its 5 connections, while on the other nodes each user gets a connection all to themselves.
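One simple way to budget per-node pools is to divide the database's global connection ceiling across the nodes. All numbers below are illustrative, not from the question:

```java
public class ClusterPoolSizing {
    // Divide the database-side connection limit evenly across cluster nodes,
    // so the pools can never collectively exceed what the database allows.
    static int perNodePoolSize(int dbMaxConnections, int nodes) {
        return dbMaxConnections / nodes;
    }

    public static void main(String[] args) {
        int dbMaxConnections = 300; // database-side hard limit (example value)
        int nodes = 4;              // Tomcat instances in the cluster (example value)
        System.out.println(perNodePoolSize(dbMaxConnections, nodes)); // prints 75
    }
}
```

Dynamic node registration complicates this, since the safe per-node size shrinks as nodes join, but it gives a starting budget.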
Back to your first item, which is more complicated. If you have a separate database per user, then connection-pooling is an impossible thing to accomplish because you will absolutely have to establish a new connection for every user every time. Those connections aren't poolable, at least not without being quite careful about it. It sounds like you have an architectural issue that you might have to solve before you can identify a technical solution to that issue.
I'm new to Azure SQL Database, as this is my first project migrating from an on-premises setup to everything on Azure. The first thing that got my concern is that there is a limit on concurrent logins to Azure SQL Database, and if the load exceeds that number, it starts dropping subsequent requests. For the current service tier (S0), it caps at 60 concurrent logins, which I have already reached multiple times, as I've seen a few SQL failures in my application log.
So my question is:
Is it normal to exceed that number of concurrent logins? I'd like to get an idea of whether my application has an issue or my current service tier is too low.
I've looked through our database code to make sure we are not leaving database connections open. We use Enterprise Library, and every use of DBCommand and IDataReader is wrapped in a using block, so they are disposed once they go out of scope.
Note: my web application consists of a front-end app with multiple web services supporting the underlying features, and each service connects to the same database for a specific collection of data. That makes me think hitting 60 concurrent logins might be normal: a single page or action can involve multiple calls behind the scenes, and thus multiple connections to the database from a few APIs, so with more than one user on the application, 60 is easy to reach.
Again, with the on-premises setup in the past, I never noticed this kind of limitation.
Thanks.
To answer your question, the limit is 60 concurrent logins on an S0.
http://gavinstevens.com/2016/11/30/sql-server-vs-azure-sql/
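Since dropped logins at the cap surface as transient failures, a common mitigation (independent of raising the tier) is to retry the operation with a back-off. This is a generic sketch; the operation passed in is a stand-in for your actual database call, not code from the question:

```java
import java.util.function.Supplier;

public class TransientRetry {
    // Retries an operation a few times with linearly increasing delay,
    // a common pattern when a database briefly rejects logins at a concurrency cap.
    public static <T> T withRetry(Supplier<T> operation, int maxAttempts, long baseDelayMs) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure and fall through to the back-off
            }
            if (attempt < maxAttempts) {
                try {
                    Thread.sleep(baseDelayMs * attempt); // linear back-off before the next try
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw last; // all attempts failed: surface the last error
    }
}
```

A call like `TransientRetry.withRetry(() -> runQuery(), 5, 200)` absorbs short bursts above the login cap instead of failing the request outright.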
If I have a WCF service hosted in an Azure webrole, how many small machine instances would I need to spin up so that potentially 1000 clients could be connected at once?
Processing power is not the issue I'm concerned about, just the maximum number of active connections that Azure will allow me to have open at any given moment.
We have a service method that will take some time to complete (say 20-30 seconds), and we need to know roughly how many open connections Azure will allow per small instance so we can ensure 1,000 people may connect at once.
Thanks!
The limit @Jordan refers to is the number of IIS threads that can be active. Following @Jordan's link, you will see that the IIS threads get passed off to .NET threads while .NET is handling them.
Your .NET threads are effectively limited by the resources on the system, although 1000 might be OK. Better would be to pass the requests off to asynchronous handlers (if you can - I don't know what you are trying to do), which leaves only the maximum number of open TCP connections Windows Server 2008 R2 will allow; that should not be a problem for 1000 connections.
Existing answers mostly cover it, but a different type of answer is that Windows Azure doesn't care how many connections you have. Your question then becomes one about Windows and IIS/.NET/WCF or whatever technology you choose to use.
Looks like for .NET 3.5 it was 12, and in .NET 4.0 it's 5,000. Not sure how they decided on those numbers.
source: http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/a6a4213b-b402-4a6c-940c-10937e34d9b5/
There is no limit with an Azure web role - the only limits are whatever you've purchased: things like CPU, RAM, and bandwidth.
I'm using PHP's PDO layer for data access in a project, and I've been reading up on it and seeing that it has good innate support for persistent DB connections. I'm wondering when/if I should use them. Would I see performance benefits in a CRUD-heavy app? Are there downsides to consider, perhaps related to security?
If it matters to you, I'm using MySQL 5.x.
You could use this as a rough "ruleset":
YES, use persistent connections, if:
There are only a few applications/users accessing the database, i.e. you will not end up with 200 open (but probably idle) connections because 200 different users share the same host.
The database is running on another server that you are accessing over the network
A single application accesses the database very often
NO, don't use persistent connections, if:
Your application only needs to access the database 100 times an hour.
You have many webservers accessing one database server
You're using Apache in prefork mode. It uses one connection for each child process, which can ramp up fairly quickly. (via @Powerlord in the comments)
Using persistent connections is considerably faster, especially if you are accessing the database over a network. It makes less of a difference if the database is running on the same machine, but it is still a little bit faster. However - as the name says - the connection is persistent, i.e. it stays open even when it is not used.
The problem with that is that, in its default configuration, MySQL only allows 1,000 parallel "open channels"; after that, new connections are refused (you can tweak this setting). So if you have, say, 20 web servers with 100 clients each, and every one of them makes just one page access per hour, simple math shows that you'll need 2,000 parallel connections to the database. That won't work.
Ergo: Only use it for applications with lots of requests.
In brief, my experience says that persistent connections should be avoided as far as possible.
Note that mysql_close is a no-operation (no-op) for connections created with mysql_pconnect. This means a persistent connection cannot be closed by the client at will. Such a connection is closed by the MySQL server only when no activity occurs on it for longer than wait_timeout. If wait_timeout is a large value (say 30 minutes), the MySQL server can easily reach its max_connections limit, at which point it will not accept any further connection requests. This is when your pager starts beeping.
To avoid reaching the max_connections limit, using persistent connections requires careful balancing of the following variables...
Number of Apache processes on one host
Total number of hosts running Apache
wait_timeout variable on the MySQL server
max_connections variable on the MySQL server
Number of requests served by one Apache process before it is re-spawned
So, please use persistent connections only after enough deliberation. You may not want to invite complex runtime issues for the small gain you get from persistent connections.
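The worst case implied by those variables is easy to estimate: every Apache child process on every host can hold one idle persistent connection until wait_timeout expires. The numbers below are illustrative, not measurements:

```java
public class PersistentConnectionBudget {
    // Worst case for persistent connections: every Apache process on every host
    // holds one idle connection until wait_timeout closes it.
    static int worstCaseConnections(int apacheProcessesPerHost, int hosts) {
        return apacheProcessesPerHost * hosts;
    }

    public static void main(String[] args) {
        int processesPerHost = 100; // Apache children per host (example value)
        int hosts = 12;             // hosts running Apache (example value)
        int maxConnections = 1000;  // MySQL max_connections (example value)

        int worstCase = worstCaseConnections(processesPerHost, hosts);
        // 100 * 12 = 1200 > 1000: max_connections can be exhausted by idle connections alone.
        System.out.println(worstCase > maxConnections
                ? "max_connections can be exhausted"
                : "within budget");
    }
}
```

If the worst case exceeds max_connections, either lower the process counts, shorten wait_timeout, or drop persistent connections.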
Creating connections to the database is a fairly expensive operation. Persistent connections are a good idea. In the ASP.NET and Java worlds, we have "connection pooling", which is roughly the same thing, and also a good idea.
IMO, the real answer to this question is whatever works best for your app. I would recommend you benchmark your app using both persistent and non-persistent connections.
Maggie Nelson @ Objectively Oriented posted about this in August, and Robert Swarthout made an accompanying post with some hard numbers. Both are pretty good reads.
In my humble opinion:
When using PHP for web development, most of your connections will only "live" for the life of the page execution. A persistent connection is going to cost you a lot of overhead, as you'd have to put it in the session or some such thing.
99% of the time a single non-persistent connection that dies at the end of the page execution will work just fine.
The other 1% of the time, you probably should not be using PHP for the app, and there is no perfect solution for you.
In general, you'll need to use non-persistent connections sometimes, and it's nice to have a single pattern to apply to db connection design (as long as there's relatively little upside to using persistent connections in your context.)
I was going to ask this same question but rather than ask the same question again I'll just add some information that I've found.
Are PHP persistent connections evil?
Persistent Database Connections
It is also worth noting that the newer mysqli extension does not even include the option to use persistent database connections.
I'm still using persistent connections at the moment but plan to switch to non-persistent in the near future.