When to open and close the connection to DB - sql

I'm coding in Java EE and I have a class that manages all the interactions with my DB.
I was asking myself when I should open and close the connection to the DB.
Is it better to open and close it in each method?
Or is it better to open it in the constructor and close it when I have finished using the class?
Thanks

There is no generic solution; the right decision depends on the concrete task. Keep the following in mind:
Every connection to the database costs application time. If your method is called very often, the application will waste a lot of time on connect and disconnect work, and much more so over a slow network. If the method is called only rarely, this matters far less.
If you open the connection in the constructor and then hold it for a long time without any activity, it may be dropped. That is not guaranteed to happen, but network issues or the database's connection policy can cause it, so before every query the connection should be validated with a fast and simple statement such as SELECT 'some random text' FROM dual.
Database resources are not infinite and the total number of connections is limited. The limit can be very large, but it still exists, so if your application can run many instances in parallel (hundreds or thousands), permanent connections may hit that limit.
If you have no information about how the method will be used, I advise a time-limited persistent connection: open it on the first query and close it from a timer if no queries have gone through it for some interval, say 3-5 seconds (a minimal sketch of this idea follows below). Of course, every caller should check the connection status before querying, reopen it if it is closed, and reset the closing timer. And don't forget to close the connection explicitly when the object is destroyed.
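To illustrate that last point only, here is a minimal Java sketch of such a time-limited connection holder. The class name, the 5-second idle timeout, and the use of plain DriverManager are my own assumptions, not something stated in the answer.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: keeps one connection open and closes it after 5 seconds of inactivity.
public class IdleTimeoutConnection implements AutoCloseable {
    private final String url; // JDBC URL, supplied by the caller
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private Connection connection;
    private long lastUsed;

    public IdleTimeoutConnection(String url) {
        this.url = url;
        // Check once per second whether the connection has been idle for more than 5 seconds.
        timer.scheduleAtFixedRate(this::closeIfIdle, 1, 1, TimeUnit.SECONDS);
    }

    // Callers always go through this method, which validates and reopens the connection if needed.
    public synchronized Connection get() throws SQLException {
        if (connection == null || connection.isClosed() || !connection.isValid(1)) {
            connection = DriverManager.getConnection(url);
        }
        lastUsed = System.currentTimeMillis();
        return connection;
    }

    private synchronized void closeIfIdle() {
        if (connection != null && System.currentTimeMillis() - lastUsed > 5_000) {
            try { connection.close(); } catch (SQLException ignored) { }
            connection = null;
        }
    }

    // Explicitly release everything when the owner is done, as the answer recommends.
    @Override
    public synchronized void close() throws SQLException {
        timer.shutdown();
        if (connection != null) {
            connection.close();
        }
    }
}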

I would argue that you shouldn't be managing it in code. Every EE server I've worked on has connection pooling to remove the need for you to care about this. Basically you "open" the connection when you need and "close" it when you need. Those words are in quotes because it is up to the pool to manage when a connection is truly opened and closed.
From a design perspective, use the connection only when you need it. Opening it at object construction doesn't make sense: what if a method in the class doesn't get called for an hour? What is the purpose of having the connection open when you don't need it? So if a method needs a connection, open and close it in that method.
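As a rough sketch of that pattern (the JNDI name jdbc/MyDS, the DAO class, and the query are placeholders I've made up), a method borrows a connection from the container-managed pool and returns it when done:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CustomerDao {
    public String findName(int id) throws Exception {
        // Look up the pool the EE server manages; "jdbc/MyDS" is a made-up JNDI name.
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDS");
        // "Open" and "close" around the work that needs it; the pool decides when a
        // physical connection is really opened or closed.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT name FROM customer WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}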


Golang sql database open and close

The documentation for Go's database/sql package says it is rare to close the database with db.Close because it is meant to be shared by many goroutines. So which is better when we have 100 functions that query the same database:
1. Open the database inside each function.
2. Open the database only once and use the same connection for all 100 functions.
Option 1 is easier because if one fails, the other 99 can still work, and there is no need to pass a database connection argument around. But performance-wise, which one is better?
You missed an important part of what the documentation says:
The returned DB is safe for concurrent use by multiple goroutines and maintains its own pool of idle connections. Thus, the Open function should be called just once. It is rarely necessary to close a DB.
(emphasis mine)
So your option #2 doesn't actually make sense. The connections are pooled, so "use the same connection for all 100 functions" doesn't apply. Option #1 is also a waste of time: just call Open once, as the documentation states, and then call Ping to make sure everything is fine (Ping forces an actual attempt to connect to the database, regardless of driver).
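A minimal Go sketch of that advice follows; the lib/pq driver, the DSN, and the users table are placeholders I've chosen for illustration, not part of the original answer.

package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // example driver; swap in whichever driver you actually use
)

var db *sql.DB // one shared handle (it is a pool) for the whole program

func main() {
	var err error
	// Open is called exactly once; it prepares the pool and may not connect yet.
	db, err = sql.Open("postgres", "user=app dbname=app sslmode=disable") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	// Ping forces a real connection attempt so configuration errors surface immediately.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	n, err := countUsers()
	if err != nil {
		log.Fatal(err)
	}
	log.Println("users:", n)
}

// One of the "100 functions": it uses the shared pool instead of opening its own database.
func countUsers() (int, error) {
	var n int
	err := db.QueryRow("SELECT COUNT(*) FROM users").Scan(&n)
	return n, err
}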

Connection Pooling with VB.NET and orphaned connections

I am a DBA, not a developer, and could use some insight. The development staff is using VB.NET to create web based applications with connections to a DB2 database. Please assume that the connection string in the web.config file is coded correctly.
I can see the number of orphaned connections from the web servers grow over time. By orphaned I mean that there is no activity associated with the connection for hours, yet I can see other connections being created and destroyed every couple of seconds.
I suspect that the connections are not being properly closed, but there are two different groups looking at the problem and so far they haven't turned up anything. By the end of the day I can have hundreds of these connections, all of which are cleared when the application pool is reset every night. (This means it is NOT a database problem.)
Is there a coding technique to ensure a connection is closed using vb.net on IIS v7+?
Am I seeing the symptom of another problem on IIS?
You need to have the developers implement the Dispose pattern, which is facilitated by the Using statement in VB.NET, like this (this is pertinent to SQL Server, but whatever connection object you are using for DB2 should work as well):
Using theConnection As New SqlConnection(connectionString) ' connection string from web.config
    theConnection.Open()
    ' Command, parameter logic goes here
End Using
Wrapping the connection object inside a Using block guarantees that the connection is closed and properly disposed, even if the code inside the Using block throws an exception.
Sounds like a code-review is in order for whoever is in charge of the developers.

Why is one public OleDbConnection deprecated? Alternative to solve the bug: too many connections opened

I have to work with a project made by another developer: a WinForms project written in Visual Basic, with MS Access as the database and several OleDbConnections. There is a bug: sometimes the application can't open an OleDbConnection because the maximum number of connections on the database has been reached. I know the best way to use connections is this:
Using cn As New OleDbConnection(s)
    ...
    cn.Close()
End Using
But in the project there are many classes that work with the db, and in many of these classes there are OleDbConnections with "Friend" visibility that are opened and closed at different times. For this reason it's impossible to put all the OleDbConnections in Using constructs, and it's very hard to find which operation "forgets" to close one of these OleDbConnections.
A possible solution could be to use a single public OleDbConnection and to check, before opening it, whether it is already open.
But someone told me that is a very bad practice. I suppose the reason is performance, but I don't know exactly.
Can you tell me why a single public OleDbConnection is so frowned upon?
Do you have an "easy" solution for my problem?
Thank you,
Pileggi
From your description, I see a couple of possible issues that could result in your problem:
Nested connections:
You open multiple connections inside each other.
Opening/releasing connections too fast:
As David-W-Fenton mentioned, with Access, every time you open or close a single connection the lock file is created or removed. This operation is quite slow, and if you rapidly open and close the database within your application (executing lots of atomic queries), you may run into this issue.
A few possible ways to investigate and solve the issue:
Trace all open/close calls
Add some debug traces that show every time you open and close a connection.
It will allow you to detect nested connections and where your connection pool is being wasted.
Force connection pooling
An easy 'fix' may be to explicitly set connection pooling in your connection string. It should be the default behaviour, so maybe it won't do anything to solve your problem, but it's so simple that there is no reason not to try it:
OLE DB Services=-1
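For example, a full connection string with all OLE DB services requested might look like this (the provider and path are placeholders for whatever the project actually uses):
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\path\to\db.mdb;OLE DB Services=-1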
Use a connection manager class to create/release connections for you.
Replace all the explicit creations of new OleDbConnections and all the close calls with calls to your own code.
This would allow you to always re-use a single existing connection throughout your application, and to quickly make tweaks for the whole of your app by centralising the behaviour in a single place.
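A minimal VB.NET sketch of such a manager is shown below; the module name, connection string, and file path are placeholders, and a real implementation would also need to consider threading if the application uses it.

Imports System.Data
Imports System.Data.OleDb

' Hypothetical central place that hands out the one shared connection.
Public Module ConnectionManager
    Private ReadOnly _connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\path\to\db.mdb"
    Private _connection As OleDbConnection

    ' Callers ask for a connection here instead of creating OleDbConnections themselves.
    Public Function GetConnection() As OleDbConnection
        If _connection Is Nothing Then
            _connection = New OleDbConnection(_connectionString)
        End If
        If _connection.State <> ConnectionState.Open Then
            _connection.Open()
        End If
        Return _connection
    End Function

    ' Call this once when the application shuts down.
    Public Sub Shutdown()
        If _connection IsNot Nothing Then
            _connection.Dispose()
            _connection = Nothing
        End If
    End Sub
End Module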
So why is holding a single connection open generally frowned upon?
Generally, you should not keep connections open throughout your application, as they force the database server to keep resources available for you and they decrease the number of clients that can connect (there is always a limited number of connections available).
For Access though, a file-based database with no server component, keeping a single connection open is actually preferable because of the delay associated with opening new connections (creation of the lock file). Since Access is not meant to be used with a large number of concurrent users, the resource cost of keeping the connection open is not significant enough to be an issue.
From simple tests, it can be shown that keeping a connection always open allows subsequent connections to open about 10x faster!
The OleDb driver does connection pooling for you, so it is able to re-use connections when they are freed.
By keeping your connections and database operations small and contained, you would be less likely to run into concurrency issues when using threads. Keeping a global connection may become an issue if you are executing multiple operations using the same pipeline to the database.
Just adding some information that has worked successfully for me for years (it is somewhat similar to what David-W-Fenton suggests).
First, an OleDbConnection to Microsoft Access (MDB, Jet) does not use connection pooling. As Microsoft states in KB191572:
Connections that use the Jet OLE DB providers and ODBC drivers are not
pooled because those providers and drivers do not support pooling.
Regarding connection pooling, there is also this blog post from Ivan Mitev that states:
So what does this mean? It is apparent that that the presence of an
actively opened connection made the test with multiple connection
closing and opening finish a lot faster (2-3 times). The only possible
explanation for me is that the connection pool is released each time
there are no active connections. I have to make further investigations
and read something like Pooling in the Microsoft Data Access
Components. Or maybe hold a single opened connection just for the sake
of keeping the pool alive. This would be ugly, but still it is a good
enough workaround! If anyone has a better idea, please share it.
And Microsoft notes in MSDN:
The ADO Connection object implicitly uses IDataInitialize. However,
this means your application needs to keep at least one instance of a
Connection object instantiated for each unique user—at all times.
Otherwise, the pool will be destroyed when the last Connection object
for that string is closed.
Based on all this and my own tests, my solution to "simulate" connection pooling even with Microsoft Access databases roughly follows these steps:
Open one OleDbConnection to the Access database as early as possible in application lifecycle.
Do your normal SQL queries, disposing OleDbConnections as early as possible, just like recommended.
Dispose that one always-open OleDbConnection as late as possible in application lifecycle.
This sped up my applications (mostly WinForms) tremendously.
Please note that this also works for SQLite, which also seems not to support connection pooling.
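As an illustration of those three steps (the class name, provider, file path, and query below are placeholders I've made up), the "keeper" connection can simply live for the lifetime of the application:

Imports System.Data.OleDb

Public Class AppDatabase
    ' The always-open "keeper" connection whose only job is to stay open.
    Private Shared _keeper As OleDbConnection
    Private Shared ReadOnly _connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\path\to\db.mdb"

    ' Step 1: call this as early as possible in the application lifecycle (e.g. before the main form loads).
    Public Shared Sub Startup()
        _keeper = New OleDbConnection(_connectionString)
        _keeper.Open()
    End Sub

    ' Step 2: normal work still uses short-lived connections that are disposed right away.
    Public Shared Function CountCustomers() As Integer
        Using cn As New OleDbConnection(_connectionString)
            cn.Open()
            Using cmd As New OleDbCommand("SELECT COUNT(*) FROM Customers", cn)
                Return CInt(cmd.ExecuteScalar())
            End Using
        End Using
    End Function

    ' Step 3: call this as late as possible (e.g. on application exit).
    Public Shared Sub Shutdown()
        If _keeper IsNot Nothing Then
            _keeper.Dispose()
            _keeper = Nothing
        End If
    End Sub
End Class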

Run a SQL command in the event my connection is broken? (SQL Server)

Here's the sequence of events my hypothetical program makes...
Open a connection to server.
Run an UPDATE command.
Go off and do something that might take a significant amount of time.
Run another UPDATE that reverses the change in step 2.
Close connection.
But oh-no! During step 3, the machine running this program literally exploded. Other machines querying the same database will now think that the exploded machine is still working and doing something.
What I'd like to do is, just as the connection is opened but before any changes have been made, tell the server that should this connection close for whatever reason, it should run some SQL. That way, I can be sure that if something goes wrong, the closing update will still run.
(To pre-empt the answer, I'm not looking for table/record locks or transactions. I'm not doing resource claims here.)
Many thanks, billpg.
I'm not sure there's anything built in, so I think you'll have to do some bespoke stuff...
This is totally hypothetical and straight off the top of my head, but:
Take the SPID of the connection you opened and store it in a temp table, along with the text of the reversal update.
Use a background process (either SSIS or something else) to monitor the temp table and check that the SPID is still present as an open connection.
If the connection dies, the background process can execute the stored revert command.
If the connection completes properly, the SPID can be removed from the temp table so that the background process no longer reverts it when the connection closes.
Comments or improvements welcome!
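To make that concrete, here is a rough T-SQL sketch of the idea; the table and column names are invented for illustration, and the monitor query would need to run on a schedule (for example from an Agent job) rather than once.

-- Invented bookkeeping table: one row per connection that still owes a reversal.
CREATE TABLE dbo.PendingRevert (
    id         INT IDENTITY(1, 1) PRIMARY KEY,
    spid       INT           NOT NULL,
    revert_sql NVARCHAR(MAX) NOT NULL
);

-- The client registers itself right after opening the connection and running the first UPDATE.
INSERT INTO dbo.PendingRevert (spid, revert_sql)
VALUES (@@SPID, N'UPDATE dbo.SomeTable SET flag = 0 WHERE id = 42'); -- placeholder revert text

-- The background monitor: run the stored revert for any SPID that is no longer connected.
DECLARE @id INT, @sql NVARCHAR(MAX);
SELECT TOP (1) @id = p.id, @sql = p.revert_sql
FROM dbo.PendingRevert AS p
WHERE NOT EXISTS (SELECT 1 FROM sys.dm_exec_sessions AS s WHERE s.session_id = p.spid);

IF @id IS NOT NULL
BEGIN
    EXEC sp_executesql @sql;
    DELETE FROM dbo.PendingRevert WHERE id = @id;
END

-- The client removes its own row once step 4 (the reversing UPDATE) has completed normally.
DELETE FROM dbo.PendingRevert WHERE spid = @@SPID;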
I'll expand on my comment. In general, I think you should reconsider your approach. All database access code should open a connection, execute a query, then close the connection, relying on connection pooling to mitigate the expense of opening lots of database connections.
If we are talking about a single SQL command whose rows should not change while it operates, that is a problem best handled by the transaction isolation level; for that you might investigate the Snapshot isolation level in SQL Server 2005+.
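For reference, enabling and using snapshot isolation looks roughly like this (the database and table names are placeholders):

-- One-time database setting.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Per-transaction usage: the reads see a consistent snapshot and do not block writers.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT * FROM dbo.SomeTable WHERE id = 42;
COMMIT;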
If we are talking about a series of queries that are part of a long running transaction, that is more complicated and can be handled via storage of a transaction state which other connections read in order to determine whether they can proceed. Going down this road, you need to provide users with tools where they can cancel a long running transaction that might no longer be applicable.
Assuming it's even possible... this will only help you if the client machine explodes during the transaction. Also, there's a risk of false positives - the connection might get dropped for a few seconds due to network noise.
The approach that I'd take is to start a process on another machine that periodically pings the first one to check if it's still on-line, then takes action if it becomes unreachable.

How to find unclosed connection? Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding

I've had this problem before and found that basically I've got a connection that I'm not closing quickly enough (leaving connections open and waiting for garbage collection isn't really a best practice).
Now I'm getting it again but I can't seem to find where I'm leaving my connections open. By the time I see the error, the database has cleared out the old connections, so I can't see each locked-up connection's last command (which was very helpful the last time I had this issue).
Any idea how I could instrument my code or database to track what's going on so I can find my offending piece of code?
The error you are getting doesn't really point to a connection that is left open; it is more likely that there is a query taking longer than the application expects.
You can increase the time it waits for a response, and you can use SQL to find which queries are the most taxing.
Hopefully you have one data access layer class, instead of a whole bunch of classes each creating its own connection, right? What language are you using? If you're using C#, the biggest cause of this problem is DataReaders being returned to the upper layers. Most likely some client class is not closing the DataReader it received from your DAL class, leaving the connection open/locked for who knows how long. Track down the DataReaders you're returning and make sure your client classes are closing/disposing of them properly.
I'd also start thinking about redesigning your data access layer to implement the Dispose pattern and possibly return POCOs instead of DataTable, DataSet, or DataReader objects.
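To illustrate the DataReader point (the class, table, and column names are invented), one common approach in .NET is to tie the connection's lifetime to the reader with CommandBehavior.CloseConnection and make the caller dispose the reader; a VB.NET sketch:

Imports System.Data
Imports System.Data.SqlClient

Public Class CustomerDal
    Private ReadOnly _connectionString As String

    Public Sub New(connectionString As String)
        _connectionString = connectionString
    End Sub

    ' The DAL opens the connection but ties its lifetime to the reader it returns.
    Public Function GetCustomers() As SqlDataReader
        Dim cn As New SqlConnection(_connectionString)
        cn.Open()
        Dim cmd As New SqlCommand("SELECT id, name FROM Customers", cn)
        ' Closing the reader will now also close the underlying connection.
        Return cmd.ExecuteReader(CommandBehavior.CloseConnection)
    End Function
End Class

Public Module Example
    ' The caller must dispose the reader, otherwise the connection stays open and locked.
    Public Sub PrintCustomers(dal As CustomerDal)
        Using reader As SqlDataReader = dal.GetCustomers()
            While reader.Read()
                Console.WriteLine(reader.GetString(1))
            End While
        End Using
    End Sub
End Module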