SQLite, open one permanent connection or not? - vb.net

I have been under the impression that database connections are best opened, used, and closed. However, with SQLite I'm not sure that this applies. I run all my queries inside a Using statement for the connection, so as I understand it I open a connection and then close it each time. When it comes to SQLite and optimal usage, is it better to open one permanent connection for the duration of the program, or should I continue with the method I currently use?
I am using the database in a VB.net Windows program with a fairly large database of about 2 GB.
My current connection method, as an example:
Using oMainQueryR As New SQLite.SQLiteCommand
    oMainQueryR.CommandText = "SELECT * FROM CRD"
    Using connection As New SQLite.SQLiteConnection(connectionString)
        Using oDataSQL As New SQLite.SQLiteDataAdapter
            oMainQueryR.Connection = connection
            oDataSQL.SelectCommand = oMainQueryR
            connection.Open()
            oDataSQL.FillSchema(crd, SchemaType.Source)
            oDataSQL.Fill(crd)
            connection.Close() ' redundant: End Using disposes (and closes) the connection
        End Using
    End Using
End Using

As with all things database, it depends. In this specific case of SQLite, there are two "depends" you need to look at:
Are you the only user of the database?
When are implicit transactions committed?
For the first item, you probably want to open and close different connections frequently if there are other users of the database, or if it's at all possible that more than one process will be hitting your SQLite database file at the same time.
For the second item, I'm not sure how SQLite specifically behaves. Some database engines don't commit implicit transactions until the connection is closed. If this is the case for SQLite, you probably want to be closing your connection a little more often.
The idea that connections should be short-lived in .NET applies mainly to Microsoft SQL Server, because the .NET provider for SQL Server is also able to take advantage of a feature known as connection pooling. Outside of SQL Server this advice is not entirely without merit, but it's not as much of a given.
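For example, with the SQL Server provider, pooling is on by default and keyed to the connection string, which is what makes the usual open-late/close-early pattern cheap. A minimal C# sketch (the server and database names are placeholders):

using System.Data.SqlClient;

// Pooling is on by default and keyed to the exact connection string.
var cs = "Server=.;Database=MyDb;Integrated Security=true;Max Pool Size=50";

using (var conn = new SqlConnection(cs))
{
    conn.Open();   // borrows a physical connection from the pool (or creates one)
}                  // Dispose() returns it to the pool rather than closing it

using (var conn = new SqlConnection(cs))
{
    conn.Open();   // typically reuses the same physical connection as above
}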

If it is a local application being used by only one user, I think it is fine to keep one connection open for the life of the application.
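If you go that route, the pattern is simply to open the connection once at startup and hand it out everywhere. A C# sketch using System.Data.SQLite (the file name and class are illustrative):

using System.Data;
using System.Data.SQLite;

// One connection, opened at startup and reused for the life of the app.
static class Db
{
    private static readonly SQLiteConnection Conn =
        new SQLiteConnection("Data Source=app.db"); // illustrative path

    public static SQLiteConnection Connection
    {
        get
        {
            if (Conn.State != ConnectionState.Open)
                Conn.Open();
            return Conn;
        }
    }
}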

I think with most databases the "best used and closed" idea comes from the perspective of saving memory, by ensuring you only have the minimum number of connections needed open.
In reality, opening a connection can carry a large amount of overhead, so it should only be done when needed. This is why managed server infrastructure (WebLogic, etc.) promotes the use of connection pooling: you have N connections that are usable at any given time, so you never "waste" resources, but you also aren't left with the responsibility of managing them at a global level.
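The idea behind such a pool is small enough to sketch. A toy illustration in C# (not production code): at most N connections ever exist, and callers borrow and return them instead of opening fresh ones.

using System;
using System.Collections.Concurrent;
using System.Threading;

// Toy pool: at most maxSize connections exist at once.
class TinyPool<T>
{
    private readonly ConcurrentBag<T> _idle = new ConcurrentBag<T>();
    private readonly SemaphoreSlim _slots;
    private readonly Func<T> _factory;

    public TinyPool(int maxSize, Func<T> factory)
    {
        _slots = new SemaphoreSlim(maxSize, maxSize);
        _factory = factory;
    }

    public T Rent()
    {
        _slots.Wait();                                   // block while all are in use
        return _idle.TryTake(out var conn) ? conn : _factory();
    }

    public void Return(T conn)
    {
        _idle.Add(conn);
        _slots.Release();
    }
}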

Related

SQL connection pooling in Azure Functions

In traditional web servers you would have a SQL connection pool and persistent connections to the database.
But I am thinking of creating my entire application as Azure Functions.
Will the functions create a new connection to the SQL server every time they are called?
Azure Functions doesn't currently have SQL as an option for an input or output binding, so you'd need to use the SqlClient classes directly to make your connections and issue your queries.
As long as you follow best practices of disposing your SQL connections (see this for example: C# SQLConnection pooling), you should get pooling by default.
Here's a full example of inserting records into SQL from a function: https://www.codeproject.com/articles/1110663/azure-functions-tutorial-sql-database
Although this is already answered, I believe this answer can provide more information.
If you are not using a connection pool, then you are probably creating a connection every time the function is invoked. Creating a connection has an associated cost, so for warmed-up instances it is recommended to use a connection pool. The maximum number of connections should also be chosen cautiously, since there can be a couple of function apps running in parallel (depending on your plan).
Here is an example of what pooled connection use can look like:
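(A hedged C# sketch, assuming the WebJobs SDK attributes; the queue name, table, and app setting are hypothetical. The connection string is resolved once per warm instance, while each invocation opens and disposes a pooled connection.)

using System;
using System.Data.SqlClient;
using Microsoft.Azure.WebJobs;

public static class InsertWorkItem
{
    // Resolved once per warm instance and reused across invocations.
    private static readonly string ConnStr =
        Environment.GetEnvironmentVariable("SqlConnectionString");

    [FunctionName("InsertWorkItem")]
    public static void Run([QueueTrigger("work-items")] string item)
    {
        // Open/dispose per invocation; ADO.NET pooling means this usually
        // reuses a physical connection instead of opening a new one.
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO WorkItems (Payload) VALUES (@p)", conn))
        {
            cmd.Parameters.AddWithValue("@p", item);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}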

What would happen if a SQL Server instance becomes offline/inaccessible while you have an Entity Data Model pointing to one of the instance's databases?

I am currently writing an application in VS2010 which will access many different SQL Servers spread across a couple of servers on our network. This is, however, a dynamic environment, and the servers might be subject to decommissioning. I have a couple of entity data models which point to custom information-gathering databases on those servers, and these will become useless to me when the servers are decommissioned. The problem is that I am worried that if one of these servers is decommissioned, my application will fail because the entity data models won't be able to point to the databases anymore. I cannot go every two weeks and change the source code of the application to meet new server needs, as development time would be wasted.
Are my suspicions right that my application would fail to work if the data models point to databases which may not exist anymore? Is there a workaround to cater for my need to "ignore" a connection to a non-existent database?
You will get an exception when you try to do the first thing that connects to the DB.
The exception will note that the underlying provider failed on open, and will have a SqlException as the InnerException giving details of that.
Probably the best thing for you to do is to manually create and open the connection and pass that to the context in the constructor, using this overload.
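A sketch of that approach (assuming an EF 4-era ObjectContext generated as, say, MyEntities; the names are placeholders):

using System.Data;              // EntityException
using System.Data.EntityClient; // EntityConnection
using System.Data.SqlClient;    // SqlException

try
{
    using (var conn = new EntityConnection("name=MyEntities"))
    {
        conn.Open(); // fails fast here if the server is gone

        using (var ctx = new MyEntities(conn)) // generated overload taking a connection
        {
            // ... query as usual ...
        }
    }
}
catch (EntityException ex)
{
    // "The underlying provider failed on Open"; InnerException is the SqlException.
    // Server offline or decommissioned: skip this data source instead of crashing.
}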

SQL Server Status Monitor

My application connects to 3 SQL servers and 5 databases simultaneously. I need to show statuses {running|stopped|not found} on my status bar.
Any idea/code sample that I can use here? This code should not affect the speed of the application or add overhead to the SQL server.
Buddhi
I think you should use WMI, via the ServiceController class (with this constructor). You basically query the server where SQL Server resides and check the service's status.
The example below assumes your application is written in C#:

ServiceController sc = new ServiceController("MSSQLSERVER", serverName); // serverName = machine hosting SQL Server
string status = sc.Status.ToString(); // e.g. "Running" or "Stopped"
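Wrapped up to produce exactly the three statuses asked for, that might look like the following (a hypothetical helper; a named instance would use the service name "MSSQL$INSTANCENAME"):

using System;
using System.ServiceProcess; // reference System.ServiceProcess.dll

static string GetSqlServiceStatus(string serverName)
{
    try
    {
        using (var sc = new ServiceController("MSSQLSERVER", serverName))
        {
            return sc.Status == ServiceControllerStatus.Running
                ? "running"
                : "stopped";
        }
    }
    catch (InvalidOperationException)
    {
        return "not found"; // no such service on that machine (or it's unreachable)
    }
}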
"This code should not affect the speed
of application or a overhead to SQL
server"
This is a Schroedinger's Cat scenario: in order to know the current status of a given remote service or process, you must serialize a message onto the network, await a response, de-serialize the response and act upon it. All of which will require some work and resources from all machines involved.
However, that work might be done in a background thread on the caller and if not called too often, may not impact the target server(s) in any measurable way.
You can use SMO (SQL Server Management Objects) to connect to a remote server and do pretty much anything you can do through the SQL admin tools since they use SMO to work their magic too. It's a pretty simple API and can be very powerful in the right hands.
SMO does, unsurprisingly, require that you have appropriate rights on the boxes you want to monitor. If you don't/can't have sufficient rights, you might want to ask your friendly SQL admin team to publish a simple data feed exposing some of the data you need.
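For example, checking a database's state through SMO is only a couple of lines (a C# sketch; the server and database names are placeholders, and the SMO assemblies ship with the SQL management tools):

using Microsoft.SqlServer.Management.Smo;

// Requires rights on the target box, as noted above.
var server = new Server("myServer");
string state = server.Databases["myDatabase"].Status.ToString(); // e.g. Normal, Offline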
HTH.
There will be some overhead within your application when connecting (verifying the connection) or failing to connect (verifying there is no connection), but you can avoid waiting time by checking this asynchronously.
We use the following SQL query to check the status of a particular database:

SELECT 'myDatabase status is:' AS Description,
       ISNULL((SELECT state_desc
               FROM sys.databases WITH (NOLOCK)
               WHERE name = 'myDatabase'), 'Not Found') AS [DBStatus]

This should have very little overhead, especially when paired with best practices like running it on a background or asynchronous thread.
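A sketch of running that check off the UI thread (C#; the master connection string and database name are placeholders, and the query is parameterized here):

using System.Data.SqlClient;
using System.Threading.Tasks;

// Runs the status query on a background task so the status bar never blocks.
static Task<string> GetDbStatusAsync(string masterConnStr, string dbName)
{
    return Task.Run(() =>
    {
        const string sql =
            "SELECT ISNULL((SELECT state_desc FROM sys.databases " +
            "WHERE name = @db), 'Not Found')";
        using (var conn = new SqlConnection(masterConnStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@db", dbName);
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    });
}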
Full disclosure: I am the founder of Cotega.
If you are interested in a service to do this, our service allows for monitoring of SQL Server uptime and performance. In addition you can set notifications for when the database is not available, performance degrades, database size or user count issues occur, etc.

How to limit the number of connections to a SQL Server instance from my Tomcat-deployed Java application?

I have an application that is deployed on Tomcat on server A and sends queries to a huge variety of SQL Server databases on a server B.
I am concerned that my application could overload this SQL Server database server, and I would like some way of preventing it from making requests to connect to any database on that server if some arbitrary number of connections already exist and are unclosed.
I am looking at using connection pooling, but am under the impression that this will only pool connections to a specific database on the SQL Server; I want to control the combined total of these connections that will occur across many different databases (incidentally, I can only find out the names of the individual DBs dynamically, as they change day to day). Will connection pooling take care of this for me, or am I looking at this from the wrong perspective?
I have no access to the configuration of the SQL Server server.
Links to tutorials or working examples of your suggested solution are most welcome!
Have you considered using DBCP? See here for more info
You're correct. A database pool will limit connections to either the database, or to all databases, depending on how the pool is configured.
Here are several open source packages that implement database connection pools. One of them probably does what you want.
connection pooling ... this will only pool connections to a specific database on the SQL Server
AFAIK, this is correct. Your connection pool will only limit the number of connections to the specific database it is defined for.
want to control the combined total of these connections that will occur across many different databases
I don't think you can control the number of connections to all databases from the pool. You've written that you don't have access to change things on the MSSQL server, so creating SYNONYMs to the various databases on MSSQL itself is not an option.
You can write your own class within the application, say a ConnPoolManager, which maintains an internal counter around getting and releasing Connections from any of the pools.
This class should cache all the JNDI lookups to each pool.
For the app to get a connection to ANY pool, it goes through the ConnPoolManager, and only if the counter shows the max limit has not yet been crossed does it fetch the connection.
Otherwise it throws an exception so the caller can retry later.
There might be a design pattern for this on the lines of Business Delegate.
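The counting idea itself is language-agnostic; here is a sketch in C# with a semaphore standing in for the internal counter (all names are hypothetical):

using System;
using System.Threading;

// Global cap on the combined number of connections across all per-database pools.
class ConnPoolManager
{
    private readonly SemaphoreSlim _globalCap;

    public ConnPoolManager(int maxTotalConnections)
    {
        _globalCap = new SemaphoreSlim(maxTotalConnections, maxTotalConnections);
    }

    // Call before borrowing from any per-database pool; dispose the token
    // when the underlying connection goes back to its pool.
    public IDisposable Acquire(TimeSpan timeout)
    {
        if (!_globalCap.Wait(timeout))
            throw new TimeoutException("Global connection cap reached; try later.");
        return new Releaser(_globalCap);
    }

    private sealed class Releaser : IDisposable
    {
        private readonly SemaphoreSlim _sem;
        public Releaser(SemaphoreSlim sem) { _sem = sem; }
        public void Dispose() { _sem.Release(); }
    }
}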
Having said that, I think a bigger problem for you will be
incidentally, I can only find out the names of the individual DBs dynamically, as they change day to day
since you will be expected to create new connection pool settings, or edit existing ones, in Tomcat each day. This will be a maintenance nightmare in the future.

NHibernate + Sql Compact + IoC - Connection Managment

When working with NHibernate and SQL Compact in a Windows Forms application, I am wondering what the best practice is for managing connections. With SQL CE I have read that you should keep your connection open, versus closing it as one would typically do with standard SQL Server. If that is the case, and you're using an IoC container, would you make your repositories' lifetimes singleton so they exist forever, or dispose of them after you perform a "Unit of Work"?
Also, is there a way to determine the number of connections open to SQL CE?
In my DAL or DataService, which will live for the lifetime of the entire app, I'd create and hold open a connection to the database and then let the ORM do whatever it wants for its own connection management. I would only do this in a Compact Framework app, though, where the speed of building up and tearing down the connection for each query might make a difference.
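A sketch of that keep-alive pattern (C#, using the System.Data.SqlServerCe provider; the class and method names are hypothetical, and older NHibernate versions can then be handed the open connection when opening a session):

using System.Data.SqlServerCe; // System.Data.SqlServerCe.dll

// Holds one SQL CE connection open for the application's lifetime, since
// opening/closing a SQL CE database file on every query is comparatively slow.
static class CeDb
{
    private static SqlCeConnection _keepAlive;

    public static void Startup(string connStr)
    {
        _keepAlive = new SqlCeConnection(connStr);
        _keepAlive.Open();
    }

    public static SqlCeConnection Connection
    {
        get { return _keepAlive; }
    }

    public static void Shutdown()
    {
        if (_keepAlive != null)
            _keepAlive.Dispose();
    }
}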