Singleton pattern for a database class - but for multiple databases

Background: I am writing a C# class for a Windows application that will act as a customized SQL Server class with custom error checking. This class will handle the general SQL operations: executing queries, connecting to the database, closing connections, etc.
My scenario is this: I have two separate databases that I need to connect to at the same time. But I want to implement the Singleton pattern so that only one connection can be established for each database (so a total of two connections open at once, but one to each database). I want to prevent a second connection to either database from being established.
I thought about the Flyweight pattern, but I don't think that would apply in this case and that if I can get the Singleton pattern to work somehow it would be the better solution... I can always just not use Singleton and have two database objects, or I can create an "open" flag in the class to be checked when a new connection is requested to see if an existing connection is open, and handle based on that...
Ideas / How can I do this?

You can use the Factory pattern for this.
http://www.tutorialspoint.com/design_pattern/factory_pattern.htm
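A sketch of one way to combine the two ideas: a "multiton" that hands out at most one shared connection per connection string, rather than a single global instance. The class name `DatabaseConnectionManager` and the connection strings are hypothetical:

```csharp
using System;
using System.Collections.Concurrent;
using System.Data.SqlClient;

// Hypothetical manager: guarantees at most one open connection per
// connection string (Singleton-per-database, sometimes called a multiton).
public static class DatabaseConnectionManager
{
    private static readonly ConcurrentDictionary<string, Lazy<SqlConnection>> Connections
        = new ConcurrentDictionary<string, Lazy<SqlConnection>>();

    public static SqlConnection GetConnection(string connectionString)
    {
        // Lazy<T> ensures the connection is created and opened only once
        // per connection string, even under concurrent access.
        return Connections.GetOrAdd(connectionString,
            cs => new Lazy<SqlConnection>(() =>
            {
                var conn = new SqlConnection(cs);
                conn.Open();
                return conn;
            })).Value;
    }
}
```

Usage: repeated requests for the same database return the same object, so a second connection to either database is never opened, for example `DatabaseConnectionManager.GetConnection("Server=...;Database=One;...")` twice yields one connection.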

Related

SQL connection pooling in Azure Functions

In traditional webservers you would have a SQL connection pool and persistent connection to the database.
But I am thinking of creating my entire application as Azure Functions.
Will the functions create a new connection to the SQL server every time they are invoked?
Azure Functions doesn't currently have SQL as an option for an input or output binding, so you'd need to use the SqlClient classes directly to make your connections and issue your queries.
As long as you follow best practices of disposing your SQL connections (see this for example: C# SQLConnection pooling), you should get pooling by default.
Here's a full example of inserting records into SQL from a function: https://www.codeproject.com/articles/1110663/azure-functions-tutorial-sql-database
Although this is already answered, I believe this answer can provide more information.
If you are not using a connection pool, you are probably creating a connection every time the function is invoked. Creating a connection has an associated cost, so for warmed-up instances it is recommended to use a connection pool. The maximum number of connections should also be chosen cautiously, since several function app instances can run in parallel (depending on your plan).
Here is an example of using a connection pool.
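A minimal sketch of the dispose-per-invocation pattern described above, assuming a C# function reading a hypothetical `SqlDb` app setting. Because pooling is on by default, disposing the `SqlConnection` returns the physical connection to the pool instead of tearing it down:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class CountOrders
{
    // Hypothetical setting name; pooling is enabled by default in the
    // connection string unless you add "Pooling=false".
    private static readonly string ConnectionString =
        Environment.GetEnvironmentVariable("SqlDb");

    public static async Task<int> RunAsync()
    {
        // Dispose the SqlConnection on every invocation; Open() borrows
        // a pooled physical connection and Dispose() returns it.
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            await conn.OpenAsync();
            return (int)await cmd.ExecuteScalarAsync();
        }
    }
}
```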

SQLite, open one permanent connection or not?

I have been under the understanding that database connections are best opened, used, and closed. However, with SQLite I'm not sure that this applies. I do all the queries inside a Using Connection statement, so it is my understanding that I open a connection and then close it by doing this. When it comes to SQLite and optimal usage, is it better to open one permanent connection for the duration of the program being in use, or should I continue to use the method that I currently use?
I am using the database for a VB.net Windows program with a fairly large DB of about 2 GB.
An example of my current connection method:
Using oMainQueryR As New SQLite.SQLiteCommand
    oMainQueryR.CommandText = "SELECT * FROM CRD"
    Using connection As New SQLite.SQLiteConnection(conectionString)
        Using oDataSQL As New SQLite.SQLiteDataAdapter
            oMainQueryR.Connection = connection
            oDataSQL.SelectCommand = oMainQueryR
            connection.Open()
            oDataSQL.FillSchema(crd, SchemaType.Source)
            oDataSQL.Fill(crd)
            connection.Close()
        End Using
    End Using
End Using
As with all things database, it depends. In this specific case of sqlite, there are two "depends" you need to look at:
Are you the only user of the database?
When are implicit transactions committed?
For the first item, you probably want to open/close different connections frequently if there are other users of the database, or if it's at all possible that more than one process will be hitting your sqlite database file at the same time.
For the second item, I'm not sure how sqlite specifically behaves. Some database engines don't commit implicit transactions until the connection is closed. If this is the case for sqlite, you probably want to be closing your connection a little more often.
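One way to sidestep the question of when an implicit transaction commits is to make the transaction explicit. A sketch using the System.Data.SQLite provider from C# (the file and table names are hypothetical):

```csharp
using System.Data.SQLite; // System.Data.SQLite NuGet package

class Program
{
    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=app.db"))
        {
            conn.Open();
            // Explicit transaction: the write is durable at Commit(),
            // not at whatever point the connection happens to close.
            using (var tx = conn.BeginTransaction())
            using (var cmd = new SQLiteCommand(
                "INSERT INTO CRD (Name) VALUES ('example')", conn, tx))
            {
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }
    }
}
```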
The idea that connections should be short-lived in .Net applies mainly to Microsoft Sql Server, because the .Net provider for Sql Server is also able to take advantage of a feature known as connection pooling. Outside of Sql Server this advice is not entirely without merit, but it's not as much of a given.
If it is a local application being used by only one user I think it is fine to keep one connection opened for the life of the application.
I think with most databases the "best used and closed" idea comes from the perspective of saving memory by ensuring you only keep the minimum number of connections open.
In reality, opening a connection can incur significant overhead and should be done only when needed. This is why managed server infrastructure (WebLogic etc.) promotes the use of connection pooling. In this way you have N connections that are usable at any given time. You never "waste" resources, but you also aren't left with the responsibility of managing them at a global level.

What would happen if a SQL Server instance became offline/inaccessible while you have an Entity Data Model pointing to one of the instance's databases?

I am currently writing an application in VS2010 which will access many different SQL Servers spread out to a couple of servers on our network. This is however a dynamic environment, and the servers might be subject to decommissioning. I have a couple of entity data models which point to custom information-gathering databases in those servers, which will become useless to me when the servers decommission. The problem is that I am worried that if one of these servers decommission, my application would fail because the entity data models won't be able to point to the databases anymore. I cannot go like every 2 weeks to change the source code of the application to meet new server needs, as development time would be wasted.
Are my suspicions right, that my application would fail to work if the data models point to databases which may not exist anymore? Is there a workaround to cater for my needs to "ignore" a connection to a non-existent database?
You will get an exception when you try to do the first thing which connects to the DB.
The exception will note that the underlying provider failed on open, and will have a SqlException as the InnerException giving details of that.
Probably the best thing for you to do is to manually create and open the connection and pass that to the context in the constructor, using this overload.
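A sketch of that approach, assuming a database-first context named `MyEntities` (hypothetical) whose generated constructor accepts an `EntityConnection`. Opening the connection up front surfaces an unreachable server immediately, so the application can skip that data source instead of failing mid-query:

```csharp
using System.Data;
using System.Data.EntityClient;
using System.Data.SqlClient;

class Repository
{
    // Hypothetical EF connection string name from app.config.
    private const string Name = "name=MyEntities";

    public static bool TryOpenContext(out MyEntities context)
    {
        context = null;
        var conn = new EntityConnection(Name);
        try
        {
            // Fails here, not on the first query, if the server is gone.
            conn.Open();
            context = new MyEntities(conn); // overload taking an open connection
            return true;
        }
        catch (EntityException ex) when (ex.InnerException is SqlException)
        {
            // Server decommissioned or offline: ignore this database.
            conn.Dispose();
            return false;
        }
    }
}
```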

Creating a (temp?) table that is accessible only in the current connection

I want to enforce row-level security for any client connecting to a particular SQL Server database (just one database). I don't want to impose any particular set up required to happen on the client side (because this would mean that a client can set up itself so that it would have access to anything - which of course would be bad; BTW, the client is a WinApp that connects using either Windows Auth or SQL Auth). I basically want this to be transparent to any client. The client should not even know this is happening.
The enforcement of the row level security will be performed in views inside the database that are layered above the tables (in essence: no one will have the ability to perform DML against the tables directly; instead, all operations shall be performed against the views that lay on top of the tables. These views will have instead-of triggers running under a particular 'execute as', to ensure the DML operations can be correctly executed).
So basically, I want to remove the potential of the client to circumvent this security model by baking it into the database itself.
I also want to separate out the permissions granted to the user from the effective permissions that are applied to the current connection (think of it this way: if you are connected to the DB, you have a Security Context associated with your connection - maintained in the DB, mind you - this Security Context contains the information about which items you have access to. Upon establishing the connection, this Security Context is created and populated with information based of the permissions assigned to you and when the connection is closed, the information in this Security Context is removed; in fact, the entire Security Context should be removed). Of course, the Security Context should only be available within a given connection, connections should not have the ability to even see the existence of a security context for other connections.
(EDIT: one of the scenarios explicitly targeted, which explains why the Security Context is separated from the 'defined permissions' is as follows: if you establish a connection to the DB, you get a SecContext; now while your connection is active/not closed, the admin assigns new permissions to you, these new permissions will not be applied to this currently open connection. You must close the connection and re-establish a connection to have these changes reflected in your SecContext)
I know how I can enforce that the views will only return information that the user has access to. That's the easy part... This question is about creation and deletion of the Security Context I am talking about.
The Security Context should be created on establishing the connection.
It should also reside in an artifact that is accessible only to the current connection.
It must be queryable during the lifespan of the connection by the connecting user.
Finally, the Security Context must be dropped/removed/deleted when the connection is closed.
Would anyone have an idea about how this can be achieved?
Thanks
A SQL Server temp table (one with a name starting with #) is only available to the current connection, and dropped at the end. You only have to deal with establishing it on creating a new connection.
However it sounds like you are re-implementing a lot of what the DBMS already does for you. I would recommend reading up more on SQL Server's built-in security mechanisms, in particular login/user/schema separation.
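A sketch of the temp-table approach from the first paragraph: run a setup step immediately after opening each connection. The helper name, table layout, and `dbo.UserPermissions` source are all hypothetical:

```csharp
using System.Data.SqlClient;

static class SecurityContextSetup
{
    // Call once right after opening a connection. The # prefix makes the
    // table private to this connection, and SQL Server drops it
    // automatically when the connection closes.
    public static void Establish(SqlConnection conn)
    {
        const string sql = @"
            CREATE TABLE #SecurityContext (ItemId INT PRIMARY KEY);
            -- Hypothetical: snapshot the caller's defined permissions
            -- into the per-connection Security Context.
            INSERT INTO #SecurityContext (ItemId)
            SELECT ItemId FROM dbo.UserPermissions
            WHERE UserName = SUSER_SNAME();";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}
```

Note this snapshot behavior also matches the edit in the question: permissions granted while the connection is open are not reflected until the connection is re-established, because the temp table is only populated at open time.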
I don't really understand your application in the abstract, so I don't entirely understand where you're coming from here, but here's a question for you: it sounds like you're giving your users a direct connection to your database. Do you need to do that?
Perhaps I'm missing the point, but could not all of this row-level security be done away with entirely if you built into your application an API, rather than provided your users with a direct database connection? If you set things up that way, your API could be the gatekeeper which prevents users from making changes to rows to which they should not be given access. Perhaps that would be simpler than working directly at the database level?
I hope that's helpful.

Handling Multiple databases with NHibernate in a single application

At the moment I define the connection properties in a configuration file and only ever connect to one database. I'd like to be able, at some point, to have a user log in, figure out (via a separate central database, maybe) which database they should be connected to, and from that point on have all sessions created talk to that database.
What's the best way to achieve this? Create a configuration file for every possible database? Or could I have a single session manager and change the connection URL on the fly accordingly? What sort of options do I have?
Update: Apologies, I should have mentioned this was NHibernate. I didn't think it would matter, but some things like Hibernate Shards will not be applicable to me, as I believe an NHibernate port of Shards is still pending.
You just need to set up two data sources (in NHibernate, two session factories), then use the one you need for the specific query.
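A sketch of that setup, assuming two hypothetical NHibernate config files (`db1.cfg.xml` and `db2.cfg.xml`), one per database:

```csharp
using NHibernate;
using NHibernate.Cfg;

// Build one ISessionFactory per database at startup; session factories
// are expensive to create, so build each exactly once and reuse it.
public static class SessionFactories
{
    public static readonly ISessionFactory Db1 =
        new Configuration().Configure("db1.cfg.xml").BuildSessionFactory();

    public static readonly ISessionFactory Db2 =
        new Configuration().Configure("db2.cfg.xml").BuildSessionFactory();
}
```

Usage: after the user logs in and the central database identifies their target, open sessions only from the matching factory, e.g. `using (var session = SessionFactories.Db2.OpenSession()) { ... }`.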
Please take a look at this:
https://www.hibernate.org/450.html
Some official solutions.
And here:
http://www.java-forums.org/database/159-hibernate-multiple-database.html
an online thread about this issue.