When are database connections opened for a DbContext when using DI - asp.net-core

From my understanding, when you register a DbContext using AddDbContext it is registered with a scoped lifetime, which means one DbContext per HTTP request. And you don't need to use a using statement because the connection will be closed and cleaned up for you, provided you are not performing long-running db calls. But when is the connection to the database opened? Does it open/close the connection automatically after each db call, or is the connection opened at the time the DbContext is injected into a service and closed when the request finishes?
The reason I ask is that I am rewriting an existing app that uses both classic ASP and ASP.NET 4.8 (using EF and the Microsoft Enterprise Data Library). I had previously implemented repositories/services and had the DbContext in a repository where all the calls in that repository used EF. Other repositories use SQL statements via Dapper. I was asked to remove the repositories and use our old method of creating separate DataAccess files (each implementing the same partial class), named after the database being used rather than the functionality. I really don't like this idea because we have projects where there are hundreds of database calls in one file just because they happen to use the same database (some even using multiple databases), and we end up with DataAccess methods that are very similar or identical but named differently. I refactored the repositories to follow this pattern. Since all the files implement the same partial class, I had to inject the DbContext into the constructor, which a good chunk of the database calls will not use. After doing so, I started wondering whether the calls that don't use EF will now have an extra open db connection for no reason.

But when is the connection to the database opened?
From this we can see:
By default, a single DbContext will be created and used for each HTTP request. A connection can be opened and closed many times during the lifetime of a DbContext, but if the DbContext starts a transaction, the same underlying connection will remain open and be reused for the lifetime of the transaction.
When you create a DbContext it's not immediately initialized, nor is its connection immediately opened.
It opens a connection when loading or saving and closes it once it's done. If you force it, for example by starting a transaction and keeping it open, the connection will remain open for the lifetime of the DbContext.
See this to learn more.
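To make that concrete, here is a minimal sketch (assuming EF Core on ASP.NET Core; MyDbContext, Widget, and ReportService are illustrative names, not from the question). Injecting the context opens nothing; each query opens the connection and releases it back to the pool when it finishes, unless a transaction or an explicitly opened connection keeps it open.
// Program.cs - ASP.NET Core minimal hosting (implicit usings assumed).
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Scoped lifetime: one MyDbContext instance per HTTP request.
builder.Services.AddDbContext<MyDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));
builder.Services.AddScoped<ReportService>();

var app = builder.Build();
app.MapGet("/count", (ReportService svc) => svc.CountWidgetsAsync());
app.Run();

public class ReportService
{
    private readonly MyDbContext _db;

    // Constructor injection does NOT open a database connection.
    public ReportService(MyDbContext db) => _db = db;

    public async Task<int> CountWidgetsAsync()
    {
        // The connection is opened for this query and released back to the
        // pool when it completes (a transaction would keep it open instead).
        return await _db.Widgets.CountAsync();
    }
}

public class MyDbContext : DbContext
{
    public MyDbContext(DbContextOptions<MyDbContext> options) : base(options) { }
    public DbSet<Widget> Widgets => Set<Widget>();
}

public class Widget
{
    public int Id { get; set; }
}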

Related

Hangfire using multiple connection strings and DbContext

I'm having trouble using Hangfire with multiple connections in Entity Framework. I have only one server that stores Hangfire jobs, and each job must run with a different connection string. Example: I have 5 jobs stored, and each job that launches must use a specific connection in its DbContext. In my API application's requests I use HttpContext, through which I already indicate which database the connection string should use. I am unable to pass an HttpContext to Hangfire and thus take advantage of the logic that already works. I am using dependency injection, so the instances are created as soon as the job triggers the method. I could pass the name of the database as a parameter of the method that Hangfire should trigger, but I can't do anything with this information since I'm using dependency injection, and at that moment the DbContext instances have already been created, without the connection string. Has anyone ever needed something like that?
If you go through the Hangfire documentation you'll get your answer.
Hangfire documentation
It is possible to run multiple server instances inside a process, machine, or on several machines at the same time. Each server uses distributed locks to perform the coordination logic.
Each Hangfire server has a unique identifier that consists of two parts to provide default values for the cases written above. The last part is a process id to handle multiple servers on the same machine. The former part is the server name, which defaults to the machine name, to handle uniqueness for different machines. Examples: server1:9853, server1:4531, server2:6742.
Since the default values provide uniqueness only on a process level, you should handle it manually if you want to run different server instances inside the same process:
var options = new BackgroundJobServerOptions
{
    ServerName = String.Format(
        "{0}.{1}",
        Environment.MachineName,
        Guid.NewGuid().ToString())
};
var server = new BackgroundJobServer(options);
// or
app.UseHangfireServer(options);

Connection Pooling with VB.NET and orphaned connections

I am a DBA, not a developer, and could use some insight. The development staff is using VB.NET to create web based applications with connections to a DB2 database. Please assume that the connection string in the web.config file is coded correctly.
I can see the number of orphaned connections from the web servers grow over time. By orphaned I mean that there is no activity associated with the connection for hours, yet I can see other connections being created and destroyed every couple of seconds.
I suspect that the connections are not being properly closed, but there are two different groups looking at the problem and so far haven't turned up anything. By the end of the day I can have hundreds of these connections - all of which are cleared when the application pool is reset every night. (This means it is NOT a database problem)
Is there a coding technique to ensure a connection is closed using vb.net on IIS v7+?
Am I seeing the symptom of another problem on IIS?
You need to have the developers implement the Dispose pattern, which is facilitated by the Using statement in VB.NET, like this (this is pertinent to SQL Server, but whatever connection object you are using for DB2 should work as well):
Using theConnection As New SqlConnection()
    '' Command, parameter logic goes here
End Using
By wrapping the connection object inside a Using block, you guarantee that the connection is closed and properly disposed, even if an exception is thrown in the code within the Using block.
Sounds like a code review is in order for whoever is in charge of the developers.

NHibernate session (and stateless session) and long running application

In the context of a Windows service that's meant to run jobs, we are trying to reuse the NHibernate DAL we developed for the web application.
For session management we have two options, each one having its advantages and drawbacks:
Stateful session
It is going to grow a lot, as it keeps track of everything (L1/session cache)
It needs to be carefully closed; disposing the session doesn't seem to be enough to clear the L1 cache (which I noticed using a memory profiler)
Stateless Session
It currently fails to reuse mappings. All bags declared with "lazy=true" end up with the following exception (even though the session has not been closed):
Initializing [...] failed to lazily initialize a collection of role:
[...], no session or session was closed
Obviously, we cannot update the mappings (they are shared with the web app) with lazy="false"; it would be a huge performance drawback
It cannot interact with the L2 cache: when the shared L2 cache is deployed, the service will be unable to invalidate L2 cache data so that the web application has fresh, up-to-date data
NHibernate has proven to be good until now; we have successfully used stateful sessions and NHibernate LINQ in a web context, with StructureMap for dependency injection.
My questions are:
Are there any good solutions for using NHibernate in a long-running thread?
I'd prefer to use a stateful session, but how do I avoid memory leaks?
Problem solved! There were actually a couple of problems.
First one was about instances' scope, and multi-threading:
Create a new session for each thread.
As soon as the thread finishes its work, clean all the instances attached to the thread. With StructureMap, within the thread, use new HybridLifecycle().FindCache().DisposeAndClear();. It will cause the session attached to the thread to be closed and disposed.
When the lifecycle is thread scoped, StructureMap uses a ThreadStatic variable to keep a reference to the object cache. So the trick is to call StructureMap's ObjectFactory within the thread. Initially, in our application, a main thread was responsible for creating new threads and for calling the ObjectFactory. That's the major mistake we made, and we were indeed unable to clean up the threads once their job was done.
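A rough sketch of that per-thread pattern, assuming StructureMap 2.x with ISession registered under a hybrid/thread-scoped lifecycle (the worker body and names are illustrative, not from the answer):
using System.Threading;
using NHibernate;
using StructureMap;
using StructureMap.Pipeline;

public static class JobThreadExample
{
    public static void RunJob()
    {
        var worker = new Thread(() =>
        {
            try
            {
                // Resolve the session *inside* the worker thread so the
                // ThreadStatic object cache belongs to this thread.
                var session = ObjectFactory.GetInstance<ISession>();

                // ... perform the job's database work with the session ...
            }
            finally
            {
                // Dispose and clear everything cached for this thread,
                // which closes and disposes the thread's session as well.
                new HybridLifecycle().FindCache().DisposeAndClear();
            }
        });

        worker.Start();
        worker.Join();
    }
}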
Session type:
There is no need to use a StatelessSession, as long as the stateful sessions you instantiate are carefully disposed. In our case, StatelessSession has too many drawbacks (cache management being the main one).
Important remark: be careful to instantiate the NHibernate SessionFactory only once!
When NHibernate instances are managed carefully, there is no memory leak.
It's never a good idea to keep a stateful session open in a long running process.
My suggestion is to redesign your process to separate database-related code from non-database-related code, so any database operation can be kept within a short-lived session.
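Concretely, that usually means each database step runs inside its own short-lived session, roughly like this (a sketch against the standard NHibernate API; JobRunner is an illustrative name):
using NHibernate;

public class JobRunner
{
    private readonly ISessionFactory _sessionFactory; // the single app-wide factory

    public JobRunner(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void RunDatabaseStep()
    {
        // Each database-related step gets its own short-lived session, so the
        // L1/session cache never accumulates over the long-running process.
        using (var session = _sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            // ... load/save entities here ...
            tx.Commit();
        }
    }
}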

NHibernate: Creating a ConnectionProvider that dynamically chooses which of several databases to connect to?

I have a project that connects to many SQL Server databases. They all have the same schema, but different data. Data is essentially separated by customer. When a request comes in to the asp.net app, it can tell which database is needed and sets up a session.
What we're doing now is creating a new SessionFactory for each customer database. This has worked out alright for a while, but with more customers we're creating more databases. We're starting to run into memory issues because each factory has its own QueryPlanCache. I wrote a post about my debugging of the memory issue.
I want to make it so that we have one SessionFactory that uses a ConnectionProvider to open a connection to the right database. What I have so far looks something like this:
public class DatabaseSpecificConnectionProvider : DriverConnectionProvider
{
    public override IDbConnection GetConnection()
    {
        if (!ThreadConnectionString.HasValue)
            return base.GetConnection();

        var connection = Driver.CreateConnection();
        try
        {
            connection.ConnectionString = ThreadConnectionString.Value;
            connection.Open();
        }
        catch (DbException)
        {
            connection.Dispose();
            throw;
        }
        return connection;
    }
}
This works great if there is only one database needed to handle the request, since I can set the connection string in a thread-local variable during initialization. Where I run into trouble is when I have an admin-level operation that needs to access several databases.
Since the ConnectionProvider has no idea which session is opening the connection, it can't decide which one to use. I could set a thread-local variable before opening the session, but that runs into trouble since the session connections are opened lazily.
I'm also going to need to create a CacheProvider to avoid cache collisions. That's going to run into a similar problem.
So any ideas? Or is this just asking too much from NHibernate?
Edit: I found this answer that suggests I'd have to rely on some global state which is what I'd like to avoid. If I have multiple sessions active, I'd like the ConnectionProvider to respond with a connection to the appropriate database.
Edit 2: I'm leaning towards a solution that would create a ConnectionProvider for the default Session that is always used for each site. Then, for connections to additional databases, I'd open the connection and pass it in. The downsides to this that I can see are that I can't use the second-level cache on ancillary Sessions and that I'll have to track and close the connection myself.
I've settled on a workaround and I'm listing it here in case anyone runs across this again.
It turned out I couldn't find any way to make the ConnectionProvider change databases depending on the session. It could only realistically depend on the context of the current request.
In my case, 95% of the time only the one customer's database is going to be needed. I created a SessionFactory and a ConnectionProvider that would handle that. For the remaining corner cases, I created a second SessionFactory and when I open the Session, I pass in a new Connection.
The downside to that is that the Session that talks to the second database can't use the second level cache and I have to make sure I close the connection at the end of the request.
That seems to be working well enough for now, but I'm curious how well it'll stand up in the long run.
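For reference, the corner-case path described above might look roughly like this (a sketch assuming the classic ISessionFactory.OpenSession(IDbConnection) overload; the class and member names are illustrative, not from the answer):
using System.Data.SqlClient;
using NHibernate;

public class AncillarySessionOpener
{
    private readonly ISessionFactory _secondaryFactory; // the second factory, built once at startup

    public AncillarySessionOpener(ISessionFactory secondaryFactory)
    {
        _secondaryFactory = secondaryFactory;
    }

    public ISession OpenFor(string connectionString)
    {
        // Open the connection ourselves and hand it to NHibernate. As noted
        // above, a session opened this way can't use the second-level cache,
        // and the caller must close the connection at the end of the request.
        var connection = new SqlConnection(connectionString);
        connection.Open();
        return _secondaryFactory.OpenSession(connection);
    }
}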

When is the configuration loaded with nHibernate?

I was reading that the initial load time for the configuration can be fairly long in nHibernate depending on the # of mapping tables, etc.
Is this done once and stored in session or cache?
Will it happen every time the ASP.NET process recycles?
A Configuration object is normally associated with an ISessionFactory. If you have lots of mappings, building a session factory (by calling cfg.BuildSessionFactory) might be slow. That's why you should construct the session factory only once and use it throughout your entire application. In an ASP.NET application, when the process recycles you lose the reference to this session factory and it needs to be reconstructed.
If you find it extremely slow to construct your session factory, you can improve performance by disabling the reflection optimizer: Environment.UseReflectionOptimizer = false (see the docs).
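A minimal sketch of the "build it once" approach (standard NHibernate API; the class name is illustrative):
using System;
using NHibernate;
using NHibernate.Cfg;

public static class NHibernateBootstrap
{
    // Lazy<T> ensures the expensive BuildSessionFactory call runs exactly once,
    // even under concurrent first access; the factory is then reused for the
    // lifetime of the ASP.NET process (until the app pool recycles).
    private static readonly Lazy<ISessionFactory> Factory =
        new Lazy<ISessionFactory>(() =>
        {
            var cfg = new Configuration();
            cfg.Configure();              // reads hibernate.cfg.xml and mappings
            return cfg.BuildSessionFactory();
        });

    public static ISessionFactory SessionFactory => Factory.Value;
}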
The Configuration is used to build the ISessionFactory. It's a one-shot deal, which occurs at application startup.