Hangfire using multiple connection strings and DbContext - asp.net-core

I'm having trouble using Hangfire with multiple Entity Framework connections. I have a single server that stores the Hangfire jobs, but each job must run with a different connection string. For example: I have 5 jobs stored, and each job, when it fires, must use a specific connection in its DbContext.

In my API application's requests I use HttpContext, and through it I already indicate which database the connection string should point to. I can't pass an HttpContext to Hangfire, so I can't reuse the logic that already works. I am using dependency injection, so the instances are created as soon as the job triggers the method. I could pass the database name as a parameter of the method Hangfire triggers, but I can't do anything with that information, because by then the DbContext instances have already been created, without the connection string. Has anyone ever needed something like this?
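For reference, the shape of what I'm trying to achieve looks roughly like this (a minimal sketch only; AppDbContext, TenantDbContextFactory and TenantJob are hypothetical names, and it assumes EF Core with per-tenant connection strings stored in configuration):

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;

public class TenantDbContextFactory
{
    private readonly IConfiguration _configuration;

    public TenantDbContextFactory(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public AppDbContext Create(string databaseName)
    {
        // Resolve the tenant's connection string by name (assumed layout).
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseSqlServer(_configuration.GetConnectionString(databaseName))
            .Options;
        return new AppDbContext(options);
    }
}

public class TenantJob
{
    private readonly TenantDbContextFactory _factory;

    public TenantJob(TenantDbContextFactory factory)
    {
        _factory = factory;
    }

    // Enqueued as: BackgroundJob.Enqueue<TenantJob>(j => j.Run("CustomerA"));
    public void Run(string databaseName)
    {
        using (var context = _factory.Create(databaseName))
        {
            // ... job work against the tenant-specific database ...
        }
    }
}

That way the database name travels with the job as plain data, and the DbContext is only built once the job actually runs.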

If you go through the Hangfire documentation you'll find your answer.
From the Hangfire documentation:
It is possible to run multiple server instances inside a process, machine, or on several machines at the same time. Each server uses distributed locks to perform the coordination logic.
Each Hangfire Server has a unique identifier that consists of two parts to provide default values for the cases written above. The last part is a process id, to handle multiple servers on the same machine. The first part is the server name, which defaults to the machine name, to handle uniqueness across different machines. Examples: server1:9853, server1:4531, server2:6742.
Since the default values provide uniqueness only at the process level, you should set it manually if you want to run different server instances inside the same process:
var options = new BackgroundJobServerOptions
{
    // Make the server name unique even within a single process.
    ServerName = String.Format(
        "{0}.{1}",
        Environment.MachineName,
        Guid.NewGuid().ToString())
};
var server = new BackgroundJobServer(options);
// or
app.UseHangfireServer(options);

Related

Considerations about Quartz.NET hosted inside a scaled-out instance of Azure App Service

How should Quartz configuration be handled for an API created in ASP.NET Core and hosted in an Azure App Service that is scaled out to more than one instance?
The API is currently always hosted in a single IIS application, so the Quartz configuration looks as follows, with no cluster configuration used:
private static NameValueCollection SchedulerConfiguration(IConfiguration configuration)
{
    var schedulerConfiguration = new NameValueCollection();
    schedulerConfiguration["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
    schedulerConfiguration["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.StdAdoDelegate, Quartz";
    schedulerConfiguration["quartz.jobStore.tablePrefix"] = "QRTZ_";
    schedulerConfiguration["quartz.jobStore.dataSource"] = "default";
    schedulerConfiguration["quartz.dataSource.default.connectionString"] = configuration["ConnectionString"];
    schedulerConfiguration["quartz.dataSource.default.provider"] = "SqlServer";
    schedulerConfiguration["quartz.serializer.type"] = "json";
    return schedulerConfiguration;
}
and it utilizes QuartzHostedService for handling scheduled background jobs:
services.AddHostedService<QuartzHostedService>();
I did a small experiment: I deployed the API into the Azure App Service instance, created a single Quartz job within the API, and scaled out to 4 instances before the trigger fired.
My assumption was that, without any changes to the above configuration, the job would be executed 4 times: when the 3 new instances kicked in, all of them would read the job details from the DB and register triggers to fire at the scheduled time. But to my surprise, the job was executed only once.
Any ideas why the job was executed only once?
Should I leverage cluster configuration when hosting the Quartz scheduler inside a scaled-out Azure App Service?
I think that might be just pure luck: if your jobs run fast, they might happen to be run by only a single instance. But without a clustered setup, two nodes could pick up the same job and cause conflicting database updates.
Yes, there's a slight performance penalty when database-based locks are in use, but that's the only way you can run a busy instance safely.
I would also suggest that you look into the ASP.NET Core integration package. It helps with compile-safe configuration; I can see, for example, that you're currently using the wrong (inefficient) delegate for SQL Server, since there's a separate SqlServerDelegate.
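To illustrate, a clustered version of the configuration above might look roughly like this (a sketch; quartz.jobStore.clustered and quartz.scheduler.instanceId are standard Quartz.NET ADO job store settings, and "AUTO" lets Quartz generate a unique id per instance):

private static NameValueCollection ClusteredSchedulerConfiguration(IConfiguration configuration)
{
    var schedulerConfiguration = new NameValueCollection();
    schedulerConfiguration["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
    // SQL Server-specific delegate instead of the generic StdAdoDelegate.
    schedulerConfiguration["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
    schedulerConfiguration["quartz.jobStore.tablePrefix"] = "QRTZ_";
    schedulerConfiguration["quartz.jobStore.dataSource"] = "default";
    schedulerConfiguration["quartz.dataSource.default.connectionString"] = configuration["ConnectionString"];
    schedulerConfiguration["quartz.dataSource.default.provider"] = "SqlServer";
    schedulerConfiguration["quartz.serializer.type"] = "json";
    // Cluster settings: instances coordinate through database row locks,
    // and "AUTO" gives each scaled-out instance a unique id.
    schedulerConfiguration["quartz.jobStore.clustered"] = "true";
    schedulerConfiguration["quartz.scheduler.instanceId"] = "AUTO";
    return schedulerConfiguration;
}

All instances should also share the same quartz.scheduler.instanceName so they form one logical scheduler.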

Using transactions with EF4.1 and SQL 2012 - why is DTC required?

I've been doing a lot of reading on this one, and some of the documentation doesn't seem to match reality. Some of the documented potential causes would be appropriate here, however they only relate to SQL Server 2008 or earlier.
I define a transaction scope. I use a number of different EF contexts (in different method calls) within the transaction scope, however all but one of them are only for data reads. The final use of a Context is to create and add some new objects to the context, and then call
context.SaveChanges()
IIS is running on one server. The DB (SQL 2012) is running on another server (Windows Server 2012).
When I execute this code, I receive the error:
Network access for Distributed Transaction Manager (MSDTC) has been
disabled. Please enable DTC for network access in the security
configuration for MSDTC using the Component Services Administrative
tool.
Obviously, if I enable DTC on the IIS machine, this goes away. However, why should I need to?
This:
http://msdn.microsoft.com/en-us/library/ms229978.aspx
states:
• At least one durable resource that does not support single-phase notifications is enlisted in the transaction.
• At least two durable resources that support single-phase notifications are enlisted in the transaction.
Which I understand is not the case here.
OK, I'm not entirely sure whether this should have been happening (according to the MS documentation), but I have figured out why, and the solution.
I'm using the ASP.NET membership provider, and have two connection strings in my web.config. I thought the fact that they were pointing to the same DB was enough for them to be considered the same "durable resource".
However, I found that the membership connection string also had:
Connection Timeout=60;App=EntityFramework
whereas the Entity Framework connection string didn't.
Making the two connection strings identical meant that the transaction was no longer escalated to MSDTC.
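For context, the pattern in question looks roughly like this (a minimal sketch; AppDbContext, Order and IsPending are hypothetical names). The point is that every connection inside the scope uses an identical connection string and only one is open at a time, so connection pooling can reuse the same physical connection and the transaction stays local:

using System.Linq;
using System.Transactions;

public static class OrderProcessor
{
    public static void Process()
    {
        using (var scope = new TransactionScope())
        {
            // Read-only work on one context; disposed before the next opens.
            using (var readContext = new AppDbContext())
            {
                var pending = readContext.Orders.Where(o => o.IsPending).ToList();
            }

            // Final write on a second context with the same connection string.
            using (var writeContext = new AppDbContext())
            {
                writeContext.Orders.Add(new Order());
                writeContext.SaveChanges();
            }

            scope.Complete();
        }
    }
}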

NHibernate: Creating a ConnectionProvider that dynamically chooses which of several databases to connect to?

I have a project that connects to many SQL Server databases. They all have the same schema, but different data. Data is essentially separated by customer. When a request comes in to the asp.net app, it can tell which database is needed and sets up a session.
What we're doing now is creating a new SessionFactory for each customer database. This has worked out alright for a while, but with more customers we're creating more databases. We're starting to run into memory issues because each factory has its own QueryPlanCache. I wrote a post about my debugging of the memory usage.
I want to make it so that we have one SessionFactory that uses a ConnectionProvider to open a connection to the right database. What I have so far looks something like this:
public class DatabaseSpecificConnectionProvider : DriverConnectionProvider
{
    public override IDbConnection GetConnection()
    {
        if (!ThreadConnectionString.HasValue)
            return base.GetConnection();

        var connection = Driver.CreateConnection();
        try
        {
            connection.ConnectionString = ThreadConnectionString.Value;
            connection.Open();
        }
        catch (DbException)
        {
            connection.Dispose();
            throw;
        }
        return connection;
    }
}
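ThreadConnectionString isn't shown above; it is assumed to be a simple thread-local holder, something like:

public static class ThreadConnectionString
{
    [ThreadStatic]
    private static string _value;

    // Set during request initialization, read by the connection provider.
    public static string Value
    {
        get { return _value; }
        set { _value = value; }
    }

    public static bool HasValue
    {
        get { return _value != null; }
    }
}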
This works great if there is only one database needed to handle the request, since I can set the connection string in a thread-local variable during initialization. Where I run into trouble is when I have an admin-level operation that needs to access several databases.
Since the ConnectionProvider has no idea which session is opening the connection, it can't decide which one to use. I could set a thread-local variable before opening the session, but that is fragile, since session connections are opened lazily.
I'm also going to need to create a CacheProvider to avoid cache collisions, and that's going to run into a similar problem.
So any ideas? Or is this just asking too much from NHibernate?
Edit: I found this answer, which suggests I'd have to rely on some global state, which is what I'd like to avoid. If I have multiple sessions active, I'd like the ConnectionProvider to respond with a connection to the appropriate database.
Edit 2: I'm leaning towards a solution that would create a ConnectionProvider for the default Session that is always used for each site, and then, for connections to additional databases, open the connection myself and pass it in. The downsides I can see are that I can't use the second-level cache on ancillary Sessions, and I'll have to track and close the connection myself.
I've settled on a workaround, and I'm listing it here in case anyone runs across this again.
It turned out I couldn't find any way to make the ConnectionProvider change databases depending on the session. It could only realistically depend on the context of the current request.
In my case, 95% of the time only the one customer's database is going to be needed. I created a SessionFactory and a ConnectionProvider to handle that. For the remaining corner cases, I created a second SessionFactory, and when I open the Session, I pass in a new Connection.
The downside is that a Session talking to the second database can't use the second-level cache, and I have to make sure I close the connection at the end of the request.
That seems to be working well enough for now, but I'm curious how well it'll stand up in the long run.
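The corner-case path might look roughly like this (a sketch; secondaryFactory and the connection handling are assumed, and the exact OpenSession overload that accepts a connection depends on the NHibernate version):

using System.Data.SqlClient;
using NHibernate;

public static class AncillaryDatabase
{
    // secondaryFactory is the second SessionFactory described above.
    public static void Run(ISessionFactory secondaryFactory, string customerConnectionString)
    {
        using (var connection = new SqlConnection(customerConnectionString))
        {
            connection.Open();
            // The session won't manage a connection passed in explicitly,
            // so it has to be closed by the caller at the end of the request.
            using (var session = secondaryFactory.OpenSession(connection))
            {
                // ... queries against the ancillary customer database ...
                // (no second-level cache on this path)
            }
        }
    }
}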

Entity Framework for two applications and common database

I have two applications (a web app and a desktop app) that use Entity Framework against a common SQL Server database. They have the unit of work pattern implemented, and it keeps the context in the session or in the relevant thread. My question is: how can the context of one application be updated when the other application changes something in the database?
As an example, let's say the windows service has added a row to a table. How can the web application's context pick that up as soon as it is inserted?
The context, in the scenario of a web application, should only last per request. From what I can see, you have to implement something like an event at the database level, as that is the common place both applications share. This can be done using triggers.
In your scenario, you should perform the following steps (just a drawing-board scenario; see the sketch after this list):
1. Add triggers at the database level for each table, which will basically raise an event towards the application layer.
2. Somehow extract those triggers into stored procedures, so that you can use them with EF.
3. Implement a layer that sits on both applications, whose primary responsibility is to notify the application of a change in the database made by the other application and then refresh the context. Basically, the database-level trigger triggers something on the respective UI.
The meat of the work lies in the third point. You can achieve it in many ways. One alternative is writing a service that polls another service (which accepts alerts from the DB trigger) to check for modifications, so the logical separation could be: db --> service that accepts the change notification --> service that polls the notification service --> application.
The above works logically and theoretically; I hope it helps you out, and I would be keen to know how you go about doing this.
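As one illustration of the notification piece (a sketch, not the exact design above: it assumes SQL Server with Service Broker enabled and uses the built-in SqlDependency instead of hand-rolled triggers):

using System;
using System.Data.SqlClient;

public class TableChangeListener
{
    private readonly string _connectionString;

    public TableChangeListener(string connectionString)
    {
        _connectionString = connectionString;
        // Starts the listener infrastructure (requires Service Broker).
        SqlDependency.Start(_connectionString);
    }

    public void Subscribe()
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM dbo.Items", connection)) // query must follow SqlDependency rules
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += OnDatabaseChange;

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // Consume the results; the subscription is now active.
                while (reader.Read()) { }
            }
        }
    }

    private void OnDatabaseChange(object sender, SqlNotificationEventArgs e)
    {
        // A notification fires only once; re-subscribe and refresh the
        // application's context here.
        Subscribe();
    }
}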

Continuously checking database from a Windows service

I am making a Windows service which needs to continuously check for database entries that can be added at any time, telling it to execute some code. It looks for rows whose status is set to pending and whose execution time entry is greater than the current time. Is the only way to do this to run select statements over and over? It might need to execute the code every minute, which means I'd need to run the select statement every minute looking for entries in the database. I'm trying to avoid unnecessary CPU time, because I'm probably going to end up paying for CPU cycles at the hosting provider.
Be aware that Notification Services is only for SQL 2005 and has been dropped from SQL 2008.
Rather than polling the database for changes, I would recommend writing a CLR stored procedure that is called from a trigger, which is raised when an appropriate change occurs (e.g. an insert or update). The CLR sproc alerts your service, which then performs its work.
Sending the service alert via a TCP/IP or HTTP channel is a good choice, since you can deploy your service anywhere just by modifying a configuration parameter that is read by the sproc. It also makes it easy to test the service.
I would use an event-driven model in your service: the service waits on an auto-reset event and starts a block of work when the event is signalled. The sproc communications channel runs on another thread and sets the event on each incoming request (see the sketch below).
Assuming the service is doing a block of work while a set of multiple pending requests arrives, this design ensures that those requests trigger just one more block of work when the current one is finished.
You can also have multiple workers waiting on the same event if overlapping processing is desired.
Note: for external network access the CREATE ASSEMBLY statement will require the PERMISSION_SET option to be set to EXTERNAL_ACCESS.
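A minimal sketch of that event-driven model (the communications channel is stubbed out; only the auto-reset coalescing pattern is shown):

using System;
using System.Threading;

public class WorkCoordinator
{
    private readonly AutoResetEvent _workSignal = new AutoResetEvent(false);

    // Called by the communications thread on each incoming alert.
    // Signals raised while a block of work is running coalesce into one.
    public void NotifyWorkPending()
    {
        _workSignal.Set();
    }

    // Worker loop: waits for the signal, then runs one block of work.
    public void RunWorkerLoop(CancellationToken cancellation)
    {
        while (!cancellation.IsCancellationRequested)
        {
            if (_workSignal.WaitOne(TimeSpan.FromSeconds(1)))
            {
                DoBlockOfWork();
            }
        }
    }

    private void DoBlockOfWork()
    {
        // ... query the pending entries and execute them ...
    }
}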
Given that you talk about a hosting provider, I suspect one of the main alternatives will not be open to you: Notification Services. It allows you to register for data-change events and be notified without the need to poll the database. It does, however, require Service Broker to be enabled for it to work, and that could potentially be a problem with hosted databases; some companies keep it switched off.
The question is not tagged with a specific database, just SQL; Notification Services is a SQL Server facility.
If you're using SQL Server and are open to a different approach, check out SQL Server Notification Services.
Oracle also provides notifications; they call it Database Change Notification.