Why does the Hangfire server shut down? - hangfire

I have a .net 5 application that uses Hangfire.
The problem is that as soon as the application completes its startup, the Hangfire server shuts down. If I place a breakpoint on the last line of Startup.cs, the Hangfire server is alive.
There are no errors logged.
What might be the cause of this?
Below is the code used for adding Hangfire.
public void ConfigureServices(IServiceCollection services)
{
services.AddHangfire(config =>
{
config.UseSqlServerStorage(configuration.GetConnectionString("HangfireDefaultConnection"), new SqlServer.SqlServerStorageOptions
{
CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
QueuePollInterval = TimeSpan.Zero,
UseRecommendedIsolationLevel = true,
DisableGlobalLocks = true,
SchemaName = "dbo.WR_Hangfire",
});
});
....
}
public void Configure(IApplicationBuilder app)
{
...
app.UseHangfireServer();
...
}

Turns out app.UseHangfireServer(); is obsolete and that was the problem. Solved by registering the server in ConfigureServices with services.AddHangfireServer().
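For reference, a minimal sketch of the corrected registration (assuming Hangfire.AspNetCore and the same connection string name as above):

public void ConfigureServices(IServiceCollection services)
{
    services.AddHangfire(config =>
        config.UseSqlServerStorage(
            configuration.GetConnectionString("HangfireDefaultConnection")));

    // Registers the BackgroundJobServer as a hosted service, so it keeps running
    // for the lifetime of the application instead of stopping after startup.
    services.AddHangfireServer();
}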

Related

.NET Core: log startup errors to Application Insights

I'm trying to log any startup errors, but the logs are not flowing into Application Insights. I tried configuring Application Insights in Program.cs, but I still don't see any logs.
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureLogging(logging =>
{
logging.ClearProviders();
logging.AddApplicationInsights();
logging.SetMinimumLevel(LogLevel.Information);
});
To capture logs from Program.cs or Startup.cs, you need to configure it like this:
.ConfigureLogging((context, builder) =>
{
// Providing a connection string is required if need to capture logs during application startup, such as
// in Program.cs or Startup.cs itself.
builder.AddApplicationInsights(
configureTelemetryConfiguration: (config) => config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"],
configureApplicationInsightsLoggerOptions: (options) => { }
);
// Capture all log-level entries from Program
builder.AddFilter<ApplicationInsightsLoggerProvider>(
typeof(Program).FullName, LogLevel.Trace);
// Capture all log-level entries from Startup
builder.AddFilter<ApplicationInsightsLoggerProvider>(
typeof(Startup).FullName, LogLevel.Trace);
});
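For completeness, here is roughly how that fragment plugs into the host builder from the question (the APPLICATIONINSIGHTS_CONNECTION_STRING key and the ConfigureWebHostDefaults call are standard template pieces, not from the original post):

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureLogging((context, builder) =>
        {
            // The connection string must be supplied explicitly so the provider
            // is usable while Program/Startup are still executing.
            builder.AddApplicationInsights(
                configureTelemetryConfiguration: config =>
                    config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"],
                configureApplicationInsightsLoggerOptions: options => { });
            builder.AddFilter<ApplicationInsightsLoggerProvider>(typeof(Program).FullName, LogLevel.Trace);
            builder.AddFilter<ApplicationInsightsLoggerProvider>(typeof(Startup).FullName, LogLevel.Trace);
        })
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());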
For those using .NET 6 projects with minimal hosting model, this worked for me to capture the startup/shutdown messages into Application Insights:
// Program.cs
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddApplicationInsightsTelemetry();
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("Microsoft.Hosting.Lifetime", LogLevel.Information);
...
Handling exceptions thrown during application startup is not supported by default; see this link.
What I did to get the logs into Application Insights was to add a try/catch block in Main in Program.cs and initialize the TelemetryClient manually.
See the following code:
public static void Main(string[] args)
{
try
{
CreateHostBuilder(args)
.Build().Run();
}
catch(Exception ex)
{
TelemetryConfiguration telemetryConfiguration = TelemetryConfiguration.CreateDefault();
telemetryConfiguration.ConnectionString = "application-insights-connection-string";
var telemetryClient = new TelemetryClient(telemetryConfiguration);
telemetryClient.TrackException(ex);
telemetryClient.Flush();
throw;
}
}

AspNetCore.Session-Distributed Cache keeps timing out on distributed servers

I have an ASP.NET Core (2.1) web app that works fine in any single-server environment, but times out after a few seconds in an environment with 2 load-balanced web servers.
Here are my Startup.cs and other classes. I hope someone can point me down the right path. I've spent 2 days on this and can't find anything else that needs configuring or what's wrong with what I'm doing.
Some explanation of the code below:
As seen, I've followed the steps to configure the app to use distributed SQL Server caching, and I have a static helper class, HttpSessionService, which handles adding/getting values from session state. I also have a SessionTimeout attribute with which I annotate each of my controllers to control session timeouts. After a few seconds or clicks in the app, as each controller action makes this call
HttpSessionService.Redirect()
the Redirect() method gets a NULL user session from this line, which causes the app to time out:
var userSession = GetValues<UserIdentityView>(SessionKeys.User);
I've attached VS debuggers to both servers and noticed that even when all requests were going to one debugger instance (one server), the ASP.NET session still returned NULL for the userSession value above.
Again, this ONLY happens in a distributed environment; if I stop the site on one of the web servers, everything works fine.
I have implemented distributed session state caching with SQL Server as explained (identically) on several pages; here are a few:
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/distributed?view=aspnetcore-3.0
https://www.c-sharpcorner.com/article/configure-sql-server-session-state-in-asp-net-core/
And I do see sessions being written to my AppSessionState table, yet the app continues to time out in the environment with 2 load-balanced servers.
Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
// Session State distributed cache configuration against SQLServer.
var aspStateConnStr = ConfigurationManager.ConnectionStrings["ASPState"].ConnectionString;
var aspSessionStateSchemaName = _config.GetValue<string>("AppSettings:AspSessionStateSchemaName");
var aspSessionStateTbl = _config.GetValue<string>("AppSettings:AspSessionStateTable");
services.AddDistributedSqlServerCache(options =>
{
options.ConnectionString = aspStateConnStr;
options.SchemaName = aspSessionStateSchemaName;
options.TableName = aspSessionStateTbl;
});
....
services.AddSession(options =>
{
options.IdleTimeout = TimeSpan.FromSeconds(1200);
options.Cookie.HttpOnly = true;
options.Cookie.IsEssential = true;
});
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
...
services.AddMvc().AddJsonOptions(opt => opt.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver());
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, IApplicationLifetime lifetime, IDistributedCache distCache)
{
var distCacheOptions = new DistributedCacheEntryOptions()
.SetSlidingExpiration(TimeSpan.FromMinutes(5));
// Session State distributed cache configuration.
lifetime.ApplicationStarted.Register(() =>
{
var currentTimeUTC = DateTime.UtcNow.ToString();
byte[] encodedCurrentTimeUTC = Encoding.UTF8.GetBytes(currentTimeUTC);
distCache.Set("cachedTimeUTC", encodedCurrentTimeUTC, distCacheOptions);
});
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseDatabaseErrorPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseSession(); // This must be called before the app.UseMvc()
app.UseMvc(routes =>
{
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
HttpSessionService.Configure(app.ApplicationServices.GetRequiredService<IHttpContextAccessor>(), distCache, distCacheOptions);
}
HttpSessionService (helper class):
public class HttpSessionService
{
private static IHttpContextAccessor _httpContextAccessor;
private static IDistributedCache _distributedCache;
private static DistributedCacheEntryOptions _distCacheOptions;
private static ISession _session => _httpContextAccessor.HttpContext.Session;
public static void Configure(IHttpContextAccessor httpContextAccessor, IDistributedCache distCache, DistributedCacheEntryOptions distCacheOptions)
{
_httpContextAccessor = httpContextAccessor;
_distributedCache = distCache;
_distCacheOptions = distCacheOptions;
}
public static void SetValues<T>(string key, T value)
{
_session.Set<T>(key, value);
}
public static T GetValues<T>(string key)
{
var sessionValue = _session.Get<T>(key);
return sessionValue == null ? default(T) : sessionValue;
}
public static bool Redirect()
{
var result = false;
var userSession = GetValues<UserIdentityView>(SessionKeys.User);
if (userSession == null || userSession?.IsAuthenticated == false)
{
result = true;
}
return result;
}
}
SessionTimeoutAttribute:
public class SessionTimeoutAttribute : ActionFilterAttribute
{
public override void OnActionExecuting(ActionExecutingContext context)
{
var redirect = HttpSessionService.Redirect();
if (redirect)
{
context.Result = new RedirectResult("~/Account/SessionTimeOut");
return;
}
base.OnActionExecuting(context);
}
}
MyController:
[SessionTimeout]
public class MyController : Controller
{
// Every action in this and any other controller times out and I get redirected by SessionTimeoutAttribute to "~/Account/SessionTimeOut"
}
Sorry for the late reply on this. I've changed my original implementation by injecting the IDistributedCache interface into all of my controllers and using this setting in the Startup.cs class, in the ConfigureServices() function:
services.AddDistributedSqlServerCache(options =>
{
options.ConnectionString = aspStateConnStr;
options.SchemaName = aspSessionStateSchemaName;
options.TableName = aspSessionStateTbl;
options.ExpiredItemsDeletionInterval = null;
});
That made it work in a web farm.
As you can see, I'm setting ExpiredItemsDeletionInterval to null to prevent some basic cache entries from being cleared out of the cache, but in doing so I ran into another problem: when I attempt to get them, I still get null back even if the entry is in the database table. So that's another thing I'm trying to figure out.
It looks like you're capturing the Session value from HttpContext in your static HttpSessionService instance. That value is per-request so it's definitely going to randomly fail if you capture it like that. You need to go through the IHttpContextAccessor every time you want to access an HttpContext value, if you want to get the latest value.
Also, I'd suggest you pass an HttpContext into your helper methods rather than using IHttpContextAccessor. The accessor has performance implications and should generally only be used if you absolutely can't pass an HttpContext through. The places you show here seem to have an HttpContext available, so I'd recommend using that instead of the accessor.
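A minimal sketch of that suggestion, using the types from the question (UserIdentityView, SessionKeys) and assuming the generic Get&lt;T&gt; session method is a custom JSON-based extension:

public static class HttpSessionService
{
    public static T GetValues<T>(HttpContext httpContext, string key)
    {
        // Read from the session of the request that is actually being processed.
        var sessionValue = httpContext.Session.Get<T>(key);
        return sessionValue == null ? default(T) : sessionValue;
    }

    public static bool Redirect(HttpContext httpContext)
    {
        var userSession = GetValues<UserIdentityView>(httpContext, SessionKeys.User);
        return userSession == null || userSession.IsAuthenticated == false;
    }
}

public class SessionTimeoutAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        // The filter context already carries the per-request HttpContext, so no accessor is needed.
        if (HttpSessionService.Redirect(context.HttpContext))
        {
            context.Result = new RedirectResult("~/Account/SessionTimeOut");
            return;
        }
        base.OnActionExecuting(context);
    }
}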

SignalR hub resolves to null inside RabbitMQ subscription handler in ASP.NET Core

I have an ASP.NET Core MVC project with RabbitMQ (by means of EasyNetQ) and SignalR.
Next, I have a subscription on a RabbitMQ message that should send a notification to the client via SignalR.
But sadly, the hub always resolves to null.
An interesting observation is that when the application is still starting and there are still unacknowledged messages in the queue, the service actually resolves just fine.
public void ConfigureServices(IServiceCollection services)
{
services.AddSignalR();
services.RegisterEasyNetQ("host=localhost;virtualHost=/");
}
public void Configure(IApplicationBuilder app)
{
app.UseSignalR(route =>
{
route.MapHub<MyHub>("/mypath");
});
app.Use(async (context, next) =>
{
var bus = context.RequestServices.GetRequiredService<IBus>();
bus.SubscribeAsync<MyMessage>("MySubscription", async message =>
{
var hubContext = context.RequestServices
.GetRequiredService<IHubContext<MyHub>>();
// hubContext is null
await hubContext.Clients.All.SendAsync("MyNotification");
});
await next.Invoke();
});
}
I suspect I'm doing something wrong with regard to registering the subscription inside app.Use, but I can't seem to find any useful examples, so this was the best I could come up with.
I'm on ASP.NET Core 3 preview 5, I don't know if that has anything to do with my problem.
So the question is: how do I get the hub context inside the message subscription handler?
UPDATE
I've checked the GetRequiredService docs, and the call should actually throw an InvalidOperationException if the service couldn't be resolved, but it doesn't. It returns null, which, as far as I can tell, shouldn't be possible (unless the default container supports registration of null-valued instances).
I've managed to solve it, with help from this issue, by implementing an IHostedService instead.
public void ConfigureServices(IServiceCollection services)
{
services.AddSignalR();
services.RegisterEasyNetQ("host=localhost;virtualHost=/");
services.AddHostedService<MyHostedService>();
}
public void Configure(IApplicationBuilder app)
{
app.UseSignalR(route =>
{
route.MapHub<MyHub>("/mypath");
});
}
public class MyHostedService : BackgroundService
{
private readonly IServiceScopeFactory _serviceScopeFactory;
public MyHostedService(IServiceScopeFactory serviceScopeFactory)
{
_serviceScopeFactory = serviceScopeFactory;
}
protected override Task ExecuteAsync(CancellationToken stoppingToken)
{
var scope = _serviceScopeFactory.CreateScope();
var bus = scope.ServiceProvider.GetRequiredService<IBus>();
bus.SubscribeAsync<MyMessage>("MySubscription", async message =>
{
var hubContext = scope.ServiceProvider.GetRequiredService<IHubContext<MyHub>>();
await hubContext.Clients
.All
.SendAsync("MyNotification", cancellationToken: stoppingToken);
});
return Task.CompletedTask;
}
}

How to start Quartz in ASP.NET Core?

I have the following class
public class MyEmailService
{
public async Task<bool> SendAdminEmails()
{
...
}
public async Task<bool> SendUserEmails()
{
...
}
}
public interface IMyEmailService
{
Task<bool> SendAdminEmails();
Task<bool> SendUserEmails();
}
I have installed the latest Quartz 2.4.1 NuGet package, as I wanted a lightweight scheduler in my web app without a separate SQL Server database.
I need to schedule the methods
SendUserEmails to run every week on Mondays 17:00, Tuesdays 17:00 & Wednesdays 17:00
SendAdminEmails to run every week on Thursdays 09:00, Fridays 9:00
What code do I need to schedule these methods using Quartz in ASP.NET Core? I also need to know how to start Quartz in ASP.NET Core as all code samples on the internet still refer to previous versions of ASP.NET.
I can find a code sample for the previous version of ASP.NET but I don't know how to start Quartz in ASP.NET Core to start testing.
Where do I put the JobScheduler.Start(); in ASP.NET Core?
TL;DR (full answer can be found below)
Assumed tooling: Visual Studio 2017 RTM, .NET Core 1.1, .NET Core SDK 1.0, SQL Server Express 2016 LocalDB.
In web application .csproj:
<Project Sdk="Microsoft.NET.Sdk.Web">
<!-- .... existing contents .... -->
<!-- add the following ItemGroup element, it adds required packages -->
<ItemGroup>
<PackageReference Include="Quartz" Version="3.0.0-alpha2" />
<PackageReference Include="Quartz.Serialization.Json" Version="3.0.0-alpha2" />
</ItemGroup>
</Project>
In the Program class (as scaffolded by Visual Studio by default):
public class Program
{
private static IScheduler _scheduler; // add this field
public static void Main(string[] args)
{
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.UseApplicationInsights()
.Build();
StartScheduler(); // add this line
host.Run();
}
// add this method
private static void StartScheduler()
{
var properties = new NameValueCollection {
// json serialization is the one supported under .NET Core (binary isn't)
["quartz.serializer.type"] = "json",
// the following setup of job store is just for example and it didn't change from v2
// according to your usage scenario though, you definitely need
// the ADO.NET job store and not the RAMJobStore.
["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz",
["quartz.jobStore.useProperties"] = "false",
["quartz.jobStore.dataSource"] = "default",
["quartz.jobStore.tablePrefix"] = "QRTZ_",
["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz",
["quartz.dataSource.default.provider"] = "SqlServer-41", // SqlServer-41 is the new provider for .NET Core
["quartz.dataSource.default.connectionString"] = #"Server=(localdb)\MSSQLLocalDB;Database=Quartz;Integrated Security=true"
};
var schedulerFactory = new StdSchedulerFactory(properties);
_scheduler = schedulerFactory.GetScheduler().Result;
_scheduler.Start().Wait();
var userEmailsJob = JobBuilder.Create<SendUserEmailsJob>()
.WithIdentity("SendUserEmails")
.Build();
var userEmailsTrigger = TriggerBuilder.Create()
.WithIdentity("UserEmailsCron")
.StartNow()
.WithCronSchedule("0 0 17 ? * MON,TUE,WED")
.Build();
_scheduler.ScheduleJob(userEmailsJob, userEmailsTrigger).Wait();
var adminEmailsJob = JobBuilder.Create<SendAdminEmailsJob>()
.WithIdentity("SendAdminEmails")
.Build();
var adminEmailsTrigger = TriggerBuilder.Create()
.WithIdentity("AdminEmailsCron")
.StartNow()
.WithCronSchedule("0 0 9 ? * THU,FRI")
.Build();
_scheduler.ScheduleJob(adminEmailsJob, adminEmailsTrigger).Wait();
}
}
An example of a job class:
public class SendUserEmailsJob : IJob
{
public Task Execute(IJobExecutionContext context)
{
// an instance of email service can be obtained in different ways,
// e.g. service locator, constructor injection (requires custom job factory)
IMyEmailService emailService = new MyEmailService();
// delegate the actual work to email service
return emailService.SendUserEmails();
}
}
Full answer
Quartz for .NET Core
First, you have to use v3 of Quartz, as it targets .NET Core, according to this announcement.
Currently, only alpha versions of the v3 packages are available on NuGet. It looks like the team put a lot of effort into releasing 2.5.0, which does not target .NET Core. Nevertheless, in their GitHub repo, the master branch is already dedicated to v3, and the open issues for the v3 release don't seem to be critical, mostly old wishlist items, IMHO. Since recent commit activity is quite low, I would expect a v3 release in a few months, or maybe half a year - but no one knows.
Jobs and IIS recycling
If the web application is going to be hosted under IIS, you have to take into consideration the recycling/unloading behavior of worker processes. The ASP.NET Core web app runs as a regular .NET Core process, separate from w3wp.exe - IIS only serves as a reverse proxy. Nevertheless, when an instance of w3wp.exe is recycled or unloaded, the related .NET Core app process is also signaled to exit (according to this).
The web application can also be self-hosted behind a non-IIS reverse proxy (e.g. NGINX), but I will assume that you do use IIS, and narrow my answer accordingly.
The problems that recycling/unloading introduces are explained well in the post referenced by @darin-dimitrov:
If, for example, on Friday at 9:00 the process is down because several hours earlier it was unloaded by IIS due to inactivity, no admin emails will be sent until the process is up again. To avoid that, configure IIS to minimize unloads/recycling (see this answer).
In my experience, the above configuration still doesn't give a 100% guarantee that IIS will never unload the application. To guarantee that your process stays up, you can set up a command that periodically sends requests to your application and thus keeps it alive (see the sketch below).
When the host process is recycled/unloaded, the jobs must be gracefully stopped, to avoid data corruption.
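Purely as an illustration (not part of the original answer), such a keep-alive can be as simple as a small console loop that requests the site on an interval; the URL and interval below are hypothetical:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class KeepAlive
{
    static async Task Main()
    {
        using var client = new HttpClient();
        while (true)
        {
            try
            {
                // Any request that reaches the app keeps the worker process warm.
                await client.GetAsync("https://myapp.example.com/"); // hypothetical URL
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Keep-alive request failed: {ex.Message}");
            }
            await Task.Delay(TimeSpan.FromMinutes(5)); // shorter than the IIS idle timeout
        }
    }
}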
Why would you host scheduled jobs in a web app
I can think of one justification for hosting those email jobs in a web app, despite the problems listed above: the decision to have only one kind of application model (ASP.NET). Such an approach simplifies the learning curve, deployment procedure, production monitoring, etc.
If you don't want to introduce backend microservices (which would be a good place to move the email jobs to), then it makes sense to overcome IIS recycling/unloading behaviors, and run Quartz inside a web app.
Or maybe you have other reasons.
Persistent job store
In your scenario, the status of job execution must be persisted outside the process. Therefore, the default RAMJobStore doesn't fit, and you have to use the ADO.NET job store.
Since you mentioned SQL Server in the question, I will provide an example setup for a SQL Server database.
How to start (and gracefully stop) the scheduler
I assume you use Visual Studio 2017 and a recent version of the .NET Core tooling. Mine is .NET Core Runtime 1.1 and .NET Core SDK 1.0.
For DB setup example, I will use a database named Quartz in SQL Server 2016 Express LocalDB. DB setup scripts can be found here.
First, add required package references to web application .csproj (or do it with NuGet package manager GUI in Visual Studio):
<Project Sdk="Microsoft.NET.Sdk.Web">
<!-- .... existing contents .... -->
<!-- the following ItemGroup adds required packages -->
<ItemGroup>
<PackageReference Include="Quartz" Version="3.0.0-alpha2" />
<PackageReference Include="Quartz.Serialization.Json" Version="3.0.0-alpha2" />
</ItemGroup>
</Project>
With the help of Migration Guide and the V3 Tutorial, we can figure out how to start and stop the scheduler. I prefer to encapsulate this in a separate class, let's name it QuartzStartup.
using System;
using System.Collections.Specialized;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;
namespace WebApplication1
{
// Responsible for starting and gracefully stopping the scheduler.
public class QuartzStartup
{
private IScheduler _scheduler; // after Start, and until shutdown completes, references the scheduler object
// starts the scheduler, defines the jobs and the triggers
public void Start()
{
if (_scheduler != null)
{
throw new InvalidOperationException("Already started.");
}
var properties = new NameValueCollection {
// json serialization is the one supported under .NET Core (binary isn't)
["quartz.serializer.type"] = "json",
// the following setup of job store is just for example and it didn't change from v2
["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz",
["quartz.jobStore.useProperties"] = "false",
["quartz.jobStore.dataSource"] = "default",
["quartz.jobStore.tablePrefix"] = "QRTZ_",
["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz",
["quartz.dataSource.default.provider"] = "SqlServer-41", // SqlServer-41 is the new provider for .NET Core
["quartz.dataSource.default.connectionString"] = #"Server=(localdb)\MSSQLLocalDB;Database=Quartz;Integrated Security=true"
};
var schedulerFactory = new StdSchedulerFactory(properties);
_scheduler = schedulerFactory.GetScheduler().Result;
_scheduler.Start().Wait();
var userEmailsJob = JobBuilder.Create<SendUserEmailsJob>()
.WithIdentity("SendUserEmails")
.Build();
var userEmailsTrigger = TriggerBuilder.Create()
.WithIdentity("UserEmailsCron")
.StartNow()
.WithCronSchedule("0 0 17 ? * MON,TUE,WED")
.Build();
_scheduler.ScheduleJob(userEmailsJob, userEmailsTrigger).Wait();
var adminEmailsJob = JobBuilder.Create<SendAdminEmailsJob>()
.WithIdentity("SendAdminEmails")
.Build();
var adminEmailsTrigger = TriggerBuilder.Create()
.WithIdentity("AdminEmailsCron")
.StartNow()
.WithCronSchedule("0 0 9 ? * THU,FRI")
.Build();
_scheduler.ScheduleJob(adminEmailsJob, adminEmailsTrigger).Wait();
}
// initiates shutdown of the scheduler, and waits until jobs exit gracefully (within allotted timeout)
public void Stop()
{
if (_scheduler == null)
{
return;
}
// give running jobs 30 sec (for example) to stop gracefully
if (_scheduler.Shutdown(waitForJobsToComplete: true).Wait(30000))
{
_scheduler = null;
}
else
{
// jobs didn't exit in timely fashion - log a warning...
}
}
}
}
Note 1. In the above example, SendUserEmailsJob and SendAdminEmailsJob are classes that implement IJob. The IJob interface is slightly different from IMyEmailService, because its Execute method returns Task and not Task&lt;bool&gt;. Both job classes should get IMyEmailService as a dependency (probably via constructor injection).
Note 2. For a long-running job to be able to exit in a timely fashion, the IJob.Execute method should observe IJobExecutionContext.CancellationToken. This may require a change in the IMyEmailService interface, to make its methods receive a CancellationToken parameter:
public interface IMyEmailService
{
Task<bool> SendAdminEmails(CancellationToken cancellation);
Task<bool> SendUserEmails(CancellationToken cancellation);
}
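To make Notes 1 and 2 concrete, here is a sketch of what such a job class might look like (it assumes a custom job factory that supports constructor injection, as mentioned in Note 1):

public class SendUserEmailsJob : IJob
{
    private readonly IMyEmailService _emailService;

    // Requires a job factory that resolves jobs from the DI container.
    public SendUserEmailsJob(IMyEmailService emailService)
    {
        _emailService = emailService;
    }

    public Task Execute(IJobExecutionContext context)
    {
        // Pass the scheduler's cancellation token so the job can stop promptly on shutdown.
        return _emailService.SendUserEmails(context.CancellationToken);
    }
}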
When and where to start and stop the scheduler
In ASP.NET Core, application bootstrap code resides in the Program class, much like in a console app. The Main method is called to create the web host, run it, and wait until it exits:
public class Program
{
public static void Main(string[] args)
{
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.UseApplicationInsights()
.Build();
host.Run();
}
}
The simplest thing to do is just put a call to QuartzStartup.Start right in the Main method, much as I did in the TL;DR. But since we have to properly handle process shutdown as well, I prefer to hook both the startup and shutdown code in a more consistent manner.
This line:
.UseStartup<Startup>()
refers to a class named Startup, which is scaffolded when creating a new ASP.NET Core Web Application project in Visual Studio. The Startup class looks like this:
public class Startup
{
public Startup(IHostingEnvironment env)
{
// scaffolded code...
}
public IConfigurationRoot Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
// scaffolded code...
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
// scaffolded code...
}
}
It is clear that a call to QuartzStartup.Start should be inserted in one of the methods of the Startup class. The question is where QuartzStartup.Stop should be hooked.
In the legacy .NET Framework, ASP.NET provided the IRegisteredObject interface. According to this post and the documentation, in ASP.NET Core it was replaced with IApplicationLifetime. Bingo. An instance of IApplicationLifetime can be injected into the Startup.Configure method through a parameter.
For consistency, I will hook both QuartzStartup.Start and QuartzStartup.Stop to IApplicationLifetime:
public class Startup
{
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(
IApplicationBuilder app,
IHostingEnvironment env,
ILoggerFactory loggerFactory,
IApplicationLifetime lifetime) // added this parameter
{
// the following 3 lines hook QuartzStartup into web host lifecycle
var quartz = new QuartzStartup();
lifetime.ApplicationStarted.Register(quartz.Start);
lifetime.ApplicationStopping.Register(quartz.Stop);
// .... original scaffolded code here ....
}
// ....the rest of the scaffolded members ....
}
Note that I have extended the signature of the Configure method with an additional IApplicationLifetime parameter. According to documentation, ApplicationStopping will block until registered callbacks are completed.
Graceful shutdown on IIS Express, and ASP.NET Core module
I was able to observe the expected behavior of the IApplicationLifetime.ApplicationStopping hook only on IIS with the latest ASP.NET Core Module installed. Neither IIS Express (installed with Visual Studio 2017 Community RTM) nor IIS with an outdated version of the ASP.NET Core Module consistently invoked IApplicationLifetime.ApplicationStopping. I believe it is because of this bug, which has since been fixed.
You can install the latest version of the ASP.NET Core Module from here. Follow the instructions in the "Installing the latest ASP.NET Core Module" section.
Quartz vs. FluentScheduler
I also took a look at FluentScheduler, as it was proposed as an alternative library by @Brice Molesti. At first impression, FluentScheduler is quite a simplistic and immature solution compared to Quartz. For example, FluentScheduler doesn't provide such fundamental features as job status persistence and clustered execution.
In addition to @felix-b's answer: adding DI to jobs. QuartzStartup.Start can also be made async.
Based on this answer: https://stackoverflow.com/a/42158004/1235390
public class QuartzStartup
{
public QuartzStartup(IServiceProvider serviceProvider)
{
_serviceProvider = serviceProvider;
}
public async Task Start()
{
// other code is same
_scheduler = await schedulerFactory.GetScheduler();
_scheduler.JobFactory = new JobFactory(_serviceProvider);
await _scheduler.Start();
var sampleJob = JobBuilder.Create<SampleJob>().Build();
var sampleTrigger = TriggerBuilder.Create().StartNow().WithCronSchedule("0 0/1 * * * ?").Build();
await _scheduler.ScheduleJob(sampleJob, sampleTrigger);
}
}
JobFactory class
public class JobFactory : IJobFactory
{
private IServiceProvider _serviceProvider;
public JobFactory(IServiceProvider serviceProvider)
{
_serviceProvider = serviceProvider;
}
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
return _serviceProvider.GetService(bundle.JobDetail.JobType) as IJob;
}
public void ReturnJob(IJob job)
{
(job as IDisposable)?.Dispose();
}
}
Startup class:
public void ConfigureServices(IServiceCollection services)
{
// other code is removed for brevity
// need to register all JOBS by their class name
services.AddTransient<SampleJob>();
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, IApplicationLifetime applicationLifetime)
{
// _services is assumed to be the IServiceCollection captured in ConfigureServices
var quartz = new QuartzStartup(_services.BuildServiceProvider());
applicationLifetime.ApplicationStarted.Register(() => quartz.Start());
applicationLifetime.ApplicationStopping.Register(quartz.Stop);
// other code removed for brevity
}
SampleJob class with constructor dependency injection:
public class SampleJob : IJob
{
private readonly ILogger<SampleJob> _logger;
public SampleJob(ILogger<SampleJob> logger)
{
_logger = logger;
}
public async Task Execute(IJobExecutionContext context)
{
_logger.LogDebug("Execute called");
}
}
I don't know how to do it with Quartz, but I have tried the same scenario with another library, which works very well. Here is how I did it:
Install FluentScheduler
Install-Package FluentScheduler
Use it like this
var emailService = new MyEmailService();
var registry = new Registry();
JobManager.Initialize(registry);
// SendUserEmails: Mondays, Tuesdays and Wednesdays at 17:00
JobManager.AddJob(() => emailService.SendUserEmails(), s => s
.ToRunEvery(1)
.Weeks()
.On(DayOfWeek.Monday)
.At(17, 00));
JobManager.AddJob(() => emailService.SendUserEmails(), s => s
.ToRunEvery(1)
.Weeks()
.On(DayOfWeek.Tuesday)
.At(17, 00));
JobManager.AddJob(() => emailService.SendUserEmails(), s => s
.ToRunEvery(1)
.Weeks()
.On(DayOfWeek.Wednesday)
.At(17, 00));
// SendAdminEmails: Thursdays and Fridays at 09:00
JobManager.AddJob(() => emailService.SendAdminEmails(), s => s
.ToRunEvery(1)
.Weeks()
.On(DayOfWeek.Thursday)
.At(09, 00));
JobManager.AddJob(() => emailService.SendAdminEmails(), s => s
.ToRunEvery(1)
.Weeks()
.On(DayOfWeek.Friday)
.At(09, 00));
Documentation can be found here: FluentScheduler on GitHub
What code do I need to schedule these methods using Quartz in ASP.NET Core? I also need to know how to start Quartz in ASP.NET Core as all code samples on the internet still refer to previous versions of ASP.NET.
There is now good Quartz DI support for initializing and using it:
[DisallowConcurrentExecution]
public class Job1 : IJob
{
private readonly ILogger<Job1> _logger;
public Job1(ILogger<Job1> logger)
{
_logger = logger;
}
public async Task Execute(IJobExecutionContext context)
{
_logger.LogInformation("Start job1");
await Task.Delay(2, context.CancellationToken);
_logger?.LogInformation("End job1");
}
}
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
services.AddQuartz(cfg =>
{
cfg.UseMicrosoftDependencyInjectionJobFactory(opt =>
{
opt.AllowDefaultConstructor = false;
});
cfg.AddJob<Job1>(jobCfg =>
{
jobCfg.WithIdentity("job1");
});
cfg.AddTrigger(trigger =>
{
trigger
.ForJob("job1")
.WithIdentity("trigger1")
.WithSimpleSchedule(x => x
.WithIntervalInSeconds(10)
.RepeatForever());
});
});
services.AddQuartzHostedService(opt =>
{
opt.WaitForJobsToComplete = true;
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
// standard impl
}
}
The accepted answer covers the topic very well, but some things have changed with the latest Quartz version. The following, based on this article, shows a quick start with Quartz 3.0.x and ASP.NET Core 2.2:
Util class
public class QuartzServicesUtilities
{
public static void StartJob<TJob>(IScheduler scheduler, TimeSpan runInterval)
where TJob : IJob
{
var jobName = typeof(TJob).FullName;
var job = JobBuilder.Create<TJob>()
.WithIdentity(jobName)
.Build();
var trigger = TriggerBuilder.Create()
.WithIdentity($"{jobName}.trigger")
.StartNow()
.WithSimpleSchedule(scheduleBuilder =>
scheduleBuilder
.WithInterval(runInterval)
.RepeatForever())
.Build();
scheduler.ScheduleJob(job, trigger);
}
}
Job factory
public class QuartzJobFactory : IJobFactory
{
private readonly IServiceProvider _serviceProvider;
public QuartzJobFactory(IServiceProvider serviceProvider)
{
_serviceProvider = serviceProvider;
}
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
var jobDetail = bundle.JobDetail;
var job = (IJob)_serviceProvider.GetService(jobDetail.JobType);
return job;
}
public void ReturnJob(IJob job) { }
}
A job sample that also deals with exiting on application pool recycle / exit
[DisallowConcurrentExecution]
public class TestJob : IJob
{
private ILoggingService Logger { get; }
private IApplicationLifetime ApplicationLifetime { get; }
private static object lockHandle = new object();
private static bool shouldExit = false;
public TestJob(ILoggingService loggingService, IApplicationLifetime applicationLifetime)
{
Logger = loggingService;
ApplicationLifetime = applicationLifetime;
}
public Task Execute(IJobExecutionContext context)
{
return Task.Run(() =>
{
ApplicationLifetime.ApplicationStopping.Register(() =>
{
lock (lockHandle)
{
shouldExit = true;
}
});
try
{
for (int i = 0; i < 10; i ++)
{
lock (lockHandle)
{
if (shouldExit)
{
Logger.LogDebug($"TestJob detected that application is shutting down - exiting");
break;
}
}
Logger.LogDebug($"TestJob ran step {i+1}");
Thread.Sleep(3000);
}
}
catch (Exception exc)
{
Logger.LogError(exc, "An error occurred during execution of scheduled job");
}
});
}
}
Startup.cs configuration
private void ConfigureQuartz(IServiceCollection services, params Type[] jobs)
{
services.AddSingleton<IJobFactory, QuartzJobFactory>();
services.Add(jobs.Select(jobType => new ServiceDescriptor(jobType, jobType, ServiceLifetime.Singleton)));
services.AddSingleton(provider =>
{
var schedulerFactory = new StdSchedulerFactory();
var scheduler = schedulerFactory.GetScheduler().Result;
scheduler.JobFactory = provider.GetService<IJobFactory>();
scheduler.Start();
return scheduler;
});
}
protected void ConfigureJobsIoc(IServiceCollection services)
{
ConfigureQuartz(services, typeof(TestJob), /* other jobs come here */);
}
public void ConfigureServices(IServiceCollection services)
{
ConfigureJobsIoc(services);
// other stuff comes here
AddDbContext(services);
AddCors(services);
services
.AddMvc()
.SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
}
protected void StartJobs(IApplicationBuilder app, IApplicationLifetime lifetime)
{
var scheduler = app.ApplicationServices.GetService<IScheduler>();
//TODO: use some config
QuartzServicesUtilities.StartJob<TestJob>(scheduler, TimeSpan.FromSeconds(60));
lifetime.ApplicationStarted.Register(() => scheduler.Start());
lifetime.ApplicationStopping.Register(() => scheduler.Shutdown());
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory,
ILoggingService logger, IApplicationLifetime lifetime)
{
StartJobs(app, lifetime);
// other stuff here
}

ASP.NET 5 beta8 app with virtual directories/applications

Since ASP.NET 5 beta8 we are experiencing problems using virtual directories and/or sub applications.
We want (for the time being) to serve images from a virtual directory or a "sub application". However, we only get 404 errors when trying to use a virtual directory and 502.3 errors when using a "sub application".
The server is running IIS 8.0. The Application Pools for the site and the "sub application" are set to "No Managed Code".
Using the same configuration of virtual dirs/apps on another site, running the "old" ASP.NET 4 version of our site, works as expected.
The problem came after upgrading to beta8, so we assume it has something to do with the HttpPlatformHandler.
Are we missing something or is this a bug?
EDIT:
To clarify, the ASP.NET 5 application works just fine. It is only the content from the virtual dirs/apps that cannot be accessed.
The HttpPlatformHandler is installed on the server.
Here is our current Startup.cs
public class Startup
{
public Startup(IHostingEnvironment env, IApplicationEnvironment appEnv)
{
var builder = new ConfigurationBuilder()
.SetBasePath(appEnv.ApplicationBasePath)
.AddJsonFile("appsettings.json")
.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);
builder.AddEnvironmentVariables();
Configuration = builder.Build();
}
public IConfigurationRoot Configuration { get; set; }
public IServiceProvider ConfigureServices(IServiceCollection services)
{
services.ConfigureXXXXXXIdentityServices(); // Custom identity implementation
services.AddMvc(options =>
{
options.OutputFormatters
.Add(new JsonOutputFormatter(new JsonSerializerSettings
{
ContractResolver = new CamelCasePropertyNamesContractResolver()
}));
});
services.AddSqlServerCache(options =>
{
options.ConnectionString = "XXXXXX";
options.SchemaName = "dbo";
options.TableName = "AspNet5Sessions";
});
services.AddSession();
var builder = new ContainerBuilder();
builder.RegisterModule(new AutofacModule());
builder.Populate(services);
var container = builder.Build();
return container.Resolve<IServiceProvider>();
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
loggerFactory.MinimumLevel = LogLevel.Debug;
loggerFactory.AddConsole();
loggerFactory.AddDebug();
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseFileServer(new FileServerOptions
{
RequestPath = new PathString("/gfx"),
FileProvider = new PhysicalFileProvider(@"\\webdata2.XXXXXX.se\webdata\gfx"),
EnableDirectoryBrowsing = false
});
app.UseFileServer(new FileServerOptions
{
RequestPath = new PathString("/files"),
FileProvider = new PhysicalFileProvider(@"\\webdata2.XXXXXX.se\webdata"),
EnableDirectoryBrowsing = false
});
}
else
{
app.UseExceptionHandler("/Error/Index");
}
app.UseIISPlatformHandler();
app.UseStaticFiles();
app.UseIdentity();
app.UseSession();
app.UseMvc(routes =>
{
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
}
}
The app.UseFileServer() statements work on our dev machines, but cannot be used on the server unless there is a way to specify credentials (I haven't found a way to do that yet...).
Got it "working".
Dropped all virtual directories and/or applications.
Changed the Application Pool user to a user that had rights to read the file shares on the other machine.
Added app.UseFileServer() to all environments for the required paths.
Feels like there should be an option to pass Network Credentials to the UseFileServer method...
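For reference, a sketch of what that working setup looks like in Configure, relying on the application pool identity for access to the share (the masked UNC paths are the ones from the question):

// Serve the shared images for every environment, not only Development.
// PhysicalFileProvider runs under the application pool identity, which must have
// read access to the UNC share; there is no way to pass explicit network credentials.
app.UseFileServer(new FileServerOptions
{
    RequestPath = new PathString("/gfx"),
    FileProvider = new PhysicalFileProvider(@"\\webdata2.XXXXXX.se\webdata\gfx"),
    EnableDirectoryBrowsing = false
});
app.UseFileServer(new FileServerOptions
{
    RequestPath = new PathString("/files"),
    FileProvider = new PhysicalFileProvider(@"\\webdata2.XXXXXX.se\webdata"),
    EnableDirectoryBrowsing = false
});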
The Hosting model changed in beta 8, meaning that you need to install the new HttpPlatformHandler module as an administrator.
See Change to IIS hosting model