InstrumentationKey not picked up when stored in key vault - asp.net-core

I am using a key vault to store all the keys used in my bot v4 solution. I suspect the InstrumentationKey is not picked up correctly. What should the ApplicationInsights InstrumentationKey secret be named in the key vault?
In appsettings.json the key is added like this:
"ApplicationInsights": {
  "InstrumentationKey": "xxxx-xxxx-xxx-xxxx-xxx"
}

I had the same issue. These were the steps I followed to resolve it in an ASP.NET Core 5 MVC application:
In the Program.cs file, this is the code for adding configuration values from Key Vault in the Production environment (based on https://learn.microsoft.com/en-us/aspnet/core/security/key-vault-configuration?view=aspnetcore-5.0#use-managed-identities-for-azure-resources):
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) =>
        {
            if (context.HostingEnvironment.IsProduction())
            {
                var builtConfig = config.Build();
                var azureServiceTokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(
                    new KeyVaultClient.AuthenticationCallback(
                        azureServiceTokenProvider.KeyVaultTokenCallback));
                config.AddAzureKeyVault(
                    $"https://{builtConfig["KeyVaultName"]}.vault.azure.net/",
                    keyVaultClient,
                    new DefaultKeyVaultSecretManager());
            }
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
In Azure Key Vault, create a new secret named "ApplicationInsights--InstrumentationKey" and give it the value of the production Application Insights instrumentation key. For your App Service to be able to communicate with the key vault and read the secrets, first enable the system-assigned managed identity for your Azure App Service, then create a key vault access policy granting Get and List permissions for secrets and assign it to that identity.
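The double dash in the secret name maps to the configuration section separator, so once the Key Vault provider is registered the value surfaces under the usual key. A quick sanity check (a sketch; for example inside the Production branch shown above, the variable name is just for illustration):
// "ApplicationInsights--InstrumentationKey" in Key Vault is exposed as
// "ApplicationInsights:InstrumentationKey" by the configuration system
var instrumentationKey = builtConfig["ApplicationInsights:InstrumentationKey"];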
In appSettings.Production.json file:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "KeyVaultName": "{your-keyvault-name}",
  "ApplicationInsights": {
    "CloudRoleName": "TinyCrm.Web.Prod",
    "DisableTelemetry": false,
    "EnableAdaptiveSampling": true
  }
}
To register the Application Insights specific pieces with the ASP.NET Core DI framework, I created the following extension method. I am using the Microsoft.ApplicationInsights.AspNetCore version 2.17.0 NuGet package (taking into consideration what is written here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core#user-secrets-and-other-configuration-providers):
public static IServiceCollection AddApplicationInsights(this IServiceCollection services, IConfiguration configuration)
{
    // Register the settings for "ApplicationInsights" section as a service for injection from DI container
    var applicationInsightsSettings = new ApplicationInsightsSettings();
    configuration.Bind(ApplicationInsightsSettings.ApplicationInsightsSectionKey, applicationInsightsSettings);
    services.AddSingleton(applicationInsightsSettings);

    // Use telemetry initializers when you want to enrich telemetry with additional information
    services.AddSingleton<ITelemetryInitializer, CloudRoleTelemetryInitializer>();

    // Remove a specific built-in telemetry initializer
    var telemetryInitializerToRemove = services.FirstOrDefault<ServiceDescriptor>(
        t => t.ImplementationType == typeof(AspNetCoreEnvironmentTelemetryInitializer));
    if (telemetryInitializerToRemove != null)
    {
        services.Remove(telemetryInitializerToRemove);
    }

    // You can add custom telemetry processors to TelemetryConfiguration by using the extension method AddApplicationInsightsTelemetryProcessor on IServiceCollection.
    // You use telemetry processors in advanced filtering scenarios
    services.AddApplicationInsightsTelemetryProcessor<StaticWebAssetsTelemetryProcessor>();

    // The following line enables Application Insights telemetry collection.
    services.AddApplicationInsightsTelemetry();
    return services;
}
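The ApplicationInsightsSettings and CloudRoleTelemetryInitializer types referenced above are not part of the SDK; they are small helpers you write yourself. A minimal sketch of what they might look like (the property names are assumptions taken from the appsettings.Production.json section shown earlier):
public class ApplicationInsightsSettings
{
    public const string ApplicationInsightsSectionKey = "ApplicationInsights";

    public string CloudRoleName { get; set; }
    public bool DisableTelemetry { get; set; }
    public bool EnableAdaptiveSampling { get; set; }
}

public class CloudRoleTelemetryInitializer : ITelemetryInitializer
{
    private readonly ApplicationInsightsSettings _settings;

    public CloudRoleTelemetryInitializer(ApplicationInsightsSettings settings)
    {
        _settings = settings;
    }

    public void Initialize(ITelemetry telemetry)
    {
        // Stamp every telemetry item with the configured cloud role name
        telemetry.Context.Cloud.RoleName = _settings.CloudRoleName;
    }
}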
In Startup.cs file:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    // Call the extension method we created above
    services.AddApplicationInsights(Configuration);
}
Important note: When you create the new Azure App Service, make sure not to enable the Application Insights telemetry integration automatically from the App Service creation wizard (see the image below). Instead, create the Application Insights resource yourself and set up your app to use the instrumentation key in the production environment by following the steps above.
Azure App Service Creation Wizard


Migrating to Openiddict 4.0.0 breaks HotChocolate GraphQL authorization for client credentials flow if token encryption is enabled

I have an ASP.NET Core project which hosts both an identity server using OpenIddict and a resource server using the HotChocolate GraphQL package.
The client credentials flow is enabled in the system. The token is encrypted using the RSA algorithm.
Until now I had OpenIddict v3.1.1 and everything worked flawlessly. Recently I migrated to OpenIddict v4.0.0. Following this, authorization has stopped working. If I disable token encryption, authorization works as expected. With token encryption enabled, I saw while debugging that claims are not being passed at all. I cannot switch off token encryption, as it is a business and security requirement. The OpenIddict migration guidelines don't mention anything about any change related to encryption keys. I need help to make this work, as OpenIddict v3.1.1 is no longer supported for bug fixes.
The OpenIddict setup in ASP.NET Core pipeline:
public static void AddOpenIddict(this IServiceCollection services, IConfiguration configuration)
{
var openIddictOptions = configuration.GetSection("OpenIddict").Get<OpenIddictOptions>();
var encryptionKeyData = openIddictOptions.EncryptionKey.RSA;
var signingKeyData = openIddictOptions.SigningKey.RSA;
var encryptionKey = RSA.Create();
var signingKey = RSA.Create();
encryptionKey.ImportFromEncryptedPem(encryptionKeyData.ToCharArray(),
openIddictOptions.EncryptionKey.Passphrase.ToCharArray());
signingKey.ImportFromEncryptedPem(signingKeyData.ToCharArray(),
openIddictOptions.SigningKey.Passphrase.ToCharArray());
var sk = new RsaSecurityKey(signingKey);
var ek = new RsaSecurityKey(encryptionKey);
services.AddOpenIddict()
.AddCore(options =>
{
options.UseEntityFrameworkCore()
.UseDbContext<AuthDbContext>()
.ReplaceDefaultEntities<Guid>();
})
.AddServer(options =>
{
// https://documentation.openiddict.com/guides/migration/30-to-40.html#update-your-endpoint-uris
options.SetCryptographyEndpointUris("oauth2/.well-known/jwks");
options.SetConfigurationEndpointUris("oauth2/.well-known/openid-configuration");
options.SetTokenEndpointUris("oauth2/connect/token");
options.AllowClientCredentialsFlow();
options.SetUserinfoEndpointUris("oauth2/connect/userinfo");
options.SetIntrospectionEndpointUris("oauth2/connect/introspection");
options.AddSigningKey(sk);
options.AddEncryptionKey(ek);
//options.DisableAccessTokenEncryption(); // If this line is not commented, things work as expected
options.UseAspNetCore(o =>
{
// NOTE: disabled because by default OpenIddict accepts request from HTTPS endpoints only
o.DisableTransportSecurityRequirement();
o.EnableTokenEndpointPassthrough();
});
})
.AddValidation(options =>
{
options.UseLocalServer();
options.UseAspNetCore();
});
}
Authorization controller token get action:
[HttpPost("~/oauth2/connect/token")]
[Produces("application/json")]
public async Task<IActionResult> Exchange()
{
var request = HttpContext.GetOpenIddictServerRequest();
if (request.IsClientCredentialsGrantType())
{
// Note: the client credentials are automatically validated by OpenIddict:
// if client_id or client_secret are invalid, this action won't be invoked.
var application = await _applicationManager.FindByClientIdAsync(request.ClientId);
if (application == null)
{
throw new InvalidOperationException("The application details cannot be found in the database.");
}
// Create a new ClaimsIdentity containing the claims that
// will be used to create an id_token, a token or a code.
var identity = new ClaimsIdentity(
authenticationType: TokenValidationParameters.DefaultAuthenticationType,
nameType: OpenIddictConstants.Claims.Name,
roleType: OpenIddictConstants.Claims.Role);
var clientId = await _applicationManager.GetClientIdAsync(application);
var organizationId = await _applicationManager.GetDisplayNameAsync(application);
// https://documentation.openiddict.com/guides/migration/30-to-40.html#remove-calls-to-addclaims-that-specify-a-list-of-destinations
identity.SetClaim(type: OpenIddictConstants.Claims.Subject, value: organizationId)
.SetClaim(type: OpenIddictConstants.Claims.ClientId, value: clientId)
.SetClaim(type: "organization_id", value: organizationId);
identity.SetDestinations(static claim => claim.Type switch
{
_ => new[] { OpenIddictConstants.Destinations.AccessToken, OpenIddictConstants.Destinations.IdentityToken }
});
return SignIn(new ClaimsPrincipal(identity), OpenIddictServerAspNetCoreDefaults.AuthenticationScheme);
}
throw new NotImplementedException("The specified grant type is not implemented.");
}
Resource controller (where Authorization is not working):
public class Query
{
    private readonly IMapper _mapper;

    public Query(IMapper mapper)
    {
        _mapper = mapper;
    }

    [HotChocolate.AspNetCore.Authorization.Authorize]
    public async Task<Organization> GetMe(ClaimsPrincipal claimsPrincipal,
        [Service] IDbContextFactory<DbContext> dbContextFactory,
        CancellationToken ct)
    {
        var organizationId = Ulid.Parse(claimsPrincipal.FindFirstValue(ClaimTypes.NameIdentifier));
        ... // further code removed for brevity
    }
}
GraphQL setup in ASP.NET Core pipeline:
public static void AddGraphQL(this IServiceCollection services, IWebHostEnvironment webHostEnvironment)
{
    services.AddGraphQLServer()
        .AddAuthorization()
        .AddQueryType<Query>();
}
Packages with versions:
OpenIddict (4.0.0)
OpenIddict.AspNetCore (4.0.0)
OpenIddict.EntityFrameworkCore (4.0.0)
HotChocolate.AspNetCore (12.13.2)
HotChocolate.AspNetCore.Authorization (12.13.2)
HotChocolate.Diagnostics (12.13.2)
I think you can look at the logs to figure out exactly what's going on:
Logging in .NET Core and ASP.NET Core.
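For example, one way to get more detail out of the token validation path is to raise the log level for the relevant categories. A minimal sketch (the category names are assumptions based on the namespaces involved):
// In Program.cs: bump verbosity for OpenIddict and the authentication stack
Host.CreateDefaultBuilder(args)
    .ConfigureLogging(logging =>
    {
        logging.AddFilter("OpenIddict", LogLevel.Debug);
        logging.AddFilter("Microsoft.AspNetCore.Authentication", LogLevel.Debug);
    })
    .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());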
Not sure if the OpenIddict 4.0 release changed encryption-key-related configuration, but if you can't get encryption keys to work, perhaps you can remove the OpenIddict validation part and configure the JWT bearer handler instance to use the appropriate encryption key, a configuration similar to this.
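A rough sketch of that fallback, reusing the RsaSecurityKey instances (sk and ek) built in the setup above; the exact validation parameters are assumptions you would need to adjust:
// Validate OpenIddict-issued encrypted access tokens with the plain JWT bearer handler
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            IssuerSigningKey = sk,   // key used to sign the tokens
            TokenDecryptionKey = ek, // key used to encrypt the tokens
            ValidIssuer = "https://localhost:5001/", // hypothetical issuer
            ValidateAudience = false
        };
    });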
Of course, you can also open an issue on GitHub to ask @Kévin Chalet.

Configure JwtBearerOptions from a configuration file

I am trying to find documentation on how to configure a JWT bearer and its JwtBearerOptions in ASP.NET Core from a configuration file, using a Microsoft-predefined configuration section/keys. There is no explanation in the Microsoft docs about whether this is possible or not. I feel it should be possible, because everything in the .NET Core generation uses the options pattern.
Here is an example how the same technique is used to configure a Kestrel host.
This is not a real answer to the initial question. However, I am very happy with this solution.
After several hours of digging into the AspNetCore source code I found that the JwtBearerOptions are added to the DI container as named options. This means that you cannot provide the configuration from a config file without writing code. However, I found an acceptable solution which will work for the majority of cases.
I do not have a list of all available keys, and the sample here shows only two of them. You can inspect the public properties of JwtBearerOptions and add them to appsettings.json. They will be picked up and used by the framework.
See the code below and the comments there for details on how this works:
appsettings.json
{
  "Cronus": {
    "Api": {
      "JwtAuthentication": {
        "Authority": "https://example.com",
        "Audience": "https://example.com/resources"
      }
    }
  }
}
Startup.cs
public class Startup
{
    const string JwtSectionName = "Cronus:Api:JwtAuthentication";

    private readonly IConfiguration configuration;

    public Startup(IConfiguration configuration)
    {
        this.configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // Gets the settings from a configuration section. Notice how we specify the name for the JwtBearerOptions to be JwtBearerDefaults.AuthenticationScheme.
        services.Configure<JwtBearerOptions>(JwtBearerDefaults.AuthenticationScheme, configuration.GetSection(JwtSectionName));
        // OR
        // Gets the settings from a configuration. Notice how we specify the name for the JwtBearerOptions to be JwtBearerDefaults.AuthenticationScheme.
        services.Configure<JwtBearerOptions>(JwtBearerDefaults.AuthenticationScheme, configuration);

        services.AddAuthentication(o =>
        {
            // AspNetCore uses the DefaultAuthenticateScheme as a name for the JwtBearerOptions. You can skip these settings because .AddJwtBearer() is doing exactly this.
            o.DefaultAuthenticateScheme = Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerDefaults.AuthenticationScheme;
            o.DefaultChallengeScheme = Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerDefaults.AuthenticationScheme;
        })
        .AddJwtBearer();
    }
}
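To confirm the named options were actually bound, you can resolve them by name; the JWT bearer handler reads the instance named after its scheme. A small sketch (the probe class is hypothetical, requires Microsoft.Extensions.Options):
public class JwtOptionsProbe
{
    public JwtOptionsProbe(IOptionsMonitor<JwtBearerOptions> options)
    {
        // Named options: ask for the instance registered under the scheme name
        var bearerOptions = options.Get(JwtBearerDefaults.AuthenticationScheme);
        Console.WriteLine(bearerOptions.Authority);
    }
}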
services.AddAuthentication(defaultScheme: JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(o => Configuration.Bind("JwtBearerOptions", o));
where appsettings.json contains:
{
  "JwtBearerOptions": {
    "Audience": "Your aud"
  }
}

How does one transform SOAP EndpointAddress in .NET Core?

When connecting to a SOAP service in .NET Core, the Connected Service is shown as expected in Solution Explorer.
The ConnectedService.json does contain the definitions as expected, i.e.
{
  "ProviderId": "Microsoft.VisualStudio.ConnectedService.Wcf",
  ...
  "ExtendedData": {
    "Uri": "https://test.example.net/Service.svc",
    "Namespace": "UserWebService",
    "SelectedAccessLevelForGeneratedClass": "Public",
    ...
  }
}
The Uri from ExtendedData ends up in the Reference.cs file
private static System.ServiceModel.EndpointAddress GetEndpointAddress(EndpointConfiguration endpointConfiguration)
{
    if ((endpointConfiguration == EndpointConfiguration.WSHttpBinding_IAnvandareService))
    {
        return new System.ServiceModel.EndpointAddress("https://test.example.net/Service.svc");
    }
    throw new System.InvalidOperationException(string.Format("Could not find endpoint with name \'{0}\'.", endpointConfiguration));
}
If a deployment process looks like TEST > STAGING > PRODUCTION one might like to have corresponding endpoints. I.e. https://production.example.net/Service.svc.
We use Azure Devops for build and Azure Devops/Octopus Deploy for deployments
The solution (as I figured) was to change the endpoint address when you register the dependency, i.e.:
var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
services.AddTransient<IAnvandareService, AnvandareServiceClient>((ctx) => new AnvandareServiceClient()
{
    Endpoint =
    {
        Address = new EndpointAddress($"https://{environment}.example.net/Service.svc")
    }
});
This is just an expansion of the answer provided by Eric Herlitz, primarily meant to show how to use your appsettings.json file to hold the value for the endpoint URL.
You will need to add the different endpoints to your appsettings.{environment}.json files.
{
  ...
  "ServiceEndpoint": "http://someservice/service1.asmx",
  ...
}
Then you will need to make sure your environment variable is updated when you publish to different environments (see: How to transform appsettings.json).
In your Startup class, find the ConfigureServices() method and register your service for dependency injection:
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<ADSoap, ADSoapClient>(fac =>
    {
        var endpoint = Configuration.GetValue<string>("ServiceEndpoint");
        return new ADSoapClient(ADSoapClient.EndpointConfiguration.ADSoap12)
        {
            Endpoint =
            {
                Address = new EndpointAddress(new Uri(endpoint))
            }
        };
    });
}
Then to consume the service in some class you can inject the service into the constructor:
public class ADProvider : BaseProvider, IADProvider
{
    public ADSoap ADService { get; set; }

    public ADProvider(IAPQ2WebApiHttpClient httpClient, IMemoryCache cache, ADSoap adClient) : base(httpClient, cache)
    {
        ADService = adClient;
    }
}

ASP.NET Core 2.x OnConfiguring get connectionstring string from appsettings.json

Just started messing with ASP.NET Core, pretty impressive so far. In the generated code (see below), I want to change the hardcoded connection string to get it from the appsettings.json file.
This is apparently impossible. I haven't found a single example that works (or even builds).
What's going on??
Please help
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    if (!optionsBuilder.IsConfigured)
    {
#warning To protect potentially sensitive information in your connection string, you should move it out of source code. See http://go.microsoft.com/fwlink/?LinkId=723263 for guidance on storing connection strings.
        optionsBuilder.UseSqlServer("Server=xxxxxxx;Database=xxxxx;Trusted_Connection=True;");
    }
}
The link provided solves the problem in one area but doesn't work here in OnConfiguring. What am I doing wrong?
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.Configure<CookiePolicyOptions>(options =>
{
// This lambda determines whether user consent for non-essential cookies is needed for a given request.
options.CheckConsentNeeded = context => true;
options.MinimumSameSitePolicy = SameSiteMode.None;
});
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
var connection = Configuration.GetConnectionString("ConnectionName");
services.AddDbContext<SurveyContext>(options => options.UseSqlServer(connection));
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseCookiePolicy();
app.UseMvc(routes =>
{
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
}
}
In the startup class of a .NET Core project you usually register this in the ConfigureServices function.
public void ConfigureServices(IServiceCollection services)
{
    // read the connection string from appsettings.json
    var connection = Configuration.GetConnectionString("ConnectionName");
    services.AddDbContext<YourContext>(options => options.UseSqlServer(connection));
}
When you are in the startup class of a .NET Core project, it is no problem to read the values from appsettings.json.
You could read more at Microsoft, i.e. here: https://learn.microsoft.com/en-us/ef/core/get-started/aspnetcore/existing-db
In the place where you want to access the appsettings.json,
JToken jAppSettings = JToken.Parse(
    File.ReadAllText(Path.Combine(Environment.CurrentDirectory,
        "appsettings.json")));
Now since you have the object, you can access its contents.
Let me know if it works.
When you use Scaffold-DbContext, by default it hard-codes your connection string into the DbContext class (so it works out of the box). You will need to register your DbContext in your startup class to proceed. To set this up, you can check the instructions in this answer.
Note that the Configuration property directly connects to your appsettings.json and several other locations. You can read more about it in this documentation. While you can always use the appsettings.json file, it is generally recommended to keep your secrets in an external JSON file outside your source code. The best solution for this during development is the Secret Manager. The easiest way to use it is to right-click on your project in Visual Studio and select "Manage User Secrets". This opens a JSON file that is already connected to your Configuration object.
Once this is set up, you need to use dependency injection to access your db context.
public class HomeController : Controller
{
    public HomeController(SurveyContext context)
    {
        // you can set the context you get here as a property or field
        // you can use visual studio's shortcut ctrl + . when the cursor is on "context"
        // you can then use the context variable inside your actions
    }
}
When you create the context yourself in a using block, a new connection is created each time. Using injection makes sure only one is created per request, no matter how many times it is used.
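For the injected context to pick up the connection string registered in ConfigureServices, the scaffolded context also needs a constructor that accepts options. Roughly (a sketch, keeping the scaffolded fallback in place):
public class SurveyContext : DbContext
{
    public SurveyContext(DbContextOptions<SurveyContext> options)
        : base(options)
    {
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Only falls back to a hard-coded string when nothing was configured through DI
        if (!optionsBuilder.IsConfigured)
        {
            // optionsBuilder.UseSqlServer("...");
        }
    }
}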

How to start Quartz in ASP.NET Core?

I have the following class
public class MyEmailService
{
    public async Task<bool> SendAdminEmails()
    {
        ...
    }

    public async Task<bool> SendUserEmails()
    {
        ...
    }
}

public interface IMyEmailService
{
    Task<bool> SendAdminEmails();
    Task<bool> SendUserEmails();
}
I have installed the latest Quartz 2.4.1 NuGet package, as I wanted a lightweight scheduler in my web app without a separate SQL Server database.
I need to schedule the methods:
SendUserEmails to run every week on Mondays 17:00, Tuesdays 17:00 & Wednesdays 17:00
SendAdminEmails to run every week on Thursdays 09:00, Fridays 9:00
What code do I need to schedule these methods using Quartz in ASP.NET Core? I also need to know how to start Quartz in ASP.NET Core as all code samples on the internet still refer to previous versions of ASP.NET.
I can find a code sample for the previous version of ASP.NET but I don't know how to start Quartz in ASP.NET Core to start testing.
Where do I put the JobScheduler.Start(); in ASP.NET Core?
TL;DR (full answer can be found below)
Assumed tooling: Visual Studio 2017 RTM, .NET Core 1.1, .NET Core SDK 1.0, SQL Server Express 2016 LocalDB.
In web application .csproj:
<Project Sdk="Microsoft.NET.Sdk.Web">
  <!-- .... existing contents .... -->
  <!-- add the following ItemGroup element, it adds required packages -->
  <ItemGroup>
    <PackageReference Include="Quartz" Version="3.0.0-alpha2" />
    <PackageReference Include="Quartz.Serialization.Json" Version="3.0.0-alpha2" />
  </ItemGroup>
</Project>
In the Program class (as scaffolded by Visual Studio by default):
public class Program
{
private static IScheduler _scheduler; // add this field
public static void Main(string[] args)
{
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.UseApplicationInsights()
.Build();
StartScheduler(); // add this line
host.Run();
}
// add this method
private static void StartScheduler()
{
var properties = new NameValueCollection {
// json serialization is the one supported under .NET Core (binary isn't)
["quartz.serializer.type"] = "json",
// the following setup of job store is just for example and it didn't change from v2
// according to your usage scenario though, you definitely need
// the ADO.NET job store and not the RAMJobStore.
["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz",
["quartz.jobStore.useProperties"] = "false",
["quartz.jobStore.dataSource"] = "default",
["quartz.jobStore.tablePrefix"] = "QRTZ_",
["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz",
["quartz.dataSource.default.provider"] = "SqlServer-41", // SqlServer-41 is the new provider for .NET Core
["quartz.dataSource.default.connectionString"] = @"Server=(localdb)\MSSQLLocalDB;Database=Quartz;Integrated Security=true"
};
var schedulerFactory = new StdSchedulerFactory(properties);
_scheduler = schedulerFactory.GetScheduler().Result;
_scheduler.Start().Wait();
var userEmailsJob = JobBuilder.Create<SendUserEmailsJob>()
.WithIdentity("SendUserEmails")
.Build();
var userEmailsTrigger = TriggerBuilder.Create()
.WithIdentity("UserEmailsCron")
.StartNow()
.WithCronSchedule("0 0 17 ? * MON,TUE,WED")
.Build();
_scheduler.ScheduleJob(userEmailsJob, userEmailsTrigger).Wait();
var adminEmailsJob = JobBuilder.Create<SendAdminEmailsJob>()
.WithIdentity("SendAdminEmails")
.Build();
var adminEmailsTrigger = TriggerBuilder.Create()
.WithIdentity("AdminEmailsCron")
.StartNow()
.WithCronSchedule("0 0 9 ? * THU,FRI")
.Build();
_scheduler.ScheduleJob(adminEmailsJob, adminEmailsTrigger).Wait();
}
}
An example of a job class:
public class SendUserEmailsJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // an instance of email service can be obtained in different ways,
        // e.g. service locator, constructor injection (requires custom job factory)
        IMyEmailService emailService = new MyEmailService();
        // delegate the actual work to email service
        return emailService.SendUserEmails();
    }
}
Full answer
Quartz for .NET Core
First, you have to use v3 of Quartz, as it targets .NET Core, according to this announcement.
Currently, only alpha versions of the v3 packages are available on NuGet. It looks like the team put a lot of effort into releasing 2.5.0, which does not target .NET Core. Nevertheless, in their GitHub repo the master branch is already dedicated to v3, and the open issues for the v3 release don't seem critical, mostly old wishlist items, IMHO. Since recent commit activity is quite low, I would expect the v3 release in a few months, or maybe half a year - but no one knows.
Jobs and IIS recycling
If the web application is going to be hosted under IIS, you have to take into consideration recycling/unloading behavior of worker processes. The ASP.NET Core web app runs as a regular .NET Core process, separate from w3wp.exe - IIS only serves as a reverse proxy. Nevertheless, when an instance of w3wp.exe is recycled or unloaded, the related .NET Core app process is also signaled to exit (according to this).
Web application can also be self-hosted behind a non-IIS reverse proxy (e.g. NGINX), but I will assume that you do use IIS, and narrow my answer accordingly.
The problems that recycling/unloading introduces are explained well in the post referenced by @darin-dimitrov:
If, for example, on Friday 9:00 the process is down because several hours earlier it was unloaded by IIS due to inactivity, no admin emails will be sent until the process is up again. To avoid that, configure IIS to minimize unloads/recycling (see this answer).
From my experience, the above configuration still doesn't give a 100% guarantee that IIS will never unload the application. For a 100% guarantee that your process is up, you can set up a command that periodically sends requests to your application and thus keeps it alive.
When the host process is recycled/unloaded, the jobs must be gracefully stopped, to avoid data corruption.
Why would you host scheduled jobs in a web app
I can think of one justification for having those email jobs hosted in a web app, despite the problems listed above: the decision to have only one kind of application model (ASP.NET). Such an approach simplifies the learning curve, deployment procedure, production monitoring, etc.
If you don't want to introduce backend microservices (which would be a good place to move the email jobs to), then it makes sense to overcome IIS recycling/unloading behaviors, and run Quartz inside a web app.
Or maybe you have other reasons.
Persistent job store
In your scenario, status of job execution must be persisted out of process. Therefore, default RAMJobStore doesn't fit, and you have to use the ADO.NET Job Store.
Since you mentioned SQL Server in the question, I will provide example setup for SQL Server database.
How to start (and gracefully stop) the scheduler
I assume you use Visual Studio 2017 and latest/recent version of .NET Core tooling. Mine is .NET Core Runtime 1.1 and .NET Core SDK 1.0.
For DB setup example, I will use a database named Quartz in SQL Server 2016 Express LocalDB. DB setup scripts can be found here.
First, add required package references to web application .csproj (or do it with NuGet package manager GUI in Visual Studio):
<Project Sdk="Microsoft.NET.Sdk.Web">
  <!-- .... existing contents .... -->
  <!-- the following ItemGroup adds required packages -->
  <ItemGroup>
    <PackageReference Include="Quartz" Version="3.0.0-alpha2" />
    <PackageReference Include="Quartz.Serialization.Json" Version="3.0.0-alpha2" />
  </ItemGroup>
</Project>
With the help of Migration Guide and the V3 Tutorial, we can figure out how to start and stop the scheduler. I prefer to encapsulate this in a separate class, let's name it QuartzStartup.
using System;
using System.Collections.Specialized;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;
namespace WebApplication1
{
// Responsible for starting and gracefully stopping the scheduler.
public class QuartzStartup
{
private IScheduler _scheduler; // after Start, and until shutdown completes, references the scheduler object
// starts the scheduler, defines the jobs and the triggers
public void Start()
{
if (_scheduler != null)
{
throw new InvalidOperationException("Already started.");
}
var properties = new NameValueCollection {
// json serialization is the one supported under .NET Core (binary isn't)
["quartz.serializer.type"] = "json",
// the following setup of job store is just for example and it didn't change from v2
["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz",
["quartz.jobStore.useProperties"] = "false",
["quartz.jobStore.dataSource"] = "default",
["quartz.jobStore.tablePrefix"] = "QRTZ_",
["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz",
["quartz.dataSource.default.provider"] = "SqlServer-41", // SqlServer-41 is the new provider for .NET Core
["quartz.dataSource.default.connectionString"] = @"Server=(localdb)\MSSQLLocalDB;Database=Quartz;Integrated Security=true"
};
var schedulerFactory = new StdSchedulerFactory(properties);
_scheduler = schedulerFactory.GetScheduler().Result;
_scheduler.Start().Wait();
var userEmailsJob = JobBuilder.Create<SendUserEmailsJob>()
.WithIdentity("SendUserEmails")
.Build();
var userEmailsTrigger = TriggerBuilder.Create()
.WithIdentity("UserEmailsCron")
.StartNow()
.WithCronSchedule("0 0 17 ? * MON,TUE,WED")
.Build();
_scheduler.ScheduleJob(userEmailsJob, userEmailsTrigger).Wait();
var adminEmailsJob = JobBuilder.Create<SendAdminEmailsJob>()
.WithIdentity("SendAdminEmails")
.Build();
var adminEmailsTrigger = TriggerBuilder.Create()
.WithIdentity("AdminEmailsCron")
.StartNow()
.WithCronSchedule("0 0 9 ? * THU,FRI")
.Build();
_scheduler.ScheduleJob(adminEmailsJob, adminEmailsTrigger).Wait();
}
// initiates shutdown of the scheduler, and waits until jobs exit gracefully (within allotted timeout)
public void Stop()
{
if (_scheduler == null)
{
return;
}
// give running jobs 30 sec (for example) to stop gracefully
if (_scheduler.Shutdown(waitForJobsToComplete: true).Wait(30000))
{
_scheduler = null;
}
else
{
// jobs didn't exit in timely fashion - log a warning...
}
}
}
}
Note 1. In the above example, SendUserEmailsJob and SendAdminEmailsJob are classes that implement IJob. The IJob interface is slightly different from IMyEmailService, because it returns Task and not Task<bool>. Both job classes should get IMyEmailService as a dependency (probably constructor injection).
Note 2. For a long-running job to be able to exit in timely fashion, in the IJob.Execute method, it should observe the status of IJobExecutionContext.CancellationToken. This may require change in IMyEmailService interface, to make its methods receive CancellationToken parameter:
public interface IMyEmailService
{
    Task<bool> SendAdminEmails(CancellationToken cancellation);
    Task<bool> SendUserEmails(CancellationToken cancellation);
}
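Put together, a sketch of what SendUserEmailsJob might look like with constructor injection and the cancellation token forwarded (assuming a custom job factory supplies the dependency, as mentioned in Note 1):
public class SendUserEmailsJob : IJob
{
    private readonly IMyEmailService _emailService;

    public SendUserEmailsJob(IMyEmailService emailService)
    {
        _emailService = emailService;
    }

    public Task Execute(IJobExecutionContext context)
    {
        // forward Quartz's token so the job can stop promptly when the scheduler shuts down
        return _emailService.SendUserEmails(context.CancellationToken);
    }
}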
When and where to start and stop the scheduler
In ASP.NET Core, application bootstrap code resides in class Program, much like in console app. The Main method is called to create web host, run it, and wait until it exits:
public class Program
{
public static void Main(string[] args)
{
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.UseApplicationInsights()
.Build();
host.Run();
}
}
The simplest thing to do is to put a call to QuartzStartup.Start right in the Main method, much as I did in the TL;DR. But since we have to properly handle process shutdown as well, I prefer to hook both the startup and shutdown code in a more consistent manner.
This line:
.UseStartup<Startup>()
refers to a class named Startup, which is scaffolded when creating new ASP.NET Core Web Application project in Visual Studio. The Startup class looks like this:
public class Startup
{
public Startup(IHostingEnvironment env)
{
// scaffolded code...
}
public IConfigurationRoot Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
// scaffolded code...
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
// scaffolded code...
}
}
It is clear that a call to QuartzStartup.Start should be inserted in one of the methods in the Startup class. The question is where QuartzStartup.Stop should be hooked.
In the legacy .NET Framework, ASP.NET provided IRegisteredObject interface. According to this post, and the documentation, in ASP.NET Core it was replaced with IApplicationLifetime. Bingo. An instance of IApplicationLifetime can be injected into Startup.Configure method through a parameter.
For consistency, I will hook both QuartzStartup.Start and QuartzStartup.Stop to IApplicationLifetime:
public class Startup
{
    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(
        IApplicationBuilder app,
        IHostingEnvironment env,
        ILoggerFactory loggerFactory,
        IApplicationLifetime lifetime) // added this parameter
    {
        // the following 3 lines hook QuartzStartup into the web host lifecycle
        var quartz = new QuartzStartup();
        lifetime.ApplicationStarted.Register(quartz.Start);
        lifetime.ApplicationStopping.Register(quartz.Stop);

        // .... original scaffolded code here ....
    }

    // ....the rest of the scaffolded members ....
}
Note that I have extended the signature of the Configure method with an additional IApplicationLifetime parameter. According to documentation, ApplicationStopping will block until registered callbacks are completed.
Graceful shutdown on IIS Express, and ASP.NET Core module
I was able to observe the expected behavior of the IApplicationLifetime.ApplicationStopping hook only on IIS with the latest ASP.NET Core module installed. Both IIS Express (installed with Visual Studio 2017 Community RTM) and IIS with an outdated version of the ASP.NET Core module didn't consistently invoke IApplicationLifetime.ApplicationStopping. I believe it is because of this bug, which has since been fixed.
You can install latest version of ASP.NET Core module from here. Follow the instructions in the "Installing the latest ASP.NET Core Module" section.
Quartz vs. FluentScheduler
I also took a look at FluentScheduler, as it was proposed as an alternative library by @Brice Molesti. From my first impression, FluentScheduler is a quite simplistic and immature solution compared to Quartz. For example, FluentScheduler doesn't provide such fundamental features as job status persistence and clustered execution.
In addition to @felix-b's answer: adding DI to jobs. Also, QuartzStartup.Start can be made async.
Based on this answer: https://stackoverflow.com/a/42158004/1235390
public class QuartzStartup
{
    public QuartzStartup(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public async Task Start()
    {
        // other code is same
        _scheduler = await schedulerFactory.GetScheduler();
        _scheduler.JobFactory = new JobFactory(_serviceProvider);
        await _scheduler.Start();

        var sampleJob = JobBuilder.Create<SampleJob>().Build();
        var sampleTrigger = TriggerBuilder.Create().StartNow().WithCronSchedule("0 0/1 * * * ?").Build();
        await _scheduler.ScheduleJob(sampleJob, sampleTrigger);
    }
}
JobFactory class
public class JobFactory : IJobFactory
{
    private IServiceProvider _serviceProvider;

    public JobFactory(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        return _serviceProvider.GetService(bundle.JobDetail.JobType) as IJob;
    }

    public void ReturnJob(IJob job)
    {
        (job as IDisposable)?.Dispose();
    }
}
Startup class:
public void ConfigureServices(IServiceCollection services)
{
    // other code is removed for brevity
    // need to register all JOBS by their class name
    services.AddTransient<SampleJob>();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, IApplicationLifetime applicationLifetime)
{
    var quartz = new QuartzStartup(_services.BuildServiceProvider());
    applicationLifetime.ApplicationStarted.Register(() => quartz.Start());
    applicationLifetime.ApplicationStopping.Register(quartz.Stop);
    // other code removed for brevity
}
SampleJob class with constructor dependency injection:
public class SampleJob : IJob
{
    private readonly ILogger<SampleJob> _logger;

    public SampleJob(ILogger<SampleJob> logger)
    {
        _logger = logger;
    }

    public async Task Execute(IJobExecutionContext context)
    {
        _logger.LogDebug("Execute called");
    }
}
I don't know how to do it with Quartz, but I have tried the same scenario with another library which works very well. Here is how I did it.
Install FluentScheduler:
Install-Package FluentScheduler
Use it like this:
var emailService = new MyEmailService();
var registry = new Registry();
JobManager.Initialize(registry);

JobManager.AddJob(() => emailService.SendUserEmails(), s => s
    .ToRunEvery(1)
    .Weeks()
    .On(DayOfWeek.Monday)
    .At(17, 00));

JobManager.AddJob(() => emailService.SendUserEmails(), s => s
    .ToRunEvery(1)
    .Weeks()
    .On(DayOfWeek.Tuesday)
    .At(17, 00));

JobManager.AddJob(() => emailService.SendUserEmails(), s => s
    .ToRunEvery(1)
    .Weeks()
    .On(DayOfWeek.Wednesday)
    .At(17, 00));

JobManager.AddJob(() => emailService.SendAdminEmails(), s => s
    .ToRunEvery(1)
    .Weeks()
    .On(DayOfWeek.Thursday)
    .At(09, 00));

JobManager.AddJob(() => emailService.SendAdminEmails(), s => s
    .ToRunEvery(1)
    .Weeks()
    .On(DayOfWeek.Friday)
    .At(09, 00));
Documentation can be found here FluentScheduler on GitHub
What code do I need to schedule these methods using Quartz in ASP.NET Core? I also need to know how to start Quartz in ASP.NET Core as all code samples on the internet still refer to previous versions of ASP.NET.
There is now good Quartz DI integration available to initialize and use it:
[DisallowConcurrentExecution]
public class Job1 : IJob
{
private readonly ILogger<Job1> _logger;
public Job1(ILogger<Job1> logger)
{
_logger = logger;
}
public async Task Execute(IJobExecutionContext context)
{
_logger.LogInformation("Start job1");
await Task.Delay(2, context.CancellationToken);
_logger?.LogInformation("End job1");
}
}
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
services.AddQuartz(cfg =>
{
cfg.UseMicrosoftDependencyInjectionJobFactory(opt =>
{
opt.AllowDefaultConstructor = false;
});
cfg.AddJob<Job1>(jobCfg =>
{
jobCfg.WithIdentity("job1");
});
cfg.AddTrigger(trigger =>
{
trigger
.ForJob("job1")
.WithIdentity("trigger1")
.WithSimpleSchedule(x => x
.WithIntervalInSeconds(10)
.RepeatForever());
});
});
services.AddQuartzHostedService(opt =>
{
opt.WaitForJobsToComplete = true;
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
// standard impl
}
}
The accepted answer covers the topic very well, but some things have changed with the latest Quartz version. The following, based on this article, shows a quick start with Quartz 3.0.x and ASP.NET Core 2.2:
Util class
public class QuartzServicesUtilities
{
public static void StartJob<TJob>(IScheduler scheduler, TimeSpan runInterval)
where TJob : IJob
{
var jobName = typeof(TJob).FullName;
var job = JobBuilder.Create<TJob>()
.WithIdentity(jobName)
.Build();
var trigger = TriggerBuilder.Create()
.WithIdentity($"{jobName}.trigger")
.StartNow()
.WithSimpleSchedule(scheduleBuilder =>
scheduleBuilder
.WithInterval(runInterval)
.RepeatForever())
.Build();
scheduler.ScheduleJob(job, trigger);
}
}
Job factory
public class QuartzJobFactory : IJobFactory
{
private readonly IServiceProvider _serviceProvider;
public QuartzJobFactory(IServiceProvider serviceProvider)
{
_serviceProvider = serviceProvider;
}
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
var jobDetail = bundle.JobDetail;
var job = (IJob)_serviceProvider.GetService(jobDetail.JobType);
return job;
}
public void ReturnJob(IJob job) { }
}
A job sample that also deals with exiting on application pool recycle / exit
[DisallowConcurrentExecution]
public class TestJob : IJob
{
private ILoggingService Logger { get; }
private IApplicationLifetime ApplicationLifetime { get; }
private static object lockHandle = new object();
private static bool shouldExit = false;
public TestJob(ILoggingService loggingService, IApplicationLifetime applicationLifetime)
{
Logger = loggingService;
ApplicationLifetime = applicationLifetime;
}
public Task Execute(IJobExecutionContext context)
{
return Task.Run(() =>
{
ApplicationLifetime.ApplicationStopping.Register(() =>
{
lock (lockHandle)
{
shouldExit = true;
}
});
try
{
for (int i = 0; i < 10; i ++)
{
lock (lockHandle)
{
if (shouldExit)
{
Logger.LogDebug($"TestJob detected that application is shutting down - exiting");
break;
}
}
Logger.LogDebug($"TestJob ran step {i+1}");
Thread.Sleep(3000);
}
}
catch (Exception exc)
{
Logger.LogError(exc, "An error occurred during execution of scheduled job");
}
});
}
}
Startup.cs configuration
private void ConfigureQuartz(IServiceCollection services, params Type[] jobs)
{
services.AddSingleton<IJobFactory, QuartzJobFactory>();
services.Add(jobs.Select(jobType => new ServiceDescriptor(jobType, jobType, ServiceLifetime.Singleton)));
services.AddSingleton(provider =>
{
var schedulerFactory = new StdSchedulerFactory();
var scheduler = schedulerFactory.GetScheduler().Result;
scheduler.JobFactory = provider.GetService<IJobFactory>();
scheduler.Start();
return scheduler;
});
}
protected void ConfigureJobsIoc(IServiceCollection services)
{
ConfigureQuartz(services, typeof(TestJob), /* other jobs come here */);
}
public void ConfigureServices(IServiceCollection services)
{
ConfigureJobsIoc(services);
// other stuff comes here
AddDbContext(services);
AddCors(services);
services
.AddMvc()
.SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
}
protected void StartJobs(IApplicationBuilder app, IApplicationLifetime lifetime)
{
var scheduler = app.ApplicationServices.GetService<IScheduler>();
//TODO: use some config
QuartzServicesUtilities.StartJob<TestJob>(scheduler, TimeSpan.FromSeconds(60));
lifetime.ApplicationStarted.Register(() => scheduler.Start());
lifetime.ApplicationStopping.Register(() => scheduler.Shutdown());
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory,
ILoggingService logger, IApplicationLifetime lifetime)
{
StartJobs(app, lifetime);
// other stuff here
}