How do I execute a new job on success or failure of a Hangfire job?

I'm working on a Web API RESTful service that on a request needs to perform a task. We're using Hangfire to execute that task as a job, and on failure, will attempt to retry the job up to 10 times.
If the job eventually succeeds I want to run an additional job (to send an event to another service). If the job fails even after all of the retry attempts, I want to run a different additional job (to send a failure event to another service).
However, I can't figure out how to do this. I've created the following JobFilterAttribute:
public class HandleEventsAttribute : JobFilterAttribute, IElectStateFilter
{
    public IBackgroundJobClient BackgroundJobClient { get; set; }

    public void OnStateElection(ElectStateContext context)
    {
        var failedState = context.CandidateState as FailedState;
        if (failedState != null)
        {
            BackgroundJobClient.Enqueue<MyJobClass>(x => x.RunJob());
        }
    }
}
The one problem I'm having is injecting the IBackgroundJobClient into this attribute. I can't pass it as a property to the attribute (I get a "Cannot access non-static field 'backgroundJobClient' in static context" error). We're using Autofac for dependency injection, and I tried figuring out how to use property injection, but I'm at a loss. All of this leads me to believe I may be on the wrong track.
I'd think it would be a fairly common pattern to run some additional cleanup code if a Hangfire job fails. How do most people do this?
Thanks for the help. Let me know if there's any additional details I can provide.

Hangfire can build execution chains. If you want to schedule the next job after the first one succeeds, use ContinueWith(string parentId, Expression<Action> methodCall, JobContinuationOptions options) with JobContinuationOptions.OnlyOnSucceededState so it runs only after success.
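For example (a minimal sketch: MyJobClass is the job from the question above, while NotifyService and SendSuccessEvent are hypothetical names; depending on your Hangfire version the method is called ContinueWith or ContinueJobWith):
// Enqueue the main job first.
var parentId = BackgroundJob.Enqueue<MyJobClass>(x => x.RunJob());

// The continuation is only run once the parent job reaches the Succeeded
// state, i.e. after any automatic retries eventually succeed.
BackgroundJob.ContinueWith<NotifyService>(
    parentId,
    x => x.SendSuccessEvent(parentId),
    JobContinuationOptions.OnlyOnSucceededState);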
But you can also create a Hangfire extension, something like a JobExecutor, and run tasks inside it to get more possibilities.
Something like that:
public static JobResult<T> Enqueue<T>(Expression<Action> a, string name)
{
    // GetExpressionInfo, JobExecutor and JobRepository are helpers from the
    // linked blog post, not part of Hangfire itself.
    var exprInfo = GetExpressionInfo(a);
    Guid jGuid = Guid.NewGuid();
    var jobId = BackgroundJob.Enqueue(() => JobExecutor.Execute(
        jGuid,
        exprInfo.Method.DeclaringType.AssemblyQualifiedName,
        exprInfo.Method.Name,
        exprInfo.Parameters,
        exprInfo.ParameterTypes));

    JobResult<T> result = new JobResult<T>(jobId, name, jGuid, 0, default(T));
    JobRepository.WriteJobState(result);
    return result;
}
You can find more detailed information here: https://indexoutofrange.com/Don%27t-do-it-now!-Part-5.-Hangfire-job-continuation,-ContinueWith/

I haven't been able to verify this will work, but BackgroundJobClient has no static methods, so you would need a reference to an instance of it.
When I enqueue tasks, I use the static Hangfire.BackgroundJob.Enqueue which should work without a reference to the JobClient instance.
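For example, the filter from the question could call the static API directly (a minimal, untested sketch along those lines):
public class HandleEventsAttribute : JobFilterAttribute, IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        if (context.CandidateState is FailedState)
        {
            // Static API: no IBackgroundJobClient instance has to be injected.
            BackgroundJob.Enqueue<MyJobClass>(x => x.RunJob());
        }
    }
}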
Steve

Related

Quartz scheduler for long running tasks skips jobs

This is my job. It takes about 3 to 5 minutes to complete each time:
[DisallowConcurrentExecution]
[PersistJobDataAfterExecution]
public class UploadNumberData : IJob
{
    private readonly IServiceProvider serviceProvider;

    public UploadNumberData(IServiceProvider serviceProvider)
    {
        this.serviceProvider = serviceProvider;
    }

    public async Task Execute(IJobExecutionContext context)
    {
        var jobDataMap = context.MergedJobDataMap;
        string flattenedInput = jobDataMap.GetString("FlattenedInput");
        string applicationName = jobDataMap.GetString("ApplicationName");
        var parsedFlattenedInput = JsonSerializer.Deserialize<List<NumberDataUploadViewModel>>(flattenedInput);
        var parsedApplicationName = JsonSerializer.Deserialize<string>(applicationName);

        using (var scope = serviceProvider.CreateScope())
        {
            //Run Process
        }
    }
}
This is the function that calls the job:
try
{
    var flattenedInput = JsonSerializer.Serialize(Input.NumData);
    var triggerKey = Guid.NewGuid().ToString();

    IJobDetail job = JobBuilder.Create<UploadNumberData>()
        .UsingJobData("FlattenedInput", flattenedInput)
        .UsingJobData("ApplicationName", flattenedApplicationName)
        .StoreDurably()
        .WithIdentity("BatchNumberDataJob", $"GP_BatchNumberDataJob")
        .Build();

    await scheduler.AddJob(job, true);

    ITrigger trigger = TriggerBuilder.Create()
        .ForJob(job)
        .WithIdentity(triggerKey, $"GP_BatchNumberDataJob")
        .WithSimpleSchedule(x => x.WithMisfireHandlingInstructionFireNow())
        .StartNow()
        .Build();

    await scheduler.ScheduleJob(trigger);
}
catch (Exception e)
{
    //log
}
Each job consists of 300 rows of data with the total count being about 14000 rows divided into 47 jobs.
This is the configuration:
NameValueCollection quartzProperties = new NameValueCollection
{
    {"quartz.serializer.type", "json"},
    {"quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz"},
    {"quartz.jobStore.dataSource", "default"},
    {"quartz.dataSource.default.provider", "MySql"},
    {"quartz.dataSource.default.connectionString", "connectionstring"},
    {"quartz.jobStore.driverDelegateType", "Quartz.Impl.AdoJobStore.MySQLDelegate, Quartz"},
    {"quartz.jobStore.misfireThreshold", "3600000"}
};
The problem now is that when I hit the function/API, only the first and last jobs get inserted into the database. Strangely, the last job also repeats itself multiple times.
I tried changing the Job Identity name to something different but I then get foreign key errors as my data is being inserted into the database.
Example sequence should be:
300,300,300,...,102
However, the sequence ends up being:
300,102,102,102
EDIT:
When I set the threads to 1 and changed the Job Identity to be dynamic, it works. However, does this defeat the purpose of DisallowConcurrentExecution?
I reproduced your problem and found a way you can rewrite your code to get the expected behaviour, as I understand it.
Make job identity unique
First of all, I see you are using the same identity for every job you execute; the duplication happens because you use the same identity together with the 'replace' flag set to 'true' in the AddJob call.
You are on the right track with dynamic identity generation for each job; it could be a new GUID or an incremental int counter for each identity. Something like this:
// 'i' variable is a job counter (0, 1, 2 ...)
.WithIdentity($"BatchNumberDataJob-{i}", $"GP_BatchNumberDataJob")
// or
.WithIdentity(Guid.NewGuid().ToString(), $"GP_BatchNumberDataJob")

// Also, you may want to set the 'replace' flag to 'false' to get an
// 'already exists' error if a collision occurs, so you can handle such cases.
await scheduler.AddJob(job, false);
After that you can remove the [DisallowConcurrentExecution] attribute from the job: it is keyed on the job identity, so it no longer does anything once every job has a unique identity.
Concurrency
Basically, you have a few options for how to execute your jobs; it really depends on what you are trying to achieve.
Parallel execution
The fastest way to execute your code. Each job is completely separate from the others.
To do so you should prepare your database for this case (because, as you said, you get foreign key errors when you try to achieve that behaviour).
It is hard to say exactly what you should change in the database to support this behaviour, because you have told us nothing about your schema.
If your jobs need to run in a specific order, this method is not for you.
Ordered execution
The other way is ordered execution. If (for some reason) you are not able to prepare your database to handle parallel job execution, you can use this method. It is much slower than parallel execution, but the order in which the jobs execute is deterministic.
You can achieve this behaviour in two ways:
use job chaining (see this question; a minimal sketch follows after this list).
set up max concurrency for the scheduler:
var quartzProperties = new NameValueCollection
{
    {"quartz.threadPool.maxConcurrency", "1"},
};
Jobs will then be executed one at a time, in exactly the order you trigger them, completely without parallelism.
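For the first option, job chaining, here is a sketch using Quartz.NET's built-in JobChainingJobListener (assuming the dynamic job keys suggested above; check the exact API of your Quartz version):
var chain = new JobChainingJobListener("GP_BatchNumberDataChain");

// Fire job 1 automatically after job 0 completes, and so on.
chain.AddJobChainLink(
    new JobKey("BatchNumberDataJob-0", "GP_BatchNumberDataJob"),
    new JobKey("BatchNumberDataJob-1", "GP_BatchNumberDataJob"));

scheduler.ListenerManager.AddJobListener(chain, GroupMatcher<JobKey>.AnyGroup());

// Only trigger the first job; the listener starts each next link
// in the chain when the previous one finishes.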
Summary
It really depends on what you are trying to achieve. If your priority is speed, you should rework your database and your job to support completely separate job execution, no matter in which order the jobs run. If your priority is ordering, you should use the non-parallel approach to job execution. It is up to you.

I get "A second operation was started on this context before a previous operation completed" error just in one request

In my netcoreapp3.1 project I get the error from just one request, which I recently added to the project: "A second operation was started on this context before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext."
I couldn't find a solution, because I already write await before every async call and my DbContext is transient.
My Db Context:
services.AddDbContext<MyContext>(options =>
    {
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"));
    },
    ServiceLifetime.Transient
);
And this is where I got the error:
public class ProductDataRepository : UpdatableRepository<PRODUCT_DATA>, IProductDataDAL
{
    private readonly MyContext _context;

    public ProductDataRepository(MyContext context) : base(context)
    {
        _context = context;
    }

    public async Task<PRODUCT_DATA> GetProductById(string productId)
    {
        return await _context.PRODUCT_DATA.AsNoTracking().FirstOrDefaultAsync(pd => pd.PRODUCTID == productId);
    }

    public async Task<bool> IsProductMeltable(string productId)
    {
        // here is where I got the error
        return await _context.MELTABLE_PRODUCTS.AsNoTracking().AnyAsync(x => x.PRODUCTID.Equals(productId));
    }
}
And my DI:
services.AddScoped<IProductDataDAL, ProductDataRepository>();
services.AddScoped<IProductDataService, ProductDataManager>();
In manager:
public async Task<bool> IsProductMeltable(string productId)
{
    return await _productDataDAL.IsProductMeltable(productId);
}
In controller:
myModel.IS_MELTABLE = await _commonService.ProductDataService.IsProductMeltable(productData.PRODUCTID);
I also changed my methods from async to sync but still got the same error.
Thanks for your help in advance
Without seeing all the places that these methods might be called, it is difficult to find the source.
But, two things that may help:
The error reported does indicate that the same context instance is being used by multiple operations at the same time.
Transient means that the DI container will provide a brand new instance each time one is requested.
Regarding that second point, be aware that you are injecting the context into 'scoped' services.
So, while your context is 'transient', that does not mean a brand new context is used each time it is called; it means a new one is created each time one is requested from the container.
As your other services are 'scoped', they only request a context once per request scope. So, even though your context is registered as transient, the SAME context instance is used throughout the lifetime of any scoped service that requested it.
I notice that you're calling your repository from different layers: once from the controller and once from a manager service. This is likely to cause challenges.
Try to keep each layer having different responsibilities.
Best to use the controller as a very thin layer to simply receive HttpRequests and immediately pass responsibilty over to a service layer to do business logic and interact with repositories.
Cleaning that up a little may help you identify the problem.
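Separately, if you need a guaranteed fresh context per operation, one option is a DbContext factory. This is only a sketch, assuming you can move to EF Core 5+ (which still supports .NET Core 3.1); _contextFactory here is an injected IDbContextFactory<MyContext>:
// Startup: register a factory instead of (or alongside) AddDbContext.
services.AddDbContextFactory<MyContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

// Repository: create a short-lived context per operation, so two
// in-flight calls can never share one instance.
public async Task<bool> IsProductMeltable(string productId)
{
    using var context = _contextFactory.CreateDbContext();
    return await context.MELTABLE_PRODUCTS.AsNoTracking()
        .AnyAsync(x => x.PRODUCTID.Equals(productId));
}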

Set value configuration.GetSection("").Value from header request

I need to set a value in my ASP.NET Core configuration from a header on every request.
I'm doing it like so:
public async Task Invoke(HttpContext context)
{
    var companyId = context.Request.Headers["companyid"].ToString().ToUpper();
    configuration.GetSection("CompanyId").Value = companyId;
    await next(context);
}
It works fine. But is this the proper way? If multiple requests arrive at the same time, is there a risk of the values getting mixed up? I've searched around but couldn't find an answer.
I'm using .Net 3.1.
As far as I know, the appsettings.json value is a global setting; you shouldn't modify global state per request, as that is not thread safe. At some point you will face a race condition.
If you still want to use this code, I suggest you add a lock. Notice: this will make your Invoke method very slow.
For details, refer to the code below:
private static readonly object _factLock = new object();

lock (_factLock)
{
    configuration.GetSection("CompanyId").Value = companyId;
}

How best to handle data fetching needed for FluentValidation

In the app I'm working on, I'm using Mediatr and its pipelines to handle database interaction, some minor business logic, validation, etc.
There's a few checks for things like access control I can handle in the pipeline, since I'm using a context object as described here https://jimmybogard.com/sharing-context-in-mediatr-pipelines/ to go from ASP.Net identity to a custom context object with user information and claims.
One problem I'm having is that since this application is multi-tenant, I need to ensure that even if an object exists, it belongs to that tenant, and the only way to be sure of that is to grab the object from the database and check it. It seems to me the validation shouldn't have side effects, so I don't want to rely on that to populate the context object. But then that pushes a bunch of validation down into the Mediatr handlers as they check for object existence, and so on, leading to a lot of repeated code. I don't really want to query the database multiple times since some queries can be expensive.
Another issue with doing the more complicated validation in the actual request handlers is getting what are essentially validation errors back out. Currently, if one of these checks fail I throw a ValidationException, which is then caught by middleware and turned into a ProblemDetails that's returned to the API caller. This is basically exceptions as flow control, and a validation failure really isn't "exceptional" anyhow.
The thoughts I'm having on how to solve this are:
Somewhere in the pipeline, when I'm building the context, include attempting to fetch the objects needed from the database. Validation then fails if any of these are null. This seems like it would make testing harder, as well as needing to decorate the requests somehow (or use reflection) so the pipeline can know to attempt to load these objects.
Have the queries in the validator, but use some sort of cache aware repository so when the same object is queried later, it's served from the cache, and not the database. The handlers would also use this cache aware repository (Currently the handlers interact directly with the EF Core DbContext to query). This then adds the issue of cache invalidation, which I'm going to have to handle at some point, anyhow (quite a few items are seldom modified). For testing, a dummy cache object can be injected that doesn't actually cache anything.
Make all the responses from requests implement an interface (or extend an abstract class) that has validation info, general success flags, etc. This can either be returned through the API directly, or have some pipeline that transforms failures into ProblemDetails. This would add some boilerplate to every response and handler, but avoids exceptions as flow control, and the caching/reflection issues in the other options.
Assume for 1 and 2 that any sort of race conditions are not an issue. Objects don't change owners, and things are seldom actually deleted from the database for auditing/accounting purposes.
I know there's no true one size fits all for problems like this, but I would like to know if there's additional options I'm missing, or any long term maintainability issues anyone with a similar pipeline has encountered if they went with one of these listed options.
We use MediatR IRequestPreProcessor for fetching data that we need both in RequestHandler and in FluentValidation validators.
RequestPreProcessor:
public interface IProductByIdBinder
{
    int ProductId { get; }
    ProductEntity Product { set; }
}

public class ProductByIdBinder<T> : IRequestPreProcessor<T> where T : IProductByIdBinder
{
    private readonly IRepositoryReadAsync<ProductEntity> productRepository;

    public ProductByIdBinder(IRepositoryReadAsync<ProductEntity> productRepository)
    {
        this.productRepository = productRepository;
    }

    public async Task Process(T request, CancellationToken cancellationToken)
    {
        request.Product = await productRepository.GetAsync(request.ProductId);
    }
}
RequestHandler:
public class ProductDeleteCommand : IRequest, IProductByIdBinder
{
    public ProductDeleteCommand(int id)
    {
        ProductId = id;
    }

    public int ProductId { get; }
    public ProductEntity Product { get; set; }

    private class ProductDeleteCommandHandler : IRequestHandler<ProductDeleteCommand>
    {
        private readonly IRepositoryAsync<ProductEntity> productRepository;

        public ProductDeleteCommandHandler(
            IRepositoryAsync<ProductEntity> productRepository)
        {
            this.productRepository = productRepository;
        }

        public Task<Unit> Handle(ProductDeleteCommand request, CancellationToken cancellationToken)
        {
            productRepository.Delete(request.Product);
            return Unit.Task;
        }
    }
}
FluentValidation validator:
public class ProductDeleteCommandValidator : AbstractValidator<ProductDeleteCommand>
{
    public ProductDeleteCommandValidator()
    {
        RuleFor(cmd => cmd)
            .Must(cmd => cmd.Product != null)
            .WithMessage(cmd => $"The product with id {cmd.ProductId} doesn't exist.");
    }
}
I see nothing wrong with handling business logic validation in the handler layer.
Moreover, I do not think it is right to throw exceptions for them; as you said, that is exceptions as flow control.
Introducing a cache seems like overkill for this use case too. The most reasonable option is the third one, IMHO.
Instead of implementing an interface you can use the nifty OneOf library and have something like
using HandlerResponse = OneOf<Success, NotFound, ValidationResponse>;

public class MediatorHandler : IRequestHandler<Command, HandlerResponse>
{
    public async Task<HandlerResponse> Handle(
        Command command,
        CancellationToken cancellationToken)
    {
        Resource resource = await _userRepository
            .GetResource(command.Id);

        if (resource is null)
            return new NotFound();

        if (!resource.IsValid)
            return new ValidationResponse(new ProblemDetails());

        return new Success();
    }
}
And then map it in your API Layer like
public async Task<IActionResult> PostAsync([FromBody] DummyRequest request)
{
    HandlerResponse response = await _mediator.Send(
        new Command(request.Id));

    return response.Match<IActionResult>(
        success => Created(),
        notFound => NotFound(),
        failed => new UnprocessableEntityResult(failed.ProblemDetails));
}
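A nice property of this approach is that Match is exhaustive: if you later add a fourth case to HandlerResponse, every call site fails to compile until the new case is mapped.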

NHibernate not persisting changes to my object

My ASP.NET MVC 4 project is using NHibernate (behind repositories) and Castle Windsor, using the AutoTx and NHibernate facilities. I've followed the guide written by haf, and I can create and read objects.
My PersistenceInstaller looks like this
public class PersistenceInstaller : IWindsorInstaller
{
    public void Install(Castle.Windsor.IWindsorContainer container, Castle.MicroKernel.SubSystems.Configuration.IConfigurationStore store)
    {
        container.AddFacility<AutoTxFacility>();
        container.Register(Component.For<INHibernateInstaller>().ImplementedBy<NHibernateInstaller>().LifeStyle.Singleton);
        container.AddFacility<NHibernateFacility>(
            f => f.DefaultLifeStyle = DefaultSessionLifeStyleOption.SessionPerWebRequest);
    }
}
The NHibernateInstaller is straight from the NHib Facility Quickstart.
I am using ISessionManager in my base repository...
protected ISession Session
{
    get
    {
        return _sessionManager.OpenSession();
    }
}

public virtual T Commit(T entity)
{
    Session.SaveOrUpdate(entity);
    return entity;
}
Finally, my application code which is causing the problem:
[HttpPost]
[ValidateAntiForgeryToken]
[Transaction]
public ActionResult Maintain(PrescriberMaintainViewModel viewModel)
{
    if (ModelState.IsValid)
    {
        var prescriber = UserRepository.GetPrescriber(User.Identity.Name);
        //var prescriber = new Prescriber { DateJoined = DateTime.Today, Username = "Test" };
        prescriber.SecurityQuestion = viewModel.SecurityQuestion;
        prescriber.SecurityAnswer = viewModel.SecurityAnswer;
        prescriber.EmailAddress = viewModel.Email;
        prescriber.FirstName = viewModel.FirstName;
        prescriber.LastName = viewModel.LastName;
        prescriber.Address = new Address
        {
            Address1 = viewModel.AddressLine1,
            Address2 = viewModel.AddressLine2,
            Address3 = viewModel.AddressLine3,
            Suburb = viewModel.Suburb,
            State = viewModel.State,
            Postcode = viewModel.Postcode,
            Country = string.Empty
        };
        prescriber.MobileNumber = viewModel.MobileNumber;
        prescriber.PhoneNumber = viewModel.PhoneNumber;
        prescriber.DateOfBirth = viewModel.DateOfBirth;
        prescriber.AHPRANumber = viewModel.AhpraNumber;
        prescriber.ClinicName = viewModel.ClinicName;
        prescriber.ClinicWebUrl = viewModel.ClinicWebUrl;
        prescriber.Qualifications = viewModel.Qualifications;
        prescriber.JobTitle = viewModel.JobTitle;
        UserRepository.Commit(prescriber);
    }
    return View(viewModel);
}
The above code will save a new prescriber (tested by uncommenting the commented-out line, etc.).
I am using NHProf and have confirmed that no SQL is sent to the database for the update. I can see the read being performed, but that's it.
It seems to me that NHibernate doesn't recognise the entity as changed and therefore doesn't generate the SQL. Or possibly the transaction isn't being committed?
I've been scouring the webs for a few hours now trying to work this one out and as a last act of desperation have posted on SO. Any ideas? :)
Oh, and in NHProf I see three sessions (one for the GetPrescriber call from the repo, one I assume for the update (with no SQL), and one for some action in my action filter on the base class). I also get an alert about the use of implicit transactions. This confuses me, because I thought I was doing everything needed to get a transaction: using AutoTx and the Transaction attribute. I also expected there to be only one session per web request, as per my Windsor config.
UPDATE: After spending the day reading through the source of the NHibernateFacility and AutoTx facility for automatic transactions, it seems AutoTx is not setting the interceptors on my implementation of INHibernateInstaller. This means that whenever SessionManager calls OpenSession it calls the default overload with no parameter, rather than the one that accepts an interceptor. Internally, AutoTxFacility registers TransactionInterceptor with Windsor so that it can be added as the interceptor on my INHibernateInstaller concrete, by Windsor making use of AutoTx's TransactionalComponentInspector.
AutoTxFacility source on github
To me it looks like you are creating a session for every call to the repository. A session should span the whole business operation: it should be opened at the beginning and committed and disposed at the end.
There are other strange things in this code.
Commit is a completely different concept from SaveOrUpdate.
And you don't need to tell NH to store changes anyway. You don't need to call session.Save for objects that are already in the session; they are stored anyway. You only need to call session.Save when you add new objects.
Make sure that you use a transaction for the whole business operation.
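A minimal sketch of what that looks like with plain NHibernate (names are illustrative; in this codebase the session and transaction would come from the Windsor facilities rather than being opened by hand):
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // Loaded entities are tracked by the session...
    var prescriber = session.Get<Prescriber>(id);
    prescriber.EmailAddress = newEmail;

    // ...so no explicit Save/Update call is needed: committing the
    // transaction flushes the change as an UPDATE statement.
    tx.Commit();
}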
There is one most likely "unintended" part in the code snippet above, proven by the observation made with NHProf:
Oh and in NHProf I see three Sessions (1 for the GetPrescriber call
from the repo, one I assume for the update (with no sql) - and one for
some action in my actionfilter on the base class).
Calling OpenSession() triggers the creation of a new session instance every time.
protected ISession Session
{
    get { return _sessionManager.OpenSession(); }
}
So, whenever the code accesses the Session property, a new session instance is created behind the scenes, again and again: one session for the get, one for the update, one for the filter...
As we can see here, the session returned by SessionManager.OpenSession() must be used for the whole scope (unit of work, web request...):
http://docs.castleproject.org/Windsor.NHibernate-Facility.ashx
The approach we need is to create one session (when it is first accessed) and reuse it until the end of the scope (then later correctly close it and commit or roll back the transaction...). Anyhow, the first thing to do right now is to change the Session property this way:
ISession _session;

protected ISession Session
{
    get
    {
        if (_session == null)
        {
            _session = _sessionManager.OpenSession();
        }
        return _session;
    }
}
After spending a full day yesterday searching through the AutoTx and NHibernate facilities on GitHub and getting nowhere, I started a clean project in an attempt to replicate the problem. Unfortunately for the replication, everything worked! I ran Update-Package on my source, which brought down a new version of Castle.Transactions, and then everything ran correctly. I did make one small adjustment to my own code: removing the UserRepository.Commit line.
I did not need to modify how I opened sessions; that was taken care of by the SessionManager instance. With the update to Castle.Transactions, the Transaction attribute is recognised and a transaction is created (as evidenced by no more alerts in NHProf).