Serilog 2.3 and Hangfire

I have a logging issue with Serilog 2.3 inside Hangfire 1.6.8.
I have an enqueued Hangfire job that uses Serilog to log, and after some random number of jobs it stops logging. If I re-queue those jobs, logging will stop on a different job that previously logged fine. When it does fail, it is at random points.
I have a scheduled job using Serilog that logs just fine.
There are no errors in the Hangfire log, which uses NLog.
The jobs continue to run and the results are correct.
I'm using the AppSettings config sink:
Log.Logger = new LoggerConfiguration()
    .ReadFrom.AppSettings(settingPrefix: "MyJob")
    .CreateLogger();
I have no idea what to do or where to look.
I think Hangfire creates the object and, every time a job is called, it invokes a method on my object. Is there some odd async issue I need to handle with Serilog?
Please help!
I have created a new job that only logs and it has the exact same behavior.
public class Logging
{
    public Logging()
    {
        // Configure logging sinks
        Log.Logger = new LoggerConfiguration()
            .ReadFrom.AppSettings(settingPrefix: "LoggingTest")
            .CreateLogger().ForContext<Logging>();
    }

    public void LogSomething(string something)
    {
        Log.Information("Log: {0}", something);
    }
}
DOH!
I see now that it's a static issue with Serilog. Because Log.Logger is static and Hangfire reuses objects (via my IoC container), everything runs under Hangfire's app domain and all jobs share the same logger, so they end up logging to each other's files. It's not really that logging stops: the enqueued jobs log fine until the scheduled job runs a minute later and reconfigures the logger with its own file location, after which both the enqueued and the scheduled jobs write to the path defined for the scheduled job.

Moved the jobs to NLog. With the static Serilog logger, I guess only one file configuration can be in effect at a time, and with lots of different jobs we need each one logging to its own file.
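For reference, a rough sketch of one way to avoid the shared static logger in Serilog: give each job its own logger instance rather than reassigning Log.Logger, so jobs sharing Hangfire's app domain cannot overwrite each other's file configuration. The class name and setting prefix below are just placeholders.

using System;
using Serilog;
using Serilog.Core;

public class LoggingJob : IDisposable
{
    // Instance-level logger instead of the process-wide static Log.Logger,
    // so this job's file configuration can't be replaced by another job.
    private readonly Logger _log;

    public LoggingJob()
    {
        _log = new LoggerConfiguration()
            .ReadFrom.AppSettings(settingPrefix: "LoggingTest") // placeholder prefix
            .CreateLogger();
    }

    public void LogSomething(string something)
    {
        _log.Information("Log: {Something}", something);
    }

    // Dispose flushes and closes the sinks when Hangfire releases the job object.
    public void Dispose() => _log.Dispose();
}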

Related

Execute Spring Integration flows in parallel

If I have a simple IntegrationFlow like this:
@Bean
public IntegrationFlow downloadFlow() {
    return IntegrationFlows.from("rabbitQueue1")
            .handle(longRunningMessageHandler)
            .channel("rabbitQueue2")
            .get();
}
... and if rabbitQueue1 is filled with messages,
what should I do to handle multiple messages at the same time? Is that possible?
It seems that, by default, the handler executes one message at a time.
Yes, that's true: by default, endpoints are wired with a DirectChannel. That's like executing plain Java instructions one by one. So, to do some work in parallel in Java, you need an Executor to shift the call onto a separate thread.
The same is possible with Spring Integration via an ExecutorChannel. You can declare rabbitQueue1 as an ExecutorChannel bean, or use this instead of the plain name:
IntegrationFlows.from(MessageChannels.executor("rabbitQueue1", someExecutorBean))
All messages arriving on this channel will then be handled in parallel on threads provided by the executor, and longRunningMessageHandler will process your messages in parallel.
See more info in the Reference Manual: https://docs.spring.io/spring-integration/docs/current/reference/html/#channel-implementations

Hangfire loading jobs it can't execute

I've just started on a project where Hangfire is being used.
We have two different APIs that register recurring jobs in a shared Hangfire database. The problem is that neither API can execute all of these jobs itself, as the implementation of the jobs is split between the two APIs.
So if you open the dashboard's recurring jobs page, you get an error for some jobs saying that the assembly containing the implementation could not be found.
How can I make Hangfire only load the recurring jobs for which it has an implementation? I can't seem to find any information on this topic.
Regards
OK, so the answer is that you can do this using queues. However, since both APIs still have their own dashboard, you would still see the recurring jobs from the other API.
In the end I decided to make each API use its own Hangfire schema so they cannot see each other's jobs.
services.AddHangfire(x => x.UseSqlServerStorage(_autConn, new SqlServerStorageOptions() { SchemaName = "Api1-Hangfire" }));
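For completeness, a rough sketch of what the queue-based approach could look like (using the Hangfire.AspNetCore registration helpers; the queue and class names here are made up): each API's server only listens to its own queue, so it never picks up jobs whose implementation lives in the other API.

using Hangfire;
using Microsoft.Extensions.DependencyInjection;

public class Api1Job
{
    // Pin this job to the "api1" queue so only Api1's server processes it.
    [Queue("api1")]
    public void Run() { /* job work */ }
}

public static class HangfireSetup
{
    public static void Configure(IServiceCollection services, string connectionString)
    {
        services.AddHangfire(x => x.UseSqlServerStorage(connectionString));

        // This server only listens to the "api1" queue, so recurring jobs
        // registered by the other API are never loaded here.
        services.AddHangfireServer(options => options.Queues = new[] { "api1" });
    }
}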

Hangfire - new server appears every time code changed

I'm new to Hangfire. I have it working on a dev machine, but every time I change the code and run the app (ASP.NET Core 2 MVC), a new server appears in the list on the dashboard.
I can't find anything in the documentation about this - or in the sample files. I've read about cancellation tokens, but those seem to be for intentional shutdown requests, not code updates.
Is this expected behaviour? Am I expected to manually restart the application in IIS every time the code is updated (more important on the server than on the dev machine, obviously)?
Thanks.
Found a workaround at this link, which worked for me. Credit to ihockett.
TLDR
I know this is a pretty old topic at this point, but I've been running into a similar issue. I wanted to throw in my contribution for working around jobs which have been aborted due to server shutdown. If automatic retries are disabled (Attempts = 0), or a job fails due to server shutdown and is beyond the maximum number of attempts, you can run into this issue. Unfortunately for us, this was causing new jobs to not start processing until the aborted jobs were either manually deleted or re-queued.
Basically, I took the following approach to automatically handle aborted jobs: during startup and after initializing the BackgroundJobServer, I use the MonitoringApi to get all of the currently processing jobs. If there are any, I loop through each and call BackgroundJob.Requeue(jobId). Here’s the code, for reference:
var monitor = Hangfire.JobStorage.Current.GetMonitoringApi();
if (monitor.ProcessingCount() > 0)
{
    foreach (var job in monitor.ProcessingJobs(0, (int)monitor.ProcessingCount()))
    {
        BackgroundJob.Requeue(job.Key);
    }
}

TaskCanceledException causes Hangfire job to be in Processing state indefinitely

As I understand it, Hangfire does not support async methods yet. As a workaround, I wrapped my async method calls with AsyncContext.Run() from AsyncEx to make them appear synchronous from Hangfire's point of view. Exceptions seem to be bubbled up correctly as expected (unwrapped from the AggregateException).
public void Task()
{
    AsyncContext.Run(() => TaskAsync());
}

private async Task TaskAsync()
{
    //...
}
However, when TaskAsync throws a TaskCanceledException, Hangfire does not correctly mark the job as "Failed". Instead, it tries to process the job again. If TaskAsync keeps throwing TaskCanceledException, the job will be stuck in that state indefinitely instead of stopping after 10 retries as usual.
This seems to be because Hangfire treats OperationCanceledException as part of its own control flow, instead of treating it as an exception originating from the job, e.g. here, and here.
Is there any way to get around this, other than wrapping all my Hangfire jobs with a catch for TaskCanceledException?
For those who face the same problem as myself, this bug has been fixed in Hangfire 1.4.7.
As per the changeset, Hangfire now checks that the InnerException is not a TaskCanceledException.
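For anyone stuck on an older Hangfire version, a minimal sketch of the wrapping workaround the question mentions: catch the cancellation and rethrow it as a different exception type so Hangfire records a normal failure and applies its usual retry limit. The class name and message are placeholders.

using System;
using System.Threading.Tasks;
using Nito.AsyncEx;

public class MyJob
{
    public void Task()
    {
        try
        {
            AsyncContext.Run(() => TaskAsync());
        }
        catch (TaskCanceledException ex)
        {
            // Rethrow as a non-cancellation exception so Hangfire treats it as a
            // normal job failure instead of as its own control flow.
            throw new InvalidOperationException("The job's async work was cancelled.", ex);
        }
    }

    private async Task TaskAsync()
    {
        // ... actual async work ...
        await System.Threading.Tasks.Task.CompletedTask;
    }
}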

Running Jobs in Quartz.Net

I understand that you compile the Quartz solution into an exe that can run as a Windows service, and that somehow this Quartz server runs jobs. What I don't get is: where do I put all my job code that actually does the work? Let's say I want my "job" to call a stored proc, or to call a web service every hour. Or what if I have native VB code I want to execute? (And I don't want to create yet another standalone VB app that gets called from the command line.) It isn't obvious where all these kinds of jobs would get stored, coded, and configured.
I would suggest you read the tutorials first.
You can find a lot of useful information on Jay Vilalta's blog:
Getting Started With Quartz.Net: Part 1
Getting Started With Quartz.Net: Part 2
Getting Started With Quartz.Net: Part 3
and
Scheduling Jobs Programmatically in Quartz.Net 1.0
or
Scheduling Jobs Programmatically in Quartz.Net 2.0
and then you can have a look at the code on GitHub and try the examples, which are very well documented.
UPDATE:
The server is the scheduler.
It's a Windows service application which runs constantly and executes the jobs you've scheduled with a client application.
You can even write your own server (Windows service), as I did, since I didn't want to use remoting to talk to that layer.
You can also decide to schedule and run jobs in a console application (I wouldn't suggest that)
with a few lines of code:
class Program
{
    public static StdSchedulerFactory SchedulerFactory;
    public static IScheduler Scheduler;

    static void Main(string[] args)
    {
        SchedulerFactory = new StdSchedulerFactory();
        Scheduler = SchedulerFactory.GetScheduler();

        var simpleTrigger = TriggerBuilder.Create()
            .WithIdentity("Trigger1", "GenericGroup")
            .StartNow()
            .WithSimpleSchedule(x => x.RepeatForever().WithIntervalInSeconds(5))
            .Build();

        var simpleJob = JobBuilder.Create<SimpleJob>()
            .WithIdentity("simpleJob", "GenericGroup")
            .Build();

        Scheduler.ScheduleJob(simpleJob, simpleTrigger);
        Console.WriteLine("Scheduler started");
        Scheduler.Start();

        Console.WriteLine("Running jobs ...");
        Console.ReadLine();

        Scheduler.Shutdown(waitForJobsToComplete: false);
        Console.WriteLine("Shutdown scheduler!");
    }
}
This is the job:
public class SimpleJob : IJob
{
    public virtual void Execute(IJobExecutionContext context)
    {
        JobKey jobKey = context.JobDetail.Key;
        Console.WriteLine("{0} Executing job {1} ", DateTime.Now.ToString(), jobKey.Name);
    }
}
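To answer the stored-proc part of the question directly: whatever the job needs to do (call a stored procedure, hit a web service, run your VB logic via a referenced assembly) goes inside Execute. A rough sketch, assuming a SQL Server stored procedure called dbo.DoHourlyWork and a made-up connection string:

using System.Data;
using System.Data.SqlClient;
using Quartz;

public class StoredProcJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Connection string and procedure name are illustrative; read yours from config.
        using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var command = new SqlCommand("dbo.DoHourlyWork", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

Schedule it exactly like SimpleJob above, just with a trigger that repeats every hour (WithIntervalInHours(1)) instead of every 5 seconds.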
You can download a test project here (VerySimpleQuartzApp).
The application above does not persist trigger/job information.
You can decide whether to save the information about your jobs/triggers in an XML file, as explained here and here.
Or you can store your jobs/triggers in a database so that the scheduler - usually a Windows service - can read that information and run the jobs.
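For the database option, a minimal sketch of wiring up the AdoJobStore (property names follow the Quartz.NET documentation; the provider name varies by Quartz.NET version and the connection string is made up):

using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

class PersistentSchedulerProgram
{
    static void Main(string[] args)
    {
        // Back the scheduler with a database job store so jobs/triggers survive restarts.
        var properties = new NameValueCollection
        {
            ["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz",
            ["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz",
            ["quartz.jobStore.dataSource"] = "default",
            ["quartz.jobStore.tablePrefix"] = "QRTZ_",
            ["quartz.dataSource.default.provider"] = "SqlServer-20", // provider name depends on your Quartz.NET version
            ["quartz.dataSource.default.connectionString"] = "Server=.;Database=Quartz;Integrated Security=true"
        };

        IScheduler scheduler = new StdSchedulerFactory(properties).GetScheduler();
        scheduler.Start();
        // Schedule jobs exactly as in the console example above; they are now persisted.
    }
}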
If you're looking for an example where you can communicate with the server (via remoting) and schedule a job and run it, this is the one.
There are a few open source projects (managers) which you can use to start/stop or schedule jobs. Read my answer here.