Hangfire queues recurring jobs

I use hangfire to run recurring jobs.
My job gets current data from the database, performs an action and leaves a trace on the records processed. If a job didn't run this minute, there is no need to run it twice the next minute.
Somehow my recurring jobs (1 minute cycle) got queued by the thousands and were never executed. When I restarted IIS, it tried to execute them all at once and clogged the DB.
Besides fixing the problem of no execution, is there a way to stop them from queuing up?

If you want to disable retries of a failed job, simply decorate your method with the AutomaticRetryAttribute and set Attempts to 0.
See https://github.com/HangfireIO/Hangfire/blob/master/src/Hangfire.Core/AutomaticRetryAttribute.cs for more details.
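For example, assuming the processing logic lives in a method like ProcessRecords (the class and method names below are placeholders), the attribute goes directly on the job method:

using Hangfire;

public class RecordProcessor
{
    // Attempts = 0: a failing run goes straight to the Failed state instead of
    // being rescheduled, so failed jobs no longer pile up as retries.
    [AutomaticRetry(Attempts = 0)]
    public void ProcessRecords()
    {
        // ... read current data, perform the action, leave a trace ...
    }
}

If you want this behaviour for every job, the same attribute can also be registered globally with GlobalJobFilters.Filters.Add(new AutomaticRetryAttribute { Attempts = 0 }).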

Related

Is it possible to schedule a Retry for a SQL job instead of its Step components whenever one of the Steps fails?

I am using SQL Server 2014 and I have a SQL job (an SSIS package which contains 11 steps) which has been scheduled to run on a daily basis at a specific time.
I know one can schedule each step to attempt a retry whenever that step fails. However, is there a way to configure a retry for the whole SQL job whenever it fails at any step during the process? That is, if, say, the job fails at Step 8, the whole job is run again from Step 1.
The tidiest solution I can think of would be to create an error-handling step in your job which is executed when any other step fails (change the On Failure action of all other steps to jump to this one) and to manage the job's schedule so it triggers again on the following minute, after the job ends. This way you will still see the execution history of the job in the Agent.
You will have to keep recurrent failures in mind; I doubt you want the job to keep repeating itself indefinitely.
To configure the job to trigger, you can add a Schedule that fires every minute and enable/disable it when necessary. The job won't fire if it's already running.

Different RecurringJobs executing at the same time

I'm trying to execute a process that updates my database, and I set up different RecurringJobs for it at different hours.
Today, when I checked the Hangfire status (I set Hangfire up yesterday), I found that the job that should have executed yesterday and the task for today had both been executed 30 minutes ago, at the same time, and this created duplicates in the database.
Can you help me with this?
If your problem is one of concurrency, you can solve it by running Hangfire single-threaded. Simply configure the number of Hangfire worker threads on startup:
var server = new BackgroundJobServer(new BackgroundJobServerOptions
{
    WorkerCount = 1
});
This will force hangfire to process queued jobs sequentially.
Alternatively, if you have the Pro version of hangfire you can control order using batch chaining.
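If the duplicates come from two recurring jobs ending up in the same method at once, another option is the DisableConcurrentExecution filter, which takes a distributed lock on the method for the duration of a run. A minimal sketch, assuming the update logic lives in a method like UpdateDatabase (the names and the 10-minute timeout are placeholders):

using Hangfire;

public class DatabaseUpdater
{
    // Hangfire holds a distributed lock for this method while it runs; a second
    // invocation waits up to 600 seconds for the lock instead of running in parallel.
    [DisableConcurrentExecution(600)]
    public void UpdateDatabase()
    {
        // ... the update that must not run twice at the same time ...
    }
}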
I don't know if a worker can be considered a thread.
Within a Hangfire worker, single-threaded code will be run by exactly one thread.
This doesn't look like a concurrency issue as has been suggested. It's not completely clear what you are trying to do, but I'm assuming you want the job to run at 7:00, 12:45 and 17:30 and had issues because both the 7am and 17:30 jobs ran at the same time (7am).
Based on the created time it looks like you created these around 14:30. That means the 17:30 job should have run but didn't until the next morning around 7am. My best guess is this was hosted in IIS and the site's app pool was recycled.
This would cause any recurring jobs that were supposed to run to be delayed until the app pool / site was started again (which I assume was around 7am).
Check out these documents on how to ensure your site is always running: http://docs.hangfire.io/en/latest/deployment-to-production/making-aspnet-app-always-running.html
If it's not an IIS issue, something must have caused the BackgroundJobServer to stop monitoring the database for jobs until ~7:00am (server shutdown, error, etc.).

Prevent Job execution if job was scheduled to be executed in the past?

I'm looking for a way to prevent Hangfire from executing a job which was scheduled to be executed in the past. For instance, if the product import is scheduled to run at 4am, but the server goes down and only comes back up at 7am, Hangfire will execute the 4am job immediately. I want to prevent this; any ideas?
What complicates matters is that I still want to be able to execute the job manually when necessary, so I cannot hard-code a check to make sure the job doesn't execute outside of the 4am window.
Any ideas would be welcome.
More here https://github.com/HangfireIO/Hangfire/issues/620
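One workaround, sketched below (not taken from that issue; the class name, the 30-minute window and the force flag are assumptions), is to make the job itself check whether it is inside its expected time window and give manual runs a flag that bypasses the check:

using System;
using Hangfire;

public class ProductImporter
{
    // The recurring job always passes force: false; a manual run passes
    // force: true so the time-window check is skipped.
    public void Import(bool force)
    {
        var now = DateTime.Now;
        var insideWindow = now.Hour == 4 && now.Minute < 30; // 4:00-4:30, adjust as needed

        if (!force && !insideWindow)
            return; // The 4am occurrence fired late (e.g. after a server outage): skip it.

        // ... actual import work ...
    }
}

public static class ProductImportJobs
{
    public static void Register()
    {
        // Registered at startup; the job id and cron expression (daily at 4:00) are assumptions.
        RecurringJob.AddOrUpdate<ProductImporter>("product-import", x => x.Import(false), "0 4 * * *");
    }

    public static void RunManually()
    {
        // Fire-and-forget job for on-demand execution; bypasses the window check.
        BackgroundJob.Enqueue<ProductImporter>(x => x.Import(true));
    }
}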

Salesforce Monitor Bulk Data Load Jobs: progress of a processing job is not showing up

We are not able to see the progress while a Salesforce job (Bulk API) is being processed. We're currently exporting 300,000 tasks and the job has been there for 4 days, but we cannot see any progress on it. Is there a way we can see the progress? We need to know when it's going to be finished.
A job by itself doesn't do any work. It is the batches queued to a job that actually carry out the data modifications. An open job will stay open until it is closed or times out (after 1 week, if I remember correctly). The open status therefore does not signify progress; it only means you can queue more batches to this job.
As you can see in your second screenshot, no batches were queued to this job. Check the code that actually queues the batches; the API probably returns some kind of error there.

How to detect APScheduler's running jobs?

I have some recurring jobs that run frequently or last for a while.
It seems that Scheduler().get_jobs() will only return the list of scheduled jobs that are not currently running, so I cannot determine whether a job with a certain id does not exist or is actually running.
How can I test whether a job is running or not in this situation?
(I did not set up those jobs the usual way, because I need them to run at a random interval rather than a fixed one: they are jobs that execute only once, but add a new job with the same id at the end of their execution, and they stop doing so once a certain threshold is reached.)
APScheduler does not filter the list of jobs returned by get_jobs() in any way. If you need random scheduling, why not implement that in a custom trigger instead of constantly re-adding the job?