I have a RecurringJob that reads some rows from a database, and I want to send an email for each row. Is it safe to call BackgroundJob.Enqueue within the recurring job handler for each row to send an email?
My aim is to keep the work in the recurring job to a minimum.
It's a bit late to answer this, but yes, you can call BackgroundJob.Enqueue() from within a recurring job. We use MongoDB, and since it is thread safe, we have one job creating other jobs that run both serially and in parallel.
The purpose of background jobs is to perform a task regardless of whether you start it from an API call, a recurring job, or another background job.
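For illustration, here is a minimal C# sketch of that pattern; the class, row, and email-sender names are made up for the example and are not from the original question:

using System.Collections.Generic;
using Hangfire;

public record PendingRow(int Id, string Email);

public class EmailDispatchJob
{
    // Registered once at startup, e.g.:
    // RecurringJob.AddOrUpdate<EmailDispatchJob>("email-dispatch", j => j.Run(), Cron.Daily());
    public void Run()
    {
        foreach (var row in LoadPendingRows())
        {
            // Enqueue is cheap: it only persists a small serialized invocation
            // record; Hangfire workers send the emails outside this job.
            BackgroundJob.Enqueue<EmailSender>(s => s.Send(row.Id));
        }
    }

    private IEnumerable<PendingRow> LoadPendingRows()
    {
        yield break; // placeholder for the real database query
    }
}

public class EmailSender
{
    public void Send(int rowId) { /* look up the row and send the email */ }
}

This keeps the recurring job itself fast, since each enqueue is just a quick write to job storage.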
I am creating an ASP.NET Core app, and I have certain entities, for example, a dispute for an order. When a dispute is opened, its Creation Date property is set. I want to make sure that if the buyer does not respond to the dispute within 7 days, it is closed automatically.
I understand this could be done with a cron job running at some interval, but I would like to avoid an excessive number of calls that check the dates of all disputes.
Is there a proper way to schedule task calls at a specific time? It needs to keep working even after the app restarts.
Is there a proper way to schedule task calls at a specific time?
To execute a background task/job at a user-specified time, you can try a message queue service such as Azure Queue Storage, whose queues let you specify how long a message should be invisible to Dequeue and Peek operations by setting visibilityTimeout.
You can then implement a queue-triggered background task that retrieves the message(s) from the queue and closes the dispute in your code logic.
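Here is a minimal sketch of that idea with the Azure.Storage.Queues SDK; the queue name, connection-string parameter, and the exact 7-day window are assumptions taken from the question:

using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

public static class DisputeScheduler
{
    // Called when a dispute is opened; the message becomes visible to a
    // queue-triggered handler only after the 7-day window elapses.
    public static async Task ScheduleAutoCloseAsync(string connectionString, Guid disputeId)
    {
        var queue = new QueueClient(connectionString, "dispute-timeouts");
        await queue.CreateIfNotExistsAsync();

        await queue.SendMessageAsync(
            messageText: disputeId.ToString(),
            visibilityTimeout: TimeSpan.FromDays(7),  // invisible to Dequeue/Peek for 7 days
            timeToLive: TimeSpan.FromDays(8));        // keep the message alive past the window
    }
}

The queue-triggered task that eventually receives the message would check whether the buyer has responded and close the dispute only if they have not. Because the message lives in the queue rather than in process memory, this also survives app restarts.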
After a certain time, I would like to make a synchronous job (launched with jobs.query) asynchronous (as if it had been launched with jobs.insert), but I don't want to launch a new job. Is that possible?
When you say jobs.query here, I'm assuming you're referring to the BigQuery REST API.
One approach to making your query job asynchronous, per their documentation, is as follows:
specify a minimal timeoutMs property in the QueryRequest when preparing the query POST request, so that your request does not wait until the query finishes, i.e. does not behave synchronously
after POSTing the query, once that minimal timeout has passed, you'll receive a response containing a JobReference attribute
use the jobId from the JobReference to track/poll the status of your query later, asynchronously (see the sketch after this list)
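Putting those steps together, here is a hedged sketch of the REST flow using C#'s HttpClient; the project id, token, and query are placeholders, and timeoutMs = 0 simply asks the API to return as soon as possible:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class BigQueryAsyncQuery
{
    static async Task Main()
    {
        const string projectId = "your-project-id";     // placeholder
        const string accessToken = "your-oauth-token";  // placeholder

        var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // 1. POST the query with a minimal timeoutMs so the call returns
        //    before the query finishes ("jobComplete" will be false).
        var body = JsonSerializer.Serialize(new
        {
            query = "SELECT 17",
            useLegacySql = false,
            timeoutMs = 0
        });
        var post = await http.PostAsync(
            $"https://bigquery.googleapis.com/bigquery/v2/projects/{projectId}/queries",
            new StringContent(body, Encoding.UTF8, "application/json"));
        using var query = JsonDocument.Parse(await post.Content.ReadAsStringAsync());

        // 2. The response carries a jobReference even for an unfinished query.
        var jobId = query.RootElement
            .GetProperty("jobReference").GetProperty("jobId").GetString();

        // 3. Poll jobs.get later; status.state is PENDING, RUNNING, or DONE.
        using var job = JsonDocument.Parse(await http.GetStringAsync(
            $"https://bigquery.googleapis.com/bigquery/v2/projects/{projectId}/jobs/{jobId}"));
        Console.WriteLine(job.RootElement
            .GetProperty("status").GetProperty("state").GetString());
    }
}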
And if you mean the non-REST approaches, such as the Java or Python client libraries, they provide out-of-the-box APIs for executing a query either synchronously or asynchronously.
We have an orchestrator which is called by a timer trigger every minute. In the orchestrator, multiple activity triggers are called in a function-chaining pattern. However, there was one instance where each activity trigger was called twice, with a time difference of just 7 milliseconds.
What I am assuming is that when the 1st activity trigger was called, the checkpoint was delayed even though the activity had done its job, so when the orchestrator restarted, it executed the 1st activity trigger again because it did not find the data in the Azure Storage queue. Can somebody confirm whether this is the case, or is there some issue with the way activity triggers behave?
This is the replay behavior of the orchestrator that you are observing. If an orchestrator function emits log messages, the replay behavior may cause duplicate log messages to be emitted. This is normal and by design. Take a look at this documentation for more information.
When an orchestration function is given more work to do, the orchestrator wakes up and re-executes the entire function from the start to rebuild the local state. During the replay, if the code tries to call a function (or do any other async work), the Durable Task Framework consults the execution history of the current orchestration. If it finds that the activity function has already executed and yielded a result, it replays that function's result and the orchestrator code continues to run. Replay continues until the function code is finished or until it has scheduled new async work.
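As an illustration (a sketch with made-up activity names, not the asker's actual functions): each await below is a point where the orchestrator checkpoints and may later replay, and any side effect outside the history, such as logging, runs again on every replay unless guarded:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class ChainOrchestrator
{
    [FunctionName("ChainOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context,
        ILogger log)
    {
        // This line executes on every replay; guarding with IsReplaying
        // (or using context.CreateReplaySafeLogger) avoids duplicate logs.
        if (!context.IsReplaying)
            log.LogInformation("Starting chain");

        // On replay, if "StepOne" already has a result in the execution
        // history, the stored result is returned instead of re-invoking it.
        var a = await context.CallActivityAsync<string>("StepOne", null);
        var b = await context.CallActivityAsync<string>("StepTwo", a);
        await context.CallActivityAsync("StepThree", b);
    }
}

Separately, activity functions come with an at-least-once execution guarantee, so a delayed checkpoint can indeed cause an activity to be invoked a second time, which is why activity code should be written to be idempotent.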
I have a program which connects to a web service to pull some messages. After I receive them, I have no way of reading those messages again, so I decided to save them in a persistent store to be processed by other parties.
I wrapped this request-and-persist method in a Hangfire recurring job (AddOrUpdate with a cron schedule), hoping that in case of an exception during job execution, Hangfire will attempt to execute the task again later with its stored state. Is my assumption correct? I couldn't see any explanation in the documentation regarding recurring job states.
In the case of delayed, recurring, or fire-and-forget jobs, does Hangfire serialize the code of those jobs along with their state to the database?
The following article answers this question and gives some information about how background jobs are handled.
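In short, and as a hedged summary of Hangfire's general behavior with illustrative names: Hangfire does not serialize the code itself. It stores an invocation record (the type name, method name, and JSON-serialized arguments) in the database, and a worker reconstructs the call from that record, which is also what makes automatic retries of failed jobs possible:

using Hangfire;

public static class MessagePuller
{
    // What Hangfire stores is roughly an invocation record such as
    //   { "Type": "MessagePuller", "Method": "PullAndPersist", "Arguments": [] }
    // not the method body, so the assembly containing this code must be
    // deployed to every server that processes jobs.
    [AutomaticRetry(Attempts = 5)] // failed jobs are retried; the default is 10 attempts
    public static void PullAndPersist()
    {
        // Fetch messages from the web service and save them to the store.
        // An unhandled exception here moves the job to the Failed state and
        // schedules a retry using the same stored invocation record.
    }
}

// Registered once at startup; the job id and schedule are illustrative:
// RecurringJob.AddOrUpdate("pull-messages", () => MessagePuller.PullAndPersist(), Cron.Hourly());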
I'm using dask.distributed.Client to connect to a remote Dask scheduler that manages a bunch of workers. I am submitting my job using client.submit and keeping track of the returned Future:
from dask.distributed import Client

client = Client("some-example-host:8786")
future = client.submit(job, job_args)
I want to be able to know if/when the job has been sent to and accepted by the scheduler. This is so that I can add some retry logic in cases when the scheduler goes down.
Is there an easy way to get confirmation that the scheduler has received and accepted the job?
Some additional points:
I realise that distributed.client.Future has a status property, but I'm hesitant to use it as it was not documented in the API.
I have tried using dask.callbacks.Callback but with no success. Any assistance with using callbacks with distributed.Client would be appreciated.
EDIT: I could also have the job post back a notification when it starts, but I would like to leave this approach as a last resort if the Client does not support this.