I know there is a delay option in messages.
But I need this scenario:
Execute the task.
After execution wait for 30 seconds.
Execute the next task.
After execution wait for 30 seconds.
Execute the next task.
...
How can I do something like that?
From the information you've provided, this could just be part of the task processing callback in your application:
A new message comes in and triggers your callback
Execute the task
Acknowledge the message
Run sleep(30) or whatever the equivalent is in your programming language of choice
Return from the callback
As long as your entire callback operates synchronously in a single thread, the next task won't be processed until it returns.
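For concreteness, here is a minimal sketch of that loop, assuming RabbitMQ's .NET client; the host, queue name, and ExecuteTask method are placeholders, and the same structure applies to any broker whose client invokes your callback synchronously:

```csharp
using System;
using System.Text;
using System.Threading;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class SequentialWorker
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        channel.QueueDeclare(queue: "task_queue", durable: true,
                             exclusive: false, autoDelete: false, arguments: null);
        // Prefetch of 1: the broker hands this consumer one message at a time.
        channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            var payload = Encoding.UTF8.GetString(ea.Body.ToArray());
            ExecuteTask(payload);                    // 1. execute the task
            channel.BasicAck(ea.DeliveryTag, false); // 2. acknowledge the message
            Thread.Sleep(TimeSpan.FromSeconds(30));  // 3. wait 30 seconds
            // 4. returning from the callback lets the next message be processed
        };

        channel.BasicConsume(queue: "task_queue", autoAck: false, consumer: consumer);
        Console.ReadLine();
    }

    // Placeholder for your actual task.
    static void ExecuteTask(string payload) => Console.WriteLine($"Processing: {payload}");
}
```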
Related
I have a RecurringJob that receives some rows from a database and I want to send an email for each row. Is it safe to call BackgroundJob.Enqueue within the recurring job handler for each row to send an email?
My aim is to keep the work in the recurring job to a minimum.
It's a bit late to answer this, but yes, you can call BackgroundJob.Enqueue() from within the recurring job. We use MongoDB, and since it is thread safe, we have one job creating other jobs that run both serially and in parallel.
The purpose of background jobs is to perform a task, whether it is started from an API call, a recurring job, or another background job.
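For illustration, a hypothetical sketch of that split using Hangfire (the EmailSender class, the job id, and the row query are invented placeholders):

```csharp
using System;
using System.Collections.Generic;
using Hangfire;

public class EmailSender
{
    public void Send(int rowId) => Console.WriteLine($"Emailing row {rowId}");
}

public class EmailDispatchJob
{
    // Registered once at startup, e.g.:
    // RecurringJob.AddOrUpdate<EmailDispatchJob>("dispatch-emails", j => j.Run(), Cron.Hourly());
    public void Run()
    {
        foreach (var rowId in LoadPendingRowIds())
        {
            // Each email becomes its own fire-and-forget job, so the recurring
            // job itself only does the cheap work of querying and enqueueing.
            BackgroundJob.Enqueue<EmailSender>(s => s.Send(rowId));
        }
    }

    // Placeholder for the database query mentioned in the question.
    private IEnumerable<int> LoadPendingRowIds() => new[] { 1, 2, 3 };
}
```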
I have a very simple Saga built with NSB7 using SQL Transport and NHibernate persistence.
The Saga listens on a queue and, for each message received, runs through 4 handlers. These are called in sequential order, with 2 handlers running in parallel and the last handler only running once both parallel handlers are complete. The last handler writes a record to the DB.
Let's say for a single message, each handler takes 1 second. When a new message is received, which starts the Saga, the expected result is that 3-4 seconds later the record is written to the DB.
If the queue backs up with, say, 1000 messages, once they begin processing again it takes almost 2000 seconds before a new record is created by the last handler. Basically, instead of running through the expected 4-second processing time for each message, the messages effectively bunch up in the initial handlers until the queue is emptied, then do the same in the next handler, and so on.
Any ideas on how I could improve the performance of this system under load, so that a constant stream of processed messages comes out the end rather than messages bunching up with a long delay before a single new record comes out the other side?
Thanks
Will
There is documentation for saga concurrency issues: https://docs.particular.net/nservicebus/sagas/concurrency#high-load-scenarios
I still don't fully understand the issue, though. Every message that instantiates a saga should create a record in the database after that message is processed, not after 1000 messages. How else is NServiceBus going to guarantee consistency?
Besides that, you probably should not have a single message processed by 4 handlers. If it really needs to work like this, use publish/subscribe and create different endpoints. The saga should be done with its processing as soon as possible, especially under high-load scenarios.
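For illustration only, here is a hypothetical sketch of that publish/subscribe shape in NServiceBus (the message types, property names, and saga are invented; the heavy work would live in handlers hosted in separate subscriber endpoints):

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class OrderReceived : ICommand { public string OrderId { get; set; } }
public class OrderAccepted : IEvent { public string OrderId { get; set; } }

public class OrderSagaData : ContainSagaData
{
    public virtual string OrderId { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>, IAmStartedByMessages<OrderReceived>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        mapper.ConfigureMapping<OrderReceived>(m => m.OrderId).ToSaga(s => s.OrderId);
    }

    public async Task Handle(OrderReceived message, IMessageHandlerContext context)
    {
        Data.OrderId = message.OrderId;

        // Publish an event and let separate subscriber endpoints do the heavy
        // work, so the saga handler itself returns (and releases its lock) quickly.
        await context.Publish(new OrderAccepted { OrderId = message.OrderId });

        MarkAsComplete();
    }
}
```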
We have an orchestrator which gets called by a timer trigger every minute. In the orchestrator, there are multiple activity triggers called using the function chaining pattern. However, there was one instance where each activity trigger was called twice, with a time difference of just 7 milliseconds.
What I am assuming is that when the 1st activity trigger was called, the checkpoint was delayed even though the process had done its job, so when the orchestrator restarted it executed the 1st activity trigger again because it did not find the data in the Azure Storage queue. Can somebody confirm whether this is the case, or is there some issue with the way activity triggers behave?
This is the replay behavior of the orchestrator that you are observing. If an orchestrator function emits log messages, the replay behavior may cause duplicate log messages to be emitted. This is normal and by design. Take a look at this documentation for more information.
When an orchestration function is given more work to do, the orchestrator wakes up and re-executes the entire function from the start to rebuild the local state. During the replay, if the code tries to call a function (or do any other async work), the Durable Task Framework consults the execution history of the current orchestration. If it finds that the activity function has already executed and yielded a result, it replays that function's result and the orchestrator code continues to run. Replay continues until the function code is finished or until it has scheduled new async work.
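To make the chaining and replay behavior concrete, here is a sketch assuming the C# in-process Durable Functions programming model (the activity names are placeholders):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class ChainingOrchestrator
{
    [FunctionName("ChainingOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context,
        ILogger log)
    {
        // Guard side effects such as logging so replays do not duplicate them.
        if (!context.IsReplaying)
            log.LogInformation("Orchestration started");

        // On replay, any activity that already has a result in the history is not
        // invoked again; its recorded result is returned immediately.
        var first = await context.CallActivityAsync<string>("StepOne", null);
        var second = await context.CallActivityAsync<string>("StepTwo", first);
        await context.CallActivityAsync("StepThree", second);
    }
}
```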
I have a program where I start several process instances using a cron. For each process instance I have a maximum time, and if the execution time exceeds it, I have to consider it a failure and run some specific methods.
For now, what I did was simply check, once my process instance has finished, whether the elapsed time exceeds the given maximum time.
But what if my process instance gets blocked for some reason (e.g. the server not responding)? I need to catch this event and perform the failure operations as soon as the process gets blocked and the timeout is exceeded.
How can I catch these two conditions?
I had a look at the FlowableEngineEventType, but there isn't a PROCESS_BLOCKED/SUSPENDED type of event. And even if there were, how would I fire it only after a certain amount of time has passed?
I assume that this is the same question as this from the Flowable Forum.
If you are using the Flowable HTTP Task, have a look at the documentation to see how you can set timeouts on it and how you can react to errors there. If you are firing GET requests from your own code, you would need to write your own business logic that throws some kind of BpmnError, which you would then handle in your process.
A Flowable process instance does not have the concept of being blocked; you have to model that manually.
I have a web request that can take 30-90 seconds to complete in some cases (most of the time it completes in 2-3 seconds). Currently, the software looks like it has hung when the request takes this long.
I was thinking that I could use a BackgroundWorker to process the web request in a separate thread. However, the software must wait for the request to complete before continuing. I know how to set up the background worker. What I am unsure about is how to handle waiting for the request to complete.
Do I need to create a timer to check for the results until the request times out or completes?
I wouldn't use a timer. Rather, when the web request completes on the worker thread, use a call to the Invoke method on a control in the UI to update the UI with the result (or send some sort of notification).
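As a self-contained sketch of that approach (assuming a WinForms app; the control names and URL are placeholders):

```csharp
using System;
using System.ComponentModel;
using System.Net.Http;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly BackgroundWorker _worker = new BackgroundWorker();
    private readonly Button _fetchButton = new Button { Text = "Fetch", Dock = DockStyle.Top };
    private readonly TextBox _resultBox = new TextBox { Multiline = true, Dock = DockStyle.Fill };

    public MainForm()
    {
        Controls.Add(_fetchButton);
        Controls.Add(_resultBox);
        _resultBox.BringToFront();   // let the fill-docked textbox respect the docked button

        _fetchButton.Click += (s, e) =>
        {
            _fetchButton.Enabled = false;   // visible feedback instead of a frozen UI
            _worker.RunWorkerAsync();
        };

        _worker.DoWork += (s, e) =>
        {
            // Runs on the worker thread, so the slow request no longer blocks the UI.
            using (var client = new HttpClient())
            {
                string result = client.GetStringAsync("https://example.com/slow-endpoint").Result;

                // Marshal the update back onto the UI thread with Invoke; no polling timer needed.
                _resultBox.Invoke((Action)(() =>
                {
                    _resultBox.Text = result;
                    _fetchButton.Enabled = true;
                }));
            }
        };
    }

    [STAThread]
    static void Main() => Application.Run(new MainForm());
}
```

With a BackgroundWorker specifically, you could also return the result via e.Result in DoWork and update the UI in the RunWorkerCompleted event, which is raised back on the UI thread for you.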