Hangfire - Manually force polling?

Right after enqueuing a job, is there some way to send a message to the polling engine, so that if it is idle waiting out the poll delay, it wakes up and polls for the next jobs right away?

Related

How to pause a KafkaJS consumer and retry after some time

I'm using KafkaJS ( https://kafka.js.org ) to connect to a Kafka cluster. My consumers are responsible for processing tasks that can fail at some point and succeed only after waiting for a duration (for example, 1 hour).
Before this duration has passed, any attempt to process the task will result in another failure.
I tried the pause & resume method described at https://kafka.js.org/docs/consuming#a-name-pause-resume-a-pause-resume. However, the consumer restarts immediately and keeps consuming the failed task again.
How can I pause consumption without committing the failed message and without the consumer restarting immediately?

Need some kind of job scheduler or delayed message queue in a Java world

I need to execute a process in the future, let's say in 20 minutes, based on some event happening, but I may need to cancel that scheduled process depending on different factors. Or I may need to restart the timer on the job, depending on another event, etc. You get the idea - all the different permutations of this. Does anyone know of a good technology for this need? Maybe Quartz (does Quartz suck? does it do all these things?), maybe ActiveMQ, maybe some other job-scheduling technology?
Thanks!
-Ron
ActiveMQ's scheduler is a good fit for this. The pattern can go something like:
Kick off a process (and get some identifier for it)
Send a message to the ActiveMQ scheduler to fire in x amount of time (a sketch of this step follows below)
The message consumer receives the timer message and pulls the identifier to check on the status
If the process is done, continue and finish up
If the process needs more wait time, send another timer message to ActiveMQ
Everything is asynchronous, and the code required is minimal. The big advantage of using ActiveMQ is that you can have multiple consumers listening for the scheduled message, which gives you high availability.
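As a rough sketch of step 2 with plain JMS - the broker URL, the "process.check" queue name, the "process-42" identifier, and the 20-minute delay are all placeholder assumptions, and the broker has to be started with schedulerSupport="true" for the scheduler to be active:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.ScheduledMessage;

    public class DelayedStatusCheck {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("process.check"));

            // Carry the process identifier in the body; the consumer uses it to
            // look up the status when the timer message is finally delivered.
            TextMessage message = session.createTextMessage("process-42");

            // Ask the broker's scheduler to hold the message for 20 minutes.
            message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 20 * 60 * 1000L);
            producer.send(message);

            connection.close();
        }
    }

The consumer that eventually receives this message checks the status and either finishes up or sends another delayed message, exactly as in the last two steps above.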

How to force celery to consume the next task if there are idle processes?

I have celery set up to fetch tasks from RabbitMQ and things are working as expected, but I've noticed the following behavior (T: task, P: process):
--> Fetch first batch of messages (6 tasks) from broker
<-- messages are received. Start them
--> Send T1..T6 to be executed by P1..P6
--> Prefetch 6 new messages from broker, but do not ACK them
<-- P1..P5 finish tasks T1..T5, but T6 is still being processed (it will take ~2h)
At this point, no other tasks start running, despite the fact that I have concurrency set to 6 and only one process is active. I have tried the add_consumer command in celery-flower, but nothing seems to happen. I can see on RabbitMQ that there are messages that have not been ACKed yet, and the messages in the READY state just keep stacking up, since they won't be consumed for another ~2h.
Is there a way to set up celery so that whenever a process is free, it consumes the next task, instead of waiting for the original batch to completely finish?

MSMQ + WCF - Retry with Growing Delay

I am using MSMQ 4 with WCF. We have a Microsoft Dynamics plugin putting a message on an queue. A service picks up the message and makes an HTTP request to another web server. The web server responds by putting another message on a different queue. A second service picks up the messages and sends the response back to Dynamics...
We have our retry queue set up to retry 3 times and then wait for 5 minutes before retrying again. The Dynamics system sometimes takes so long (due to other plugins) that we can round-trip before the database transaction commits. The users then don't see the update come through for another 5 minutes.
I am curious if there is a way to configure the retry mechanism to retry incrementally. So, the first time it fails, it only waits a few seconds. If it fails a second time, it waits twice that. And the time between retries just keeps growing.
The problem with just reducing the time between retries is that a bad message could easily fill up a log file.
It turns out there is no built-in way of doing this. One slightly involved option is to create multiple queues, each with its own retry/poison sub-queues, each with a growing retry delay. You can reuse the same handler for each queue - the only thing that changes is the configuration. You also need a handler that can read the poison sub-queues (service) and move the message to the next queue in the chain (client).
So you set receiveErrorHandling to Move, and maxRetryCycles and receiveRetryCount are both set to 1. Each queue uses a progressively larger retryCycleDelay. Each queue you create will have a poison sub-queue created for it automatically; you simply read from each poison sub-queue and use a client to move the message to the next queue.
I am sure someone could write code that would automatically create N queues with a growing retryCycleDelay and hook it all up programmatically. Since it is the same handler/client for every queue, it wouldn't be a big deal.
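As a rough illustration of the manual version - the binding names and the actual delays here are made up - each endpoint in the chain carries essentially the same netMsmqBinding configuration, with only retryCycleDelay growing from one queue to the next:

    <bindings>
      <netMsmqBinding>
        <!-- First queue in the chain: one retry cycle with a 30-second delay,
             after which the message moves to this queue's poison sub-queue -->
        <binding name="retryStage1"
                 receiveErrorHandling="Move"
                 receiveRetryCount="1"
                 maxRetryCycles="1"
                 retryCycleDelay="00:00:30" />
        <!-- Second queue: identical, except the delay between cycles is longer -->
        <binding name="retryStage2"
                 receiveErrorHandling="Move"
                 receiveRetryCount="1"
                 maxRetryCycles="1"
                 retryCycleDelay="00:02:00" />
      </netMsmqBinding>
    </bindings>

The service that drains each poison sub-queue then just re-sends the message to the next queue in the chain, as described above.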

How to tell for a particular request that all available worker threads are BUSY

I have a high-rate UDP server using Netty (3.6.6-Final), but I notice that the back-end servers can take 1 to 10 seconds to respond - I have no control over those, so I cannot improve latency there.
What happens is that all handler worker threads are busy waiting for responses, so any new request must wait to be processed, and over time the response comes very late. Is it possible to detect, for a given request, that the thread pool is exhausted, so as to intercept the request early and issue a "server busy" response?
I would use an ExecutionHandler configured with an appropriate ThreadPoolExecutor, with a maximum thread count and a bounded task queue. By choosing different RejectedExecutionHandler policies, you can either catch the RejectedExecutionException to answer with a "server busy", or use the caller-runs policy, in which case the IO worker thread will execute the task itself and create push-back (but that is what you wanted to avoid).
Either way, an execution handler with a limited capacity is the way forward.
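A minimal Netty 3.x sketch of that idea - the pool size of 6, the queue depth of 100, and the handler names are arbitrary assumptions here:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.handler.execution.ExecutionHandler;

    public class BoundedExecutionPipeline {
        // Bounded pool: 6 worker threads and at most 100 queued tasks; anything
        // beyond that is rejected with a RejectedExecutionException (AbortPolicy).
        private static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
                6, 6, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100),
                new ThreadPoolExecutor.AbortPolicy());

        private static final ExecutionHandler EXECUTION_HANDLER = new ExecutionHandler(POOL);

        public static void configure(ChannelPipeline pipeline) {
            // Handlers added after this one run on POOL instead of the IO threads.
            pipeline.addLast("executor", EXECUTION_HANDLER);
            // pipeline.addLast("handler", new MyBusinessHandler()); // your business handler here
            // When the pool is saturated, the RejectedExecutionException from the
            // AbortPolicy is the hook for sending a "server busy" reply; swapping in
            // ThreadPoolExecutor.CallerRunsPolicy instead makes the IO thread run the
            // task itself, which is the push-back variant mentioned above.
        }
    }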