I'm using KafkaJS ( https://kafka.js.org ) to connect to a Kafka cluster. My consumers are responsible for processing tasks that can fail at some point and only succeed after waiting for a certain duration (for example, 1 hour).
Before this duration has passed, any attempt to process any task will result in another failure.
I tried to use the Pause & Resume method as described at https://kafka.js.org/docs/consuming#a-name-pause-resume-a-pause-resume. However, the consumer restarts immediately and keeps consuming the failed task again.
How can I pause consuming messages without committing the failed one and without the consumer restarting immediately?
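Roughly what I tried, adapted from the Pause & Resume example in the docs; the topic name "tasks", the group id, the broker address, and the one-hour delay are placeholders for my actual setup, and processTask stands in for my real task handler:

```typescript
import { Kafka } from "kafkajs";

const ONE_HOUR_MS = 60 * 60 * 1000;

const kafka = new Kafka({ clientId: "task-worker", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "task-workers" });

// Stand-in for the real task handler, which can keep failing for ~1 hour.
async function processTask(value: Buffer | null): Promise<void> {
  // ... actual processing ...
}

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "tasks" });

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      try {
        await processTask(message.value);
      } catch (err) {
        // Pause only the partition the failed task came from...
        consumer.pause([{ topic, partitions: [partition] }]);
        // ...and schedule a resume once the wait period is over.
        setTimeout(() => {
          consumer.resume([{ topic, partitions: [partition] }]);
        }, ONE_HOUR_MS);
        // Re-throw so the offset of the failed message is not committed.
        throw err;
      }
    },
  });
}

run().catch(console.error);
```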
Related
What is the performance impact of having a WAITFOR RECEIVE with no TIMEOUT?
We have built a .NET service that is to receive messages from our SQL Server Service Broker queue and then send the messages on to ActiveMQ.
Instead of having the service poll the SQL Server Service Broker queue every 5 seconds, what is the performance impact if we do a WAITFOR RECEIVE on the queue with no TIMEOUT?
From the documentation on RECEIVE:
The statement waits until at least one message becomes available then returns a result set that contains all message columns.
The WAITFOR RECEIVE will suspend and wait for a message to come in, returning only when one does. If a message never arrives, it will sit there forever.
This does not consume server resources (except for tying up a listener), but it makes it difficult to terminate the program that made the call if messages arrive infrequently. The nice thing about the TIMEOUT clause is that it gives your application a way to periodically check whether someone has, say, requested that the program terminate. Without a timeout that returns control to the calling thread and lets it check whether it should exit, your only option is to forcibly terminate the thread from the outside.
The difference in impact on the server by cycling the call every 5 seconds as opposed to holding on indefinitely is so small as to be unmeasurable.
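To make the termination point concrete, here is a rough sketch of such a receive loop with a TIMEOUT. The original service is .NET, so this Node/TypeScript version using the mssql package and a queue named dbo.TargetQueue is only an illustrative assumption, not the actual implementation (a production version would also wrap the RECEIVE in a transaction and handle conversation end messages):

```typescript
import * as sql from "mssql";

let shuttingDown = false;
process.on("SIGINT", () => { shuttingDown = true; });

async function receiveLoop(pool: sql.ConnectionPool): Promise<void> {
  while (!shuttingDown) {
    // Block for at most 5 seconds; an empty result set means the TIMEOUT
    // expired, so we loop around and re-check the shutdown flag.
    const result = await pool.request().query(`
      WAITFOR (
        RECEIVE TOP (1) conversation_handle, message_type_name, message_body
        FROM dbo.TargetQueue
      ), TIMEOUT 5000;
    `);

    for (const row of result.recordset ?? []) {
      // Forward the message to ActiveMQ here.
      console.log(row.message_type_name);
    }
  }
}

async function main(): Promise<void> {
  const pool = await sql.connect({
    server: "localhost",
    database: "BrokerDb",
    user: "svc_user",
    password: "********",
    options: { trustServerCertificate: true },
  });
  try {
    await receiveLoop(pool);
  } finally {
    await pool.close();
  }
}

main().catch(console.error);
```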
I have Celery set up to fetch tasks from RabbitMQ and things are working as expected, but I've noticed the following behavior (T: task, P: process):
--> Fetch first batch of messages (6 tasks) from broker
<-- messages are received. Start them
--> Send T1..T6 to be executed by P1..P6
--> Prefetch 6 new messages from broker, but do not ACK them
<-- P1..P5 finish tasks T1..T5, but T6 is still being processed (it will take ~2h)
At this point, no other tasks start running, despite the fact that I have concurrency set to 6 and only one process is active. I have tried the add_consumer command in celery-flower, but nothing seems to happen. I can see in RabbitMQ that there are messages with no ACK yet, and the messages in the READY state just keep stacking up, since they won't be consumed for another ~2h.
Is there a way to setup celery so that whenever a process is free, it will consume the next task, instead of waiting for the original batch to completely finish?
I am using MSMQ 4 with WCF. We have a Microsoft Dynamics plugin putting a message on an queue. A service picks up the message and makes an HTTP request to another web server. The web server responds by putting another message on a different queue. A second service picks up the messages and sends the response back to Dynamics...
We have our retry queue set up to retry 3 times and then wait for 5 minutes before retrying again. The Dynamics system sometimes takes so long (due to other plugins) that we can round-trip before the database transaction commits. The users then don't see the update come through for another 5 minutes.
I am curious if there is a way to configure the retry mechanism to retry incrementally. So, the first time it fails, it only waits a few seconds. If it fails a second time, it waits twice that. And the time between retries just keeps growing.
The problem with just reducing the time between retries is that a bad message could easily fill up a log file.
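For illustration, the schedule being described is a simple exponential backoff. This tiny sketch only shows how the delays would grow; the 5-second base and 30-minute cap are made-up numbers, and nothing here is MSMQ/WCF configuration:

```typescript
// Delay before retry number `attempt` (0-based): doubles each time, capped.
function retryDelayMs(attempt: number, baseMs = 5_000, capMs = 30 * 60_000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// attempt 0 -> 5s, 1 -> 10s, 2 -> 20s, 3 -> 40s, ... up to the 30-minute cap.
for (let attempt = 0; attempt < 6; attempt++) {
  console.log(`attempt ${attempt}: ${retryDelayMs(attempt) / 1000}s`);
}
```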
It turns out there is no built-in way of doing this. One slightly involved option is to create multiple queues, each with its own retry/poison sub-queues, each with a growing retry delay. You can reuse the same handler for each queue - the only thing that changes is the configuration. You also need a handler that can read the poison sub-queues (service) and move the message to the next queue in the chain (client).
So, you set receiveErrorHandling to Move. The maxRetryCycles and receiveRetryCount are just 1. Each queue will use a growing retryCycleDelay. Each queue you create will have a poison sub-queue created for it automatically. You simply read from each poison sub-queue and use a client to move it to the next queue.
I am sure someone could write some code that would automatically create N queues with a growing retryCycleDelay and hook it up all programmatically. Since it is the same handler/client for every queue, it wouldn't be a big deal.
I am sending SOAP messages to an external service using a web service call.
Sometimes the external web service is down, and I don't want to lose those failed messages.
I push those failed messages to a JMS queue designated as the retry queue.
Now my requirement is to implement a mechanism that processes failed messages from the retry queue after some time (let's say half an hour) and tries to deliver them to the web service again. I should use a fixed number of attempts at half-hour intervals. If I don't succeed after the fixed number of attempts, I should put the message in a dead letter queue.
I need help in implementing this requirement.
As an initial step in this direction, I tried JMS polling on the retry queue with the polling interval set to half an hour. This JMS polling job wakes up every half an hour and processes all the messages present in the retry queue. The drawback with this approach is that it tries to redeliver a failed message as soon as it receives it for the first time. For subsequent messages, it works fine.
Because of this, when a message fails and I put it in the retry queue, the poller tries to redeliver it immediately.
I'm trying to create an infinite job queue using Redis and Ruby EventMachine. To achieve that, I'm using the Redis BLPOP command with a 0 timeout. After a successful BLPOP, I run it again.
Am I on the right track, or is there a better way to create a job queue with Redis?
If you use BLPOP alone to remove a message from the queue, and your message consumer fails to process it, the message will have to be re-queued, lest it disappear forever along with the failed consumer.
For more durable message processing, a list of messages being processed must be maintained so they can be re-queued in the event of failure.
[B]RPOPLPUSH is perfect for this scenario: it atomically pops a message from the message queue and pushes it onto a processing queue so that the application can recover in the case of a failure on the consumer's end.
http://redis.io/commands/rpoplpush
Actual re-queueing is left to the application, but this Redis command provides the foundation to do so.
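A minimal sketch of that pattern, using a Node client (ioredis) rather than EventMachine purely for illustration; the key names "jobs" and "jobs:processing" and the 5-second block time are assumptions:

```typescript
import Redis from "ioredis";

const redis = new Redis();

// Stand-in for the real job handler.
async function handleJob(payload: string): Promise<void> {
  console.log("processing", payload);
}

async function workLoop(): Promise<void> {
  while (true) {
    // Atomically move the next job onto a processing list. The blocking
    // variant (BRPOPLPUSH) waits up to 5 seconds instead of busy-polling.
    const job = await redis.brpoplpush("jobs", "jobs:processing", 5);
    if (job === null) continue; // timed out; block again

    try {
      await handleJob(job);
      // Acknowledge: remove the job from the processing list on success.
      await redis.lrem("jobs:processing", 1, job);
    } catch (err) {
      // Leave the job on "jobs:processing"; a separate recovery step can
      // re-queue anything that has been sitting there too long.
      console.error("job failed, left on processing list", err);
    }
  }
}

workLoop().catch(console.error);
```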
There are also some drop-in implementations of queues built on Redis floating around the web, such as RestMQ [ http://www.restmq.com/ ].