I have a business process that receives an order from a RabbitMQ queue. I was thinking about not acknowledging it (meaning, leaving it in the queue) for potentially a long time (>10 minutes) and only then either removing it (acknowledging) or not (not acknowledging). Does this make sense? What would be the best way to deal with long-running processing of RabbitMQ tasks?
In general this is OK to do. I've done this a number of times, and it has a few advantages: for example, if your process crashes, the unack'd message will go back into the queue and be picked up again later.
There are some potential downsides to this, though.
For one, it is possible to overload your server with unack'd messages. Be sure to set a consumer prefetch limit to prevent that from happening.
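For example, a minimal sketch of setting a prefetch limit with Python and pika (the host and limit are illustrative):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Deliver at most 10 unack'd messages to this consumer at a time;
    # the broker holds further messages until earlier ones are acked.
    channel.basic_qos(prefetch_count=10)

With a limit like this in place, a crashed consumer leaves at most prefetch_count messages to be redelivered.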
It's also possible to have processes that should not be restarted once they are running. I have a lot of processes like this: things that kick off external server processes, like long-running database jobs with Oracle. In situations like this, it is best to ack the message right away and use a status-update queue of some kind to know when the process is done.
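A minimal sketch of that ack-first pattern with Python and pika (the queue names and the kick-off helper are assumptions for illustration):

    import pika

    def start_external_process(body):
        print("kicking off external job for", body)  # hypothetical kick-off

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="orders", durable=True)
    channel.queue_declare(queue="status-updates", durable=True)

    def on_order(ch, method, properties, body):
        # Ack immediately so a consumer restart never re-triggers the external process.
        ch.basic_ack(delivery_tag=method.delivery_tag)
        start_external_process(body)
        # The job itself would publish "done" to status-updates when it finishes;
        # here we only record that it was started.
        ch.basic_publish(exchange="", routing_key="status-updates", body=b"started")

    channel.basic_consume(queue="orders", on_message_callback=on_order)
    channel.start_consuming()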
Overall, there is no "best way" to handle long-running tasks; there are no universal best practices. Every practice we use has potential benefits and potential detriments. The real trick is to understand which scenarios work best with which practice. You need to evaluate the pros and cons of this approach for your scenario.
Related
Can someone clarify the advantages of using RabbitMQ (a message queue) instead of Delayed Job (background processing)?
Basically, I want to know when to use background processing and when to use a messaging queue.
My web application has 3 components: one main server which handles all user requests, and 2 app servers where all the background jobs (like ES reindexing, ES record updates, sending emails, crons) should run.
I saw articles saying that using the database as a queue (Delayed Job) is very bad, because the consumers will be polling the database for new jobs and updating job statuses, which will lock the tables. How do RabbitMQ or other message queues store jobs so as to avoid this problem?
There are other alternatives to Delayed Job, like Sidekiq, which runs over Redis instead of MySQL. Is it better to use Sidekiq instead of RabbitMQ?
And are there any advantages of using Sidekiq over Delayed Job?
You have 2 workers and 1 web server: I guess your web app dispatches some delayed jobs to your workers, so you need a way to store the data related to those background jobs.
For that, you can use either a database (like Redis, which is what Sidekiq does) or a message queue (like RabbitMQ). A message queue is a specialized system that is very efficient for this use case (allowing much higher throughput). A database gives you better introspection (you can query the jobs table to see what your current situation is), while the queuing system is more efficient but is also more of a black box and will require new skills.
If you do not have performance issues, the simpler the better; even a simple MySQL database should be enough. If you want a more powerful system or need a lot of monitoring, you can also consider a specialized hosted service such as Zenaton (I'm a founder) that does all the heavy lifting for you, including scheduling or more sophisticated orchestration of your background jobs.
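On the polling question above: RabbitMQ consumers don't poll at all; they register a callback and the broker pushes messages to them as they arrive, so there is no table-locking contention. A minimal sketch with Python and pika (the queue name is illustrative):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="jobs", durable=True)

    def handle_job(ch, method, properties, body):
        print("got job:", body)                        # process the job here
        ch.basic_ack(delivery_tag=method.delivery_tag)

    # No polling loop: the broker pushes each message to the callback.
    channel.basic_consume(queue="jobs", on_message_callback=handle_job)
    channel.start_consuming()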
Both perform the same task, i.e. executing jobs in the background, but go about it differently.
With Delayed Job one uses some sort of database for storage, queries for the jobs, and then processes them. It's simple to set up, but the performance and scalability aren't great.
RabbitMQ or its alternatives, such as Redis, are harder to set up, but their performance, flexibility and scalability are great; we are talking upwards of 5,000 jobs per second, and you also tend to use less code.
Another option is to use a task orchestration system like Cadence Workflow. It supports both delayed execution and queueing, but provides a higher-level programming model and tons of features that neither queues nor delayed-execution frameworks offer.
Cadence offers a lot of advantages over using queues for task processing:
Built-in exponential retries with unlimited expiration interval
Failure handling. For example, it can execute a task that notifies another service if both updates couldn't succeed within a configured interval.
Support for long-running, heartbeating operations
Ability to implement complex task dependencies, for example chaining of calls or compensation logic in case of unrecoverable failures (SAGA)
Complete visibility into the current state of the update. For example, when using queues all you know is whether there are some messages in a queue, and you need an additional DB to track the overall progress. With Cadence every event is recorded.
Ability to cancel an update in flight
Built-in distributed CRON
See the presentation that goes over the Cadence programming model.
I am new to RabbitMQ and am wondering about message-persistence strategy. By default, RabbitMQ keeps message queues in memory. This gives high performance, but messages are important and should be saved to disk, because the server may go down at any time. Persisting to disk gives slower performance.
Which approach is preferable? What is your real-world experience?
There is a whole lot regarding persistence here.
You can make queues durable; that way messages are saved to disk. Of course, only until they are acknowledged!
You didn't say what your use case is and what you need this for, but bear in mind that RabbitMQ is not a database.
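To make that concrete, a minimal sketch with Python and pika (names are illustrative): both the queue and each message must be marked, or messages are still lost on a broker restart.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # The queue definition itself must survive a broker restart...
    channel.queue_declare(queue="orders", durable=True)

    # ...and each message must be marked persistent (delivery_mode=2),
    # otherwise the durable queue comes back empty.
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=b"order payload",
        properties=pika.BasicProperties(delivery_mode=2),
    )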
When my system's data changes, I publish every single change to at least 4 different consumers (around 3,000 messages a second), so I want to use a message broker.
Most of the consumers are responsible for updating their database tables with the change.
(The DBs are different: Couch, MySQL, etc., therefore solutions such as using their own replication mechanisms or using DB triggers are not possible.)
Questions:
Does anyone have experience with data replication between DBs using a message broker?
Is it a good practice?
What do I do in case of failures?
Let's say, using RabbitMQ, the client removed 10,000 messages from the queue, acked them, and threw an exception each time before handling them. Now they are lost. Is there a way to go back in the queue?
(Re-queueing them would mess up their order.)
Is using RabbitMQ a good practice? Isn't the ability to go back in the queue, as in Kafka, important for failure scenarios?
Thanks.
I don't have experience with DB replication using message brokers, but maybe this can help put you on the right track:
2. What do I do in case of failures?
Let's say, using RabbitMQ, the client removed 10,000 messages from the queue, acked, and threw an exception each time before handling them. Now they are lost. Is there a way to go back in the queue?
You can use dead-lettering to avoid losing messages. I'd suggest not acking until you are sure the consumers have processed the messages successfully, unless it is a long-running task. In case of failure, use basic.reject instead of basic.ack to send them to a dead-letter queue. You have a medium throughput, so you've got to be careful with that.
However, the order is not guaranteed. You'll need to implement a manual mechanism to recover them in the order they were published, maybe by using message headers with some sort of timestamp or ID mechanism, to re-process them in the correct order.
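A minimal sketch of that dead-lettering setup with Python and pika (queue names and the handler are illustrative): rejected messages are routed to a dead-letter queue instead of being dropped.

    import pika

    def process(body):
        print("handling", body)  # hypothetical handler

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Failed messages are re-routed here instead of being lost.
    channel.queue_declare(queue="work.dlq", durable=True)

    # The work queue dead-letters rejected messages to work.dlq
    # via the default exchange.
    channel.queue_declare(
        queue="work",
        durable=True,
        arguments={"x-dead-letter-exchange": "", "x-dead-letter-routing-key": "work.dlq"},
    )

    def handle(ch, method, properties, body):
        try:
            process(body)
            ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only on success
        except Exception:
            # requeue=False sends the message to the dead-letter queue.
            ch.basic_reject(delivery_tag=method.delivery_tag, requeue=False)

    channel.basic_consume(queue="work", on_message_callback=handle)
    channel.start_consuming()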
I wish to create a queue that a lot of computers will write to, but each computer will write only once in its entire life. What do you think would be the best way to achieve that?
I have read about SQL Server queues, SQL Server tables used as queues, or Service Broker infrastructure.
SQL Server table: pretty easy to create, but I am afraid of the performance.
Service Broker: a more complex infrastructure. It seems that you have to run a service on the sender and have a send queue, which is useless in my case because all of them send only one message in their entire life.
What solution would be the best in my case?
You don't have to create a service on each computer. Service Broker objects can be confined to one DB server. For example, if you have 100 computers that need to drop off a message, they only need a connection string to the database server and a stored procedure to execute that enqueues the said message.
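As a sketch, each computer's one-time write could then be as small as this (Python with pyodbc; the connection string and the dbo.EnqueueMessage procedure are hypothetical):

    import pyodbc

    # Connection string and procedure name are assumptions for illustration.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=dbserver;DATABASE=queues;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()
    cursor.execute("{CALL dbo.EnqueueMessage (?)}", "payload from this computer")
    conn.commit()
    conn.close()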
That said, it seems like a Service Broker queue would be overkill for this. A simple table would probably suffice, or even better, MSMQ (which would eliminate the need to connect to a DB).
Our production code uses tables as queues. We don't really need the robustness of Service Broker, and all our code already connects to databases for other stuff anyway.
Our code doesn't need more than a few hundred transactions per second, and I've shown that our queue can achieve over 10k transactions per second, so I'm fairly happy with the performance.
Here's a great article describing how to design tables for use as queues: http://rusanu.com/2010/03/26/using-tables-as-queues/
I would not design your table without first giving it a read.
Our company is also contemplating an alternative queue strategy involving Redis that doesn't require disk access, since we are considering a design that would require tens or hundreds of thousands of inserts per second but doesn't necessarily care about losing data in the event of a failure. I would also give those methods consideration if you need the throughput.
Maybe a better way is to transform your whole system from "several writers and one reader" to "one writer and one reader"? I mean, you could make some service (web or any other) that receives the write requests and is the only writer into your database. This is an ordinary situation and has many standard solutions.
I'm using SQL Server 2008 R2 as a queuing mechanism. I add items to a table, and an external service reads and processes them. This works great, but is missing one thing: I need a mechanism whereby I can attempt to select a single row from the table and, if there isn't one, block until there is (preferably for a specific period of time).
Can anyone advise on how I might achieve this?
The only way to achieve a non-polling blocking dequeue is WAITFOR (RECEIVE), which implies Service Broker queues, with all the added overhead.
If you're using ordinary tables as queues, you will not be able to achieve non-polling blocking. You must poll the queue by attempting a dequeue operation and, if it returns nothing, sleep and try again later.
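A minimal sketch of the WAITFOR (RECEIVE) route (Python with pyodbc; the queue name, timeout, and connection string are illustrative): the call blocks server-side until a message arrives or the timeout expires.

    import pyodbc

    CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=dbserver;DATABASE=queues;Trusted_Connection=yes;")  # assumed

    conn = pyodbc.connect(CONN_STR, autocommit=False)
    cursor = conn.cursor()

    # Blocks up to 10 seconds for a Service Broker message; returns no rows on timeout.
    cursor.execute("""
        WAITFOR (
            RECEIVE TOP (1) CAST(message_body AS NVARCHAR(MAX)) AS body
            FROM dbo.TargetQueue
        ), TIMEOUT 10000;
    """)
    row = cursor.fetchone()
    if row is not None:
        print("received:", row.body)  # process the message here
    conn.commit()  # RECEIVE is destructive; committing removes the message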
I'm afraid I'm going to disagree with Andomar here: while his answer works for the generic question "are there any rows in the table?", when it comes to queueing, due to the busy nature of overlapping enqueues and dequeues, checking for rows like this is an (almost) guaranteed deadlock under load. When using tables as queues, one must always stick to the basic enqueue/dequeue operations and not try fancy stuff.
"since SQL Server 2005 introduced the OUTPUT clause, using tables as queues is no longer a hard problem". A great post on how to do this.
http://rusanu.com/2010/03/26/using-tables-as-queues/
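The core pattern from that article is a single atomic destructive read; a sketch of it from Python with pyodbc (table and column names and the connection string are illustrative):

    import pyodbc

    CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=dbserver;DATABASE=queues;Trusted_Connection=yes;")  # assumed

    conn = pyodbc.connect(CONN_STR, autocommit=False)
    cursor = conn.cursor()

    # Atomically claim and delete the oldest row; READPAST skips rows
    # locked by other dequeuing workers instead of blocking on them.
    cursor.execute("""
        WITH cte AS (
            SELECT TOP (1) Payload
            FROM dbo.QueueTable WITH (ROWLOCK, READPAST)
            ORDER BY Id
        )
        DELETE FROM cte
        OUTPUT deleted.Payload;
    """)
    row = cursor.fetchone()  # None means the queue is currently empty
    conn.commit()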
I need a mechanism whereby I can attempt to select a single row from the table and, if there isn't one, block until there is (preferably for a specific period of time).
You can loop and check for new rows every second:
WHILE NOT EXISTS (SELECT * FROM QueueTable)
BEGIN
    WAITFOR DELAY '00:00:01';  -- sleep for one second between checks
END
Disclaimer: this is not code I would use for a production system, but it does what you ask.
The previous commenter that suggested using Service Broker likely had the best answer. Service Broker allows you to essentially block while waiting for more input.
If Service Broker is overkill, you should consider a different approach to your problem. Can you provide more details of what you're trying to do?
Let me share my experiences with you in this area, you may find it helpful.
My team first used MSMQ transactional queues feeding our asynchronous services (whether IIS-hosted or WAS). The biggest problem we encountered was MS DTC issues under heavy load, like 100+ messages/second: all it took was one slow database operation somewhere to start causing timeout exceptions, and MS DTC would bring the house down, so to speak (transactions would actually be lost if things got bad enough). Although we're not 100% certain of the root cause to this day, we suspect MS DTC in a clustered environment has some serious issues.
Because of this, we started looking into different solutions. Service Bus for Windows Server (the on-premise version of Azure Service Bus) looked promising, but it was non-transactional so didn't suit our requirements.
We finally decided on the roll-your-own approach, an approach suggested to us by the guys who built the Azure Service Bus, because of our transactional requirements. Essentially, we followed the Azure Worker Role model for a worker role that would be fed via some queue; a polling-blocking model.
Honestly, this has been far better for us than anything else we've used. The core loop for such a service, in C#, is essentially:

    var hasMsg = true;
    while (true)
    {
        if (!hasMsg)
            Thread.Sleep(1000);    // last poll found nothing; back off briefly

        var msg = GetNextMessage();
        if (msg == null)
        {
            hasMsg = false;        // queue is empty; sleep before the next poll
        }
        else
        {
            hasMsg = true;
            Process(msg);          // only process when we actually got a message
        }
    }
We've found that CPU usage is significantly lower this way (lower than traditional WCF services).
The tricky part, of course, is handling transactions. If you'd like multiple instances of your service to read from the queue, you'll need to employ READPAST/UPDLOCK hints in your SQL, and also have your .NET service enlist in the transactions in a way that rolls back should the service fail. In that case, you'll want retry/poison queues as tables in addition to your regular queues.
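A sketch of that transactional read with retry/poison handling (Python with pyodbc; the table names, columns, retry limit, and connection string are all assumptions): the dequeue and the processing share one transaction, so a crash rolls the message back into the queue, while repeated failures move it to a poison table.

    import pyodbc

    CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=dbserver;DATABASE=queues;Trusted_Connection=yes;")  # assumed
    MAX_RETRIES = 3  # assumed limit before a message is declared poison

    def process(payload):
        print("processing", payload)  # hypothetical handler

    conn = pyodbc.connect(CONN_STR, autocommit=False)
    cursor = conn.cursor()

    # UPDLOCK + READPAST lets multiple service instances dequeue concurrently
    # without blocking on, or double-reading, each other's rows.
    cursor.execute("""
        WITH cte AS (
            SELECT TOP (1) Id, Payload, RetryCount
            FROM dbo.QueueTable WITH (UPDLOCK, ROWLOCK, READPAST)
            ORDER BY Id
        )
        DELETE FROM cte
        OUTPUT deleted.Id, deleted.Payload, deleted.RetryCount;
    """)
    row = cursor.fetchone()
    if row is None:
        conn.rollback()  # queue is empty
    else:
        try:
            process(row.Payload)
            conn.commit()  # the message is removed only if processing succeeded
        except Exception:
            if row.RetryCount + 1 >= MAX_RETRIES:
                # Park it in the poison table so it is not retried forever.
                cursor.execute(
                    "INSERT INTO dbo.PoisonQueue (Id, Payload) VALUES (?, ?);",
                    row.Id, row.Payload)
            else:
                # Put it back with an incremented retry count.
                cursor.execute(
                    "INSERT INTO dbo.QueueTable (Id, Payload, RetryCount) VALUES (?, ?, ?);",
                    row.Id, row.Payload, row.RetryCount + 1)
            conn.commit()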