Does NServiceBus automatically attempt to redeliver messages if handling fails? And if it does, is there a limit on the number of times delivery will be attempted?
NServiceBus will enlist the message handling in a distributed transaction, and if it fails, the message is retried the configured number of times. Look at the MsmqTransport config section.
EDIT: A distributed transaction begins as soon as you peek or receive a message from MSMQ. All of the work you do in a message handler is included in that transaction, which is governed by the Distributed Transaction Coordinator (DTC). The DTC will also enlist things like database transactions if you are updating databases, and so on.
If, say, a database update fails, the whole transaction rolls back and the message is put back on the queue.
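For reference, on older NServiceBus versions the number of retries for the MSMQ transport is set in the MsmqTransportConfig section of app.config; a minimal sketch (the section and assembly names below are from those older versions and may differ in yours):

<configSections>
  <section name="MsmqTransportConfig"
           type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" />
</configSections>

<!-- MaxRetries is how many times a failed message is retried before it is
     moved to the error queue. -->
<MsmqTransportConfig ErrorQueue="error"
                     NumberOfWorkerThreads="1"
                     MaxRetries="5" />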
Hi, I have a problem with this code:
SendMessageToMyQueue();
UpdateStatusInDbThatMessageWasSent();
The problem is that sometimes the message is processed before the status is updated, which I would like to avoid.
My question is: if I wrap those two lines in a transaction like this:
using (var tr = new TransactionScope())
{
    SendMessageToMyQueue();
    UpdateStatusInDbThatMessageWasSent();
    tr.Complete();
}
will it be guaranteed that a lock is created on MyQueue, and that this lock is not released until UpdateStatusInDbThatMessageWasSent has updated the status?
Also, if I add a try/catch with a rollback and updating the status fails, will the message be removed from MyQueue?
There is no such thing as a lock on a queue. The message, however, will be processed transactionally if the following conditions are met (there is a short sketch after the list below). By transactionally, I mean that the message will be returned to the queue if an unhandled exception is thrown. The conditions to make this happen are:
Your database can enlist and take part in a distributed transaction. Not every database out there does: some document databases have no DTC support (MongoDB) or only sketchy support (RavenDB).
Your transport also supports distributed transactions. If you go with a broker-type transport, SQL Server transport is your best bet; among bus-type transports, MSMQ is a good choice. Transports like Azure Service Bus or RabbitMQ have very limited transaction support and do not support distributed transactions.
You need the Distributed Transaction Coordinator service configured and running.
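To make this concrete, here is an illustrative sketch only (the queue path, connection string, and SQL are hypothetical): with a transactional MSMQ queue, a message sent inside the TransactionScope is not visible to any receiver until Complete() is called, and it is discarded if the scope rolls back.

using System.Data.SqlClient;
using System.Messaging;
using System.Transactions;

using (var scope = new TransactionScope())
{
    // The queue must have been created as a transactional queue.
    using (var queue = new MessageQueue(@".\private$\MyQueue"))
    {
        queue.Send("message body", MessageQueueTransactionType.Automatic);
    }

    // Hypothetical connection string; the connection enlists in the ambient
    // (distributed) transaction when it opens.
    using (var connection = new SqlConnection("Server=.;Database=MyApp;Integrated Security=true"))
    {
        connection.Open();
        using (var command = new SqlCommand(
            "UPDATE Messages SET Status = 'Sent' WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", 42);
            command.ExecuteNonQuery();
        }
    }

    scope.Complete(); // both the send and the update commit, or neither does
}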
Two other things to note:
What if you're using a transport that lacks DTC support? Most of the time you are better off if you can design your system to be idempotent. The Outbox feature of NServiceBus also lets you simulate DTC behaviour to some extent (see the sketch after these notes).
When a message is picked up from the queue, processed, and returned to the queue because of an exception, it can end up in a different place in the queue, so you need to design for messages arriving out of order in a message-based architecture.
With all that said, exactly-once delivery guarantees are always a hot and disputed topic.
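To give an idea of what enabling the Outbox looks like, a minimal sketch assuming NServiceBus 6 or later with SQL persistence (the endpoint name is illustrative):

using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("Sales");
// The Outbox needs a persistence that supports it (SQL persistence here).
endpointConfiguration.UsePersistence<SqlPersistence>();
// Outgoing messages are stored in the same local database transaction as your
// business data and dispatched afterwards, giving DTC-like consistency
// without a distributed transaction.
endpointConfiguration.EnableOutbox();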
I'm setting up Service Broker to do some asynchronous jobs. The procedure I use to receive messages calls another stored procedure that does a lot of work, and the fact that I'm in a transaction is causing some locking issues. I patterned this off the example at https://sqlperformance.com/2014/03/sql-performance/configuring-service-broker, but I'm wondering if it is a bad idea to remove the transactions from the procedure that processes the messages.
You don't need an explicit transaction, but understand that if you remove it, the message is removed from the queue forever as soon as the RECEIVE statement happens.
Contrast that with the case where you use a transaction. If the activation procedure aborts after the RECEIVE but before the transaction that contains it is finished, the message is put back at the top of the queue.
Of course, you have to be careful about poison messages, but at least you're not dropping messages on the floor.
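A minimal sketch of an activation procedure that keeps the RECEIVE inside an explicit transaction (the queue name and the work procedure are hypothetical):

CREATE PROCEDURE dbo.ProcessMyQueue
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @conversation UNIQUEIDENTIFIER,
            @message_type SYSNAME,
            @message_body VARBINARY(MAX);

    BEGIN TRANSACTION;

    -- Wait up to 5 seconds for a message; inside the transaction the message
    -- is locked, not deleted, until COMMIT.
    WAITFOR (
        RECEIVE TOP (1)
            @conversation = conversation_handle,
            @message_type = message_type_name,
            @message_body = message_body
        FROM dbo.MyQueue
    ), TIMEOUT 5000;

    IF @@ROWCOUNT = 0
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN;
    END;

    -- Do the real work here; an unhandled error rolls the transaction back
    -- and the message returns to the queue. Beware poison messages: five
    -- consecutive rollbacks disable the queue by default.
    -- EXEC dbo.DoTheRealWork @message_body;

    COMMIT TRANSACTION;
END;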
In the last release of my app, I added a command that tells it to wait until something arrives in the Service Broker queue:
WAITFOR (RECEIVE CONVERT(int, message_body) AS Message FROM MyQueue)
The DBAs tell me that since the addition, the log sizes have gone through the roof. Could this be correct? Or should I be looking elsewhere?
I haven't tested this in Service Broker, but I assume the same ACID compliance mechanisms are in play. It depends on whether your code is leaving a transaction open. If it is leaving a transaction open and not committing it, the log will continue to grow until something closes it, and only at that point will the old areas be marked for re-use.
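If you want to check whether an open transaction is what is holding the log, a quick diagnostic sketch (run it in the database that hosts the queue):

-- Oldest active transaction in the current database, if any.
DBCC OPENTRAN;

-- Why the transaction log cannot be reused; ACTIVE_TRANSACTION here points
-- at a transaction that was left open.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();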
I haven't rolled service broker in prod yet but the testing/reading I did did not include any WAITFOR.
Instead, the Service Broker MVPs like Denny Cherry would typically keep querying the queue rather than doing a WAITFOR.
Can you post some of the other code and also tell us why you're using WAITFOR? Maybe there's something I'm not getting that would be a good use case scenario. Thanks!
I would like to set up a JMS queue on a GlassFish v3 server for saving some protocol information to a SQL Server database.
My first try ended up in lots of deadlocks on the SQL Server.
My first question is: are the messages in a queue processed one after another or in parallel? How do I set it up to process the messages one after another? Time does not play a role; I only want to put a minimal load on the SQL Server.
The second: where can I see how many messages are waiting in the queue for processing?
I had a look at the GlassFish monitoring and also at
http://server:adminport/__asadmin/get?monitor=true&pattern=server.applications.ear.test.war.TestMessageDrivenBean.*
but I could not see a "tobeprocessed" value or anything like that.
Many thanks,
Hasan
The listener you bind to the queue will process messages as they arrive. It responds to an onMessage event. You don't have to set up anything.
You do have to worry about what happens if the queue backs up because the listener(s) can't keep up.
You should also configure an error queue where messages that can't be processed go.
Have you thought about making the queue and database operation transactional? That way the message is put back on the queue if the database INSERT fails. You'll need an XA JDBC driver and a transaction manager to do it.
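A minimal sketch of such a transactional message-driven bean, assuming a container-managed transaction and an XA-capable data source; the queue name, JNDI names, and table are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.annotation.Resource;
import javax.ejb.EJBException;
import javax.ejb.MessageDriven;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.sql.DataSource;

// Container-managed transaction: the JMS receive and the INSERT commit or
// roll back together when the data source is an XA data source.
@MessageDriven(mappedName = "jms/ProtocolQueue")
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public class ProtocolMessageBean implements MessageListener {

    @Resource(mappedName = "jdbc/ProtocolXADataSource")
    private DataSource dataSource;

    @Override
    public void onMessage(Message message) {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO protocol_log (body) VALUES (?)")) {
            ps.setString(1, ((TextMessage) message).getText());
            ps.executeUpdate();
        } catch (Exception e) {
            // Rethrowing marks the transaction for rollback, so the message
            // is redelivered (mind the broker's redelivery/poison limits).
            throw new EJBException(e);
        }
    }
}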
I have some questions around transaction locks in an Oracle database. What I have found out so far is this:
Cause: The time to wait on a lock in a distributed transaction has been exceeded. This time is specified in the initialization parameter DISTRIBUTED_LOCK_TIMEOUT.
Action: This situation is treated as a deadlock and the statement was rolled back. To set the time-out interval to a longer interval, adjust the initialization parameter DISTRIBUTED_LOCK_TIMEOUT, then shut down and restart the instance.
Some other things that I want to know in more detail:
It is mentioned that a lock in a 'distributed transaction' happened. So what kind of database operation can cause this? Updating a record? Selecting a record?
What does 'distributed' mean anyway? I have seen the term used all over the place, but I can't seem to deduce what it means.
What can we do to reduce instances of such locks?
A distributed transaction means that you had a transaction that had two different participants. If you are using PL/SQL, that generally implies that there are multiple databases involved. But it may simply indicate that an application is using an external transaction coordinator in its interactions with the database. A J2EE application, for example, might want to create a distributed transaction that covers both issuing SQL statements against a database to move $100 from account A to account B as well as the application server action of creating a JMS message for this transaction that would eventually cause an email notification of the transfer to be sent. In this case, the application wants to ensure that the state of the middle tier matches the state of the back end.
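For an Oracle-only illustration, a transaction becomes distributed as soon as it touches a second database, for example over a database link. A hedged sketch (the table and link names are hypothetical):

-- Both statements are part of one transaction; because the second one goes
-- through a database link, the COMMIT uses two-phase commit. If the remote
-- update has to wait on a lock for longer than DISTRIBUTED_LOCK_TIMEOUT,
-- ORA-02049 is raised.
UPDATE accounts        SET balance = balance - 100 WHERE id = 1;
UPDATE accounts@remote SET balance = balance + 100 WHERE id = 2;
COMMIT;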
Distributed transactions are not free. They involve potentially quite a bit of additional overhead because, at a minimum, you need the two-phase commit protocol to verify that all the components that are part of the distributed transaction are ready to commit and then to verify that they all did commit. That involves sending a number of network packets, which can be a significant fraction of the time an OLTP transaction spends waiting. Distributed transactions also cause administrative issues, because you end up with cases where one participant's transaction fails after it indicated it was ready to commit, or where the transaction coordinator fails while various participants have open transactions.
So the first question would be whether your application actually needs distributed transactions. Sometimes, developers find that they are accidentally requesting distributed transactions when they really aren't necessary. If you're not sure what a distributed transaction is, it's entirely possible that you don't really need them.
There is a guide here that will walk you through the steps to simulate an 'ORA-02049: timeout: distributed transaction waiting for lock' error if you want a better understanding of one of its causes: